abstract
stringlengths 6
6.09k
| id
stringlengths 9
16
| time
int64 725k
738k
|
---|---|---|
We present a novel computational framework for simulating suspensions of
rigid spherical Janus particles in Stokes flow. We show that long-range Janus
particle interactions for a wide array of applications may be resolved using
fast, spectrally accurate boundary integral methods tailored to polydisperse
suspensions of spherical particles. These are incorporated into our rigid body
Stokes platform. Our approach features the use of spherical harmonic expansions
for spectrally accurate integral operator evaluation, complementarity-based
collision resolution, and optimal O(n) scaling with the number of particles
when accelerated via fast summation techniques. We demonstrate the flexibility
of our platform through three key examples of Janus particle systems prominent
in biomedical applications: amphiphilic, bipolar electric and phoretic
particles. We formulate Janus particle interactions in boundary integral form
and showcase characteristic self-assembly and complex collective behavior for
each particle type.
| 2104.14068 | 737,909 |
Self-interacting dark matter (SIDM) models offer one way to reconcile
inconsistencies between observations and predictions from collisionless cold
dark matter (CDM) models on dwarf-galaxy scales. In order to incorporate the
effects of both baryonic and SIDM interactions, we study a suite of
cosmological-baryonic simulations of Milky-Way (MW)-mass galaxies from the
Feedback in Realistic Environments (FIRE-2) project where we vary the SIDM
self-interaction cross-section $\sigma/m$. We compare the shape of the main
dark matter (DM) halo at redshift $z=0$ predicted by SIDM simulations (at
$\sigma/m=0.1$, $1$, and $10$ cm$^2$ g$^{-1}$) with CDM simulations using the
same initial conditions. In the presence of baryonic feedback effects, we find
that SIDM models do not produce the large differences in the inner structure of
MW-mass galaxies predicted by SIDM-only models. However, we do find that the
radius where the shape of the total mass distribution begins to differ from
that of the stellar mass distribution is dependent on $\sigma/m$. This
transition could potentially be used to set limits on the SIDM cross-section in
the MW.
| 2104.14069 | 737,909 |
Absolute Concentration Robustness (ACR) was introduced by Shinar and Feinberg
as a way to define species concentration robustness in mass action dynamical
systems. The idea was to devise a mathematical condition that will ensure
robustness in the function of the biological system being modeled. The
robustness of function rests on what we refer to as empirical robustness -- the
concentration of a variable remains unvarying, when measured in the long run,
across arbitrary initial conditions. While there is a positive correlation
between ACR and empirical robustness, ACR is neither necessary nor sufficient
for empirical robustness, a fact that can be noticed even in simple biochemical
systems. To develop a stronger connection with empirical robustness, we define
dynamic ACR, a property related to dynamics, rather than only to equilibrium
behavior, and one that guarantees convergence to a robust value. We distinguish
between wide basin and narrow basin versions of dynamic ACR, related to the
size of the set of initial values that do not result in convergence to the
robust value. We give numerous examples which help distinguish the various
flavors of ACR as well as clearly illustrate and circumscribe the conditions
that appear in the definitions. We discuss general dynamical systems with ACR
properties as well as parametrized families of dynamical systems related to
reaction networks. We discuss connections between ACR and complex balance, two
notions central to the theory of reaction networks. We give precise conditions
for presence and absence of dynamic ACR in complex balanced systems, which in
turn yields a large body of reaction networks with dynamic ACR.
| 2104.14070 | 737,909 |
Multivariate rapid variation describes decay rates of joint light tails of a
multivariate distribution. We impose a local uniformity condition to control
decay variation of distribution tails along different directions, and using
higher-order tail dependence of copulas, we prove that a rapidly varying
multivariate density implies rapid variation of the joint distribution tails.
As a corollary, rapid variation of skew-elliptical distributions is established
under the assumption that the underlying density generators belong to the
max-domain of attraction of the Gumbel distribution.
| 2104.14071 | 737,909 |
A dimension reduction method based on the "Nonlinear Level set Learning"
(NLL) approach is presented for the pointwise prediction of functions which
have been sparsely sampled. Leveraging geometric information provided by the
Implicit Function Theorem, the proposed algorithm effectively reduces the input
dimension to the theoretical lower bound with minor accuracy loss, providing a
one-dimensional representation of the function which can be used for regression
and sensitivity analysis. Experiments and applications are presented which
compare this modified NLL with the original NLL and the Active Subspaces (AS)
method. While accommodating sparse input data, the proposed algorithm is shown
to train quickly and provide a much more accurate and informative reduction
than either AS or the original NLL on two example functions with
high-dimensional domains, as well as two state-dependent quantities depending
on the solutions to parametric differential equations.
| 2104.14072 | 737,909 |
With populations ageing, the number of people with dementia worldwide is
expected to triple to 152 million by 2050. Seventy percent of cases are due to
Alzheimer's disease (AD) pathology and there is a 10-20 year 'pre-clinical'
period before significant cognitive decline occurs. We urgently need, cost
effective, objective methods to detect AD, and other dementias, at an early
stage. Risk factor modification could prevent 40% of cases and drug trials
would have greater chances of success if participants are recruited at an
earlier stage. Currently, detection of dementia is largely by pen and paper
cognitive tests but these are time consuming and insensitive to pre-clinical
phases. Specialist brain scans and body fluid biomarkers can detect the
earliest stages of dementia but are too invasive or expensive for widespread
use. With the advancement of technology, Artificial Intelligence (AI) shows
promising results in assisting with detection of early-stage dementia. Existing
AI-aided methods and potential future research directions are reviewed and
discussed.
| 2104.14073 | 737,909 |
Bandit algorithms are increasingly used in real world sequential decision
making problems, from online advertising to mobile health. As a result, there
are more datasets collected using bandit algorithms and with that an increased
desire to be able to use these datasets to answer scientific questions like:
Did one type of ad increase the click-through rate more or lead to more
purchases? In which contexts is a mobile health intervention effective?
However, it has been shown that classical statistical approaches, like those
based on the ordinary least squares estimator, fail to provide reliable
confidence intervals when used with bandit data. Recently methods have been
developed to conduct statistical inference using simple models fit to data
collected with multi-armed bandits. However there is a lack of general methods
for conducting statistical inference using more complex models. In this work,
we develop theory justifying the use of M-estimation (Van der Vaart, 2000),
traditionally used with i.i.d data, to provide inferential methods for a large
class of estimators -- including least squares and maximum likelihood
estimators -- but now with data collected with (contextual) bandit algorithms.
To do this we generalize the use of adaptive weights pioneered by Hadad et al.
(2019) and Deshpande et al. (2018). Specifically, in settings in which the data
is collected via a (contextual) bandit algorithm, we prove that certain
adaptively weighted M-estimators are uniformly asymptotically normal and
demonstrate empirically that we can use their asymptotic distribution to
construct reliable confidence regions for a variety of inferential targets.
| 2104.14074 | 737,909 |
A swarm of cooperating UAVs communicating with a distant multiantenna ground
station can leverage MIMO spatial multiplexing to scale the capacity. Due to
the line-of-sight propagation between the swarm and the ground station, the
MIMO channel is highly correlated, leading to limited multiplexing gains. In
this paper, we optimize the UAV positions to attain the maximum MIMO capacity
given by the single user bound. An infinite set of UAV placements that attains
the capacity bound is first derived. Given an initial swarm placement, we
formulate the problem of minimizing the distance traveled by the UAVs to reach
a placement within the capacity maximizing set of positions. An offline
centralized solution to the problem using block coordinate descent is developed
assuming known initial positions of UAVs. We also propose an online distributed
algorithm, where the UAVs iteratively adjust their positions to maximize the
capacity. Our proposed approaches are shown to significantly increase the
capacity at the expense of a bounded translation from the initial UAV
placements. This capacity increase persists when using a massive MIMO ground
station. Using numerical simulations, we show the robustness of our approaches
in a Rician channel under UAV motion disturbances.
| 2104.14075 | 737,909 |
We present three "hard" diagrams of the unknot. They require (at least) three
extra crossings before they can be simplified to the trivial unknot diagram via
Reidemeister moves in $\mathbb{S}^2$. Both examples are constructed by applying
previously proposed methods. The proof of their hardness uses significant
computational resources. We also determine that no small "standard" example of
a hard unknot diagram requires more than one extra crossing for Reidemeister
moves in $\mathbb{S}^2$.
| 2104.14076 | 737,909 |
The BFV-BRST Hamiltonian quantization method is presented for the theories
where the gauge parameters are restricted by differential equations. The
general formalism is exemplified by the Maxwell-like theory of symmetric tensor
field.
| 2104.14077 | 737,909 |
State disturbance by a quantum measurement is at the core of foundational
quantum physics and constitutes a fundamental basis of secure quantum
information processing. While quantifying an information-disturbance relation
has been a long-standing problem, recently verified reversibility of a quantum
measurement requires to refine such a conventional information trade-off toward
a complete picture of information conservation in quantum measurement. Here we
experimentally demonstrate complete trade-off relations among all information
contents, i.e., information gain, disturbance and reversibility in quantum
measurement. By exploring various quantum measurements applied on a photonic
qutrit, we observe that the information of a quantum state is split into three
distinct parts accounting for the extracted, disturbed, and reversible
information. We verify that such different parts of information are in
trade-off relations not only pairwise but also globally all-at-once, and find
that the global trade-off relation is tighter than any of the pairwise
relations. Finally, we realize optimal quantum measurements that inherently
preserve quantum information without loss of information, which offer wider
applications in measurement-based quantum information processing.
| 2104.14078 | 737,909 |
Autonomous vehicles should be able to predict the future states of its
environment and respond appropriately. Specifically, predicting the behavior of
surrounding human drivers is vital for such platforms to share the same road
with humans. Behavior of each of the surrounding vehicles is governed by the
motion of its neighbor vehicles. This paper focuses on predicting the behavior
of the surrounding vehicles of an autonomous vehicle on highways. We are
motivated by improving the prediction accuracy when a surrounding vehicle
performs lane change and highway merging maneuvers. We propose a novel pooling
strategy to capture the inter-dependencies between the neighbor vehicles.
Depending solely on Euclidean trajectory representation, the existing pooling
strategies do not model the context information of the maneuvers intended by a
surrounding vehicle. In contrast, our pooling mechanism employs polar
trajectory representation, vehicles orientation and radial velocity. This
results in an implicitly maneuver-aware pooling operation. We incorporated the
proposed pooling mechanism into a generative encoder-decoder model, and
evaluated our method on the public NGSIM dataset. The results of maneuver-based
trajectory predictions demonstrate the effectiveness of the proposed method
compared with the state-of-the-art approaches. Our "Pooling Toolbox" code is
available at https://github.com/m-hasan-n/pooling.
| 2104.14079 | 737,909 |
A number of gamma-ray bursts (GRBs) exhibit the late simultaneous bumps in
their optical and Xray afterglows around the jet break. Its origin is unclear.
Based on the following two facts, we suggest that this feature may sound a
transition of circum-burst environment from a free-wind medium to a homogeneous
medium. (I) The late bump followed by a steep decay is strongly reminiscent of
the afterglows of GRB 170817A, which is attributed to an off-axis observed
external-forward shock (eFS) propagating in an interstellar medium. (II)
Observations seem to feature a long shallow decay before the late optical bump,
which is different from the afterglow of GRB 170817A. In this paper, we study
the emission of an eFS propagating in a free-to-shocked wind for on/off-axis
observers, where the mass density in the shocked-wind is almost constant. The
late simultaneous bumps/plateaux in the optical and X-ray afterglows are really
found around the jet break for high-viewing-angle observers. Moreover, there is
a long plateau or shallow decay before the late bump in the theoretical
light-curves, which is formed during the eFS propagating in the free-wind. For
low-viewing-angle observers, the above bumps appear only in the situation that
the structured jet has a low characteristic angle and the deceleration radius
of the on-axis jet flow is at around or beyond the free-wind boundary. As
examples, the X-ray and optical afterglows of GRBs 120326A, 120404A, and
100814A are fitted. We find that an off-axis observed eFS in a free-to-shocked
wind can well explain the afterglows in these bursts.
| 2104.14080 | 737,909 |
The objective of this paper is to derive the essential invariance and
contraction properties for the geometric periodic systems, which can be
formulated as a category of differential inclusions, and primarily rendered in
the phase coordinate, or the cycle coordinate. First, we introduce the
geometric averaging method for this category of systems, and also analyze the
accuracy of its averaging approximation. Specifically, we delve into the
details of the geometrically periodic system through the tunnel of considering
the convergence between the system and its geometrically averaging
approximation. Under different corresponding conditions, the approximation on
infinite time intervals can achieve certain accuracies, such that one can use
the stability result of either the original system or the averaging system to
deduce the stability of the other. After that, we employ the graphical
stability to investigate the "pattern stability" with respect to the
phase-based system. Finally, by virtue of the contraction analysis on the
Finsler manifold, the idea of accentuating the periodic pattern incremental
stability and convergence is nurtured in the phase-based differential inclusion
system, and comes to its preliminary fruition in application to biomimetic
mechanic robot control problem.
| 2104.14081 | 737,909 |
Current anchor-free object detectors are quite simple and effective yet lack
accurate label assignment methods, which limits their potential in competing
with classic anchor-based models that are supported by well-designed assignment
methods based on the Intersection-over-Union~(IoU) metric. In this paper, we
present \textbf{Pseudo-Intersection-over-Union~(Pseudo-IoU)}: a simple metric
that brings more standardized and accurate assignment rule into anchor-free
object detection frameworks without any additional computational cost or extra
parameters for training and testing, making it possible to further improve
anchor-free object detection by utilizing training samples of good quality
under effective assignment rules that have been previously applied in
anchor-based methods. By incorporating Pseudo-IoU metric into an end-to-end
single-stage anchor-free object detection framework, we observe consistent
improvements in their performance on general object detection benchmarks such
as PASCAL VOC and MSCOCO. Our method (single-model and single-scale) also
achieves comparable performance to other recent state-of-the-art anchor-free
methods without bells and whistles. Our code is based on mmdetection toolbox
and will be made publicly available at
https://github.com/SHI-Labs/Pseudo-IoU-for-Anchor-Free-Object-Detection.
| 2104.14082 | 737,909 |
In this paper we derive a combinatorial formula for mixed Eulerian numbers in
type $A$ from Peterson Schubert calculus. We also provide a simple computation
for mixed $\Phi$-Eulerian numbers in arbitrary Lie types.
| 2104.14083 | 737,909 |
We investigate the stability properties for a family of equations introduced
by Moffatt to model magnetic relaxation. These models preserve the topology of
magnetic streamlines, contain a cubic nonlinearity, and yet have a favorable
$L^2$ energy structure. We consider the local and global in time well-posedness
of these models and establish a difference between the behavior as $t\to
\infty$ with respect to weak and strong norms.
| 2104.14084 | 737,909 |
This paper presents a novel method, termed Bridge to Answer, to infer correct
answers for questions about a given video by leveraging adequate graph
interactions of heterogeneous crossmodal graphs. To realize this, we learn
question conditioned visual graphs by exploiting the relation between video and
question to enable each visual node using question-to-visual interactions to
encompass both visual and linguistic cues. In addition, we propose bridged
visual-to-visual interactions to incorporate two complementary visual
information on appearance and motion by placing the question graph as an
intermediate bridge. This bridged architecture allows reliable message passing
through compositional semantics of the question to generate an appropriate
answer. As a result, our method can learn the question conditioned visual
representations attributed to appearance and motion that show powerful
capability for video question answering. Extensive experiments prove that the
proposed method provides effective and superior performance than
state-of-the-art methods on several benchmarks.
| 2104.14085 | 737,909 |
Influence competition finds its significance in many applications, such as
marketing, politics and public events like COVID-19. Existing work tends to
believe that the stronger influence will always win and dominate nearly the
whole network, i.e., "winner takes all". However, this finding somewhat
contradicts with our common sense that many competing products are actually
coexistent, e.g., Android vs. iOS. This contradiction naturally raises the
question: will the winner take all?
To answer this question, we make a comprehensive study into influence
competition by identifying two factors frequently overlooked by prior art: (1)
the incomplete observation of real diffusion networks; (2) the existence of
information overload and its impact on user behaviors. To this end, we attempt
to recover possible diffusion links based on user similarities, which are
extracted by embedding users into a latent space. Following this, we further
derive the condition under which users will be overloaded, and formulate the
competing processes where users' behaviors differ before and after information
overload. By establishing the explicit expressions of competing dynamics, we
disclose that information overload acts as the critical "boundary line", before
which the "winner takes all" phenomenon will definitively occur, whereas after
information overload the share of influences gradually stabilizes and is
jointly affected by their initial spreading conditions, influence powers and
the advent of overload. Numerous experiments are conducted to validate our
theoretical results where favorable agreement is found. Our work sheds light on
the intrinsic driving forces behind real-world dynamics, thus providing useful
insights into effective information engineering.
| 2104.14086 | 737,909 |
Serverless computing has emerged as a new paradigm for running short-lived
computations in the cloud. Due to its ability to handle IoT workloads, there
has been considerable interest in running serverless functions at the edge.
However, the constrained nature of the edge and the latency sensitive nature of
workloads result in many challenges for serverless platforms. In this paper, we
present LaSS, a platform that uses model-driven approaches for running
latency-sensitive serverless computations on edge resources. LaSS uses
principled queuing-based methods to determine an appropriate allocation for
each hosted function and auto-scales the allocated resources in response to
workload dynamics. LaSS uses a fair-share allocation approach to guarantee a
minimum of allocated resources to each function in the presence of overload. In
addition, it utilizes resource reclamation methods based on container deflation
and termination to reassign resources from over-provisioned functions to
under-provisioned ones. We implement a prototype of our approach on an
OpenWhisk serverless edge cluster and conduct a detailed experimental
evaluation. Our results show that LaSS can accurately predict the resources
needed for serverless functions in the presence of highly dynamic workloads,
and reprovision container capacity within hundreds of milliseconds while
maintaining fair share allocation guarantees.
| 2104.14087 | 737,909 |
The sixth generation (6G) systems are generally recognized to be established
on ubiquitous Artificial Intelligence (AI) and distributed ledger such as
blockchain. However, the AI training demands tremendous computing resource,
which is limited in most 6G devices. Meanwhile, miners in Proof-of-Work (PoW)
based blockchains devote massive computing power to block mining, and are
widely criticized for the waste of computation. To address this dilemma, we
propose an Evolved-Proof-of-Work (E-PoW) consensus that can integrate the
matrix computations, which are widely existed in AI training, into the process
of brute-force searches in the block mining. Consequently, E-PoW can connect AI
learning and block mining via the multiply used common computing resource.
Experimental results show that E-PoW can salvage by up to 80 percent computing
power from pure block mining for parallel AI training in 6G systems.
| 2104.14088 | 737,909 |
Intelligent agents powered by AI planning assist people in complex scenarios,
such as managing teams of semi-autonomous vehicles. However, AI planning models
may be incomplete, leading to plans that do not adequately meet the stated
objectives, especially in unpredicted situations. Humans, who are apt at
identifying and adapting to unusual situations, may be able to assist planning
agents in these situations by encoding their knowledge into a planner at
run-time. We investigate whether people can collaborate with agents by
providing their knowledge to an agent using linear temporal logic (LTL) at
run-time without changing the agent's domain model. We presented 24
participants with baseline plans for situations in which a planner had
limitations, and asked the participants for workarounds for these limitations.
We encoded these workarounds as LTL constraints. Results show that
participants' constraints improved the expected return of the plans by 10% ($p
< 0.05$) relative to baseline plans, demonstrating that human insight can be
used in collaborative planning for resilience. However, participants used more
declarative than control constraints over time, but declarative constraints
produced plans less similar to the expectation of the participants, which could
lead to potential trust issues.
| 2104.14089 | 737,909 |
Inverse problems consist of recovering a signal from a collection of noisy
measurements. These problems can often be cast as feasibility problems;
however, additional regularization is typically necessary to ensure accurate
and stable recovery with respect to data perturbations. Hand-chosen analytic
regularization can yield desirable theoretical guarantees, but such approaches
have limited effectiveness recovering signals due to their inability to
leverage large amounts of available data. To this end, this work fuses
data-driven regularization and convex feasibility in a theoretically sound
manner. This is accomplished using feasibility-based fixed point networks
(F-FPNs). Each F-FPN defines a collection of nonexpansive operators, each of
which is the composition of a projection-based operator and a data-driven
regularization operator. Fixed point iteration is used to compute fixed points
of these operators, and weights of the operators are tuned so that the fixed
points closely represent available data. Numerical examples demonstrate
performance increases by F-FPNs when compared to standard TV-based recovery
methods for CT reconstruction and a comparable neural network based on
algorithm unrolling.
| 2104.14090 | 737,909 |
Estimation of population size using incomplete lists (also called the
capture-recapture problem) has a long history across many biological and social
sciences. For example, human rights and other groups often construct partial
and overlapping lists of victims of armed conflicts, with the hope of using
this information to estimate the total number of victims. Earlier statistical
methods for this setup either use potentially restrictive parametric
assumptions, or else rely on typically suboptimal plug-in-type nonparametric
estimators; however, both approaches can lead to substantial bias, the former
via model misspecification and the latter via smoothing. Under an identifying
assumption that two lists are conditionally independent given measured
covariate information, we make several contributions. First we derive the
nonparametric efficiency bound for estimating the capture probability, which
indicates the best possible performance of any estimator, and sheds light on
the statistical limits of capture-recapture methods. Then we present a new
estimator, and study its finite-sample properties, showing that it has a double
robustness property new to capture-recapture, and that it is near-optimal in a
non-asymptotic sense, under relatively mild nonparametric conditions. Next, we
give a method for constructing confidence intervals for total population size
from generic capture probability estimators, and prove non-asymptotic
near-validity. Finally, we study our methods in simulations, and apply them to
estimate the number of killings and disappearances attributable to different
groups in Peru during its internal armed conflict between 1980 and 2000.
| 2104.14091 | 737,909 |
In this paper, we recall hypergeometric functions $\mathscr{F}^{\rm
Dw}_{a_1,\cdots,a_s}(t),$ $\mathscr{F}^{(\sigma)}_{a_1,\cdots,a_s}(t)$,
$\widehat{\mathscr{F}}^{(\sigma)}
_{a,\cdots,a}(t)$ and their transformation formulas. Then we prove that one
of transformation formula implies another.
| 2104.14092 | 737,909 |
Proton-rich nuclei possess unique properties in the nuclear chart. Due to the
presence of both continuum coupling and Coulomb interaction, phenomena such as
halos, Thomas-Ehrman shift, and proton emissions can occur. Experimental data
are difficult to be obtained therein, so that theoretical calculations are
needed to understand nuclei at drip-lines and guide experimentalists for that
matter. In particular, the $^{16}$Ne and $^{18}$Mg isotopes are supposed to be
one-proton and/or two-proton emitting nuclei, but associated experimental data
are either incomplete or even unavailable. Consequently, we performed Gamow
shell model calculations of carbon isotones bearing $A=15\text{-}18$.
Isospin-symmetry breaking occurring in carbon isotones and isotopes is also
discussed. It is hereby shown that the mixed effects of continuum coupling and
Coulomb interaction at drip-lines generate complex patterns in isospin
multiplets. Added to that, it is possible to determine the one-proton and
two-proton widths of $^{16}$Ne and $^{18}$Mg. Obtained decay patterns are in
agreement with those obtained in previous experimental and theoretical works.
Moreover, up to the knowledge of authors, this is the first theoretical
calculation of binding energy and partial decay widths of $^{18}$Mg in a
configuration interaction picture.
| 2104.14093 | 737,909 |
Information flow control type systems statically restrict the propagation of
sensitive data to ensure end-to-end confidentiality. The property to be shown
is noninterference, asserting that an attacker cannot infer any secrets from
made observations. Session types delimit the kinds of observations that can be
made along a communication channel by imposing a protocol of message exchange.
These protocols govern the exchange along a single channel and leave
unconstrained the propagation along adjacent channels. This paper contributes
an information flow control type system for linear session types. The type
system stands in close correspondence with intuitionistic linear logic.
Intuitionistic linear logic typing ensures that process configurations form a
tree such that client processes are parent nodes and provider processes child
nodes. To control the propagation of secret messages, the type system is
enriched with secrecy levels and arranges these levels to be aligned with the
configuration tree. Two levels are associated with every process: the maximal
secrecy denoting the process' security clearance and the running secrecy
denoting the highest level of secret information obtained so far. The
computational semantics naturally stratifies process configurations such that
higher-secrecy processes are parents of lower-secrecy ones, an invariant
enforced by typing. Noninterference is stated in terms of a logical relation
that is indexed by the secrecy-level-enriched session types. The logical
relation contributes a novel development of logical relations for session typed
languages as it considers open configurations, allowing for more nuanced
equivalence statement.
| 2104.14094 | 737,909 |
Symbolic Mathematical tasks such as integration often require multiple
well-defined steps and understanding of sub-tasks to reach a solution. To
understand Transformers' abilities in such tasks in a fine-grained manner, we
deviate from traditional end-to-end settings, and explore a step-wise
polynomial simplification task. Polynomials can be written in a simple normal
form as a sum of monomials which are ordered in a lexicographic order. For a
polynomial which is not necessarily in this normal form, a sequence of
simplification steps is applied to reach the fully simplified (i.e., in the
normal form) polynomial. We propose a synthetic Polynomial dataset generation
algorithm that generates polynomials with unique proof steps. Through varying
coefficient configurations, input representation, proof granularity, and
extensive hyper-parameter tuning, we observe that Transformers consistently
struggle with numeric multiplication. We explore two ways to mitigate this:
Curriculum Learning and a Symbolic Calculator approach (where the numeric
operations are offloaded to a calculator). Both approaches provide significant
gains over the vanilla Transformers-based baseline.
| 2104.14095 | 737,909 |
Recently, inspired by quantum annealing, many solvers specialized for
unconstrained binary quadratic programming problems have been developed. For
further improvement and application of these solvers, it is important to
clarify the differences in their performance for various types of problems. In
this study, the performance of four quadratic unconstrained binary optimization
problem solvers, namely D-Wave Hybrid Solver Service (HSS), Toshiba Simulated
Bifurcation Machine (SBM), Fujitsu DigitalAnnealer (DA), and simulated
annealing on a personal computer, was benchmarked. The problems used for
benchmarking were instances of real problems in MQLib, instances of the
SAT-UNSAT phase transition point of random not-all-equal 3-SAT(NAE 3-SAT), and
the Ising spin glass Sherrington-Kirkpatrick (SK) model. Concerning MQLib
instances, the HSS performance ranked first; for NAE 3-SAT, DA performance
ranked first; and regarding the SK model, SBM performance ranked first. These
results may help understand the strengths and weaknesses of these solvers.
| 2104.14096 | 737,909 |
Recent study predicts that structural disorder, serving as a bridge
connecting a crystalline material to an amorphous material, can induce a
topological insulator from a trivial phase. However, to experimentally observe
such a topological phase transition is very challenging due to the difficulty
in controlling structural disorder in a quantum material. Given experimental
realization of randomly positioned Rydberg atoms, such a system is naturally
suited to studying structural disorder induced topological phase transitions
and topological amorphous phases. Motivated by the development, we study
topological phases in an experimentally accessible one-dimensional amorphous
Rydberg atom chain with random atom configurations. In the single-particle
level, we find symmetry-protected topological amorphous insulators and a
structural disorder induced topological phase transition, indicating that
Rydberg atoms provide an ideal platform to experimentally observe the
phenomenon using state-of-the-art technologies. Furthermore, we predict the
existence of a gapless symmetry-protected topological phase of interacting
bosons in the experimentally accessible system. The resultant many-body
topological amorphous phase is characterized by a $\mathbb{Z}_2$ invariant and
the density distribution.
| 2104.14097 | 737,909 |
Boolean Skolem function synthesis concerns synthesizing outputs as Boolean
functions of inputs such that a relational specification between inputs and
outputs is satisfied. This problem, also known as Boolean functional synthesis,
has several applications, including design of safe controllers for autonomous
systems, certified QBF solving, cryptanalysis etc. Recently, complexity
theoretic hardness results have been shown for the problem, although several
algorithms proposed in the literature are known to work well in practice. This
dichotomy between theoretical hardness and practical efficacy has motivated the
research into normal forms or representations of input specifications that
permit efficient synthesis, thus explaining perhaps the efficacy of these
algorithms.
In this paper we go one step beyond this and ask if there exists a normal
form representation that can in fact precisely characterize "efficient"
synthesis. We present a normal form called SAUNF that precisely characterizes
tractable synthesis in the following sense: a specification is polynomial time
synthesizable iff it can be compiled to SAUNF in polynomial time. Additionally,
a specification admits a polynomial-sized functional solution iff there exists
a semantically equivalent polynomial-sized SAUNF representation. SAUNF is
exponentially more succinct than well-established normal forms like BDDs and
DNNFs, used in the context of AI problems, and strictly subsumes other more
recently proposed forms like SynNNF. It enjoys compositional properties that
are similar to those of DNNF. Thus, SAUNF provides the right trade-off in
knowledge representation for Boolean functional synthesis.
| 2104.14098 | 737,909 |
We study the "twisted" Poincar\'e duality of smooth Poisson manifolds, and
show that, if the modular symmetry is semisimple, that is, the modular vector
is diagonalizable, there is a mixed complex associated to the Poisson complex
which, combining with the twisted Poincar\'e duality, gives a
Batalin-Vilkovisky algebra structure on the Poisson cohomology, and a gravity
algebra structure on the negative cyclic Poisson homology. This generalizes the
previous results obtained by Xu et al for unimodular Poisson algebras. We also
show that these two algebraic structures are preserved under Kontsevich's
deformation quantization, and in the case of polynomial algebras they are also
preserved by Koszul duality.
| 2104.14099 | 737,909 |
Inspired by an interesting counterexample to the cosmic no-hair conjecture
found in a supergravity-motivated model recently, we propose a multi-field
extension, in which two scalar fields are allowed to non-minimally couple to
two vector fields, respectively. This model is shown to admit an exact Bianchi
type I power-law solution. Furthermore, stability analysis based on the
dynamical system method is performed to show that this anisotropic solution is
indeed stable and attractive if both scalar fields are canonical. Nevertheless,
if one of the two scalar fields is phantom then the corresponding anisotropic
power-law inflation turns unstable as expected.
| 2104.14100 | 737,909 |
We consider least-squares problems with quadratic regularization and propose
novel sketching-based iterative methods with an adaptive sketch size. The
sketch size can be as small as the effective dimension of the data matrix to
guarantee linear convergence. However, a major difficulty in choosing the
sketch size in terms of the effective dimension lies in the fact that the
latter is usually unknown in practice. Current sketching-based solvers for
regularized least-squares fall short on addressing this issue. Our main
contribution is to propose adaptive versions of standard sketching-based
iterative solvers, namely, the iterative Hessian sketch and the preconditioned
conjugate gradient method, that do not require a priori estimation of the
effective dimension. We propose an adaptive mechanism to control the sketch
size according to the progress made in each step of the iterative solver. If
enough progress is not made, the sketch size increases to improve the
convergence rate. We prove that the adaptive sketch size scales at most in
terms of the effective dimension, and that our adaptive methods are guaranteed
to converge linearly. Consequently, our adaptive methods improve the
state-of-the-art complexity for solving dense, ill-conditioned least-squares
problems. Importantly, we illustrate numerically on several synthetic and real
datasets that our method is extremely efficient and is often significantly
faster than standard least-squares solvers such as a direct factorization based
solver, the conjugate gradient method and its preconditioned variants.
| 2104.14101 | 737,909 |
Recent advances in natural language processing and computer vision have led
to AI models that interpret simple scenes at human levels. Yet, we do not have
a complete understanding of how humans and AI models differ in their
interpretation of more complex scenes. We created a dataset of complex scenes
that contained human behaviors and social interactions. AI and humans had to
describe the scenes with a sentence. We used a quantitative metric of
similarity between scene descriptions of the AI/human and ground truth of five
other human descriptions of each scene. Results show that the machine/human
agreement scene descriptions are much lower than human/human agreement for our
complex scenes. Using an experimental manipulation that occludes different
spatial regions of the scenes, we assessed how machines and humans vary in
utilizing regions of images to understand the scenes. Together, our results are
a first step toward understanding how machines fall short of human visual
reasoning with complex scenes depicting human behaviors.
| 2104.14102 | 737,909 |
This paper summarizes the progress in developing a rugged, low-cost,
automated ground cone robot network capable of traffic delineation at
lane-level precision. A holonomic omnidirectional base with a traffic
delineator was developed to allow flexibility in initialization. RTK GPS was
utilized to reduce minimum position error to 2 centimeters. Due to recent
developments, the cost of the platform is now less than $1,600. To minimize the
effects of GPS-denied environments, wheel encoders and an Extended Kalman
Filter were implemented to maintain lane-level accuracy during operation and a
maximum error of 1.97 meters through 50 meters with little to no GPS signal.
Future work includes increasing the operational speed of the platforms,
incorporating lanelet information for path planning, and cross-platform
estimation.
| 2104.14103 | 737,909 |
Nonlinear models for pattern evolution by ion beam sputtering on a material
surface present an ongoing opportunity for new numerical simulations. A
numerical analysis of the evolution of preexisting patterns is proposed to
investigate surface dynamics, based on a 2D anisotropic damped
Kuramoto-Sivashinsky equation, with periodic boundary conditions. A
finite-difference semi-implicit time splitting scheme is employed on the
discretization of the governing equation. Simulations were conducted with
realistic coefficients related to physical parameters (anisotropies, beam
orientation, diffusion). The stability of the numerical scheme is analyzed with
time step and grid spacing tests for the pattern evolution, and the Method of
Manufactured Solutions has been used to verify the proposed scheme. Ripples and
hexagonal patterns were obtained from a monomodal initial condition for certain
values of the damping coefficient, while spatiotemporal chaos appeared for
lower values. The anisotropy effects on pattern formation were studied, varying
the angle of incidence of the ion beam with respect to the irradiated surface.
Analytical discussions are based on linear and weakly nonlinear analysis.
| 2104.14104 | 737,909 |
How to design silicon-based quantum wires with figure of merit ($ZT$) larger
than three is under hot pursuit due to the advantage of low cost and the
availability of matured fabrication technique. Quantum wires consisting of
finite three dimensional quantum dot (QD) arrays coupled to electrodes are
proposed to realize high efficient thermoelectric devices with optimized power
factors. The transmission coefficient of 3D QD arrays can exhibit 3D, 2D, 1D
and 0D topological distribution functions by tailoring the interdot coupling
strengths. Such topological effects on the thermoelectric properties are
revealed. The 1D topological distribution function shows the maximum power
factor and the best $ZT$ value. We have demonstrated that 3D silicon QD array
nanowires with diameters below $20~nm$ and length $250~nm$ show high potential
to achieve $ZT\ge 3$ near room temperature.
| 2104.14105 | 737,909 |
This paper develops a distributed collaborative localization algorithm based
on an extended kalman filter. This algorithm incorporates Ultra-Wideband (UWB)
measurements for vehicle to vehicle ranging, and shows improvements in
localization accuracy where GPS typically falls short. The algorithm was first
tested in a newly created open-source simulation environment that emulates
various numbers of vehicles and sensors while simultaneously testing multiple
localization algorithms. Predicted error distributions for various algorithms
are quickly producible using the Monte-Carlo method and optimization techniques
within MatLab. The simulation results were validated experimentally in an
outdoor, urban environment. Improvements of localization accuracy over a
typical extended kalman filter ranged from 2.9% to 9.3% over 180 meter test
runs. When GPS was denied, these improvements increased up to 83.3% over a
standard kalman filter. In both simulation and experimentally, the DCL
algorithm was shown to be a good approximation of a full state filter, while
reducing required communication between vehicles. These results are promising
in showing the efficacy of adding UWB ranging sensors to cars for collaborative
and landmark localization, especially in GPS-denied environments. In the
future, additional moving vehicles with additional tags will be tested in other
challenging GPS denied environments.
| 2104.14106 | 737,909 |
Convolution is one of the basic building blocks of CNN architectures. Despite
its common use, standard convolution has two main shortcomings:
Content-agnostic and Computation-heavy. Dynamic filters are content-adaptive,
while further increasing the computational overhead. Depth-wise convolution is
a lightweight variant, but it usually leads to a drop in CNN performance or
requires a larger number of channels. In this work, we propose the Decoupled
Dynamic Filter (DDF) that can simultaneously tackle both of these shortcomings.
Inspired by recent advances in attention, DDF decouples a depth-wise dynamic
filter into spatial and channel dynamic filters. This decomposition
considerably reduces the number of parameters and limits computational costs to
the same level as depth-wise convolution. Meanwhile, we observe a significant
boost in performance when replacing standard convolution with DDF in
classification networks. ResNet50 / 101 get improved by 1.9% and 1.3% on the
top-1 accuracy, while their computational costs are reduced by nearly half.
Experiments on the detection and joint upsampling networks also demonstrate the
superior performance of the DDF upsampling variant (DDF-Up) in comparison with
standard convolution and specialized content-adaptive layers.
| 2104.14107 | 737,909 |
Among the most promising terahertz (THz) radiation devices, gyrotrons can
generate powerful THz-wave radiation in an open resonant structure.
Unfortunately, such an oscillation using high-Q axial mode has been
theoretically and experimentally demonstrated to suffer from strong ohmic
losses. In this paper, a solution to such a challenging problem is to include a
narrow belt of lossy section in the interaction circuit to stably constitute
the traveling wave interaction (high-order-axial-mode, HOAM), and employ a
down-tapered magnetic field to amplify the forward-wave component. A scheme
based on the traveling-wave interaction concept is proposed to strengthen
electron beam-wave interaction efficiency and simultaneously reduce the ohmic
loss in a 1-THz third harmonic gyrotron, which is promising for further
advancement of high-power continuous-wave operation.
| 2104.14108 | 737,909 |
Regional facial image synthesis conditioned on semantic mask has achieved
great success using generative adversarial networks. However, the appearance of
different regions may be inconsistent with each other when conducting regional
image editing. In this paper, we focus on the problem of harmonized regional
style transfer and manipulation for facial images. The proposed approach
supports regional style transfer and manipulation at the same time. A
multi-scale encoder and style mapping networks are proposed in our work. The
encoder is responsible for extracting regional styles of real faces. Style
mapping networks generate styles from random samples for all facial regions. As
the key part of our work, we propose a multi-region style attention module to
adapt the multiple regional style embeddings from a reference image to a target
image for generating harmonious and plausible results. Furthermore, we propose
a new metric "harmony score" and conduct experiments in a challenging setting:
three widely used face datasets are involved and we test the model by
transferring the regional facial appearance between datasets. Images in
different datasets are usually quite different, which makes the inconsistency
between target and reference regions more obvious. Results show that our model
can generate reliable style transfer and multi-modal manipulation results
compared with SOTAs. Furthermore, we show two face editing applications using
the proposed approach.
| 2104.14109 | 737,909 |
What are the necessary and sufficient conditions for a proposition to be
called a requirement? In Requirements Engineering research, a proposition is a
requirement if and only if specific grammatical and/or communication conditions
hold. I offer an alternative, that a proposition is a requirement if and only
if specific contractual, economic, and engineering relationships hold. I
introduce and define the concept of "Requirements Contract" which defines these
conditions. I argue that seeing requirements as propositions governed by
specific types of contracts leads to new and interesting questions for the
field, and relates requirements engineering to such topics as economic
incentives, interest alignment, principal agent problem, and decision-making
with incomplete information.
| 2104.14110 | 737,909 |
We investigate one-dimensional three-body systems composed of two identical
bosons and one imbalanced atom (impurity) with two-body and three-body
zero-range interactions. For the case in the absence of three-body interaction,
we give a complete phase diagram of the number of three-body bound states in
the whole region of mass ratio via the direct calculation of the
Skornyakov-Ter-Martirosyan equations. We demonstrate that other low-lying
three-body bound states emerge when the mass of the impurity particle is not
equal to another two identical particles. We can obtain not only the binding
energies but also the corresponding wave functions. When the mass of impurity
atom is vary large, there are at most three three-body bound states. We then
study the effect of three-body zero-range interaction and unveil that it can
induces one more three-body bound state at a certain region of coupling
strength ratio under a fixed mass ratio.
| 2104.14111 | 737,909 |
The effective photon-quark-antiquark ($\gamma q \overline{q}$) vertex
function is evaluated at finite temperature in presence of an arbitrary
external magnetic field using the 2-flavour gauged Nambu--Jona-Lasinio (NJL)
model in the mean field approximation. The lowest order diagram contributing to
the magnetic form factor and the anomalous magnetic moment (AMM) of the quarks
is calculated at finite temperature and external magnetic field using the
imaginary time formalism of finite temperature field theory and the Schwinger
proper time formalism. The Schwinger propagator including all the Landau levels
with non-zero AMM of the dressed quarks is considered while calculating the
loop diagram. Using sharp as well as smooth three momentum cutoff, we
regularize the UV divergences arising from the vertex function and the
parameters of our model are chosen to reproduce the well known phenomenological
quantities at zero temperature and zero magnetic field, such as pion-decay
constant ($f_\pi$), vacuum quark condensate, vacuum pion mass ($m_\pi$) as well
as the magnetic moments of proton and neutron. We then study the temperature
and magnetic field dependence of the AMM and constituent mass of the quark. We
found that, the AMM as well as the constituent quark mass are large at the
chiral symmetry broken phase in the low temperature region. Around the
pseudo-chiral phase transition they decrease rapidly and at high temperatures
both of them approach vanishingly small values in the symmetry restored phase.
| 2104.14112 | 737,909 |
The goal of this paper is to characterize Gaussian-Process optimization in
the setting where the function domain is large relative to the number of
admissible function evaluations, i.e., where it is impossible to find the
global optimum. We provide upper bounds on the suboptimality (Bayesian simple
regret) of the solution found by optimization strategies that are closely
related to the widely used expected improvement (EI) and upper confidence bound
(UCB) algorithms. These regret bounds illuminate the relationship between the
number of evaluations, the domain size (i.e. cardinality of finite domains /
Lipschitz constant of the covariance function in continuous domains), and the
optimality of the retrieved function value. In particular, they show that even
when the number of evaluations is far too small to find the global optimum, we
can find nontrivial function values (e.g. values that achieve a certain ratio
with the optimal value).
| 2104.14113 | 737,909 |
Academic administrators and funding agencies must predict the publication
productivity of research groups and individuals to assess authors' abilities.
However, such prediction remains an elusive task due to the randomness of
individual research and the diversity of authors' productivity patterns. We
applied two kinds of approaches to this prediction task: deep neural network
learning and model-based approaches. We found that a neural network cannot give
a good long-term prediction for groups, while the model-based approaches cannot
provide short-term predictions for individuals. We proposed a model that
integrates the advantages of both data-driven and model-based approaches, and
the effectiveness of this method was validated by applying it to a high-quality
dblp dataset, demonstrating that the proposed model outperforms the tested
data-driven and model-based approaches.
| 2104.14114 | 737,909 |
Existing blind image quality assessment (BIQA) methods are mostly designed in
a disposable way and cannot evolve with unseen distortions adaptively, which
greatly limits the deployment and application of BIQA models in real-world
scenarios. To address this problem, we propose a novel Lifelong blind Image
Quality Assessment (LIQA) approach, targeting to achieve the lifelong learning
of BIQA. Without accessing to previous training data, our proposed LIQA can not
only learn new distortions, but also mitigate the catastrophic forgetting of
seen distortions. Specifically, we adopt the Split-and-Merge distillation
strategy to train a single-head network that makes task-agnostic predictions.
In the split stage, we first employ a distortion-specific generator to obtain
the pseudo features of each seen distortion. Then, we use an auxiliary
multi-head regression network to generate the predicted quality of each seen
distortion. In the merge stage, we replay the pseudo features paired with
pseudo labels to distill the knowledge of multiple heads, which can build the
final regressed single head. Experimental results demonstrate that the proposed
LIQA method can handle the continuous shifts of different distortion types and
even datasets. More importantly, our LIQA model can achieve stable performance
even if the task sequence is long.
| 2104.14115 | 737,909 |
Since the outbreak of Coronavirus Disease 2019 (COVID-19), most of the
impacted patients have been diagnosed with high fever, dry cough, and soar
throat leading to severe pneumonia. Hence, to date, the diagnosis of COVID-19
from lung imaging is proved to be a major evidence for early diagnosis of the
disease. Although nucleic acid detection using real-time reverse-transcriptase
polymerase chain reaction (rRT-PCR) remains a gold standard for the detection
of COVID-19, the proposed approach focuses on the automated diagnosis and
prognosis of the disease from a non-contrast chest computed tomography (CT)scan
for timely diagnosis and triage of the patient. The prognosis covers the
quantification and assessment of the disease to help hospitals with the
management and planning of crucial resources, such as medical staff,
ventilators and intensive care units (ICUs) capacity. The approach utilises
deep learning techniques for automated quantification of the severity of
COVID-19 disease via measuring the area of multiple rounded ground-glass
opacities (GGO) and consolidations in the periphery (CP) of the lungs and
accumulating them to form a severity score. The severity of the disease can be
correlated with the medicines prescribed during the triage to assess the
effectiveness of the treatment. The proposed approach shows promising results
where the classification model achieved 93% accuracy on hold-out data.
| 2104.14116 | 737,909 |
To achieve the low latency, high throughput, and energy efficiency benefits
of Spiking Neural Networks (SNNs), reducing the memory and compute requirements
when running on a neuromorphic hardware is an important step. Neuromorphic
architecture allows massively parallel computation with variable and local
bit-precisions. However, how different bit-precisions should be allocated to
different layers or connections of the network is not trivial. In this work, we
demonstrate how a layer-wise Hessian trace analysis can measure the sensitivity
of the loss to any perturbation of the layer's weights, and this can be used to
guide the allocation of a layer-specific bit-precision when quantizing an SNN.
In addition, current gradient based methods of SNN training use a complex
neuron model with multiple state variables, which is not ideal for compute and
memory efficiency. To address this challenge, we present a simplified neuron
model that reduces the number of state variables by 4-fold while still being
compatible with gradient based training. We find that the impact on model
accuracy when using a layer-wise bit-precision correlated well with that
layer's Hessian trace. The accuracy of the optimal quantized network only
dropped by 0.2%, yet the network size was reduced by 58%. This reduces memory
usage and allows fixed-point arithmetic with simpler digital circuits to be
used, increasing the overall throughput and energy efficiency.
| 2104.14117 | 737,909 |
Despite the impressive progress achieved in robust grasp detection, robots
are not skilled in sophisticated grasping tasks (e.g. search and grasp a
specific object in clutter). Such tasks involve not only grasping, but
comprehensive perception of the visual world (e.g. the relationship between
objects). Recently, the advanced deep learning techniques provide a promising
way for understanding the high-level visual concepts. It encourages robotic
researchers to explore solutions for such hard and complicated fields. However,
deep learning usually means data-hungry. The lack of data severely limits the
performance of deep-learning-based algorithms. In this paper, we present a new
dataset named \regrad to sustain the modeling of relationships among objects
and grasps. We collect the annotations of object poses, segmentations, grasps,
and relationships in each image for comprehensive perception of grasping. Our
dataset is collected in both forms of 2D images and 3D point clouds. Moreover,
since all the data are generated automatically, users are free to import their
own object models for the generation of as many data as they want. We have
released our dataset and codes. A video that demonstrates the process of data
generation is also available.
| 2104.14118 | 737,909 |
In this paper, we introduce a technique to enhance the computational
efficiency of solution algorithms for high-dimensional discrete
simulation-based optimization problems. The technique is based on innovative
adaptive partitioning strategies that partition the feasible region using
solutions that has already been simulated as well as prior knowledge of the
problem of interesting. We integrate the proposed strategies with the Empirical
Stochastic Branch-and-Bound framework proposed by Xu and Nelson (2013). This
combination leads to a general-purpose discrete simulation-based optimization
algorithm that is both globally convergent and has good small sample
(finite-time) performance. The proposed general-purpose discrete
simulation-based optimization algorithm is validated on a synthetic discrete
simulation-based optimization problem and is then used to address a real-world
car-sharing fleet assignment problem. Experiment results show that the proposed
strategy can increase the algorithm efficiency significantly.
| 2104.14119 | 737,909 |
$J/\psi$ production in p-p ultra-peripheral collisions through the elastic
and inelastic photoproduction processes, where the virtual photons emitted from
the projectile interact with the target, are studied. The comparisions between
the exact treatment results and the ones of equivalent photon approximation are
expressed as $Q^{2}$ (virtuality of photon), $z$ and $p_{T}$ distributions, and
the total cross sections are also estimated. The method developed by Martin and
Ryskin is employed to avoid double counting when the different production
mechanism are considered simultaneously. The numerical results indicate that,
the equivalent photon approximation can be only applied to the coherent or
elastic electromagnetic process, the improper choice of $Q^{2}_{\mathrm{max}}$
and $y_{\mathrm{max}}$ will cause obvious errors. And the exact treatment is
needed to deal accurately with the $J/\psi$ photoproduction.
| 2104.14120 | 737,909 |
One of the difficulties of conversion rate (CVR) prediction is that the
conversions can delay and take place long after the clicks. The delayed
feedback poses a challenge: fresh data are beneficial to continuous training
but may not have complete label information at the time they are ingested into
the training pipeline. To balance model freshness and label certainty, previous
methods set a short waiting window or even do not wait for the conversion
signal. If conversion happens outside the waiting window, this sample will be
duplicated and ingested into the training pipeline with a positive label.
However, these methods have some issues. First, they assume the observed
feature distribution remains the same as the actual distribution. But this
assumption does not hold due to the ingestion of duplicated samples. Second,
the certainty of the conversion action only comes from the positives. But the
positives are scarce as conversions are sparse in commercial systems. These
issues induce bias during the modeling of delayed feedback. In this paper, we
propose DElayed FEedback modeling with Real negatives (DEFER) method to address
these issues. The proposed method ingests real negative samples into the
training pipeline. The ingestion of real negatives ensures the observed feature
distribution is equivalent to the actual distribution, thus reducing the bias.
The ingestion of real negatives also brings more certainty information of the
conversion. To correct the distribution shift, DEFER employs importance
sampling to weigh the loss function. Experimental results on industrial
datasets validate the superiority of DEFER. DEFER have been deployed in the
display advertising system of Alibaba, obtaining over 6.0% improvement on CVR
in several scenarios. The code and data in this paper are now open-sourced
{https://github.com/gusuperstar/defer.git}.
| 2104.14121 | 737,909 |
This study investigates the structure of Arf rings. From the perspective of
ring extensions, a decomposition of integrally closed ideals is given. Using
this, we present a kind of their prime ideal decomposition in Arf rings, and
determine their structure in the case where both R and the integral closure of
R are local rings.
| 2104.14122 | 737,909 |
We describe a method to select the nodes in Graph datasets for training so
that the model trained on the points selected will be be better than the ones
if we select other points for the purpose of training. This is a very important
aspect as the process of labelling the points is often a costly affair. The
usual Active Learning methods are good but the penalty involved with these
methods is that, we need to re-train the model after selecting the nodes in
each iteration of Active Learning cycle. We come up with a method which use the
concept of Graph Centrality to select the nodes for labeling and training
initially and the training is needed to perform only once. We have tested this
idea on three graph datasets - Cora, Citeseer and Pubmed- and the results are
really encouraging.
| 2104.14123 | 737,909 |
"Lightweight convolutional neural networks" is an important research topic in
the field of embedded vision. To implement image recognition tasks on a
resource-limited hardware platform, it is necessary to reduce the memory size
and the computational cost. The contribution of this paper is stated as
follows. First, we propose an algorithm to process a specific network
architecture (Condensation-Net) without increasing the maximum memory storage
for feature maps. The architecture for virtual feature maps saves 26.5% of
memory bandwidth by calculating the results of cross-channel pooling before
storing the feature map into the memory. Second, we show that cross-channel
pooling can improve the accuracy of object detection tasks, such as face
detection, because it increases the number of filter weights. Compared with
Tiny-YOLOv2, the improvement of accuracy is 2.0% for quantized networks and
1.5% for full-precision networks when the false-positive rate is 0.1. Last but
not the least, the analysis results show that the overhead to support the
cross-channel pooling with the proposed hardware architecture is negligible
small. The extra memory cost to support Condensation-Net is 0.2% of the total
size, and the extra gate count is only 1.0% of the total size.
| 2104.14124 | 737,909 |
In order to handle modern convolutional neural networks (CNNs) efficiently, a
hardware architecture of CNN inference accelerator is proposed to handle
depthwise convolutions and regular convolutions, which are both essential
building blocks for embedded-computer-vision algorithms. Different from related
works, the proposed architecture can support filter kernels with different
sizes with high flexibility since it does not require extra costs for
intra-kernel parallelism, and it can generate convolution results faster than
the architecture of the related works. The experimental results show the
importance of supporting depthwise convolutions and dilated convolutions with
the proposed hardware architecture. In addition to depthwise convolutions with
large-kernels, a new structure called DDC layer, which includes the combination
of depthwise convolutions and dilated convolutions, is also analyzed in this
paper. For face detection, the computational costs decrease by 30%, and the
model size decreases by 20% when the DDC layers are applied to the network. For
image classification, the accuracy is increased by 1% by simply replacing $3
\times 3$ filters with $5 \times 5$ filters in depthwise convolutions.
| 2104.14125 | 737,909 |
The field of view (FOV) of convolutional neural networks is highly related to
the accuracy of inference. Dilated convolutions are known as an effective
solution to the problems which require large FOVs. However, for general-purpose
hardware or dedicated hardware, it usually takes extra time to handle dilated
convolutions compared with standard convolutions. In this paper, we propose a
network module, Cascaded and Separable Structure of Dilated (CASSOD)
Convolution, and a special hardware system to handle the CASSOD networks
efficiently. A CASSOD-Net includes multiple cascaded $2 \times 2$ dilated
filters, which can be used to replace the traditional $3 \times 3$ dilated
filters without decreasing the accuracy of inference. Two example applications,
face detection and image segmentation, are tested with dilated convolutions and
the proposed CASSOD modules. The new network for face detection achieves higher
accuracy than the previous work with only 47% of filter weights in the dilated
convolution layers of the context module. Moreover, the proposed hardware
system can accelerate the computations of dilated convolutions, and it is 2.78
times faster than traditional hardware systems when the filter size is $3
\times 3$.
| 2104.14126 | 737,909 |
It has recently been demonstrated that various topological states, including
Dirac, Weyl, nodal-line, and triple-point semimetal phases, can emerge in
antiferromagnetic (AFM) half-Heusler compounds. However, how to determine the
AFM structure and to distinguish different topological phases from transport
behaviors remains unknown. We show that, due to the presence of combined
time-reversal and fractional translation symmetry, the recently proposed
second-order nonlinear Hall effect can be used to characterize different
topological phases with various AFM configurations. Guided by the symmetry
analysis, we obtain the expressions of the Berry curvature dipole for different
AFM configurations. Based on the effective model, we explicitly calculate the
Berry curvature dipole, which is found to be vanishingly small for the
triple-point semimetal phase, and large in the Weyl semimetal phase. Our
results not only put forward an effective method for the identification of
magnetic orders and topological phases in AFM half-Heusler materials, but also
suggest these materials as a versatile platform for engineering the non-linear
Hall effect.
| 2104.14127 | 737,909 |
Timoshenko's theory for bending vibrations of a beam has been extensively
studied since its development nearly one hundred years ago. Unfortunately there
are not many analytical results. The results above the critical frequency
inclusive haeve been tested only recently. Here an analytical expression for
the solutions of the Timoshenko equation for free-free boundary conditions,
below the critical frequency, is obtained. The analytical results are compared
with recent experimental results reported of aluminum and brass beams The
agreement is excellent, with an error of less than 3%, for the aluminum beam
and of 5.5% for the brass beam. Some exact results are also given for
frequencies above the critical frequency.
| 2104.14128 | 737,909 |
The increasing size of neural network models has been critical for
improvements in their accuracy, but device memory is not growing at the same
rate. This creates fundamental challenges for training neural networks within
limited memory environments. In this work, we propose ActNN, a memory-efficient
training framework that stores randomly quantized activations for back
propagation. We prove the convergence of ActNN for general network
architectures, and we characterize the impact of quantization on the
convergence via an exact expression for the gradient variance. Using our
theory, we propose novel mixed-precision quantization strategies that exploit
the activation's heterogeneity across feature dimensions, samples, and layers.
These techniques can be readily applied to existing dynamic graph frameworks,
such as PyTorch, simply by substituting the layers. We evaluate ActNN on
mainstream computer vision models for classification, detection, and
segmentation tasks. On all these tasks, ActNN compresses the activation to 2
bits on average, with negligible accuracy loss. ActNN reduces the memory
footprint of the activation by 12x, and it enables training with a 6.6x to 14x
larger batch size.
| 2104.14129 | 737,909 |
Recent years, analysis dictionary learning (ADL) and its applications for
classification have been well developed, due to its flexible projective ability
and low classification complexity. With the learned analysis dictionary, test
samples can be transformed into a sparse subspace for classification
efficiently. However, the underling locality of sample data has rarely been
explored in analysis dictionary to enhance the discriminative capability of the
classifier. In this paper, we propose a novel locality constrained analysis
dictionary learning model with a synthesis K-SVD algorithm (SK-LADL). It
considers the intrinsic geometric properties by imposing graph regularization
to uncover the geometric structure for the image data. Through the learned
analysis dictionary, we transform the image to a new and compact space where
the manifold assumption can be further guaranteed. thus, the local geometrical
structure of images can be preserved in sparse representation coefficients.
Moreover, the SK-LADL model is iteratively solved by the synthesis K-SVD and
gradient technique. Experimental results on image classification validate the
performance superiority of our SK-LADL model.
| 2104.14130 | 737,909 |
Event perception tasks such as recognizing and localizing actions in
streaming videos are essential for tackling visual understanding tasks.
Progress has primarily been driven by the use of large-scale, annotated
training data in a supervised manner. In this work, we tackle the problem of
learning \textit{actor-centered} representations through the notion of
continual hierarchical predictive learning to localize actions in streaming
videos without any training annotations. Inspired by cognitive theories of
event perception, we propose a novel, self-supervised framework driven by the
notion of hierarchical predictive learning to construct actor-centered features
by attention-based contextualization. Extensive experiments on three benchmark
datasets show that the approach can learn robust representations for localizing
actions using only one epoch of training, i.e., we train the model continually
in streaming fashion - one frame at a time, with a single pass through training
videos. We show that the proposed approach outperforms unsupervised and weakly
supervised baselines while offering competitive performance to fully supervised
approaches. Finally, we show that the proposed model can generalize to
out-of-domain data without significant loss in performance without any
finetuning for both the recognition and localization tasks.
| 2104.14131 | 737,909 |
Neural Architecture Search (NAS) is a popular method for automatically
designing optimized architectures for high-performance deep learning. In this
approach, it is common to use bilevel optimization where one optimizes the
model weights over the training data (lower-level problem) and various
hyperparameters such as the configuration of the architecture over the
validation data (upper-level problem). This paper explores the statistical
aspects of such problems with train-validation splits. In practice, the
lower-level problem is often overparameterized and can easily achieve zero
loss. Thus, a-priori it seems impossible to distinguish the right
hyperparameters based on training loss alone which motivates a better
understanding of the role of train-validation split. To this aim this work
establishes the following results. (1) We show that refined properties of the
validation loss such as risk and hyper-gradients are indicative of those of the
true test loss. This reveals that the upper-level problem helps select the most
generalizable model and prevent overfitting with a near-minimal validation
sample size. Importantly, this is established for continuous spaces -- which
are highly relevant for popular differentiable search schemes. (2) We establish
generalization bounds for NAS problems with an emphasis on an activation search
problem. When optimized with gradient-descent, we show that the
train-validation procedure returns the best (model, architecture) pair even if
all architectures can perfectly fit the training data to achieve zero error.
(3) Finally, we highlight rigorous connections between NAS, multiple kernel
learning, and low-rank matrix learning. The latter leads to novel algorithmic
insights where the solution of the upper problem can be accurately learned via
efficient spectral methods to achieve near-minimal risk.
| 2104.14132 | 737,909 |
Recently, much progress in natural language processing has been driven by
deep contextualized representations pretrained on large corpora. Typically, the
fine-tuning on these pretrained models for a specific downstream task is based
on single-view learning, which is however inadequate as a sentence can be
interpreted differently from different perspectives. Therefore, in this work,
we propose a text-to-text multi-view learning framework by incorporating an
additional view -- the text generation view -- into a typical single-view
passage ranking model. Empirically, the proposed approach is of help to the
ranking performance compared to its single-view counterpart. Ablation studies
are also reported in the paper.
| 2104.14133 | 737,909 |
Linear time-varying (LTV) systems are widely used for modeling real-world
dynamical systems due to their generality and simplicity. Providing stability
guarantees for LTV systems is one of the central problems in control theory.
However, existing approaches that guarantee stability typically lead to
significantly sub-optimal cumulative control cost in online settings where only
current or short-term system information is available. In this work, we propose
an efficient online control algorithm, COvariance Constrained Online Linear
Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a
large class of LTV systems while also minimizing the control cost. The proposed
method incorporates a state covariance constraint into the semi-definite
programming (SDP) formulation of the LQ optimal controller. We empirically
demonstrate the performance of COCO-LQ in both synthetic experiments and a
power system frequency control example.
| 2104.14134 | 737,909 |
Weakly supervised temporal action localization aims to detect and localize
actions in untrimmed videos with only video-level labels during training.
However, without frame-level annotations, it is challenging to achieve
localization completeness and relieve background interference. In this paper,
we present an Action Unit Memory Network (AUMN) for weakly supervised temporal
action localization, which can mitigate the above two challenges by learning an
action unit memory bank. In the proposed AUMN, two attention modules are
designed to update the memory bank adaptively and learn action units specific
classifiers. Furthermore, three effective mechanisms (diversity, homogeneity
and sparsity) are designed to guide the updating of the memory network. To the
best of our knowledge, this is the first work to explicitly model the action
units with a memory network. Extensive experimental results on two standard
benchmarks (THUMOS14 and ActivityNet) demonstrate that our AUMN performs
favorably against state-of-the-art methods. Specifically, the average mAP of
IoU thresholds from 0.1 to 0.5 on the THUMOS14 dataset is significantly
improved from 47.0% to 52.1%.
| 2104.14135 | 737,909 |
In this paper, we study the two phase flow problem with surface tension in
the ideal incompressible magnetohydrodynamics. We first prove the local
well-posedness of the two phase flow problem with surface tension, then
demonstrate that as surface tension tends to zero, the solution of the two
phase flow problem with surface tension converges to the solution of the two
phase flow problem without surface tension.
| 2104.14136 | 737,909 |
We study bilinear rough singular integral operators $\mathcal{L}_{\Omega}$
associated with a function $\Omega$ on the sphere $\mathbb{S}^{2n-1}$.
In the recent work of Grafakos, He, and Slav\'ikov\'a (Math. Ann. 376:
431-455, 2020), they showed that $\mathcal{L}_{\Omega}$ is bounded from
$L^2\times L^2$ to $L^1$, provided that $\Omega\in L^q(\mathbb{S}^{2n-1})$ for
$4/3<q\le \infty$ with mean value zero. We generalize their result to the
boundedness for all Banach points. We actually prove $L^{p_1}\times L^{p_2}\to
L^p$ estimates for $\mathcal{L}_{\Omega}$ under the assumption
$$\Omega\in L^q(\mathbb{S}^{2n-1}) \quad \text{ for
}~\max{\Big(\;\frac{4}{3}\;,\; \frac{p}{2p-1} \;\Big)<q\le \infty}$$ where
$1<p_1,p_2<\infty$ with $1/p=1/p_1+1/p_2$. Our result improves that of
Grafakos, He, and Honz\'ik (Adv. Math. 326: 54-78, 2018), in which the more
restrictive condition $\Omega\in L^{\infty}(\mathbb{S}^{2n-1})$ is required for
the $L^{p_1}\times L^{p_2}\to L^p$ boundedness.
| 2104.14137 | 737,909 |
In this paper we consider reinforcement learning tasks with progressive
rewards; that is, tasks where the rewards tend to increase in magnitude over
time. We hypothesise that this property may be problematic for value-based deep
reinforcement learning agents, particularly if the agent must first succeed in
relatively unrewarding regions of the task in order to reach more rewarding
regions. To address this issue, we propose Spectral DQN, which decomposes the
reward into frequencies such that the high frequencies only activate when large
rewards are found. This allows the training loss to be balanced so that it
gives more even weighting across small and large reward regions. In two domains
with extreme reward progressivity, where standard value-based methods struggle
significantly, Spectral DQN is able to make much farther progress. Moreover,
when evaluated on a set of six standard Atari games that do not overtly favour
the approach, Spectral DQN remains more than competitive: While it
underperforms one of the benchmarks in a single game, it comfortably surpasses
the benchmarks in three games. These results demonstrate that the approach is
not overfit to its target problem, and suggest that Spectral DQN may have
advantages beyond addressing reward progressivity.
| 2104.14138 | 737,909 |
Baryon number is an accidental symmetry in the standard model, while
Peccei-Quinn symmetry is hypothetical symmetry which is introduced to solve the
strong CP problem. We study the possible connections between Peccei-Quinn
symmetry and baryon number symmetry. In this framework, an axion is identified
as the Nambu-Goldstone boson of baryon number violation. As a result,
characteristic baryon number violating processes are predicted. We developed
the general method to determine the baryon number and lepton number of new
scalar in the axion model.
| 2104.14139 | 737,909 |
One of the fundamental properties of an intermediate polar is the dynamical
nature of the accretion flow as it encounters the white dwarf's magnetosphere.
Many works have presumed a dichotomy between disk-fed accretion, in which the
WD accretes from a Keplerian disk, and stream-fed accretion, in which the
matter stream from the donor star directly impacts the WD's magnetosphere
without forming a disk. However, there is also a third, poorly understood
regime in which the accretion flow consists of a torus of diamagnetic blobs
that encircles the WD. This mode of accretion is expected to exist at
mass-transfer rates below those observed during disk-fed accretion, but above
those observed during pure stream-fed accretion. We invoke the diamagnetic-blob
regime to explain the exceptional TESS light curve of the intermediate polar TX
Col, which transitioned into and out of states of enhanced accretion during
Cycles 1 and 3. Power-spectral analysis reveals that the accretion was
principally stream-fed. However, when the mass-transfer rate spiked,
large-amplitude quasi-periodic oscillations (QPOs) abruptly appeared and
dominated the light curve for weeks. The QPOs have two striking properties:
they appear in a stream-fed geometry at elevated accretion rates, and they
occur preferentially within a well-defined range of frequencies (~10-25 cycles
per day). We propose that during episodes of enhanced accretion, a torus of
diamagnetic blobs forms near the binary's circularization radius and that the
QPOs are beats between the white dwarf's spin frequency and unstable blob
orbits within the WD's magnetosphere. We discuss how such a torus could be a
critical step in producing an accretion disk in a formerly diskless system.
| 2104.14140 | 737,909 |
In this paper we analyze $(-1)$- curves in the blown up projective space
$\mathbb{P}^r$ in general points. The notion of $(-1)$-curves was analyzed in
the early days of mirror symmetry by Kontsevich with the motivation of counting
curves on a Calabi-Yau. In dimension two, Nagata studied planar $(-1)$-curves
in order to construct counterexample to Hilbert's 14th problem.
In this paper, we first analyze the Coxeter systems and the Weyl group action
on curves, to identify infinitely many $(-1)$-curves on the blown-up
$\mathbb{P}^r$ when the number of points is at least $r+5$. We introduce a
bilinear form on a space of curves, that extends the intersection product in
dimension $2$, and a unique symmetric Weyl-invariant class (that we will refer
to as the anticanonical curve class). For Mori Dream Spaces we prove that
$(-1)$-curves can be defined arithmetically by the linear and quadratic
invariants determined by the bilinear form, while the set of $(0)$- and
$(1)$-Weyl lines determine the cone of effective divisors in blown-up
$\mathbb{P}^r$ with $r+3$ points.
| 2104.14141 | 737,909 |
Topological photonics and its topological edge state which can suppress
scattering and immune defects set off a research boom. Recently, the quantum
valley Hall effect (QVHE) with large valley Chern number and its multimode
topological transmission have been realized, which greatly improve the mode
density of the topological waveguide and its coupling efficiency with other
photonic devices. The multifrequency QVHE and its topological transmission have
been realized to increase the transmission capacity of topological waveguide,
but multifrequency and multimode QVHE have not been realized simultaneously. In
this Letter, the valley photonic crystal (VPC) is constructed with the
Stampfli-triangle photonic crystal (STPC), and its degeneracies in the
low-frequency and high-frequency bands are broken simultaneously to realize the
multifrequency and multimode QVHE. The multifrequency and multimode topological
transmission is realized through the U-shaped waveguide constructed with two
VPCs with opposite valley Chern numbers. According to the bulk-edge
correspondence principle, the Chern number is equal to the number of
topological edge states or topological waveguide modes. Therefore, we can
determine the valley Chern number of the VPC by the number of topological edge
states or topological waveguide modes, further determine the realization of
large valley Chern number. These results provide new ideas for high-efficiency
and high-capacity optical transmission and communication devices and their
integration, and broaden the application range of topological edge states.
| 2104.14142 | 737,909 |
In this paper, starting with an arbitrary graph $G$, we give a construction
of a graph $[G]$ with Cohen-Macaulay binomial edge ideal. We have extended this
construction for clutters also. We also discuss unmixed and Cohen-Macaulay
properties of binomial edge ideals of subgraphs.
| 2104.14143 | 737,909 |
A new analytical framework consisting of two phenomena: single sample and
multiple samples, is proposed to deal with the identification problem of
Boolean control networks (BCNs) systematically and comprehensively. Under this
framework, the existing works on identification can be categorized as special
cases of these two phenomena. Several effective criteria for determining the
identifiability and the corresponding identification algorithms are proposed.
Three important results are derived: (1) If a BN is observable, it is uniquely
identifiable; (2) If a BCN is O1-observable, it is uniquely identifiable, where
O1-observability is the most general form of the existing observability terms;
(3) A BN or BCN may be identifiable, but not observable. In addition, remarks
present some challenging future research and contain a preliminary attempt
about how to identify unobservable systems.
| 2104.14144 | 737,909 |
Primary mechanism governed the emerge of near-room-temperature
superconductivity in superhydrides is widely accepted to be the electron-phonon
interaction. If so, the temperature dependent resistance, R(T), in these
materials should be obey the Bloch-Gr\"uneisen equation, where the power-law
exponent, p, should be equal to exact integer value of p=5. From other hand,
there is a well-established theoretical result that the pure electron-magnon
interaction should be manifested by p=3, and p=2 is the value for pure
electron-electron interaction. Here we analysed the temperature dependent
resistance, R(T), for high-entropy alloy (ScZrNb)0.65[RhPd]0.35 and
highly-compressed boron and superhydrides H3S, LaHx, PrH9 and BaH12. In result,
we showed that high-entropy alloy (ScZrNb)0.65[RhPd]0.35 exhibited pure
electron-phonon mediated superconductor with p = 4.9. Unexpectedly we revealed
that all studied superhydrides exhibit 1.8 < p < 3.2. This implies that it is
unlikely that the electron-phonon interaction is the primary mechanism for the
Cooper pairs formation in highly-compressed superhydrides and alternative
pairing mechanisms, for instance, the electron-magnon, the electron-polaron,
the electron-electron or other, should be considered as the origin for the
emerge of near-room-temperature superconductivity in these compounds.
| 2104.14145 | 737,909 |
The question whether a partition $\mathcal{P}$ and a hierarchy $\mathcal{H}$
or a tree-like split system $\mathfrak{S}$ are compatible naturally arises in a
wide range of classification problems. In the setting of phylogenetic trees,
one asks whether the sets of $\mathcal{P}$ coincide with leaf sets of connected
components obtained by deleting some edges from the tree $T$ that represents
$\mathcal{H}$ or $\mathfrak{S}$, respectively. More generally, we ask whether a
refinement $T^*$ of $T$ exists such that $T^*$ and $\mathcal{P}$ are
compatible. We report several characterizations for (refinements of)
hierarchies and split systems that are compatible with (sets of) partitions. In
addition, we provide a linear-time algorithm to check whether refinements of
trees and a given partition are compatible. The latter problem becomes
NP-complete but fixed-parameter tractable if a set of partitions is considered
instead of a single partition. We finally explore the close relationship of the
concept of compatibility and so-called Fitch maps.
| 2104.14146 | 737,909 |
Many quantum materials of interest, ex., bilayer graphene, possess a number
of closely spaced but not fully degenerate bands near the Fermi level, where
the coupling to the far detuned remote bands can induce Berry curvatures of the
non-Abelian character in this active multiple-band manifold for transport
effects. Under finite electric fields, non-adiabatic interband transition
processes are expected to play significant roles in the associated Hall
conduction. Here through an exemplified study on the valley Hall conduction in
AB-stacked bilayer graphene, we show that the contribution arising from
non-adiabatic transitions around the bands near the Fermi energy to the Hall
current is not only quantitatively about an order-of-magnitude larger than the
contribution due to adiabatic inter-manifold transition with the non-Abelian
Berry curvatures. Due to the trigonal warping, the former also displays an
anisotropic response to the orientation of the applied electric field that is
qualitatively distinct from that of the latter. We further show that these
anisotropic responses also reveal the essential differences between the
diagonal and off-diagonal elements of the non-Abelian Berry curvature matrix in
terms of their contributions to the Hall currents. We provide a physically
intuitive understanding of the origin of distinct anisotropic features from
different Hall current contributions, in terms of band occupations and
interband coherence. This then points to the generalization beyond the specific
example of bilayer graphenes.
| 2104.14147 | 737,909 |
A line-of-sight towards the Galactic Center (GC) offers the largest number of
potentially habitable systems of any direction in the sky. The Breakthrough
Listen program is undertaking the most sensitive and deepest targeted SETI
surveys towards the GC. Here, we outline our observing strategies with Robert
C. Byrd Green Bank Telescope (GBT) and Parkes telescope to conduct 600 hours of
deep observations across 0.7--93 GHz. We report preliminary results from our
survey for ETI beacons across 1--8 GHz with 7.0 and 11.2 hours of observations
with Parkes and GBT, respectively. With our narrowband drifting signal search,
we were able to place meaningful constraints on ETI transmitters across 1--4
GHz and 3.9--8 GHz with EIRP limits of $\geq$4$\times$10$^{18}$ W among 60
million stars and $\geq$5$\times$10$^{17}$ W among half a million stars,
respectively. For the first time, we were able to constrain the existence of
artificially dispersed transient signals across 3.9--8 GHz with EIRP
$\geq$1$\times$10$^{14}$ W/Hz with a repetition period $\leq$4.3 hours. We also
searched our 11.2 hours of deep observations of the GC and its surrounding
region for Fast Radio Burst-like magnetars with the DM up to 5000 pc cm$^{-3}$
with maximum pulse widths up to 90 ms at 6 GHz. We detected several hundred
transient bursts from SGR J1745$-$2900, but did not detect any new transient
burst with the peak luminosity limit across our observed band of
$\geq$10$^{31}$ erg s$^{-1}$ and burst-rate of $\geq$0.23 burst-hr$^{-1}$.
These limits are comparable to bright transient emission seen from other
Galactic radio-loud magnetars, constraining their presence at the GC.
| 2104.14148 | 737,909 |
In the paper we study algebraic property of the monoid
$\mathbf{I}\mathbb{N}_{\infty}^{\textbf{g}[j]}$ of cofinite partial isometries
of the set of positive integers $\mathbb{N}$ with the bounded finite noise $j$.
We extend the Eberhart and Selden results for the closure of the bicyclic
monoid onto $\mathbf{I}\mathbb{N}_{\infty}^{\textbf{g}[j]}$ for any positive
integer $j$. In particular we show that for any positive integer $j$ every
Hausdorff shift-continuous topology $\tau$ on
$\mathbf{I}\mathbb{N}_{\infty}^{\textbf{g}[j]}$ is discrete and if
$\mathbf{I}\mathbb{N}_{\infty}^{\textbf{\emph{g}}[j]}$ is a proper dense
subsemigroup of a Hausdorff semitopological semigroup $S$, then $S\setminus
\mathbf{I}\mathbb{N}_{\infty}^{\textbf{\emph{g}}[j]}$ is a closed ideal of $S$,
and moreover if $S$ is a topological inverse semigroup then $S\setminus
\mathbf{I}\mathbb{N}_{\infty}^{\textbf{\emph{g}}[j]}$ is a topological group.
Also we describe the algebraic and topological structure of the closure of the
monoid $\mathbf{I}\mathbb{N}_{\infty}^{\textbf{\emph{g}}[j]}$ in a locally
compact topological inverse semigroup.
| 2104.14149 | 737,909 |
Extracting patterns and useful information from Natural Language datasets is
a challenging task, especially when dealing with data written in a language
different from English, like Italian. Machine and Deep Learning, together with
Natural Language Processing (NLP) techniques have widely spread and improved
lately, providing a plethora of useful methods to address both Supervised and
Unsupervised problems on textual information. We propose RECKONition, a
NLP-based system for Industrial Accidents at Work Prevention. RECKONition,
which is meant to provide Natural Language Understanding, Clustering and
Inference, is the result of a joint partnership with the Italian National
Institute for Insurance against Accidents at Work (INAIL). The obtained results
showed the ability to process textual data written in Italian describing
industrial accidents dynamics and consequences.
| 2104.14150 | 737,909 |
The main goal of this paper is to determine the asymptotic behavior of the
number $X_n$ of cut-vertices in random planar maps with $n$ edges. It is shown
that $X_n/n \to c$ in probability (for some explicit $c>0$). For so-called
subcritical classes of planar maps (like outerplanar maps) we obtain a central
limit theorem, too. Interestingly the combinatorics behind this seemingly
simple problem is quite involved.
| 2104.14151 | 737,909 |
We study the ground state properties of the doped Hubbard model with strong
interactions on honeycomb lattice by the Density Matrix Renormalization Group
(DMRG) method. At half-filling, due to the absence of minus sign problem, it is
now well established by large-scale Quantum Monte Carlo calculations that a
Dirac semi-metal to anti-ferromagnetic Mott insulator transition occurs with
the increase of the interaction strength $U$ for the Hubbard model on honeycomb
lattice. However, an understanding of the fate of the anti-ferromagnetic Mott
insulator when holes are doped into the system is still lacking. In this work,
by calculating the local spin and charge density for width-4 cylinders with
DMRG, we discover a half-filled stripe order in the doped Hubbard model on
honeycomb lattice. We also perform complementary large-scale mean-field
calculations with renormalized interaction strength. We observe half-filled
stripe order and find stripe states with filling close to one half are nearly
degenerate in energy.
| 2104.14152 | 737,909 |
In this article we present a novel discrete-time design approach which
reduces the deteriorating effects of sampling on stability and performance in
digitally controlled nonlinear mechanical systems. The method is motivated by
recent results for linear systems, where feedback imposes closed-loop behavior
that exactly represents the symplectic discretization of a desired target
system. In the nonlinear case, both the second order accurate representation of
the sampling process and the definition of the target dynamics stem from the
application of the implicit midpoint rule. The implicit nature of the resulting
state feedback requires the numerical solution of an in general nonlinear
system of algebraic equations in every sampling interval. For an implementation
with pure position feedback, the velocities/momenta have to be approximated in
the sampling instants, which gives a clear interpretation of our approach in
terms of the St\"ormer-Verlet integration scheme on a staggered grid. We
present discrete-time versions of impedance or energy shaping plus damping
injection control as well as computed torque tracking control. Both the
Hamiltonian and the Lagrangian perspective are adopted. Besides a linear
example to introduce the concept, the simulations with a planar two link robot
model illustrate the performance and stability gain compared to the discrete
implementations of continuous-time control laws. A short analysis of
computation times shows the real-time capability of our method.
| 2104.14153 | 737,909 |
Stellar atmospheric parameters (effective temperature, luminosity
classifications, and metallicity) estimates for some 24 million stars
(including over 19 million dwarfs and 5 million giants) are determined from the
stellar colors of SMSS DR2 and Gaia EDR3, based on training datasets with
available spectroscopic measurements from previous high/medium/low-resolution
spectroscopic surveys. The number of stars with photometric-metallicity
estimates is 4--5 times larger than that collected by the current largest
spectroscopic survey to date -- LAMOST -- over the course of the past decade.
External checks indicate that the precision of the photometric-metallicity
estimates are quite high, comparable to or slightly better than that derived
from spectroscopy, with typical values around 0.05--0.10 dex for [Fe/H] $>
-1.0$, 0.10--0.20 dex for $-2.0 <$ [Fe/H]$ \le -1.0$ and 0.20--0.25dex for
[Fe/H] $\le -2.0$, and include estimates for stars as metal-poor as [Fe/H]
$\sim -3.5$, substantially lower than previous photometric techniques.
Photometric-metallicity estimates are obtained for an unprecedented number of
metal-poor stars, including a total of over three million metal-poor (MP;
[Fe/H] $\le -1.0$) stars, over half a million very metal-poor (VMP; [Fe/H] $\le
-2.0)$ stars, and over 25,000 extremely metal-poor (EMP; [Fe/H] $\le -3.0$)
stars. From either parallax measurements from Gaia EDR3 or
metallicity-dependent color-absolute magnitude fiducials, distances are
determined for over 20 million stars in our sample. For the over 18 million
sample stars with accurate absolute magnitude estimates from Gaia parallaxes,
stellar ages are estimated by comparing with theoretical isochrones.
Astrometric information is provided for the stars in our catalog, along with
radial velocities for ~10% of our sample stars, taken from completed or ongoing
large-scale spectroscopic surveys.
| 2104.14154 | 737,909 |
The architecture of a coarse-grained reconfigurable array (CGRA) processing
element (PE) has a significant effect on the performance and energy efficiency
of an application running on the CGRA. This paper presents an automated
approach for generating specialized PE architectures for an application or an
application domain. Frequent subgraphs mined from a set of applications are
merged to form a PE architecture specialized to that application domain. For
the image processing and machine learning domains, we generate specialized PEs
that are up to 10.5x more energy efficient and consume 9.1x less area than a
baseline PE.
| 2104.14155 | 737,909 |
Highly automated driving functions currently often rely on a-priori knowledge
from maps for planning and prediction in complex scenarios like cities. This
makes map-relative localization an essential skill. In this paper, we address
the problem of localization with automotive-grade radars, using a real-time
graph-based SLAM approach. The system uses landmarks and odometry information
as an abstraction layer. This way, besides radars, all kind of different sensor
modalities including cameras and lidars can contribute. A single, semantic
landmark map is used and maintained for all sensors. We implemented our
approach using C++ and thoroughly tested it on data obtained with our test
vehicles, comprising cars and trucks. Test scenarios include inner cities and
industrial areas like container terminals. The experiments presented in this
paper suggest that the approach is able to provide a precise and stable pose in
structured environments, using radar data alone. The fusion of additional
sensor information from cameras or lidars further boost performance, providing
reliable semantic information needed for automated mapping.
| 2104.14156 | 737,909 |
Electromagnetically induced transparency (EIT) cooling has established itself
as one of the most widely used cooling schemes for trapped ions during the past
twenty years. Compared to its alternatives, EIT cooling possesses important
advantages such as a tunable effective linewidth, a very low steady state
phonon occupation, and applicability for multiple ions. However, existing
analytic expression for the steady state phonon occupation of EIT cooling is
limited to the zeroth order of the Lamb-Dicke parameter. Here we extend such
calculations and present the explicit expression to the second order of the
Lamb-Dicke parameter. We discuss several implications of our refined formula
and are able to resolve certain mysteries in existing results.
| 2104.14157 | 737,909 |
The regular (Bardeen)-AdS (BAdS) black hole (BH) in the extended phase space
is taken as an example for investigating the BH phase transition grade from
both macroscopic and microscopic points of view. The equation of state and
thermodynamic quantities of this BH are obtained. It is found that the BAdS BH
phase space in the extended phase space should be a second-order phase
transition near the critical point by verifying the Ehrenfest's equation, and
the possibility of its first-order phase transition can be ruled out by the
entropy continuity and the heat capacity mutation. The critical exponents from
the microscopic structure are analytically and numerically presented with the
Landau continuous phase transition theory by introducing a microscopic
order-parameter.
| 2104.14158 | 737,909 |
Autonomous vehicles face tremendous challenges while interacting with human
drivers in different kinds of scenarios. Developing control methods with safety
guarantees while performing interactions with uncertainty is an ongoing
research goal. In this paper, we present a real-time safe control framework
using bi-level optimization with Control Barrier Function (CBF) that enables an
autonomous ego vehicle to interact with human-driven cars in ramp merging
scenarios with a consistent safety guarantee. In order to explicitly address
motion uncertainty, we propose a novel extension of control barrier functions
to a probabilistic setting with provable chance-constrained safety and analyze
the feasibility of our control design. The formulated bi-level optimization
framework entails first choosing the ego vehicle's optimal driving style in
terms of safety and primary objective, and then minimally modifying a nominal
controller in the context of quadratic programming subject to the probabilistic
safety constraints. This allows for adaptation to different driving strategies
with a formally provable feasibility guarantee for the ego vehicle's safe
controller. Experimental results are provided to demonstrate the effectiveness
of our proposed approach.
| 2104.14159 | 737,909 |
We study the ground state of the doped Hubbard model on the honeycomb lattice
in the small doping and strongly interacting region. The nature of the ground
state by doping holes into the anti-ferromagnetic Mott insulating states on the
honeycomb lattice remains a long-standing unsolved issue, even though
tremendous efforts have been spent to investigate this challenging problem. An
accurate determination of the ground state in this system is crucial to
understand the interplay between the topology of Fermi surface and strong
correlation effect. In this work, we employ two complementary,
state-of-the-art, many-body computational methods -- constrained path (CP)
auxiliary-field quantum Monte Carlo (AFQMC) with self-consistent constraint and
density matrix renormalization group (DMRG) methods. Systematic and detailed
cross-validations are performed between these two methods for narrow systems
where DMRG can produce reliable results. AFQMC are then utilized to study wider
systems to investigate the thermodynamic limit properties. The ground state is
found to be a half-filled stripe state in the small doping and strongly
interacting region. The pairing correlation shows $d$-wave symmetry locally,
but decays exponentially with the distance between two pairs.
| 2104.14160 | 737,909 |
In this paper, channel estimation techniques and phase shift design for
intelligent reflecting surface (IRS)-empowered single-user multiple-input
multiple-output (SU-MIMO) systems are proposed. Among four channel estimation
techniques developed in the paper, the two novel ones, single-path approximated
channel (SPAC) and selective emphasis on rank-one matrices (SEROM), have low
training overhead to enable practical IRS-empowered SU-MIMO systems. SPAC is
mainly based on parameter estimation by approximating IRS-related channels as
dominant single-path channels. SEROM exploits IRS phase shifts as well as
training signals for channel estimation and easily adjusts its training
overhead. A closed-form solution for IRS phase shift design is also developed
to maximize spectral efficiency where the solution only requires basic linear
operations. Numerical results show that SPAC and SEROM combined with the
proposed IRS phase shift design achieve high spectral efficiency even with low
training overhead compared to existing methods.
| 2104.14161 | 737,909 |
We describe a transformation rule for Bergman kernels under a proper
holomorphic mapping which is factored by automorphisms $G$, where the group $G$
is a finite pseudoreflection group or conjugate to a finite pseudoreflection
group. Explicit formulae for Bergman kernels of several domains have been
deduced to demonstrate the adequacy of the described transformation rule.
| 2104.14162 | 737,909 |
Line-graph (LG) lattices are known for having topological flat bands (FBs)
from the destructive interference of Bloch wavefunctions encoded in lattice
symmetry. Here, we develop an atomic/molecular orbital design principle for the
existence of FBs in non-LG lattices. Using a generic tight-binding model, we
demonstrate that the underlying wavefunction symmetry of FBs in a LG lattice
can be transformed into the atomic/molecular orbital symmetry in a non-LG
lattice. We show such orbital-designed topological FBs in three common 2D
non-LG, square, trigonal, and hexagonal lattices, where the chosen orbitals
faithfully reproduce the corresponding localized FB plaquette states of
checkerboard, Kagome, and diatomic-Kagome lattices, respectively. Fundamentally
our theory enriches the FB physics; practically the proposed orbital design
principle is expected to significantly expand the scope of FB materials, since
most materials have multiple atomic/molecular orbitals at each lattice site,
rather than a single s orbital mandated in graph theory.
| 2104.14163 | 737,909 |
The hidden $\mathbb{Z}_2$ symmetry of the asymmetric quantum Rabi model
(AQRM) has recently been revealed via a systematic construction of the
underlying symmetry operator. Based on the AQRM result, we propose an ansatz
for the general form of the symmetry operators for AQRM-related models.
Applying this ansatz we obtain the symmetry operator for three models: the
anisotropic AQRM, the asymmetric Rabi-Stark model (ARSM) and the anisotropic
ARSM.
| 2104.14164 | 737,909 |
The anomalous Hall effect is caused by magnetic textures such as skyrmions.
We derive an analytical formula of the Hall conductivity on the surface of a
topological insulator up to third order in magnetization,
$\boldsymbol{M}(\boldsymbol{x})$, based on a perturbative approach. We identify
the magnetic textures that contribute to the Hall conductivity up to third
order in magnetization and second order in spatial differentiation. We treat
magnetization as a perturbation to calculate the Hall conductivity for each
magnetic texture based on the linear response theory. Furthermore, we estimate
the skyrmion-induced Hall conductivity and confirm that it depends on the shape
of skyrmions, such as Bloch-type or N\'eel-type skyrmions. The results of this
study can be applied not only to conventional skyrmion systems but also to more
general magnetic structures.
| 2104.14165 | 737,909 |
The Cauchy problem for the Hardy-H\'enon parabolic equation is studied in the
critical and subcritical regime in weighted Lebesgue spaces on the Euclidean
space $\mathbb{R}^d$. Well-posedness for singular initial data and existence of
non-radial forward self-similar solution of the problem are previously shown
only for the Hardy and Fujita cases ($\gamma\le 0$) in earlier works. The
weighted spaces enable us to treat the potential $|x|^{\gamma}$ as an increase
or decrease of the weight, thereby we can prove well-posedness to the problem
for all $\gamma$ with $-\min\{2,d\}<\gamma$ including the H\'enon case
($\gamma>0$). As a byproduct of the well-posedness, the self-similar solutions
to the problem are also constructed for all $\gamma$ without restrictions. A
non-existence result of local solution for supercritical data is also shown.
Therefore our critical exponent $s_c$ turns out to be optimal in regards to the
solvability.
| 2104.14166 | 737,909 |
Topological flat band (TFB) has been proposed theoretically in various
lattice models, to exhibit a rich spectrum of intriguing physical behaviors.
However, the experimental demonstration of flat band (FB) properties has been
severely hindered by the lack of materials realization. Here, by screening
materials from a first-principles materials database, we identify a group of 2D
materials with TFBs near the Fermi level, covering most of the known line-graph
and generalized line-graph FB lattice models. These include the Kagome
sublattice of O in TiO2 yielding a spin-unpolarized TFB, and that of V in
ferromagnetic V3F8 yielding a spin-polarized TFB. Monolayer Nb3TeCl7 and its
counterparts from element substitution are found to be breathing-Kagome-lattice
crystals. The family of monolayer III2VI3 compounds exhibit a TFB representing
the coloring-triangle lattice model. ReF3, MnF3 and MnBr3 are all predicted to
be diatomic-Kagome-lattice crystals, with TFB transitions induced by atomic
substitution. Finally, HgF2, CdF2 and ZnF2 are discovered to host dual TFBs in
the diamond-octagon lattice. Our findings pave the way to further experimental
exploration of eluding FB materials and properties.
| 2104.14167 | 737,909 |