title | abstract
---|---
PiCoGen2: Piano cover generation with transfer learning approach and
weakly aligned data | Piano cover generation aims to create a piano cover from a pop song. Existing
approaches mainly employ supervised learning and the training demands
strongly-aligned and paired song-to-piano data, which is built by remapping
piano notes to song audio. This would, however, result in the loss of piano
information and accordingly cause inconsistencies between the original and
remapped piano versions. To overcome this limitation, we propose a transfer
learning approach that pre-trains our model on piano-only data and fine-tunes
it on weakly-aligned paired data constructed without note remapping. During
pre-training, to guide the model to learn piano composition concepts instead of
merely transcribing audio, we use an existing lead sheet transcription model as
the encoder to extract high-level features from the piano recordings. The
pre-trained model is then fine-tuned on the paired song-piano data to transfer
the learned composition knowledge to the pop song domain. Our evaluation shows
that this training strategy enables our model, named PiCoGen2, to attain
high-quality results, outperforming baselines on both objective and subjective
metrics across five pop genres.
|
Single-Pixel Fluorescent Diffraction Tomography | Optical diffraction tomography is an indispensable tool for studying objects
in three-dimensions due to its ability to accurately reconstruct scattering
objects. Until now this technique has been limited to coherent light because
spatial phase information is required to solve the inverse scattering problem.
We introduce a method that extends optical diffraction tomography to imaging
spatially incoherent contrast mechanisms such as fluorescent emission. Our
strategy mimics the coherent scattering process with two spatially coherent
illumination beams. The interferometric illumination pattern encodes spatial
phase in temporal variations of the fluorescent emission, thereby allowing
incoherent fluorescent emission to mimic the behavior of coherent illumination.
The temporal variations permit recovery of the propagation phase, and thus the
spatial distribution of incoherent fluorescent emission can be recovered with
an inverse scattering model.
|
Curvature-Aware Derivative-Free Optimization | The paper discusses derivative-free optimization (DFO), which involves
minimizing a function without access to gradients or directional derivatives,
only function evaluations. Classical DFO methods that mimic gradient-based
methods, such as Nelder-Mead and direct search, have limited scalability for
high-dimensional problems. Zeroth-order methods have been gaining popularity
due to the demands of large-scale machine learning applications, and the paper
focuses on the selection of the step size $\alpha_k$ in these methods. The
proposed approach, called Curvature-Aware Random Search (CARS), uses first- and
second-order finite difference approximations to compute a candidate
$\alpha_{+}$. We prove that for strongly convex objective functions, CARS
converges linearly provided that the search direction is drawn from a
distribution satisfying very mild conditions. We also present a Cubic
Regularized variant of CARS, named CARS-CR, which converges at a rate of
$\mathcal{O}(k^{-1})$ without the assumption of strong convexity. Numerical
experiments show that CARS and CARS-CR match or exceed state-of-the-art methods
on benchmark problem sets.
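
For illustration, here is a minimal sketch of one CARS-style iteration using only function evaluations; the finite-difference step $h$ and the acceptance rule are simplified assumptions, not the paper's exact algorithm:

```python
import numpy as np

def cars_step(f, x, h=1e-4, rng=np.random.default_rng()):
    """One Curvature-Aware Random Search iteration (sketch).
    Uses first- and second-order finite differences along a random direction u
    to form a Newton-like candidate step alpha_plus = d1 / d2."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    f0, fp, fm = f(x), f(x + h * u), f(x - h * u)
    d1 = (fp - fm) / (2 * h)                      # directional derivative estimate
    d2 = (fp - 2 * f0 + fm) / h ** 2              # directional curvature estimate
    if d2 <= 0:                                   # skip step if curvature is uninformative
        return x
    candidate = x - (d1 / d2) * u                 # curvature-aware candidate point
    return candidate if f(candidate) < f0 else x  # accept only on improvement

x = np.ones(10)
for _ in range(200):
    x = cars_step(lambda z: np.sum(z ** 2), x)    # toy strongly convex objective
```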
|
Ad hoc Cloud Computing: From Concept to Realization | This paper presents the first complete, integrated and end-to-end solution
for ad hoc cloud computing environments. Ad hoc clouds harvest resources from
existing sporadically available, non-exclusive (i.e. primarily used for some
other purpose) and unreliable infrastructures. In this paper we discuss the
problems ad hoc cloud computing solves and outline our architecture which is
based on BOINC.
|
Cluster-based Contrastive Disentangling for Generalized Zero-Shot
Learning | Generalized Zero-Shot Learning (GZSL) aims to recognize both seen and unseen
classes while training only on the seen classes, a setting in which instances of unseen
classes tend to be biased towards the seen classes. In this paper, we propose a
Cluster-based Contrastive Disentangling (CCD) method to improve GZSL by
alleviating the semantic gap and domain shift problems. Specifically, we first
cluster the batch data to form several sets containing similar classes. Then,
we disentangle the visual features into semantic-unspecific and
semantic-matched variables, and further disentangle the semantic-matched
variables into class-shared and class-unique variables according to the
clustering results. The disentangled learning module with random swapping and
semantic-visual alignment bridges the semantic gap. Moreover, we introduce
contrastive learning on semantic-matched and class-unique variables to learn
high intra-set and intra-class similarity, as well as inter-set and inter-class
discriminability. Then, the generated visual features conform to the underlying
characteristics of general images and have strong discriminative information,
which alleviates the domain shift problem well. We evaluate our proposed method
on four datasets and achieve state-of-the-art results in both conventional and
generalized settings.
|
Model Predictive Control and Reinforcement Learning: A Unified Framework
Based on Dynamic Programming | In this paper we describe a new conceptual framework that connects
approximate Dynamic Programming (DP), Model Predictive Control (MPC), and
Reinforcement Learning (RL). This framework centers around two algorithms,
which are designed largely independently of each other and operate in synergy
through the powerful mechanism of Newton's method. We call them the off-line
training and the on-line play algorithms. The names are borrowed from some of
the major successes of RL involving games; primary examples are the recent
(2017) AlphaZero program (which plays chess, [SHS17], [SSS17]), and the
similarly structured and earlier (1990s) TD-Gammon program (which plays
backgammon, [Tes94], [Tes95], [TeG96]). In these game contexts, the off-line
training algorithm is the method used to teach the program how to evaluate
positions and to generate good moves at any given position, while the on-line
play algorithm is the method used to play in real time against human or
computer opponents.
Significantly, the synergy between off-line training and on-line play also
underlies MPC (as well as other major classes of sequential decision problems),
and indeed the MPC design architecture is very similar to the one of AlphaZero
and TD-Gammon. This conceptual insight provides a vehicle for bridging the
cultural gap between RL and MPC, and sheds new light on some fundamental issues
in MPC. These include the enhancement of stability properties through rollout,
the treatment of uncertainty through the use of certainty equivalence, the
resilience of MPC in adaptive control settings that involve changing system
parameters, and the insights provided by the superlinear performance bounds
implied by Newton's method.
|
First-Order vs. Second-Order Encodings for LTLf-to-Automata Translation | Translating formulas of Linear Temporal Logic (LTL) over finite traces, or
LTLf, to symbolic Deterministic Finite Automata (DFA) plays an important role
not only in LTLf synthesis, but also in synthesis for Safety LTL formulas. The
translation is enabled by using MONA, a powerful tool for symbolic, BDD-based,
DFA construction from logic specifications. Recent works used a first-order
encoding of LTLf formulas to translate LTLf to First Order Logic (FOL), which
is then fed to MONA to get the symbolic DFA. This encoding was shown to perform
well, but other encodings have not been studied. Specifically, the natural
question of whether second-order encoding, which has significantly simpler
quantificational structure, can outperform first-order encoding remained open.
In this paper we address this challenge and study second-order encodings for
LTLf formulas. We first introduce a specific MSO encoding that captures the
semantics of LTLf in a natural way and prove its correctness. We then explore
a Compact MSO encoding, which benefits from automata-theoretic minimization,
thus suggesting a possible practical advantage. To that end, we propose a
formalization of symbolic DFA in second-order logic, thus developing a novel
connection between BDDs and MSO. We then show by empirical evaluations that the
first-order encoding does perform better than both second-order encodings. The
conclusion is that first-order encoding is a better choice than second-order
encoding in LTLf-to-Automata translation.
|
Deep Learning Approach in Automatic Iceberg - Ship Detection with SAR
Remote Sensing Data | Deep Learning is gaining traction with the geophysics community for understanding
subsurface structures, such as fault or salt body detection in seismic data.
This study describes using a deep learning method for iceberg or ship recognition
with synthetic aperture radar (SAR) data. Drifting icebergs pose a potential
threat to activities offshore around the Arctic, including for both ship
navigation and oil rigs. Advancement of satellite imagery using
weather-independent cross-polarized radar has enabled us to monitor and
delineate icebergs and ships, however a human component is needed to classify
the images. Here we present Transfer Learning, a convolutional neural network
(CNN) designed to work with limited training data and features, and
demonstrate its effectiveness on this problem. A key aspect of the approach is
data augmentation and stacking of multiple outputs, which resulted in a significant
boost in accuracy (logarithmic score of 0.1463). This algorithm has been tested
through participation in the Statoil/C-Core Kaggle competition.
|
Region-Aware Face Swapping | This paper presents a novel Region-Aware Face Swapping (RAFSwap) network to
achieve identity-consistent harmonious high-resolution face generation in a
local-global manner: \textbf{1)} Local Facial Region-Aware (FRA) branch
augments local identity-relevant features by introducing the Transformer to
effectively model misaligned cross-scale semantic interaction. \textbf{2)}
Global Source Feature-Adaptive (SFA) branch further complements global
identity-relevant cues for generating identity-consistent swapped faces.
Besides, we propose a \textit{Face Mask Predictor} (FMP) module incorporated
with StyleGAN2 to predict identity-relevant soft facial masks in an
unsupervised manner that is more practical for generating harmonious
high-resolution faces. Abundant experiments qualitatively and quantitatively
demonstrate the superiority of our method for generating more
identity-consistent high-resolution swapped faces over SOTA methods, \eg,
obtaining 96.70 ID retrieval that outperforms SOTA MegaFS by 5.87$\uparrow$.
|
How wireless queues benefit from motion: an analysis of the continuum
between zero and infinite mobility | This paper considers the time evolution of a queue that is embedded in a
Poisson point process of moving wireless interferers. The queue is driven by an
external arrival process and is subject to a time-varying service process that
is a function of the SINR that it sees. Static configurations of interferers
result in an infinite queue workload with positive probability. In contrast, a
generic stability condition is established for the queue in the case where
interferers possess any non-zero mobility that results in displacements that
are both independent across interferers and oblivious to interferer positions.
The proof leverages the mixing property of the Poisson point process. The
effect of an increase in mobility on queueing metrics is also studied. Convex
ordering tools are used to establish that faster moving interferers result in a
queue workload that is smaller for the increasing-convex stochastic order. As a
corollary, mean workload and mean delay decrease as network mobility increases.
This stochastic ordering as a function of mobility is explained by establishing
positive correlations between SINR level-crossing events at different time
points, and by determining the autocorrelation function for interference and
observing that it decreases with increasing mobility. System behaviour is
empirically analyzed using discrete-event simulation and the performance of
various mobility models is evaluated using heavy-traffic approximations.
|
Minimum Viable Model Estimates for Machine Learning Projects | Prioritization of machine learning projects requires estimates of both the
potential ROI of the business case and the technical difficulty of building a
model with the required characteristics. In this work we present a technique
for estimating the minimum required performance characteristics of a predictive
model given a set of information about how it will be used. This technique will
result in robust, objective comparisons between potential projects. The
resulting estimates will allow data scientists and managers to evaluate whether
a proposed machine learning project is likely to succeed before any modelling
needs to be done.
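
As a toy illustration of the kind of estimate described above (this is a hypothetical back-of-envelope calculation, not the MinViME API), one can derive the minimum precision at which acting on positive predictions breaks even, given an assumed value per true positive and cost per false positive:

```python
# Hypothetical illustration of a minimum-viable-performance estimate.
def min_viable_precision(value_true_positive, cost_false_positive):
    """Smallest precision at which acting on a positive prediction has
    non-negative expected value:
        p * value_tp - (1 - p) * cost_fp >= 0
        =>  p >= cost_fp / (value_tp + cost_fp)
    """
    return cost_false_positive / (value_true_positive + cost_false_positive)

# Example: each caught case is worth $500, each false alarm costs $50 to triage.
print(min_viable_precision(500, 50))  # ~0.091 -> even a low-precision model pays off
```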
The technique has been implemented into the open source application MinViME
(Minimum Viable Model Estimator) which can be installed via the PyPI python
package management system, or downloaded directly from the GitHub repository.
Available at https://github.com/john-hawkins/MinViME
|
Meta Architecture Search | Neural Architecture Search (NAS) has been quite successful in constructing
state-of-the-art models on a variety of tasks. Unfortunately, the computational
cost can make it difficult to scale. In this paper, we make the first attempt
to study Meta Architecture Search which aims at learning a task-agnostic
representation that can be used to speed up the process of architecture search
on a large number of tasks. We propose the Bayesian Meta Architecture SEarch
(BASE) framework which takes advantage of a Bayesian formulation of the
architecture search problem to learn over an entire set of tasks
simultaneously. We show that on Imagenet classification, we can find a model
that achieves 25.7% top-1 error and 8.1% top-5 error by adapting the
architecture in less than an hour from a meta-network pretrained for 8 GPU-days.
By learning a good prior for NAS, our method dramatically decreases the
required computation cost while achieving comparable performance to current
state-of-the-art methods - even finding competitive models for unseen datasets
with very quick adaptation. We believe our framework will open up new
possibilities for efficient and massively scalable architecture search research
across multiple tasks.
|
Denoising LM: Pushing the Limits of Error Correction Models for Speech
Recognition | Language models (LMs) have long been used to improve results of automatic
speech recognition (ASR) systems, but they are unaware of the errors that ASR
systems make. Error correction models are designed to fix ASR errors, however,
they showed little improvement over traditional LMs mainly due to the lack of
supervised training data. In this paper, we present Denoising LM (DLM), which
is a $\textit{scaled}$ error correction model trained with vast amounts of
synthetic data, significantly exceeding prior attempts while achieving new
state-of-the-art ASR performance. We use text-to-speech (TTS) systems to
synthesize audio, which is fed into an ASR system to produce noisy hypotheses,
which are then paired with the original texts to train the DLM. DLM has several
$\textit{key ingredients}$: (i) up-scaled model and data; (ii) usage of
multi-speaker TTS systems; (iii) combination of multiple noise augmentation
strategies; and (iv) new decoding techniques. With a Transformer-CTC ASR, DLM
achieves 1.5% word error rate (WER) on $\textit{test-clean}$ and 3.3% WER on
$\textit{test-other}$ on Librispeech, which to our knowledge are the best
reported numbers in the setting where no external audio data are used and even
match self-supervised methods which use external audio data. Furthermore, a
single DLM is applicable to different ASR systems, greatly surpassing the
performance of conventional LM-based beam-search rescoring. These results
indicate that properly investigated error correction models have the potential
to replace conventional LMs, holding the key to a new level of accuracy in ASR
systems.
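
A minimal sketch of the synthetic training-pair pipeline described above; `tts` and `asr` are hypothetical placeholders for a multi-speaker TTS system and an ASR system, not a real API:

```python
# Sketch of the DLM training-data generation pipeline (placeholders only).
def make_dlm_pairs(texts, tts, asr, num_speakers=4):
    pairs = []
    for text in texts:
        for speaker in range(num_speakers):
            audio = tts(text, speaker=speaker)   # synthesize speech from clean text
            hypothesis = asr(audio)              # noisy ASR transcript of that audio
            pairs.append((hypothesis, text))     # (corrupted input, clean target)
    return pairs  # pairs used to train the denoising (error-correction) LM
```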
|
Toroidal AutoEncoder | Enforcing distributions of latent variables in neural networks is an active
subject of research. It is vital in all kinds of generative models, where we want to be
able to interpolate between points in the latent space, or sample from it.
Modern generative AutoEncoders (AE) like WAE, SWAE, and CWAE add a regularizer to
the standard (deterministic) AE, which makes it possible to enforce a Gaussian distribution
in the latent space. Enforcing different distributions, especially
topologically nontrivial ones, might bring new and interesting possibilities, but
this subject seems largely unexplored so far.
This article proposes a new approach to enforce a uniform distribution on the
d-dimensional torus. We introduce a circular spring loss, which enforces
minibatch points to be equally spaced and to satisfy cyclic boundary conditions.
As an example application we propose multiple-path morphing. The minimal-distance
geodesic between two points under a uniform distribution on the latent space of angles
becomes a straight line; however, the torus topology allows us to choose such lines in
alternative ways, going through different edges of $[-\pi,\pi]^d$.
Further applications to explore include learning real-life
topologically nontrivial feature spaces, such as rotations: automatically
recognizing the 2D rotation of an object in a picture by training on relative angles,
or even 3D rotations by additionally using spherical features - in this way
morphing should be close to object rotation.
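
A minimal sketch of a circular spring loss of the kind described, assuming latent angles in $[-\pi,\pi]$ per dimension; the exact form used in the paper may differ:

```python
import numpy as np

def circular_spring_loss(angles):
    """Sketch of a circular spring loss for one latent dimension.
    `angles`: minibatch of latent angles in [-pi, pi]. Points are pushed toward
    equal spacing on the circle while respecting the cyclic boundary condition."""
    n = len(angles)
    a = np.sort(angles)
    gaps = np.diff(np.append(a, a[0] + 2 * np.pi))  # circular gaps, wrapping last->first
    target = 2 * np.pi / n                           # equal spacing on the circle
    return np.mean((gaps - target) ** 2)

batch = np.random.uniform(-np.pi, np.pi, size=64)
print(circular_spring_loss(batch))
```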
|
Basic tasks of sentiment analysis | Subjectivity detection is the task of identifying objective and subjective
sentences. Objective sentences are those which do not exhibit any sentiment.
So, it is desired for a sentiment analysis engine to find and separate the
objective sentences for further analysis, e.g., polarity detection. In
subjective sentences, opinions can often be expressed on one or multiple
topics. Aspect extraction is a subtask of sentiment analysis that consists in
identifying opinion targets in opinionated text, i.e., in detecting the
specific aspects of a product or service the opinion holder is either praising
or complaining about.
|
A Novel Transmission Scheme for the $K$-user Broadcast Channel with
Delayed CSIT | The state-dependent $K$-user memoryless Broadcast Channel~(BC) with state
feedback is investigated. We propose a novel transmission scheme and derive its
corresponding achievable rate region, which, compared to some general schemes
that deal with feedback, has the advantage of being relatively simple and thus
is easy to evaluate. In particular, it is shown that the capacity region of the
symmetric erasure BC with an arbitrary input alphabet size is achievable with
the proposed scheme. For the fading Gaussian BC, we derive a symmetric
achievable rate as a function of the signal-to-noise ratio~(SNR) and a small
set of parameters. Besides achieving the optimal degrees of freedom at high
SNR, the proposed scheme is shown, through numerical results, to outperform
existing schemes from the literature in the finite SNR regime.
|
Squeezing, trisqueezing, and quadsqueezing in a spin-oscillator system | Quantum harmonic oscillators model a wide variety of phenomena ranging from
electromagnetic fields to vibrations of atoms in molecules. Their excitations
can be represented by bosons such as photons, single particles of light, or
phonons, the quanta of vibrational energy. Linear interactions that only create
and annihilate single bosons can generate coherent states of light or motion.
Introducing nth-order nonlinear interactions, that instead involve n bosons,
leads to increasingly complex quantum behaviour. For example, second-order
interactions enable squeezing, used to enhance the precision of measurements
beyond classical limits, while higher-order interactions create non-Gaussian
states essential for continuous-variable quantum computation. However,
generating nonlinear interactions is challenging, typically requiring
higher-order derivatives of the driving field or specialized hardware. Hybrid
systems, where linear interactions couple an oscillator to an additional spin,
offer a solution and are readily available across many platforms. Here, using
the spin of a single trapped ion coupled to its motion, we employ two linear
interactions to demonstrate up to fourth-order bosonic interactions; we focus
on generalised squeezing interactions and demonstrate squeezing, trisqueezing,
and quadsqueezing. We characterise these interactions, including their spin
dependence, and reconstruct the Wigner function of the resulting states. We
also discuss the scaling of the interaction strength, where we drive the
quadsqueezing interaction more than 100 times faster than using conventional
techniques. Our method presents no fundamental limit in the interaction order n
and applies to any platform supporting spin-dependent linear interactions.
Strong higher-order nonlinear interactions unlock the study of fundamental
quantum optics, quantum simulation, and computation in a hitherto unexplored
regime.
|
Neuroprosthetic decoder training as imitation learning | Neuroprosthetic brain-computer interfaces function via an algorithm which
decodes neural activity of the user into movements of an end effector, such as
a cursor or robotic arm. In practice, the decoder is often learned by updating
its parameters while the user performs a task. When the user's intention is not
directly observable, recent methods have demonstrated value in training the
decoder against a surrogate for the user's intended movement. We describe how
training a decoder in this way is a novel variant of an imitation learning
problem, where an oracle or expert is employed for supervised training in lieu
of direct observations, which are not available. Specifically, we describe how
a generic imitation learning meta-algorithm, dataset aggregation (DAgger, [1]),
can be adapted to train a generic brain-computer interface. By deriving
existing learning algorithms for brain-computer interfaces in this framework,
we provide a novel analysis of regret (an important metric of learning
efficacy) for brain-computer interfaces. This analysis allows us to
characterize the space of algorithmic variants and bounds on their regret
rates. Existing approaches for decoder learning have been performed in the
cursor control setting, but the available design principles for these decoders
are such that it has been impossible to scale them to naturalistic settings.
Leveraging our findings, we then offer an algorithm that combines imitation
learning with optimal control, which should allow for training of arbitrary
effectors for which optimal control can generate goal-oriented control. We
demonstrate this novel and general BCI algorithm with simulated neuroprosthetic
control of a 26 degree-of-freedom model of an arm, a sophisticated and
realistic end effector.
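
A minimal sketch of the DAgger-style decoder training loop referred to above; `oracle_intent`, `record_neural_activity`, and `fit_decoder` are hypothetical placeholders for the surrogate intention signal, the recording interface, and the supervised learner:

```python
# DAgger-style neuroprosthetic decoder training (sketch, placeholder names).
def train_decoder_dagger(decoder, oracle_intent, record_neural_activity,
                         fit_decoder, n_iterations=10, trial_length=200):
    dataset = []  # aggregated (neural activity, surrogate intended movement) pairs
    for _ in range(n_iterations):
        state = None
        for _ in range(trial_length):
            neural = record_neural_activity()
            state = decoder(neural, state)      # current decoder drives the effector
            intent = oracle_intent(state)       # oracle supplies the intended movement
            dataset.append((neural, intent))    # aggregate data, as in DAgger
        decoder = fit_decoder(dataset)          # supervised refit on all data so far
    return decoder
```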
|
A 20 Gbps PAM4 Data Transmitter ASIC for Particle Physics Experiments | We present the design and test results of a novel data transmitter ASIC
operating up to 20.48 Gbps with 4-level Pulse-Amplitude-Modulation (PAM4) for
particle physics experiments. This ASIC, named GBS20, is fabricated in a 65 nm
CMOS technology. Two serializers share a 5.12 GHz Phase Locked Loop (PLL)
clock. The outputs from the serializers are combined into a PAM4 signal that
directly drives a Vertical-Cavity-Surface-Emitting-Laser (VCSEL). The input
data channels, each at 1.28 Gbps, are scrambled with an internal $2^7-1$
Pseudo-Random Binary Sequence (PRBS), which also serves as a frame aligner.
GBS20 is tested to work at 10.24 and 20.48 Gbps with a VCSEL-based
Transmitter-Optical-Subassembly (TOSA). The power consumption of GBS20 is below
238 mW and reduced to 164 mW in the low-power mode.
|
A Conformer-based Waveform-domain Neural Acoustic Echo Canceller
Optimized for ASR Accuracy | Acoustic Echo Cancellation (AEC) is essential for accurate recognition of
queries spoken to a smart speaker that is playing out audio. Previous work has
shown that a neural AEC model operating on log-mel spectral features (denoted
"logmel" hereafter) can greatly improve Automatic Speech Recognition (ASR)
accuracy when optimized with an auxiliary loss utilizing a pre-trained ASR
model encoder. In this paper, we develop a conformer-based waveform-domain
neural AEC model inspired by the "TasNet" architecture. The model is trained by
jointly optimizing Negative Scale-Invariant SNR (SISNR) and ASR losses on a
large speech dataset. On a realistic rerecorded test set, we find that
cascading a linear adaptive AEC and a waveform-domain neural AEC is very
effective, giving 56-59% word error rate (WER) reduction over the linear AEC
alone. On this test set, the 1.6M parameter waveform-domain neural AEC also
improves over a larger 6.5M parameter logmel-domain neural AEC model by 20-29%
in easy to moderate conditions. By operating on smaller frames, the waveform
neural model is able to perform better at smaller sizes and is better suited
for applications where memory is limited.
|
On the Tradeoff Region of Secure Exact-Repair Regenerating Codes | We consider the $(n,k,d,\ell)$ secure exact-repair regenerating code problem,
which generalizes the $(n,k,d)$ exact-repair regenerating code problem with the
additional constraint that the stored file needs to be kept
information-theoretically secure against an eavesdropper, who can access the
data transmitted to regenerate a total of $\ell$ different failed nodes. For
all known results on this problem, the achievable tradeoff regions between the
normalized storage capacity and repair bandwidth have a single corner point,
achieved by a scheme proposed by Shah, Rashmi and Kumar (the SRK point). Since
the achievable tradeoff regions of the exact-repair regenerating code problem
without any secrecy constraints are known to have multiple corner points in
general, these existing results suggest a phase-change-like behavior, i.e.,
enforcing a secrecy constraint ($\ell\geq 1$) immediately reduces the tradeoff
region to one with a single corner point. In this work, we first show that when
the secrecy parameter $\ell$ is sufficiently large, the SRK point is indeed the
only corner point of the tradeoff region. However, when $\ell$ is small, we
show that the tradeoff region can in fact have multiple corner points. In
particular, we establish a precise characterization of the tradeoff region for
the $(7,6,6,1)$ problem, which has exactly two corner points. Thus, a smooth
transition, instead of a phase-change-type of transition, should be expected as
the secrecy constraint is gradually strengthened.
|
Approximation and FPT Algorithms for Finding DM-Irreducible Spanning
Subgraphs | Finding a minimum-weight strongly connected spanning subgraph of an
edge-weighted directed graph is equivalent to the weighted version of the
well-known strong connectivity augmentation problem. This problem is NP-hard,
and a simple $2$-approximation algorithm was proposed by Frederickson and
J\'aj\'a (1981); surprisingly, it still achieves the best known approximation
ratio in general. Also, Bang-Jensen and Yeo (2008) showed that the unweighted
problem is FPT (fixed-parameter tractable) parameterized by the difference from
a trivial upper bound of the optimal value. In this paper, we consider a
generalization related to the Dulmage--Mendelsohn decompositions of bipartite
graphs instead of the strong connectivity of directed graphs, and extend these
approximation and FPT results to the generalized setting.
|
SciPy 1.0--Fundamental Algorithms for Scientific Computing in Python | SciPy is an open source scientific computing library for the Python
programming language. SciPy 1.0 was released in late 2017, about 16 years after
the original version 0.1 release. SciPy has become a de facto standard for
leveraging scientific algorithms in the Python programming language, with more
than 600 unique code contributors, thousands of dependent packages, over
100,000 dependent repositories, and millions of downloads per year. This
includes usage of SciPy in almost half of all machine learning projects on
GitHub, and usage by high profile projects including LIGO gravitational wave
analysis and creation of the first-ever image of a black hole (M87). The
library includes functionality spanning clustering, Fourier transforms,
integration, interpolation, file I/O, linear algebra, image processing,
orthogonal distance regression, minimization algorithms, signal processing,
sparse matrix handling, computational geometry, and statistics. In this work,
we provide an overview of the capabilities and development practices of the
SciPy library and highlight some recent technical developments.
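
For illustration, a small example of the kind of functionality listed above (numerical integration and minimization) using the public SciPy API:

```python
import numpy as np
from scipy import integrate, optimize

# Numerical integration: the integral of sin(x) over [0, pi] equals 2.
value, abs_err = integrate.quad(np.sin, 0, np.pi)

# Local minimization of the Rosenbrock test function with BFGS.
result = optimize.minimize(optimize.rosen, x0=np.zeros(5), method="BFGS")

print(value, result.x)
```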
|
Model-corrected learned primal-dual models for fast limited-view
photoacoustic tomography | Learned iterative reconstructions hold great promise to accelerate
tomographic imaging with empirical robustness to model perturbations.
Nevertheless, their adoption for photoacoustic tomography is hindered by the need
to repeatedly evaluate the computationally expensive forward model. Computational
feasibility can be obtained by the use of fast approximate models, but a need
to compensate model errors arises. In this work we advance the methodological
and theoretical basis for model corrections in learned image reconstructions by
embedding the model correction in a learned primal-dual framework. Here, the
model correction is jointly learned in data space coupled with a learned
updating operator in image space within an unrolled end-to-end learned
iterative reconstruction approach. The proposed formulation allows an extension
to a primal-dual deep equilibrium model providing fixed-point convergence as
well as reduced memory requirements for training. We provide theoretical and
empirical insights into the proposed models with numerical validation in a
realistic 2D limited-view setting. The model-corrected learned primal-dual
methods show excellent reconstruction quality with fast inference times and
thus provide a methodological basis for real-time capable and scalable
iterative reconstructions in photoacoustic tomography.
|
CAAP: Class-Dependent Automatic Data Augmentation Based On Adaptive
Policies For Time Series | Data Augmentation is a common technique used to enhance the performance of
deep learning models by expanding the training dataset. Automatic Data
Augmentation (ADA) methods are getting popular because of their capacity to
generate policies for various datasets. However, existing ADA methods primarily
focus on overall performance improvement, neglecting the problem of
class-dependent bias that leads to performance reduction in specific classes.
This bias poses significant challenges when deploying models in real-world
applications. Furthermore, ADA for time series remains an underexplored domain,
highlighting the need for advancements in this field. In particular, applying
ADA techniques to vital signals like an electrocardiogram (ECG) is a compelling
example due to its potential in medical domains such as heart disease
diagnostics.
We propose a novel deep learning-based approach called Class-dependent
Automatic Adaptive Policies (CAAP) framework to overcome the notable
class-dependent bias problem while maintaining the overall improvement in
time-series data augmentation. Specifically, we utilize the policy network to
generate effective sample-wise policies with balanced difficulty through class
and feature information extraction. Second, we design the augmentation
probability regulation method to minimize class-dependent bias. Third, we
introduce the information region concepts into the ADA framework to preserve
essential regions in the sample. Through a series of experiments on real-world
ECG datasets, we demonstrate that CAAP outperforms representative methods in
achieving lower class-dependent bias combined with superior overall
performance. These results highlight the reliability of CAAP as a promising ADA
method for time series modeling that fits the demands of real-world
applications.
|
Impact of spatial auditory navigation on user experience during
augmented outdoor navigation tasks | The human auditory sense is important for navigation. Its
importance is especially high when an object of interest is visually
partly or fully occluded. Nevertheless, interactions with users of technology are mainly
focused on the visual domain of navigation tasks. This paper presents the
results of a literature review and user study exploring the impact of spatial
auditory navigation on user experience during an augmented outdoor navigation
task. For the user test, participants used an augmented reality app guiding
them to different locations with different digital augmentation. We conclude
that the utilization of the auditory sense is still underrepresented in
augmented reality applications. In the future, more usage scenarios for
audio-augmented reality such as navigation will enhance user experience and
interaction quality.
|
Coding for Additive White Noise Channels with Feedback Corrupted by
Uniform Quantization or Bounded Noise | We present simple coding strategies, which are variants of the
Schalkwijk-Kailath scheme, for communicating reliably over additive white noise
channels in the presence of corrupted feedback. More specifically, we consider
a framework comprising an additive white forward channel and a backward link
which is used for feedback. We consider two types of corruption mechanisms in
the backward link. The first is quantization noise, i.e., the encoder receives
the quantized values of the past outputs of the forward channel. The
quantization is uniform, memoryless and time invariant (that is,
symbol-by-symbol scalar quantization), with bounded quantization error. The
second corruption mechanism is an arbitrarily distributed additive bounded
noise in the backward link. Here we allow symbol-by-symbol encoding at the
input to the backward channel. We propose simple explicit schemes that
guarantee positive information rate, in bits per channel use, with positive
error exponent. If the forward channel is additive white Gaussian then our
schemes achieve capacity, in the limit of diminishing amplitude of the noise
components at the backward link, while guaranteeing that the probability of
error converges to zero as a doubly exponential function of the block length.
Furthermore, if the forward channel is additive white Gaussian and the backward
link consists of an additive bounded noise channel, with signal-to-noise ratio
(SNR) constrained symbol-by-symbol encoding, then our schemes are also
capacity-achieving in the limit of high SNR.
|
Calibration of the GERDA experiment | The GERmanium Detector Array (GERDA) collaboration searched for neutrinoless
double-$\beta$ decay in $^{76}$Ge with an array of about 40 high-purity
isotopically-enriched germanium detectors. The experimental signature of the
decay is a monoenergetic signal at Q$_{\beta\beta}$ = 2039.061(7)keV in the
measured summed energy spectrum of the two emitted electrons. Both the energy
reconstruction and resolution of the germanium detectors are crucial to
separate a potential signal from various backgrounds, such as
neutrino-accompanied double-$\beta$ decays allowed by the Standard Model. The
energy resolution and stability were determined and monitored as a function of
time using data from regular $^{228}$Th calibrations. In this work, we describe
the calibration process and associated data analysis of the full GERDA dataset,
tailored to preserve the excellent resolution of the individual germanium
detectors when combining data over several years.
|
A comprehensive and biophysically detailed computational model of the
whole human heart electromechanics | While ventricular electromechanics is extensively studied, four-chamber heart
models have only been addressed recently; most of these works however neglect
atrial contraction. Indeed, as atria are characterized by a complex physiology
influenced by the ventricular function, developing computational models able to
capture the physiological atrial function and atrioventricular interaction is
very challenging. In this paper, we propose a biophysically detailed
electromechanical model of the whole human heart that considers both atrial and
ventricular contraction. Our model includes: i) an anatomically accurate
whole-heart geometry; ii) a comprehensive myocardial fiber architecture; iii) a
biophysically detailed microscale model for the active force generation; iv) a
0D closed-loop model of the circulatory system; v) the fundamental interactions
among the different core models; vi) specific constitutive laws and model
parameters for each cardiac region. Concerning the numerical discretization, we
propose an efficient segregated-intergrid-staggered scheme and we employ
recently developed stabilization techniques that are crucial to obtain a stable
formulation in a four-chamber scenario. We are able to reproduce the healthy
cardiac function for all the heart chambers, in terms of pressure-volume loops,
time evolution of pressures, volumes and fluxes, and three-dimensional cardiac
deformation, with unprecedented matching (to the best of our knowledge) with
the expected physiology. We also show the importance of considering atrial
contraction, fibers-stretch-rate feedback and suitable stabilization
techniques, by comparing the results obtained with and without these features
in the model. The proposed model represents the state-of-the-art
electromechanical model of the iHEART ERC project and is a fundamental step
toward the building of physics-based digital twins of the human heart.
|
Grape: Knowledge Graph Enhanced Passage Reader for Open-domain Question
Answering | A common thread of open-domain question answering (QA) models employs a
retriever-reader pipeline that first retrieves a handful of relevant passages
from Wikipedia and then peruses the passages to produce an answer. However,
even state-of-the-art readers fail to capture the complex relationships between
entities appearing in questions and retrieved passages, leading to answers that
contradict the facts. In light of this, we propose a novel knowledge Graph
enhanced passage reader, namely Grape, to improve the reader performance for
open-domain QA. Specifically, for each pair of question and retrieved passage,
we first construct a localized bipartite graph, attributed to entity embeddings
extracted from the intermediate layer of the reader model. Then, a graph neural
network learns relational knowledge while fusing graph and contextual
representations into the hidden states of the reader model. Experiments on
three open-domain QA benchmarks show Grape can improve the state-of-the-art
performance by up to 2.2 exact match score with a negligible overhead increase,
with the same retriever and retrieved passages. Our code is publicly available
at https://github.com/jumxglhf/GRAPE.
|
ParaDiS: Parallelly Distributable Slimmable Neural Networks | When several limited power devices are available, one of the most efficient
ways to make use of these resources, while reducing the processing latency
and communication load, is to run several neural sub-networks in parallel and
to fuse the result at the end of processing. However, such a combination of
sub-networks must be trained specifically for each particular configuration of
devices (characterized by number of devices and their capacities) which may
vary over different model deployments and even within the same deployment. In
this work we introduce parallelly distributable slimmable (ParaDiS) neural
networks that are splittable in parallel among various device configurations
without retraining. While inspired by slimmable networks allowing instant
adaptation to resources on just one device, ParaDiS networks consist of several
multi-device distributable configurations or switches that strongly share the
parameters between them. We evaluate ParaDiS framework on MobileNet v1 and
ResNet-50 architectures on ImageNet classification task and WDSR architecture
for image super-resolution task. We show that ParaDiS switches achieve similar
or better accuracy than the individual models, i.e., distributed models of the
same structure trained individually. Moreover, we show that, as compared to
universally slimmable networks that are not distributable, the accuracy of
distributable ParaDiS switches either does not drop at all or drops by a
maximum of 1 % only in the worst cases. Finally, once distributed over several
devices, ParaDiS greatly outperforms slimmable models.
|
Chemical-protein Interaction Extraction via Gaussian Probability
Distribution and External Biomedical Knowledge | Motivation: The biomedical literature contains a wealth of chemical-protein
interactions (CPIs). Automatically extracting CPIs described in biomedical
literature is essential for drug discovery, precision medicine, as well as
basic biomedical research. Most existing methods focus only on the sentence
sequence to identify these CPIs. However, the local structure of sentences and
external biomedical knowledge also contain valuable information. Effective use
of such information may improve the performance of CPI extraction. Results: In
this paper, we propose a novel neural network-based approach to improve CPI
extraction. Specifically, the approach first employs BERT to generate
high-quality contextual representations of the title sequence, instance
sequence, and knowledge sequence. Then, the Gaussian probability distribution
is introduced to capture the local structure of the instance. Meanwhile, the
attention mechanism is applied to fuse the title information and biomedical
knowledge, respectively. Finally, the related representations are concatenated
and fed into the softmax function to extract CPIs. We evaluate our proposed
model on the CHEMPROT corpus. Our proposed model is superior in performance as
compared with other state-of-the-art models. The experimental results show that
the Gaussian probability distribution and external knowledge are complementary
to each other. Integrating them can effectively improve the CPI extraction
performance. Furthermore, the Gaussian probability distribution can effectively
improve the extraction performance of sentences with overlapping relations in
biomedical relation extraction tasks. Availability: Data and code are available
at https://github.com/CongSun-dlut/CPI_extraction. Contact: yangzh@dlut.edu.cn,
wangleibihami@gmail.com Supplementary information: Supplementary data are
available at Bioinformatics online.
|
Spatial Tactile Brain-Computer Interface Paradigm Applying Vibration
Stimuli to Large Areas of User's Back | We aim at an augmentation of communication abilities of amyotrophic lateral
sclerosis (ALS) patients by creating a brain-computer interface (BCI) which can
control a computer or other device by using only brain activity. As a method,
we use a stimulus-driven BCI based on vibration stimuli delivered via a gaming
pad to the user's back. We identify P300 responses from brain activity data in
response to the vibration stimuli. The user's intentions are classified
according to the P300 responses recorded in the EEG. From the results of the
psychophysical and online BCI experiments, we are able to classify the P300
responses very accurately, which proves the effectiveness of the proposed
method.
|
Continuing Progress on a Lattice QCD Software Infrastructure | We report on the progress of the software effort in the QCD Application Area
of SciDAC. In particular, we discuss how the software developed under SciDAC
enabled the aggressive exploitation of leadership computers, and we report on
progress in the area of QCD software for multi-core architectures.
|
Recursive Introspection: Teaching Language Model Agents How to
Self-Improve | A central piece in enabling intelligent agentic behavior in foundation models
is to make them capable of introspecting upon their behavior, reasoning, and
correcting their mistakes as more computation or interaction is available. Even
the strongest proprietary large language models (LLMs) do not quite exhibit the
ability of continually improving their responses sequentially, even in
scenarios where they are explicitly told that they are making a mistake. In
this paper, we develop RISE: Recursive IntroSpEction, an approach for
fine-tuning LLMs to introduce this capability, despite prior work hypothesizing
that this capability may not be possible to attain. Our approach prescribes an
iterative fine-tuning procedure, which attempts to teach the model how to alter
its response after having executed previously unsuccessful attempts to solve a
hard test-time problem, optionally with additional environment feedback. RISE
poses fine-tuning for a single-turn prompt as solving a multi-turn Markov
decision process (MDP), where the initial state is the prompt. Inspired by
principles in online imitation learning and reinforcement learning, we propose
strategies for multi-turn data collection and training so as to imbue an LLM
with the capability to recursively detect and correct its previous mistakes in
subsequent iterations. Our experiments show that RISE enables Llama2, Llama3,
and Mistral models to improve themselves with more turns on math reasoning
tasks, outperforming several single-turn strategies given an equal amount of
inference-time computation. We also find that RISE scales well, often attaining
larger benefits with more capable models. Our analysis shows that RISE makes
meaningful improvements to responses to arrive at the correct solution for
challenging prompts, without disrupting one-turn abilities as a result of
expressing more complex distributions.
|
Personalized Visited-POI Assignment to Individual Raw GPS Trajectories | Knowledge discovery from GPS trajectory data is an important topic in several
scientific areas, including data mining, human behavior analysis, and user
modeling. This paper proposes the task of personalized visited-POI assignment.
Its goal is to estimate fine-grained and pre-defined locations (i.e., points of
interest (POI)) that are actually visited by users and assign visited-location
information to the corresponding span of their (personal) GPS trajectories. We
also introduce a novel algorithm to solve this assignment task. First, we
exhaustively extract stay-points as candidates for significant locations using
a variant of a conventional stay-point extraction method. Then we select
significant locations and simultaneously assign visited-POIs to them by
considering various aspects, which we formulate in integer linear programming.
Experimental results conducted on an actual user dataset show that our method
achieves higher accuracy in the visited-POI assignment task than the various
cascaded procedures of conventional methods.
|
A Computational Approach to Aspectual Composition | In this paper, I argue, contrary to the prevailing opinion in the linguistics
and philosophy literature, that a sortal approach to aspectual composition can
indeed be explanatory. In support of this view, I develop a synthesis of
competing proposals by Hinrichs, Krifka and Jackendoff which takes Jackendoff's
cross-cutting sortal distinctions as its point of departure. To show that the
account is well-suited for computational purposes, I also sketch an implemented
calculus of eventualities which yields many of the desired inferences. Further
details on both the model-theoretic semantics and the implementation can be
found in (White, 1994).
|
A multi-term solution of the space-time Boltzmann equation for electrons
in gaseous and liquid Argon | In a recent paper [1] the scattering and transport of excess electrons in
liquid argon in the hydrodynamic regime was investigated, generalizing the
seminal works of Lekner and Cohen [2,3] with modern scattering theory
techniques and kinetic theory. In this paper, the discussion is extended to the
non-hydrodynamic regime through the development of a full multi-term space-time
solution of Boltzmann's equation for electron transport in gases and liquids
using a novel operator-splitting method. A Green's function formalism is
considered that enables flexible adaptation to various experimental systems.
The spatio-temporal evolution of electrons in liquids in the hydrodynamic
regime is studied for a benchmark model Percus-Yevick liquid as well as for
liquid argon. The temporal evolution of Franck-Hertz oscillations are observed
for liquids, with striking differences in the spatio-temporal development of
the velocity distribution function components between the uncorrelated gas and
true liquid approximations in argon. Transport properties calculated from the
non-hydrodynamic theory in the long time limit, and under steady-state Townsend
conditions, are benchmarked against hydrodynamic transport coefficients.
|
Learning to Represent Programs with Heterogeneous Graphs | Program source code contains complex structure information, which can be
represented in structured data forms like trees or graphs. To acquire the
structural information in source code, most existing studies use abstract
syntax trees (AST). A group of works add additional edges to ASTs to convert
source code into graphs and use graph neural networks to learn representations
for program graphs. Although these works provide additional control or data
flow information to ASTs for downstream tasks, they neglect an important aspect
of structure information in AST itself: the different types of nodes and edges.
In ASTs, different nodes contain different kinds of information like variables
or control flow, and the relation between a node and all its children can also
be different.
To address the information of node and edge types, we bring the idea of
heterogeneous graphs to learning on source code and present a new formulation for
building heterogeneous program graphs from ASTs with additional type
information for nodes and edges. We use the ASDL grammar of programming
language to define the node and edge types of program graphs. Then we use
heterogeneous graph neural networks to learn on these graphs. We evaluate our
approach on two tasks: code comment generation and method naming. Both tasks
require reasoning on the semantics of complete code snippets. Experiment
results show that our approach outperforms baseline models, including
homogeneous graph-based models, showing that leveraging the type information of
nodes and edges in program graphs can help in learning program semantics.
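
As a simplified sketch of the idea of typed nodes and edges (using Python's standard `ast` module rather than an ASDL grammar definition, and not the paper's exact construction):

```python
import ast

def ast_to_heterogeneous_graph(source):
    """Sketch: build typed nodes and typed edges from a Python AST.
    Node type = AST class name; edge type = the parent field holding the child."""
    tree = ast.parse(source)
    nodes, edges, index = [], [], {}
    for node in ast.walk(tree):
        index[id(node)] = len(nodes)
        nodes.append(type(node).__name__)                 # e.g. 'FunctionDef', 'Return'
    for parent in ast.walk(tree):
        for field, value in ast.iter_fields(parent):
            children = value if isinstance(value, list) else [value]
            for child in children:
                if isinstance(child, ast.AST):
                    # edge type is the field name, e.g. 'body', 'args', 'value'
                    edges.append((index[id(parent)], index[id(child)], field))
    return nodes, edges

nodes, edges = ast_to_heterogeneous_graph("def f(x):\n    return x + 1\n")
```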
|
Robust Yet Efficient Conformal Prediction Sets | Conformal prediction (CP) can convert any model's output into prediction sets
guaranteed to include the true label with any user-specified probability.
However, same as the model itself, CP is vulnerable to adversarial test
examples (evasion) and perturbed calibration data (poisoning). We derive
provably robust sets by bounding the worst-case change in conformity scores.
Our tighter bounds lead to more efficient sets. We cover both continuous and
discrete (sparse) data and our guarantees work both for evasion and poisoning
attacks (on both features and labels).
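
For background, a minimal split-conformal sketch (standard CP on classifier softmax scores, not the robust construction of this paper):

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with score s = 1 - p(true class).
    Returns, for each test point, the labels whose score falls below the
    calibrated quantile; coverage >= 1 - alpha holds under exchangeability."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]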
|
Faster Sorting Networks for $17$, $19$ and $20$ Inputs | We present new parallel sorting networks for $17$ to $20$ inputs. For $17,
19,$ and $20$ inputs these new networks are faster (i.e., they require fewer
computation steps) than the previously known best networks. Therefore, we
improve upon the known upper bounds for minimal depth sorting networks on $17,
19,$ and $20$ channels. The networks were obtained using a combination of
hand-crafted first layers and a SAT encoding of sorting networks.
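
As a small illustration of how a comparator network is applied and verified exhaustively on all 0/1 inputs via the zero-one principle (the network below is a standard 4-input example, not one of the new 17-20-input networks):

```python
from itertools import product

def apply_network(network, values):
    v = list(values)
    for i, j in network:              # each comparator orders positions i and j
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def is_sorting_network(network, n):
    # Zero-one principle: sorting all 2^n binary inputs implies sorting all inputs.
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product((0, 1), repeat=n))

four_input = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]  # depth-3 network on 4 channels
print(is_sorting_network(four_input, 4))               # True
```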
|
Flexible Non-interactive Short-term Implicit Certificate Generation for
VANETs | A leading industry standard for secure and trusted communication in vehicular
ad-hoc networks (VANETs) is the Security Credential Management System (SCMS).
It uses anonymous certificates, functioning as pseudonyms, to preserve the
privacy of vehicles. With the rapid development of advanced applications in
VANETs, such as crowdsensing and federated learning, vehicles need to
communicate with each other or infrastructures more frequently, leading to a
higher demand for pseudonyms. However, the current approach of certificate
provisioning in SCMS is not able to fully support this demand for pseudonyms, due to storage
limitations, the cost of connectivity establishment, and the communication overhead of
certificate downloading. To tackle this challenge, we propose a non-interactive
approach for SCMS, allowing vehicles themselves to generate short-term key
pairs and anonymous implicit certificates. Our evaluation and comparison with
previous work show that our solution not only effectively reduces the
communication cost, but also grants vehicles greater flexibility in certificate
generation and use. On the technical side, to the best of our knowledge, this
is the first work which (1) applies sanitizable signature for non-interactive
anonymous certificate generation, and (2) is specifically designed for SCMS,
which opens up possibilities for extensions and applications in industry.
|
Wavelet-based Heat Kernel Derivatives: Towards Informative Localized
Shape Analysis | In this paper, we propose a new construction for the Mexican hat wavelets on
shapes with applications to partial shape matching. Our approach takes its main
inspiration from the well-established methodology of diffusion wavelets. This
novel construction allows us to rapidly compute a multiscale family of Mexican
hat wavelet functions, by approximating the derivative of the heat kernel. We
demonstrate that it leads to a family of functions that inherit many attractive
properties of the heat kernel (e.g., a local support, ability to recover
isometries from a single point, efficient computation). Due to its natural
ability to encode high-frequency details on a shape, the proposed method
reconstructs and transfers $\delta$-functions more accurately than the
Laplace-Beltrami eigenfunction basis and other related bases. Finally, we apply
our method to the challenging problems of partial and large-scale shape
matching. An extensive comparison to the state-of-the-art shows that it is
comparable in performance, while both simpler and much faster than competing
approaches.
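
For concreteness, the derivative-of-heat-kernel construction can be written spectrally (a standard identity, assuming Laplace-Beltrami eigenpairs $(\lambda_i, \phi_i)$):
$$ h_t(x,y) = \sum_i e^{-\lambda_i t}\,\phi_i(x)\,\phi_i(y), \qquad \psi_t(x,y) \propto -\frac{\partial}{\partial t} h_t(x,y) = \sum_i \lambda_i e^{-\lambda_i t}\,\phi_i(x)\,\phi_i(y), $$
which yields the band-pass spectral profile $\lambda e^{-\lambda t}$ characteristic of Mexican hat wavelets.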
|
Inner approximation algorithm for solving linear multiobjective
optimization problems | Benson's outer approximation algorithm and its variants are the most
frequently used methods for solving linear multiobjective optimization
problems. These algorithms have two intertwined components: one-dimensional
linear optimization one one hand, and a combinatorial part closely related to
vertex numeration on the other. Their separation provides a deeper insight into
Benson's algorithm, and points toward a dual approach. Two skeletal algorithms
are defined which focus on the combinatorial part. Using different
single-objective optimization problems - called oracle calls - yield different
algorithms, such as a sequential convex hull algorithm, another version of
Benson's algorithm with the theoretically best possible iteration count, the
dual algorithm of Ehrgott, L\"ohne and Shao, and the new algorithm. The new
algorithm has several advantages. First, the corresponding one-dimensional
optimization problem uses the original constraints without adding any extra
variables or constraints. Second, its iteration count meets the theoretically
best possible one. As a dual algorithm, it is sequential: in each iteration it
produces an extremal solution, thus can be aborted when a satisfactory solution
is found. The Pareto front can be "probed" or "scanned" from several directions
at any moment without adversely affecting the efficiency. Finally, it is well
suited to handle highly degenerate problems where there are many linear
dependencies among the constraints. On problems with ten or more objectives the
implementation shows a significant increase in efficiency compared to Bensolve
- due to the reduced number of iterations and the improved combinatorial
handling.
|
In-Context Sharpness as Alerts: An Inner Representation Perspective for
Hallucination Mitigation | Large language models (LLMs) frequently hallucinate and produce factual
errors, yet our understanding of why they make these errors remains limited. In
this study, we delve into the underlying mechanisms of LLM hallucinations from
the perspective of inner representations, and discover a salient pattern
associated with hallucinations: correct generations tend to have sharper
context activations in the hidden states of the in-context tokens, compared to
the incorrect ones. Leveraging this insight, we propose an entropy-based metric
to quantify the ``sharpness'' among the in-context hidden states and
incorporate it into the decoding process to formulate a constrained decoding
approach. Experiments on various knowledge-seeking and hallucination benchmarks
demonstrate our approach's consistent effectiveness, for example, achieving up
to an 8.6 point improvement on TruthfulQA. We believe this study can improve
our understanding of hallucinations and serve as a practical solution for
hallucination mitigation.
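
As a rough illustration of the idea (not the authors' exact implementation;
the score construction and the penalty weight alpha are hypothetical), one can
compute an entropy over in-context token activations and subtract it from the
candidate logits during decoding:

```python
import numpy as np

def sharpness_entropy(token_scores):
    """Entropy of a distribution over in-context tokens.

    token_scores : (num_context_tokens,) unnormalised scores derived from
                   hidden states (hypothetical stand-in for the paper's
                   in-context activation values).
    Lower entropy corresponds to sharper context activations.
    """
    p = np.exp(token_scores - token_scores.max())
    p /= p.sum()
    return -(p * np.log(p + 1e-12)).sum()

def constrained_logits(logits, entropies, alpha=2.0):
    """Rescore candidate next tokens: subtract alpha * entropy so that
    candidates with sharper (lower-entropy) in-context activations are
    favoured during decoding.

    logits    : (vocab,) original LM logits
    entropies : (vocab,) sharpness entropy computed for each candidate
    """
    return logits - alpha * np.asarray(entropies)
```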
|
Imitation Learning based Alternative Multi-Agent Proximal Policy
Optimization for Well-Formed Swarm-Oriented Pursuit Avoidance | Multi-Robot Systems (MRS) have garnered widespread research interest and
fostered numerous interesting applications, especially in cooperative control
fields. Yet little light has been shed on the combined ability of formation,
monitoring and defence in decentralized large-scale MRS for pursuit avoidance,
which puts stringent requirements on the capability of coordination and
adaptability. In this paper, we put forward a decentralized Imitation learning
based Alternative Multi-Agent Proximal Policy Optimization (IA-MAPPO) algorithm
to provide a flexible and communication-economic solution for executing the
pursuit avoidance task in a well-formed swarm. In particular, a
policy-distillation-based MAPPO executor is first devised to reliably
accomplish and swiftly switch between multiple formations in a centralized
manner. Furthermore, we utilize imitation learning to decentralize the
formation controller, so as to reduce the communication overheads and enhance
the scalability. Afterwards, alternative training is leveraged to compensate
for the performance loss incurred by decentralization. The simulation results
validate the effectiveness of IA-MAPPO, and extensive ablation experiments
further show performance comparable to a centralized solution with a
significant decrease in communication overhead.
|
Deep Complementary Joint Model for Complex Scene Registration and
Few-shot Segmentation on Medical Images | Deep learning-based medical image registration and segmentation joint models
utilize the complementarity (augmentation data or weakly supervised data from
registration, region constraints from segmentation) to bring mutual improvement
in complex scenes and few-shot situations. However, further adoption of these
joint models is hindered: 1) the diversity of augmentation data is reduced,
limiting further enhancement of segmentation; 2) misaligned regions in weakly
supervised data disturb the training process; 3) the lack of label-based region
constraints in few-shot situations limits the registration performance. We
propose a novel Deep Complementary Joint Model (DeepRS) for complex scene
registration and few-shot segmentation. We embed a perturbation factor in the
registration to increase the activity of deformation thus maintaining the
augmentation data diversity. We use a pixel-wise discriminator to extract
alignment confidence maps which highlight aligned regions in weakly supervised
data, so that the disturbance from misaligned regions is suppressed via
weighting. The outputs of the segmentation model are utilized to implement
deep region constraints, thus relaxing the label requirements and yielding fine
registration. Extensive experiments on the CT dataset of the MM-WHS 2017
Challenge show the clear advantages of our DeepRS, which outperforms existing
state-of-the-art models.
|
Autonomous Spacecraft Navigation Based on Pulsar Timing Information | We discuss the possibility of an autonomous navigation system for spacecraft
that is based on pulsar timing data. Pulsars are rapidly rotating neutron stars
that are observable as variable celestial sources of electromagnetic radiation.
Their periodic signals have timing stabilities comparable to atomic clocks and
provide characteristic temporal signatures that can be used as natural
navigation beacons, quite similar to the use of GPS satellites for navigation
on Earth. By comparing pulse arrival times measured on-board the spacecraft
with predicted pulse arrivals at some reference location, the spacecraft
position can be determined autonomously with accuracies on the order of 5
kilometres. For a spacecraft at a distance of 10 astronomical units from Earth
(e.g., Earth-Saturn), this means an improvement by a factor of 8 compared to
conventional methods. Therefore this new technology is an alternative to
standard navigation based on radio tracking by ground stations, without the
disadvantages of uncertainty increasing with distance from Earth and the
dependence on ground control.
|
Differential Evolution with Better and Nearest Option for Function
Optimization | Differential evolution (DE) is a conventional algorithm with fast convergence
speed. However, DE can easily become trapped in a local optimum. Many
researchers have devoted themselves to improving DE. In our previous work, the
whale swarm algorithm showed strong search performance due to its niching-based
mutation strategy. Based on this fact, we propose a new DE algorithm
called DE with Better and Nearest option (NbDE). In order to evaluate the
performance of NbDE, NbDE is compared with several meta-heuristic algorithms on
nine classical benchmark test functions with different dimensions. The results
show that NbDE outperforms other algorithms in convergence speed and accuracy.
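
The abstract does not spell out the mutation rule, so the following Python
sketch only illustrates a plausible "better and nearest" flavour of DE: each
individual is pulled towards its nearest better neighbour plus a standard
random difference vector. The parameter values (F, CR, population size) are
illustrative, not the paper's.

```python
import numpy as np

def nbde_step(pop, fit, fn, F=0.5, CR=0.9, rng=np.random.default_rng()):
    """One generation of a DE variant whose mutation pulls each individual
    towards its nearest better neighbour (a sketch of the 'better and
    nearest' idea, for minimisation).
    pop : (N, d) population, fit : (N,) fitness values, fn : objective.
    """
    N, d = pop.shape
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(N):
        better = np.where(fit < fit[i])[0]
        if better.size == 0:                      # current best: random guide
            guide = pop[rng.integers(N)]
        else:                                     # nearest better individual
            guide = pop[better[np.argmin(
                np.linalg.norm(pop[better] - pop[i], axis=1))]]
        r1, r2 = rng.choice(N, size=2, replace=False)
        mutant = pop[i] + F * (guide - pop[i]) + F * (pop[r1] - pop[r2])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True             # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f_trial = fn(trial)
        if f_trial <= fit[i]:                     # greedy selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit

# usage on the sphere function
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 10))
fit = np.array([np.sum(x ** 2) for x in pop])
for _ in range(200):
    pop, fit = nbde_step(pop, fit, lambda x: np.sum(x ** 2), rng=rng)
print(fit.min())
```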
|
There Are No Post-Quantum Weakly Pseudo-Free Families in Any Nontrivial
Variety of Expanded Groups | Let $\Omega$ be a finite set of finitary operation symbols and let $\mathfrak
V$ be a nontrivial variety of $\Omega$-algebras. Assume that for some set
$\Gamma\subseteq\Omega$ of group operation symbols, all $\Omega$-algebras in
$\mathfrak V$ are groups under the operations associated with the symbols in
$\Gamma$. In other words, $\mathfrak V$ is assumed to be a nontrivial variety
of expanded groups. In particular, $\mathfrak V$ can be a nontrivial variety of
groups or rings. Our main result is that there are no post-quantum weakly
pseudo-free families in $\mathfrak V$, even in the worst-case setting and/or
the black-box model. In this paper, we restrict ourselves to families
$(H_d\mathbin|d\in D)$ of computational and black-box $\Omega$-algebras (where
$D\subseteq\{0,1\}^*$) such that for every $d\in D$, each element of $H_d$ is
represented by a unique bit string of length polynomial in the length of $d$.
In our main result, we use straight-line programs to represent nontrivial
relations between elements of $\Omega$-algebras. Note that under certain
conditions, this result depends on the classification of finite simple groups.
Also, we define and study some types of weak pseudo-freeness for families of
computational and black-box $\Omega$-algebras.
|
Understanding the Impact of On-chip Communication on DNN Accelerator
Performance | Deep Neural Networks have flourished at an unprecedented pace in recent
years. They have achieved outstanding accuracy in fields such as computer
vision, natural language processing, medicine or economics. Specifically,
Convolutional Neural Networks (CNN) are particularly suited to object
recognition or identification tasks. This, however, comes at a high
computational cost, prompting the use of specialized GPU architectures or even
ASICs to achieve high speeds and energy efficiency. ASIC accelerators
streamline the execution of certain dataflows amenable to CNN computation that
imply the constant movement of large amounts of data, thereby turning on-chip
communication into a critical function within the accelerator. This paper
studies the communication flows within CNN inference accelerators of edge
devices, with the aim of justifying current and future decisions in the design
of the on-chip networks that interconnect their processing elements. Leveraging
this analysis, we then qualitatively discuss the potential impact of
introducing the novel paradigm of wireless on-chip networks in this context.
|
Texture-Based Input Feature Selection for Action Recognition | The performance of video action recognition has been significantly boosted by
using motion representations within a two-stream Convolutional Neural Network
(CNN) architecture. However, there are a few challenging problems in action
recognition in real scenarios, e.g., the variations in viewpoints and poses,
and the changes in backgrounds. The domain discrepancy between the training
data and the test data causes the performance drop. To improve the model
robustness, we propose a novel method to determine the task-irrelevant content
in inputs which increases the domain discrepancy. The method is based on a
human parsing model (HP model) which jointly conducts dense correspondence
labelling and semantic part segmentation. The predictions from the HP model
are also used to re-render the human regions in each video with the same set
of textures, so that human appearance is identical across all classes. A
revised dataset is generated for training and testing, making the action
recognition model invariant to the irrelevant content in the inputs.
Moreover, the predictions from the HP model are used to enrich the inputs to
the AR model during both training and testing. Experimental results show that
our proposed model is superior to existing models for action recognition on the
HMDB-51 dataset and the Penn Action dataset.
|
Privacy-preserving Scanpath Comparison for Pervasive Eye Tracking | As eye tracking becomes pervasive with screen-based devices and head-mounted
displays, privacy concerns regarding eye-tracking data have escalated. While
state-of-the-art approaches for privacy-preserving eye tracking mostly involve
differential privacy and empirical data manipulations, previous research has
not focused on methods for scanpaths. We introduce a novel privacy-preserving
scanpath comparison protocol designed for the widely used Needleman-Wunsch
algorithm, a generalized version of the edit distance algorithm. In particular,
by incorporating the Paillier homomorphic encryption scheme, our protocol
ensures that no private information is revealed. Furthermore, we introduce a
random processing strategy and a multi-layered masking method to obfuscate the
values while preserving the original order of encrypted editing operation
costs. This minimizes communication overhead, requiring a single communication
round for each iteration of the Needleman-Wunsch process. We demonstrate the
efficiency and applicability of our protocol on three publicly available
datasets with comprehensive computational performance analyses and make our
source code publicly accessible.
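
For reference, the plaintext recurrence that the protocol evaluates under
encryption is the standard Needleman-Wunsch dynamic program; a minimal Python
version (with illustrative match/mismatch/gap costs) is:

```python
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Plain (unencrypted) Needleman-Wunsch alignment score between two
    scanpath strings; the paper's protocol evaluates the same recurrence
    on Paillier-encrypted operation costs."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = gap * np.arange(n + 1)
    D[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            D[i, j] = max(D[i - 1, j - 1] + s,   # substitution / match
                          D[i - 1, j] + gap,     # deletion
                          D[i, j - 1] + gap)     # insertion
    return D[n, m]

print(needleman_wunsch("ABCAD", "ABAD"))  # toy scanpaths over AOI labels
```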
|
Verification of PCP-Related Computational Reductions in Coq | We formally verify several computational reductions concerning the Post
correspondence problem (PCP) using the proof assistant Coq. Our verifications
include a reduction of a string rewriting problem generalising the halting
problem for Turing machines to PCP, and reductions of PCP to the intersection
problem and the palindrome problem for context-free grammars. Interestingly,
rigorous correctness proofs for some of the reductions are missing in the
literature.
|
Fast clique minor generation in Chimera qubit connectivity graphs | The current generation of D-Wave quantum annealing processor is designed to
minimize the energy of an Ising spin configuration whose pairwise interactions
lie on the edges of a {\em Chimera} graph $\mathcal C_{M,N,L}$. In order to
solve an Ising spin problem with arbitrary pairwise interaction structure, the
corresponding graph must be minor-embedded into a Chimera graph. We define a
combinatorial class of {\em native clique minors} in Chimera graphs with vertex
images of uniform, near minimal size, and provide a polynomial-time algorithm
that finds a maximum native clique minor in a given induced subgraph of a
Chimera graph. These minors allow improvement over recent work and have
immediate practical applications in the field of quantum annealing.
|
Approximating Robot Configuration Spaces with few Convex Sets using
Clique Covers of Visibility Graphs | Many computations in robotics can be dramatically accelerated if the robot
configuration space is described as a collection of simple sets. For example,
recently developed motion planners rely on a convex decomposition of the free
space to design collision-free trajectories using fast convex optimization. In
this work, we present an efficient method for approximately covering complex
configuration spaces with a small number of polytopes. The approach constructs
a visibility graph using sampling and generates a clique cover of this graph to
find clusters of samples that have mutual line of sight. These clusters are
then inflated into large, full-dimensional, polytopes. We evaluate our method
on a variety of robotic systems and show that it consistently covers larger
portions of free configuration space, with fewer polytopes, and in a fraction
of the time compared to previous methods.
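
A toy 2D Python sketch of the pipeline (sample collision-free points, connect
mutually visible ones, cover the visibility graph with cliques via greedy
colouring of its complement) is shown below; the disc obstacle, the sampler,
and the use of networkx are illustrative stand-ins for the paper's robot
environments and its polytope-inflation step.

```python
import numpy as np
import networkx as nx

def segment_hits_disc(p, q, centre, radius):
    """True if segment p-q intersects a disc obstacle (toy collision check)."""
    d = q - p
    t = np.clip(np.dot(centre - p, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(p + t * d - centre) < radius

rng = np.random.default_rng(1)
obstacle_c, obstacle_r = np.array([0.5, 0.5]), 0.2

# sample collision-free configurations in the unit square (toy 2D C-space)
samples = []
while len(samples) < 80:
    x = rng.random(2)
    if np.linalg.norm(x - obstacle_c) >= obstacle_r:
        samples.append(x)
samples = np.array(samples)

# visibility graph: edge if the straight segment between samples is free
G = nx.Graph()
G.add_nodes_from(range(len(samples)))
for i in range(len(samples)):
    for j in range(i + 1, len(samples)):
        if not segment_hits_disc(samples[i], samples[j], obstacle_c, obstacle_r):
            G.add_edge(i, j)

# clique cover = colouring of the complement graph; each colour class is a
# mutually visible cluster that would then be inflated into a polytope
colour = nx.coloring.greedy_color(nx.complement(G), strategy="largest_first")
print("samples:", len(samples), "cliques:", len(set(colour.values())))
```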
|
An Optical Trap for Collisional Studies on Cold Fermionic Potassium | We report on trapping of fermionic 40K atoms in a red-detuned standing-wave
optical trap, loaded from a magneto-optical trap. Typically, 10^6 atoms are
loaded at a density of 10^12 cm^-3 and a temperature of 65 microK, and trapped
for more than 1 s. The optical trap appears to be the proper environment for
performing collisional measurements on the cold atomic sample. In particular we
measure the elastic collisional rate by detecting the rethermalization
following an intentional parametric heating of the atomic sample. We also
measure the inelastic two-body collisional rates for unpolarized atoms in the
ground hyperfine states, through detection of trap losses.
|
EMO: Emote Portrait Alive -- Generating Expressive Portrait Videos with
Audio2Video Diffusion Model under Weak Conditions | In this work, we tackle the challenge of enhancing the realism and
expressiveness in talking head video generation by focusing on the dynamic and
nuanced relationship between audio cues and facial movements. We identify the
limitations of traditional techniques that often fail to capture the full
spectrum of human expressions and the uniqueness of individual facial styles.
To address these issues, we propose EMO, a novel framework that utilizes a
direct audio-to-video synthesis approach, bypassing the need for intermediate
3D models or facial landmarks. Our method ensures seamless frame transitions
and consistent identity preservation throughout the video, resulting in highly
expressive and lifelike animations. Experimental results demonstrate that EMO is
able to produce not only convincing speaking videos but also singing videos in
various styles, significantly outperforming existing state-of-the-art
methodologies in terms of expressiveness and realism.
|
Discrete solution of the electrokinetic equations | We present a robust scheme for solving the electrokinetic equations. This
goal is achieved by combining the lattice-Boltzmann method (LB) with a discrete
solution of the convection-diffusion equation for the different charged and
neutral species that compose the fluid. The method is based on identifying the
elementary fluxes between nodes, which ensures the absence of spurious fluxes
in equilibrium. We show how the model is suitable to study electro-osmotic
flows. As an illustration, we show that, by introducing appropriate dynamic
rules in the presence of solid interfaces, we can compute the sedimentation
velocity (and hence the sedimentation potential) of a charged sphere. Our
approach does not assume linearization of the Poisson-Boltzmann equation and
allows for a wide variation of the Peclet number.
|
Reassembling the English novel, 1789-1919 | The absence of an exhaustive bibliography of novels published in the British
Isles and Ireland during the 19th century blocks several lines of research in
sociologically-inclined literary history and book history. Without a detailed
account of novelistic production, it is difficult to characterize, for example,
the population of individuals who pursued careers as novelists. This paper
contributes to efforts to develop such an account by estimating yearly rates of
new novel publication in the British Isles and Ireland between 1789 and 1919.
This period witnessed, in aggregate, the publication of between 40,000 and
63,000 previously unpublished novels. The number of new novels published each
year counts as essential information for researchers interested in
understanding the development of the text industry between 1789 and 1919.
|
Optimal Few-GHW Linear Codes and Their Subcode Support Weight
Distributions | Few-weight codes have been constructed and studied for many years, owing to
their fascinating relations to finite geometries, strongly regular graphs and
Boolean functions. Simplex codes are one-weight Griesmer $[\frac{q^k-1}{q-1},k
,q^{k-1}]_q$-linear codes and they meet all Griesmer bounds of the generalized
Hamming weights of linear codes. All the subcodes with dimension $r$ of a
$[\frac{q^k-1}{q-1},k ,q^{k-1}]_q$-simplex code have the same subcode support
weight $\frac{q^{k-r}(q^r-1)}{q-1}$ for $1\leq r\leq k$. In this paper, we
construct linear codes meeting the Griesmer bound of the $r$-generalized
Hamming weight; such codes do not meet the Griesmer bound of the
$j$-generalized Hamming weight for $1\leq j<r$. Moreover, these codes have only
few subcode support weights. The weight distribution and the subcode support
weight distributions of these distance-optimal codes are determined. Linear
codes constructed in this paper are natural generalizations of distance-optimal
few-weight codes.
|
Non-Transferable Utility Coalitional Games via Mixed-Integer Linear
Constraints | Coalitional games serve the purpose of modeling payoff distribution problems
in scenarios where agents can collaborate by forming coalitions in order to
obtain higher worths than by acting in isolation. In the classical Transferable
Utility (TU) setting, coalition worths can be freely distributed amongst
agents. However, in several application scenarios, this is not the case and the
Non-Transferable Utility setting (NTU) must be considered, where additional
application-oriented constraints are imposed on the possible worth
distributions. In this paper, an approach to define NTU games is proposed which
is based on describing allowed distributions via a set of mixed-integer linear
constraints applied to an underlying TU game. It is shown that such games allow
non-transferable conditions on worth distributions to be specified in a natural
and succinct way. The properties and the relationships among the most prominent
solution concepts for NTU games that hold when they are applied on
(mixed-integer) constrained games are investigated. Finally, a thorough
analysis is carried out to assess the impact of issuing constraints on the
computational complexity of some of these solution concepts.
|
Augment on Manifold: Mixup Regularization with UMAP | Data augmentation techniques play an important role in enhancing the
performance of deep learning models. Despite their proven benefits in computer
vision tasks, their application in other domains remains limited. This
paper proposes a Mixup regularization scheme, referred to as UMAP Mixup,
designed for ``on-manifold" automated data augmentation for deep learning
predictive models. The proposed approach ensures that the Mixup operations
result in synthesized samples that lie on the data manifold of the features and
labels by utilizing a dimensionality reduction technique known as uniform
manifold approximation and projection. Evaluations across diverse regression
tasks show that UMAP Mixup is competitive with or outperforms other Mixup
variants, showing promise as an effective tool for enhancing the
generalization performance of deep learning models.
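
A rough, hypothetical sketch of what "on-manifold" mixing can look like:
interpolate each sample only with a point that is close in a UMAP embedding of
the features, so the synthetic pairs stay near the data manifold. This is a
conceptual illustration, not the paper's exact procedure, and the function
name and hyperparameters are assumptions.

```python
import numpy as np
import umap  # umap-learn

def umap_mixup_batch(X, y, n_neighbors=10, alpha=0.4, rng=np.random.default_rng()):
    """Mix each (feature, target) pair only with a neighbour that is close in
    a UMAP embedding of the features, so mixed samples are less likely to
    leave the data manifold."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    emb = umap.UMAP(n_neighbors=n_neighbors).fit_transform(X)
    X_mix, y_mix = X.copy(), y.copy()
    for i in range(len(X)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        j = np.argsort(d)[rng.integers(1, n_neighbors + 1)]  # nearby point in UMAP space
        lam = rng.beta(alpha, alpha)                          # mixup coefficient
        X_mix[i] = lam * X[i] + (1 - lam) * X[j]
        y_mix[i] = lam * y[i] + (1 - lam) * y[j]
    return X_mix, y_mix

# toy regression data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X[:, 0] ** 2
Xm, ym = umap_mixup_batch(X, y, rng=rng)
```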
|
Modeling film flows down a fibre influenced by nozzle geometry | We study the effects of nozzle geometry on the dynamics of thin fluid films
flowing down a vertical cylindrical fibre. Recent experiments show that varying
the nozzle diameter can lead to different flow regimes and droplet
characteristics in the film. Using a weighted residual modeling approach, we
develop a system of coupled equations that account for inertia, surface tension
effects, gravity, and a film stabilization mechanism to describe both
near-nozzle fluid structures and downstream bead dynamics. We report good
agreement between the predicted droplet properties and the experimental data.
|
Thutmose Tagger: Single-pass neural model for Inverse Text Normalization | Inverse text normalization (ITN) is an essential post-processing step in
automatic speech recognition (ASR). It converts numbers, dates, abbreviations,
and other semiotic classes from the spoken form generated by ASR to their
written forms. One can consider ITN as a Machine Translation task and use
neural sequence-to-sequence models to solve it. Unfortunately, such neural
models are prone to hallucinations that could lead to unacceptable errors. To
mitigate this issue, we propose a single-pass token classifier model that
regards ITN as a tagging task. The model assigns a replacement fragment to
every input token or marks it for deletion or copying without changes. We
present a dataset preparation method based on the granular alignment of ITN
examples. The proposed model is less prone to hallucination errors. The model
is trained on the Google Text Normalization dataset and achieves
state-of-the-art sentence accuracy on both English and Russian test sets.
One-to-one correspondence between tags and input words improves the
interpretability of the model's predictions, simplifies debugging, and allows
for post-processing corrections. The model is simpler than sequence-to-sequence
models and easier to optimize in production settings. The model and the code to
prepare the dataset are published as part of the NeMo project.
|
Complex motion of precipitation bands | Formation and dynamics of an Al(OH)_3 precipitation ring is studied by
diffusing NaOH into a gel containing AlCl_3. Limited feeding of the outer
electrolyte (NaOH) is found to yield an intricate ring-dynamics which involves
stopping and reversal of the direction of motion of the precipitation ring, and
evolution into stationary multi-ring structures. A model of the ring-dynamics
is developed by combining a phase separation scenario for the precipitation
with the redissolution (complex formation) of the precipitate in the excess of
the outer electrolyte.
|
A 4th-Order Particle-in-Cell Method with Phase-Space Remapping for the
Vlasov-Poisson Equation | Numerical solutions to the Vlasov-Poisson system of equations have important
applications to both plasma physics and cosmology. In this paper, we present a
new Particle-in-Cell (PIC) method for solving this system that is 4th-order
accurate in both space and time. Our method is a high-order extension of one
presented previously [B. Wang, G. Miller, and P. Colella, SIAM J. Sci. Comput.,
33 (2011), pp. 3509--3537]. It treats all of the stages of the standard PIC
update - charge deposition, force interpolation, the field solve, and the
particle push - with 4th-order accuracy, and includes a 6th-order accurate
phase-space remapping step for controlling particle noise. We demonstrate the
convergence of our method on a series of one- and two- dimensional
electrostatic plasma test problems, comparing its accuracy to that of a
2nd-order method. As expected, the 4th-order method can achieve comparable
accuracy to the 2nd-order method with many fewer resolution elements.
|
Reduced-order modeling of two-dimensional turbulent Rayleigh-B\'enard
flow by hybrid quantum-classical reservoir computing | Two hybrid quantum-classical reservoir computing models are presented to
reproduce low-order statistical properties of a two-dimensional turbulent
Rayleigh-B\'enard convection flow at a Rayleigh number Ra=1e+5 and a Prandtl
number Pr=10. These properties comprise the mean vertical profiles of the root
mean square velocity and temperature and the turbulent convective heat flux.
Both quantum algorithms differ by the arrangement of the circuit layers of the
quantum reservoir, in particular the entanglement layers. The second of the two
quantum circuit architectures, denoted as H2, enables a complete execution of
the reservoir update inside the quantum circuit without the usage of external
memory. Their performance is compared with that of a classical reservoir
computing model. To this end, all three models have to learn the nonlinear and
chaotic dynamics of the turbulent flow at hand in a lower-dimensional latent
data space which is spanned by the time-dependent expansion coefficients of the
16 most energetic Proper Orthogonal Decomposition (POD) modes. These training
data are generated by a POD snapshot analysis from direct numerical simulations
of the original turbulent flow. All reservoir computing models are operated in
the reconstruction mode. We analyse different measures of the reconstruction
error in dependence on the hyperparameters which are specific for the quantum
cases or shared with the classical counterpart, such as the reservoir size and
the leaking rate. We show that both quantum algorithms are able to reconstruct
the essential statistical properties of the turbulent convection flow
successfully with similar performance compared to the classical reservoir
network. Most importantly, the quantum reservoirs are by a factor of 4 to 8
smaller in comparison to the classical case.
|
Security Evaluation for Block Scrambling-Based Image Encryption
Including JPEG Distortion against Jigsaw Puzzle Solver Attacks | Encryption-then-Compression (EtC) systems have been considered for the
user-controllable privacy protection of social media like Twitter. The aim of
this paper is to evaluate the security of block scrambling-based encryption
schemes, which have been proposed to construct EtC systems. Even though these
schemes have large enough key spaces to resist brute-force attacks, each block
in encrypted images retains almost the same correlation as in the original
images. Therefore, their security must be considered from viewpoints different
from those of number-theory-based encryption methods with provable security,
such as RSA and AES. In this paper, we evaluate the security of encrypted images including
JPEG distortion by using automatic jigsaw puzzle solvers.
|
Less is More: ClipBERT for Video-and-Language Learning via Sparse
Sampling | The canonical approach to video-and-language learning (e.g., video question
answering) dictates a neural model to learn from offline-extracted dense video
features from vision models and text features from language models. These
feature extractors are trained independently and usually on tasks different
from the target domains, rendering these fixed features sub-optimal for
downstream tasks. Moreover, due to the high computational overhead of dense
video features, it is often difficult (or infeasible) to plug feature
extractors directly into existing approaches for easy finetuning. To provide a
remedy to this dilemma, we propose a generic framework ClipBERT that enables
affordable end-to-end learning for video-and-language tasks, by employing
sparse sampling, where only a single or a few sparsely sampled short clips from
a video are used at each training step. Experiments on text-to-video retrieval
and video question answering on six datasets demonstrate that ClipBERT
outperforms (or is on par with) existing methods that exploit full-length
videos, suggesting that end-to-end learning with just a few sparsely sampled
clips is often more accurate than using densely extracted offline features from
full-length videos, proving the proverbial less-is-more principle. Videos in
the datasets are from considerably different domains and lengths, ranging from
3-second generic domain GIF videos to 180-second YouTube human activity videos,
showing the generalization ability of our approach. Comprehensive ablation
studies and thorough analyses are provided to dissect what factors lead to this
success. Our code is publicly available at https://github.com/jayleicn/ClipBERT
|
Skeptical Deep Learning with Distribution Correction | Recently deep neural networks have been successfully used for various
classification tasks, especially for problems with massive perfectly labeled
training data. However, it is often costly to have large-scale credible labels
in real-world applications. One solution is to make supervised learning robust
with imperfectly labeled input. In this paper, we develop a distribution
correction approach that allows deep neural networks to avoid overfitting
imperfect training data. Specifically, we treat the noisy input as samples from
an incorrect distribution, which will be automatically corrected during our
training process. We test our approach on several classification datasets with
elaborately generated noisy labels. The results show significantly higher
prediction and recovery accuracy with our approach compared to alternative
methods.
|
PCPATCH: software for the topological construction of multigrid
relaxation methods | Effective relaxation methods are necessary for good multigrid convergence.
For many equations, standard Jacobi and Gau{\ss}-Seidel are inadequate, and
more sophisticated space decompositions are required; examples include problems
with semidefinite terms or saddle point structure. In this paper we present a
unifying software abstraction, PCPATCH, for the topological construction of
space decompositions for multigrid relaxation methods. Space decompositions are
specified by collecting topological entities in a mesh (such as all vertices or
faces) and applying a construction rule (such as taking all degrees of freedom
in the cells around each entity). The software is implemented in PETSc and
facilitates the elegant expression of a wide range of schemes merely by varying
solver options at runtime. In turn, this allows for the very rapid development
of fast solvers for difficult problems.
|
AI-enhanced on-the-fly simulation of nonlinear time-resolved spectra | Time-resolved spectroscopy is an important tool for unraveling the minute
details of structural changes of molecules of biological and technological
significance. The nonlinear femtosecond signals detected for such systems must
be interpreted, but it is a challenging task for which theoretical simulations
are often indispensable. Accurate simulations of transient-absorption or
two-dimensional electronic spectra are, however, computationally very
expensive, prohibiting the wider adoption of existing first-principles methods.
Here, we report an AI-enhanced protocol to drastically reduce the computational
cost of simulating nonlinear time-resolved electronic spectra which makes such
simulations affordable for polyatomic molecules of increasing size. The
protocol is based on the doorway-window approach for on-the-fly surface-hopping
simulations. We show its applicability for the prototypical molecule pyrazine,
for which it produces spectra with high precision with respect to the ab
initio reference while cutting the computational cost by at least 95% compared
to pure first-principles simulations.
|
Phi-3 Safety Post-Training: Aligning Language Models with a "Break-Fix"
Cycle | Recent innovations in language model training have demonstrated that it is
possible to create highly performant models that are small enough to run on a
smartphone. As these models are deployed in an increasing number of domains, it
is critical to ensure that they are aligned with human preferences and safety
considerations. In this report, we present our methodology for safety aligning
the Phi-3 series of language models. We utilized a "break-fix" cycle,
performing multiple rounds of dataset curation, safety post-training,
benchmarking, red teaming, and vulnerability identification to cover a variety
of harm areas in both single and multi-turn scenarios. Our results indicate
that this approach iteratively improved the performance of the Phi-3 models
across a wide range of responsible AI benchmarks. Finally, we include
additional red teaming strategies and evaluations that were used to test the
safety behavior of Phi-3.5-mini and Phi-3.5-MoE, which were optimized for
multilingual capabilities.
|
Truly Generalizable Radiograph Segmentation with Conditional Domain
Adaptation | Digitization techniques for biomedical images yield different visual patterns
in radiological exams. These differences may hamper the use of data-driven
approaches for inference over these images, such as Deep Neural Networks.
Another noticeable difficulty in this field is the lack of labeled data, even
though in many cases there is an abundance of unlabeled data available.
Therefore an important step in improving the generalization capabilities of
these methods is to perform Unsupervised and Semi-Supervised Domain Adaptation
between different datasets of biomedical images. In order to tackle this
problem, in this work we propose an Unsupervised and Semi-Supervised Domain
Adaptation method for segmentation of biomedical images using Generative
Adversarial Networks for Unsupervised Image Translation. We merge these
unsupervised networks with supervised deep semantic segmentation architectures
in order to create a semi-supervised method capable of learning from both
unlabeled and labeled data, whenever labeling is available. We compare our
method using several domains, datasets, segmentation tasks and traditional
baselines, such as unsupervised distance-based methods and reusing pretrained
models both with and without Fine-tuning. We perform both quantitative and
qualitative analysis of the proposed method and baselines in the distinct
scenarios considered in our experimental evaluation. The proposed method shows
consistently better results than the baselines in scarce labeled data
scenarios, achieving Jaccard values greater than 0.9 and good segmentation
quality in most tasks. Unsupervised Domain Adaptation results were observed to
be close to the Fully Supervised Domain Adaptation used in the traditional
procedure of Fine-tuning pretrained networks.
|
Evidence for a photon mass | The author's work over the past years has indicated that the photon has a
small mass $\sim 10^{-33}eV$. Recent observations from three different
viewpoints -- the time lag in cosmic gamma rays with different frequencies, the
observation of the spectra of blazars and an analysis of the CMB power
suppression from the WMAP data -- all vindicate this conclusion and, remarkably,
the same value.
|
Approximate Gram-Matrix Interpolation for Wideband Massive MU-MIMO
Systems | Numerous linear and non-linear data-detection and precoding algorithms for
wideband massive multi-user (MU) multiple-input multiple-output (MIMO) wireless
systems that rely on orthogonal frequency-division multiplexing (OFDM) or
single-carrier frequency-division multiple access (SC-FDMA) require the
computation of the Gram matrix for each active subcarrier. Computing the Gram
matrix for each active subcarrier, however, results in excessively high
computational complexity. In this paper, we propose novel, approximate
algorithms that significantly reduce the complexity of Gram-matrix computation
by simultaneously exploiting correlation across subcarriers and channel
hardening. We show analytically that a small fraction of Gram-matrix
computations in combination with approximate interpolation schemes are
sufficient to achieve near-optimal error-rate performance at low computational
complexity in massive MU-MIMO systems. We also demonstrate that the proposed
methods exhibit improved robustness against channel-estimation errors compared
to exact Gram-matrix interpolation algorithms that typically require high
computational complexity.
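
The simplest instance of the idea can be sketched as follows (exact Gram
computation on a coarse subcarrier grid, linear interpolation in between); the
array shapes and the interpolation step size are illustrative, and the paper
develops and analyses more refined schemes.

```python
import numpy as np

def interpolated_grams(H, step=8):
    """Approximate per-subcarrier Gram matrices G_k = H_k^H H_k by computing
    them exactly every `step` subcarriers and linearly interpolating between
    those anchors.

    H : (K, B, U) channel matrices for K subcarriers, B BS antennas, U users.
    """
    K = H.shape[0]
    base = np.arange(0, K, step)
    G_exact = {k: H[k].conj().T @ H[k] for k in base}
    G = np.empty((K, H.shape[2], H.shape[2]), dtype=complex)
    for k in range(K):
        lo = base[base <= k].max()
        hi = base[base >= k].min() if (base >= k).any() else lo
        if lo == hi:
            G[k] = G_exact[lo]
        else:
            w = (k - lo) / (hi - lo)
            G[k] = (1 - w) * G_exact[lo] + w * G_exact[hi]
    return G

# toy wideband massive MU-MIMO setting: 64 subcarriers, 32 antennas, 8 users
rng = np.random.default_rng(0)
H = (rng.standard_normal((64, 32, 8)) + 1j * rng.standard_normal((64, 32, 8))) / np.sqrt(2)
G = interpolated_grams(H, step=8)
```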
|
Computational Geometry Column 39 | The resolution of a decades-old open problem is described: polygonal chains
cannot lock in the plane.
|
More Than Meets The Eye: Semi-supervised Learning Under Non-IID Data | A common heuristic in semi-supervised deep learning (SSDL) is to select
unlabelled data based on a notion of semantic similarity to the labelled data.
For example, labelled images of numbers should be paired with unlabelled images
of numbers instead of, say, unlabelled images of cars. We refer to this
practice as semantic data set matching. In this work, we demonstrate the limits
of semantic data set matching. We show that it can sometimes even degrade the
performance of a state-of-the-art SSDL algorithm. We present and make
available a comprehensive simulation sandbox, called non-IID-SSDL, for stress
testing an SSDL algorithm under different degrees of distribution mismatch
between the labelled and unlabelled data sets. In addition, we demonstrate that
simple density based dissimilarity measures in the feature space of a generic
classifier offer a promising and more reliable quantitative matching criterion
to select unlabelled data before SSDL training.
|
Beyond Observed Connections: Link Injection | In this paper, we propose \textit{link injection}, a novel method that
helps any differentiable graph machine learning model go beyond the observed
connections in the input data in an end-to-end learning fashion. It finds
(weak) connections in favor of the current task that are not present in the
input data via a parametric link injection layer. We evaluate our method on
both node classification and link prediction tasks using a series of
state-of-the-art graph convolution networks. Results show that the link
injection helps a variety of models to achieve better performances on both
applications. Further empirical analysis shows the great potential of this
method for efficiently exploiting unseen connections through the injected links.
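
Since the abstract does not give the exact parameterisation, the following
PyTorch sketch only illustrates the idea of a parametric link-injection layer:
a learnable, sigmoid-gated matrix of candidate edges added to the observed
adjacency before a simple graph-convolution-style propagation. The class
names and the propagation rule are hypothetical.

```python
import torch
import torch.nn as nn

class LinkInjection(nn.Module):
    """Learns a dense matrix of candidate edge weights and adds it (gated
    through a sigmoid) to the observed adjacency before message passing."""
    def __init__(self, num_nodes, init=-4.0):
        super().__init__()
        # negative init -> injected links start near zero
        self.logits = nn.Parameter(torch.full((num_nodes, num_nodes), init))

    def forward(self, adj):
        injected = torch.sigmoid(self.logits)
        injected = 0.5 * (injected + injected.T)   # keep the graph undirected
        return adj + injected

class InjectedGCNLayer(nn.Module):
    """One graph-convolution-style layer operating on the augmented adjacency."""
    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        self.inject = LinkInjection(num_nodes)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = self.inject(adj)
        deg = a.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return torch.relu(self.lin((a / deg) @ x))  # row-normalised propagation

# usage: 5 nodes, 3 input features; gradients flow into the injected links
adj = torch.eye(5)
x = torch.randn(5, 3)
layer = InjectedGCNLayer(num_nodes=5, in_dim=3, out_dim=4)
out = layer(x, adj)
```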
|
Stability Analysis of Piecewise Affine Systems with Multi-model Model
Predictive Control | Constrained model predictive control (MPC) is a widely used control strategy,
which employs moving horizon-based on-line optimisation to compute the optimum
path of the manipulated variables. Nonlinear MPC can utilize detailed models
but it is computationally expensive; on the other hand linear MPC may not be
adequate. Piecewise affine (PWA) models can describe the underlying nonlinear
dynamics more accurately, therefore they can provide a viable trade-off through
their use in multi-model linear MPC configurations, which avoid integer
programming. However, such schemes may introduce uncertainty affecting the
closed loop stability. In this work, we propose an input to output stability
analysis for closed loop systems, consisting of PWA models, where an observer
and multi-model linear MPC are applied together, under unstructured
uncertainty. Integral quadratic constraints (IQCs) are employed to assess the
robustness of MPC under uncertainty. We create a model pool, by performing
linearisation on selected transient points. All the possible uncertainties and
nonlinearities (including the controller) can be introduced in the framework,
assuming that they admit the appropriate IQCs, whilst the dissipation
inequality can provide necessary conditions incorporating IQCs. We demonstrate
the existence of static multipliers, which can reduce the conservatism of the
stability analysis significantly. The proposed methodology is demonstrated
through two engineering case studies.
|
The motion of two identical masses connected by an ideal string
symmetrically placed over a corner | We introduce a novel, two-mass system that slides up an inclined plane while
its center of mass moves down. The system consists of two identical masses
connected by an ideal string symmetrically placed over a corner-shaped support.
This system is similar to a double-cone that rolls up an inclined set of
V-shaped rails. We find the double-cone's motion easy to demonstrate but
difficult to analyze. Our example here is more straightforward to follow, and
the experimental observations are in good agreement with the theoretical
predictions.
|
Confidence Intervals for Testing Disparate Impact in Fair Learning | We provide the asymptotic distribution of the major indexes used in the
statistical literature to quantify disparate treatment in machine learning. We
aim at promoting the use of confidence intervals when testing the so-called
group disparate impact. We illustrate on some examples the importance of using
confidence intervals and not a single value.
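
To make the recommendation concrete, here is a generic Python sketch that
reports a confidence interval for the disparate impact ratio; note that it
uses a bootstrap rather than the asymptotic distributions derived in the
paper, and the group coding and confidence level are illustrative.

```python
import numpy as np

def disparate_impact_ci(y_pred, group, n_boot=2000, level=0.95,
                        rng=np.random.default_rng(0)):
    """Bootstrap confidence interval for the disparate impact ratio
    P(Yhat=1 | group=0) / P(Yhat=1 | group=1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)

    def di(idx):
        p0 = y_pred[idx][group[idx] == 0].mean()
        p1 = y_pred[idx][group[idx] == 1].mean()
        return p0 / p1

    n = len(y_pred)
    stats = [di(rng.integers(0, n, size=n)) for _ in range(n_boot)]
    lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    return di(np.arange(n)), (lo, hi)

# toy data: group 0 receives the favourable prediction less often
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=500)
y_pred = rng.random(500) < np.where(group == 0, 0.4, 0.6)
print(disparate_impact_ci(y_pred, group))
```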
|
Introducing Randomized High Order Fuzzy Cognitive Maps as Reservoir
Computing Models: A Case Study in Solar Energy and Load Forecasting | Fuzzy Cognitive Maps (FCMs) have emerged as an interpretable signed weighted
digraph method consisting of nodes (concepts) and weights which represent the
dependencies among the concepts. Although FCMs have attained considerable
achievements in various time series prediction applications, designing an FCM
model with a time-efficient training method is still an open challenge. Thus,
this paper introduces a novel univariate time series forecasting technique
composed of a group of randomized high-order FCM models labeled
R-HFCM. The novelty of the proposed R-HFCM model lies in merging the
concepts of FCM and Echo State Network (ESN) as an efficient and particular
family of Reservoir Computing (RC) models, where the least squares algorithm is
applied to train the model. From another perspective, the structure of R-HFCM
consists of an input layer, a reservoir layer, and an output layer, in which
only the output layer is trainable, while the weights of each sub-reservoir
component are selected randomly and kept constant during the training process.
As case studies, this model considers solar energy forecasting with public data
from Brazilian solar stations as well as a Malaysian dataset, which includes
hourly electric load and temperature data from the power supply company of the
city of Johor in Malaysia. The experiments also examine the effect of the map size,
activation function, the presence of bias and the size of the reservoir on the
accuracy of the R-HFCM method. The obtained results confirm that the proposed
R-HFCM model outperforms the other methods. This study
provides evidence that FCM can be a new way to implement a reservoir of
dynamics in time series modelling.
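
The reservoir-computing ingredient (only the readout is trained, by least
squares, while the internal weights stay random and fixed) can be sketched in
a few lines of Python; the ESN-style reservoir below is a generic stand-in for
one R-HFCM sub-reservoir, and all parameter values are illustrative.

```python
import numpy as np

def run_reservoir(u, n_res=100, leak=0.3, rho=0.9, rng=np.random.default_rng(0)):
    """Drive a random leaky reservoir with a scalar input series u and
    return the matrix of reservoir states (one row per time step)."""
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))     # fix spectral radius
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)

# one-step-ahead forecasting of a toy load-like series; only the readout is trained
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(1).standard_normal(len(t))
X = run_reservoir(series[:-1])
y = series[1:]
W_out, *_ = np.linalg.lstsq(X[200:], y[200:], rcond=None)  # least-squares readout, warm-up discarded
pred = X @ W_out
```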
|
Source localization using particle filtering on FPGA for robotic
navigation with imprecise binary measurement | Particle filtering is a recursive Bayesian estimation technique that has
gained popularity recently for tracking and localization applications. It uses
Monte Carlo simulation and has proven to be a very reliable technique to model
non-Gaussian and non-linear elements of physical systems. Particle filters
outperform various other traditional filters like Kalman filters in
non-Gaussian and non-linear settings due to their non-analytical and
non-parametric nature. However, a significant drawback of particle filters is
their computational complexity, which inhibits their use in real-time
applications with conventional CPU or DSP based implementation schemes. This
paper proposes a modification to the existing particle filter algorithm and
presents a high-speed, dedicated hardware architecture. The architecture
incorporates pipelining and parallelization in the design to reduce execution
time considerably. The design is validated for a source localization problem
wherein we estimate the position of a source in real-time using the particle
filter algorithm implemented on hardware. The validation setup relies on an
Unmanned Ground Vehicle (UGV) with a photodiode housing on top to sense and
localize a light source. We have prototyped the design using Artix-7
field-programmable gate array (FPGA), and resource utilization for the proposed
system is presented. Further, we show the execution time and estimation
accuracy of the high-speed architecture and observe a significant reduction in
computational time. Our implementation of particle filters on FPGA is scalable
and modular, with a low execution time of about 5.62 µs for processing 1024
particles and can be deployed for real-time applications.
|
Phase-separation transitions in asymmetric lipid bilayers | Morphological transitions of phase separation associated with the asymmetry
of lipid composition were investigated using micrometer-sized vesicles of lipid
bilayers made from a lipid mixture. The complete macro-phase-separated
morphology undergoes a transition to a micro-phase-separation-like morphology
via a lorate morphology as a metastable state. The transition leads to the
emergence of monodisperse nanosized domains through repeated domain scission
events. Moreover, we have numerically confirmed the transitions using the
time-dependent Ginzburg-Landau model describing phase separation and the
bending elastic membrane, which is quantitatively consistent with experimental
results by fixing one free parameter. Our findings suggest that the local
spontaneous curvature due to the asymmetric composition plays an essential role
in the thermodynamic stabilization of micro-phase separation in lipid bilayers.
|
HiVLP: Hierarchical Vision-Language Pre-Training for Fast Image-Text
Retrieval | In the past few years, the emergence of vision-language pre-training (VLP)
has brought cross-modal retrieval to a new era. However, due to the latency and
computation demand, it is commonly challenging to apply VLP in a real-time
online retrieval system. To alleviate the defect, this paper proposes a
\textbf{Hi}erarchical \textbf{V}ision-\textbf{L}anguage \textbf{P}re-Training
(\textbf{HiVLP}) for fast Image-Text Retrieval (ITR). Specifically, we design a
novel hierarchical retrieval objective, which uses the representation of
different dimensions for coarse-to-fine ITR, i.e., using low-dimensional
representation for large-scale coarse retrieval and high-dimensional
representation for small-scale fine retrieval. We evaluate our proposed HiVLP
on two popular image-text retrieval benchmarks, i.e., Flickr30k and COCO.
Extensive experiments demonstrate that our HiVLP not only has fast inference
speed but also can be easily scaled to large-scale ITR scenarios. The detailed
results show that HiVLP is $1,427$$\sim$$120,649\times$ faster than the
fusion-based model UNITER and 2$\sim$5$\times$ faster than the fastest
embedding-based model LightingDot in different candidate scenarios. It also
achieves about +4.9 AR on COCO and +3.8 AR on Flickr30K over LightingDot, and
achieves comparable performance to the state-of-the-art (SOTA) fusion-based
model METER.
|
A Social Distancing-Based Facility Location Approach for Combating
COVID-19 | In this paper, we introduce and study the problem of facility location along
with the notion of \emph{`social distancing'}. The input to the problem is the
road network of a city, where the nodes are the residential zones and the edges
are the road segments connecting the zones, along with their respective
distances. We also have information about the population of each zone, the
different types of facilities to be opened and their numbers, and their
respective demands in each zone. The goal of the problem is to locate the
facilities such that the
people can be served and at the same time the total social distancing is
maximized. We formally call this problem the \textsc{Social Distancing-Based
Facility Location Problem}. We mathematically quantify social distancing for a
given allocation of facilities and propose an optimization model. As the
problem is \textsf{NP-Hard}, we propose a simulation-based approach and a
heuristic approach for solving it. A detailed analysis of both methods has been
done. We perform an extensive set of experiments with synthetic datasets. From
the results, we observe that the proposed heuristic approach leads to a better
allocation compared to the simulation-based approach.
|
Deliverable navigation for multicriteria step and shoot IMRT treatment
planning | We consider Pareto surface based multi-criteria optimization for step and
shoot IMRT planning. By analyzing two navigation algorithms, we show both
theoretically and in practice that the number of plans needed to form convex
combinations of plans during navigation can be kept small (much less than the
theoretical maximum number needed in general, which is equal to the number of
objectives for on-surface Pareto navigation). Therefore a workable approach for
directly deliverable navigation in this setting is to segment the underlying
Pareto surface plans and then enforce the mild restriction that only a small
number of these plans are active at any time during plan navigation, thus
limiting the total number of segments used in the final plan.
|
Unraveling Privacy Threat Modeling Complexity: Conceptual Privacy
Analysis Layers | Analyzing privacy threats in software products is an essential part of
software development to ensure systems are privacy-respecting; yet it is still
a far from trivial activity. While there have been many advancements in the
past decade, they tend to focus on describing 'what' the threats are. What
isn't entirely clear yet is 'how' to actually find these threats. Privacy is a
complex domain. We propose to use four conceptual layers (feature, ecosystem,
business context, and environment) to capture this privacy complexity. These
layers can be used as a frame to structure and specify the privacy analysis
support in a more tangible and actionable way, thereby improving applicability
of the analysis process.
|
TMCI with Resonator Wakes | Transverse mode-coupling instability (TMCI) with a high-frequency resonator
wake is examined by the Nested Head-Tail Vlasov solver (NHT), where a Gaussian
bunch in a parabolic potential (GP model) is represented by concentric rings in
the longitudinal phase space. It is shown that multiple mode couplings and
decouplings make impossible an unambiguous definition of the threshold, unless
Landau damping is taken into account. To address this problem, instead of a
single instability threshold, an interval of thresholds is suggested, bounded
by the low and high intensity ones. For the broadband impedance model, the high
intensity threshold is shown to follow Zotter's scaling, but smaller by about a
factor of two. The same scaling, this time smaller than Zotter's by a factor of
four, is found for the ABS model (Air Bag Square well).
|
Multiorbital Quantum Impurity Solver for General Interactions and
Hybridizations | We present a numerically exact Inchworm Monte Carlo method for equilibrium
multiorbital quantum impurity problems with general interactions and
hybridizations. We show that the method, originally developed to overcome the
dynamical sign problem in certain real-time propagation problems, can also
overcome the sign problem as a function of temperature for equilibrium quantum
impurity models. This is shown in several cases where the current method of
choice, the continuous-time hybridization expansion, fails due to the sign
problem. Our method therefore enables simulations of impurity problems as they
appear in embedding theories without further approximations, such as the
truncation of the hybridization or interaction structure or a discretization of
the impurity bath with a set of discrete energy levels, and eliminates a
crucial bottleneck in the simulation of ab initio embedding problems.
|
Unfolding the procedure of characterizing recorded ultra low frequency,
kHz and MHz electromagnetic anomalies prior to the L'Aquila earthquake as
pre-seismic ones. Part I | Ultra low frequency, kHz and MHz electromagnetic anomalies were recorded
prior to the L'Aquila catastrophic earthquake that occurred on April 6, 2009.
The main aims of this contribution are: (i) To suggest a procedure for the
designation of detected EM anomalies as seismogenic ones. We do not expect to
be possible to provide a succinct and solid definition of a pre-seismic EM
emission. Instead, we attempt, through a multidisciplinary analysis, to provide
elements of a definition. (ii) To link the detected MHz and kHz EM anomalies
with equivalent last stages of the L'Aquila earthquake preparation process.
(iii) To put forward physically meaningful arguments to support a way of
quantifying the time to global failure and the identification of distinguishing
features beyond which the evolution towards global failure becomes
irreversible. The whole effort is unfolded in two consecutive parts. We clarify
that we try to specify not only whether a single EM anomaly is pre-seismic in
itself, but mainly whether a combination of kHz, MHz, and ULF EM anomalies can
be characterized as a pre-seismic one.
|
BLOFF: A Blockchain based Forensic Model in IoT | In this era of explosive growth in technology, the internet of things (IoT)
has become the game changer when we consider technologies like smart homes and
cities, smart energy, security and surveillance, and healthcare. The numerous
benefits provided by IoT have become attractive technologies for users and
cybercriminals. Cybercriminals of today have the tools and the technology to
deploy millions of sophisticated attacks. These attacks need to be
investigated; this is where digital forensics comes into play. However, it is
not easy to conduct a forensic investigation in IoT systems because of the
heterogeneous nature of the IoT environment. Additionally, forensic
investigators mostly rely on evidence from service providers, a situation that
can lead to evidence contamination. To solve this problem, the authors proposed
a blockchain-based IoT forensic model that prevents the admissibility of
tampered logs into evidence.
|
Capacity-achieving Sparse Superposition Codes via Approximate Message
Passing Decoding | Sparse superposition codes were recently introduced by Barron and Joseph for
reliable communication over the AWGN channel at rates approaching the channel
capacity. The codebook is defined in terms of a Gaussian design matrix, and
codewords are sparse linear combinations of columns of the matrix. In this
paper, we propose an approximate message passing decoder for sparse
superposition codes, whose decoding complexity scales linearly with the size of
the design matrix. The performance of the decoder is rigorously analyzed and it
is shown to asymptotically achieve the AWGN capacity with an appropriate power
allocation. Simulation results are provided to demonstrate the performance of
the decoder at finite blocklengths. We introduce a power allocation scheme to
improve the empirical performance, and demonstrate how the decoding complexity
can be significantly reduced by using Hadamard design matrices.
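
A compact Python sketch of the decoder's core loop with flat power allocation
is shown below (pseudo-observation, section-wise softmax denoiser, residual
with Onsager correction); the empirical noise-variance estimate and the toy
parameters are illustrative, and the paper additionally covers tailored power
allocations and Hadamard designs.

```python
import numpy as np

def amp_sparc_decode(y, A, L, M, P, n_iter=25):
    """AMP decoder for a sparse superposition code with flat power allocation.

    y : (n,) received word, A : (n, L*M) design matrix with iid N(0, 1/n) entries,
    L : number of sections, M : columns per section, P : total codeword power.
    """
    n = len(y)
    c = np.sqrt(n * P / L)                     # nonzero value in each section
    beta = np.zeros(L * M)
    z = y.copy()
    for _ in range(n_iter):
        tau2 = np.dot(z, z) / n                # empirical effective noise variance
        s = beta + A.T @ z                     # pseudo-observation (test statistic)
        S = (s * c / tau2).reshape(L, M)       # section-wise softmax denoiser
        S -= S.max(axis=1, keepdims=True)      # numerical stability
        probs = np.exp(S)
        probs /= probs.sum(axis=1, keepdims=True)
        beta_new = (c * probs).reshape(-1)
        # residual with Onsager correction
        z = y - A @ beta_new + (z / tau2) * (P - np.dot(beta_new, beta_new) / n)
        beta = beta_new
    return beta.reshape(L, M).argmax(axis=1)   # hard decision per section

# toy usage at a rate well below capacity
rng = np.random.default_rng(0)
L, M, n, P, sigma2 = 32, 16, 400, 1.0, 0.25
A = rng.standard_normal((n, L * M)) / np.sqrt(n)
msg = rng.integers(0, M, size=L)
beta_true = np.zeros(L * M)
beta_true[np.arange(L) * M + msg] = np.sqrt(n * P / L)
y = A @ beta_true + np.sqrt(sigma2) * rng.standard_normal(n)
print((amp_sparc_decode(y, A, L, M, P) == msg).mean())  # fraction of sections decoded
```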
|
ContourDiff: Unpaired Image Translation with Contour-Guided Diffusion
Models | Accurately translating medical images across different modalities (e.g., CT
to MRI) has numerous downstream clinical and machine learning applications.
While several methods have been proposed to achieve this, they often prioritize
perceptual quality with respect to output domain features over preserving
anatomical fidelity. However, maintaining anatomy during translation is
essential for many tasks, e.g., when leveraging masks from the input domain to
develop a segmentation model with images translated to the output domain. To
address these challenges, we propose ContourDiff, a novel framework that
leverages domain-invariant anatomical contour representations of images. These
representations are simple to extract from images, yet form precise spatial
constraints on their anatomical content. We introduce a diffusion model that
converts contour representations of images from arbitrary input domains into
images in the output domain of interest. By applying the contour as a
constraint at every diffusion sampling step, we ensure the preservation of
anatomical content. We evaluate our method by training a segmentation model on
images translated from CT to MRI with their original CT masks and testing its
performance on real MRIs. Our method outperforms other unpaired image
translation methods by a significant margin, and does so without needing to
access any input domain information during training.
|
Mpemba Effect, Shechtman's Quasicrystals and Students' Exploring
Activities | In the 1960s, Tanzanian student Erasto Mpemba and his teacher published an
article titled "Cool?" in the journal Physics Education (Mpemba, E. B.,
Osborne, D. G.: Cool? Physics Education, vol. 4, 1969, pp. 172-175). In this
article they claimed that hot water freezes faster than cold water. The article
triggered not only a wave of discussion and further articles on the topic, but
also a whole series of new experiments intended to verify this apparent
thermodynamic absurdity and to find an adequate explanation. Here we review the
proposed explanations and offer some proposals for experimental student work in
this area. We present the Mpemba effect not only as a paradoxical physical
phenomenon, but also as a strong educational message that the Mpemba story
carries for teachers and their students. This message builds a bridge between
the phenomenon and the discovery for which the 2011 Nobel Prize in Chemistry
was awarded. It encourages a critical attitude toward traditional knowledge and
resilience in the investigative exploration of new things.
|
A Fast Eigen Solution for Homogeneous Quadratic Minimization with at
most Three Constraints | We propose an eigenvalue-based technique to solve the Homogeneous
Quadratically Constrained Quadratic Programming (HQCQP) problem with at most
three constraints, which arises in many signal processing problems.
Semi-Definite Relaxation (SDR)
is the only known approach and is computationally intensive. We study the
performance of the proposed fast eigen approach through simulations in the
context of MIMO relays and show that its solution converges to the one obtained
using the SDR approach, with a significant reduction in complexity.
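For intuition only, the snippet below handles the textbook single-constraint
special case (not the paper's method for up to three constraints): a
homogeneous problem min_x x^H A x subject to x^H B x = 1 with B positive
definite is solved by the generalized eigenvector of (A, B) associated with the
smallest generalized eigenvalue.

    import numpy as np
    from scipy.linalg import eigh

    def min_homogeneous_qcqp_1(A, B):
        # Solve min x^H A x  s.t.  x^H B x = 1, with A, B Hermitian and B > 0.
        # The minimizer is the generalized eigenvector for the smallest
        # generalized eigenvalue; the optimal value equals that eigenvalue.
        vals, vecs = eigh(A, B)                        # generalized eigendecomposition
        x = vecs[:, 0]                                 # eigenvector of smallest eigenvalue
        x = x / np.sqrt(np.real(np.conj(x) @ B @ x))   # enforce x^H B x = 1
        return x, vals[0]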
|
The diurnal cycle and temporal trends of surface winds | Winds play an essential role in the climate system. In this study, we analyze
the global pattern of the diurnal cycle of surface (10 m) winds from the ERA5
reanalysis data. We find that over the land and especially over sand dune
regions, the maximal wind speed and wind drift potential (DP) occur during the
hours around midday. However, over the ocean, the wind also peaks at night.
Using the sensible heat flux, we show that the weaker winds over land at night
are due to a nocturnal cooling that decouples upper atmospheric levels and
their associated stronger winds from the surface -- nocturnal cooling is much
smaller over the ocean. We also analyze wind data from more than 400
meteorological stations in the USA and find a similar diurnal trend as in the
reanalysis data. The timing (during the day) of the maximum wind speed has not
varied much over the past 70 years. Yet, the wind speed, wind power, and wind
drift potential exhibit significant increases with time over the ocean and, to
a much lesser degree, over the land and sand dune regions. We compare the USA
and Europe DP and wind speed of the ERA5 to that of meteorological stations and
find that the ERA5 significantly underestimates real winds; however, the
temporal patterns of the two are similar.
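As an illustration of the kind of diagnostic used here (a sketch with assumed
column names and the commonly used Fryberger-style drift potential, taken to be
proportional to the time average of U^2 (U - U_t) with a threshold U_t of about
12 knots, which may differ from the exact definition adopted in the study),
hourly winds can be reduced to a mean diurnal cycle as follows:

    import pandas as pd

    def diurnal_cycle(df, ut_knots=12.0):
        # df: hourly records with a local-time DatetimeIndex and a column
        # 'speed_knots' holding 10 m wind speed (both names are placeholders).
        u = df["speed_knots"]
        dp = (u ** 2 * (u - ut_knots)).clip(lower=0)   # per-record drift potential
        out = pd.DataFrame({"speed": u, "dp": dp})
        return out.groupby(out.index.hour).mean()      # hour-of-day averages

    # Example: diurnal_cycle(hourly_obs)["dp"].idxmax() returns the hour of day
    # at which the drift potential peaks (around midday over land and dunes).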
|
GSTran: Joint Geometric and Semantic Coherence for Point Cloud
Segmentation | Learning meaningful local and global information remains a challenge in point
cloud segmentation tasks. When utilizing local information, prior studies
indiscriminately aggregate neighbor information from different classes to
update query points, potentially compromising the distinctive features of the
query points. In parallel, inaccurate modeling of long-distance contextual
dependencies when utilizing global information can also impact model
performance. To address these issues, we propose GSTran, a novel transformer
network tailored for the segmentation task. The proposed network mainly
consists of two principal components: a local geometric transformer and a
global semantic transformer. In the local geometric transformer module, we
explicitly calculate the geometric disparity within the local region. This
enables amplifying the affinity with geometrically similar neighbor points
while suppressing the association with other neighbors. In the global semantic
transformer module, we design a multi-head voting strategy. This strategy
evaluates semantic similarity across the entire spatial range, facilitating the
precise capture of contextual dependencies. Experiments on ShapeNetPart and
S3DIS benchmarks demonstrate the effectiveness of the proposed method, showing
its superiority over other algorithms. The code is available at
https://github.com/LAB123-tech/GSTran.
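The abstract describes the local geometric transformer only at a high level;
the NumPy sketch below is one plausible reading for illustration (an
assumption, not the released implementation linked above): neighbor features
are aggregated with weights that decay with a geometric disparity between the
query point and each of its k nearest neighbors, so geometrically similar
neighbors dominate the update.

    import numpy as np

    def geometry_weighted_aggregation(xyz, feats, k=16, temperature=1.0):
        # xyz: (N, 3) coordinates, feats: (N, C) per-point features.
        # Disparity here is plain squared Euclidean distance; the actual module
        # may use a richer geometric measure.
        d2 = np.sum((xyz[:, None, :] - xyz[None, :, :]) ** 2, axis=-1)  # (N, N)
        idx = np.argsort(d2, axis=1)[:, 1:k + 1]          # k nearest, excluding self
        disparity = np.take_along_axis(d2, idx, axis=1)   # (N, k)
        w = np.exp(-disparity / temperature)
        w = w / w.sum(axis=1, keepdims=True)              # normalized affinities
        return np.einsum("nk,nkc->nc", w, feats[idx])     # (N, C) updated features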
|