text (stringlengths 8–3.91k) | label (int64 0–10)
---|---|
abstract
| 6 |
abstract. let g be a finite connected simple graph with d vertices and let
pg ⊂ rd be the edge polytope of g. we call pg decomposable if pg decomposes
into integral polytopes pg+ and pg− via a hyperplane. in this paper, we explore
various aspects of decomposition of pg: we give an algorithm deciding the decomposability of pg, we prove that pg is normal if and only if both pg+ and pg−
are normal, and we also study how a condition on the toric ideal of pg (namely,
the ideal being generated by quadratic binomials) behaves under decomposition.
| 0 |
abstract
context context-free grammars are widely used for language prototyping and implementation. they allow
formalizing the syntax of domain-specific or general-purpose programming languages concisely and declaratively. however, the natural and concise way of writing a context-free grammar is often ambiguous. therefore,
grammar formalisms support extensions in the form of declarative disambiguation rules to specify operator
precedence and associativity, solving ambiguities that are caused by the subset of the grammar that corresponds to expressions.
inquiry implementing support for declarative disambiguation within a parser typically comes with one
or more of the following limitations in practice: a lack of parsing performance, or a lack of modularity (i.e.,
disallowing the composition of grammar fragments of potentially different languages). the latter subject
is generally addressed by scannerless generalized parsers. we aim to equip scannerless generalized parsers
with novel disambiguation methods that are inherently performant, without compromising the concerns of
modularity and language composition.
approach in this paper, we present a novel low-overhead implementation technique for disambiguating
deep associativity and priority conflicts in scannerless generalized parsers with lightweight data-dependency.
knowledge ambiguities with respect to operator precedence and associativity arise from combining the
various operators of a language. while shallow conflicts can be resolved efficiently by one-level tree patterns,
deep conflicts require more elaborate techniques, because they can occur arbitrarily nested in a tree. current
state-of-the-art approaches to solving deep priority conflicts come with a severe performance overhead.
grounding we evaluated our new approach against state-of-the-art declarative disambiguation mechanisms. by parsing a corpus of popular open-source repositories written in java and ocaml, we found that our
approach yields speedups of up to 1.73x over a grammar rewriting technique when parsing programs with
deep priority conflicts—with a modest overhead of 1% to 2% when parsing programs without deep conflicts.
importance a recent empirical study shows that deep priority conflicts are indeed widespread in real-world programs. the study shows that in a corpus of popular ocaml projects on github, up to 17% of the
source files contain deep priority conflicts. however, there is no solution in the literature that addresses efficient disambiguation of deep priority conflicts, with support for modular and composable syntax definitions.
acm ccs 2012
software and its engineering → syntax; parsers;
keywords declarative disambiguation, data-dependent grammars, operator precedence, performance,
parsing
| 6 |
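as a concrete illustration of the one-level tree patterns that resolve shallow conflicts (the paper's subject, deep conflicts, needs more than this), here is a minimal python sketch; the node class, the priority table and the operator names are hypothetical:

```python
# one-level tree patterns for *shallow* priority/associativity conflicts.
# the priority table, assoc table, and node class are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                       # operator label, e.g. '+', '*'
    children: list = field(default_factory=list)

PRIO  = {'+': 1, '*': 2}          # higher number = binds tighter
ASSOC = {'+': 'left', '*': 'left'}

def violates(node: Node) -> bool:
    """reject a parse where a direct child in an operand position has lower
    priority than its parent, or where associativity is violated."""
    for i, child in enumerate(node.children):
        if not isinstance(child, Node):
            continue
        if PRIO[child.op] < PRIO[node.op]:
            return True                                   # priority conflict
        if PRIO[child.op] == PRIO[node.op]:
            if ASSOC[node.op] == 'left' and i == len(node.children) - 1:
                return True                               # right-recursion banned
            if ASSOC[node.op] == 'right' and i == 0:
                return True                               # left-recursion banned
    return False

# '1 + 2 * 3' parsed as (1 + 2) * 3 is filtered out:
bad = Node('*', [Node('+', [1, 2]), 3])
assert violates(bad)
```

deep conflicts cannot be caught this way because the offending operator can sit arbitrarily far down the tree, which is exactly where the paper's data-dependent technique comes in.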
abstract model from
∗ this research was supported by eu grants #269921 (brainscales), #237955 (facets-itn), #604102 (human brain
project), the austrian science fund fwf #i753-n23 (pneuma) and the manfred stärk foundation.
| 9 |
abstract
averaging provides an alternative to bandwidth selection for kernel density estimation. we propose a procedure to combine linearly
several kernel estimators of a density obtained from different, possibly
data-driven, bandwidths. the method relies on minimizing an easily
tractable approximation of the integrated square error of the combination. it provides, at a small computational cost, a final solution
that improves on the initial estimators in most cases. the average
estimator is proved to be asymptotically as efficient as the best possible combination (the oracle), with an error term that decreases faster
than the minimax rate obtained with separated learning and validation samples. the performances are tested numerically, with results
that compare favorably to other existing procedures in terms of mean
integrated square errors.
| 10 |
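a numpy/scipy sketch of the linear-combination idea: minimize a quadratic approximation of the integrated square error over the combination weights. the grid quadrature and the plug-in cross term below are simplifications of the paper's criterion (which uses a leave-one-out-type correction), and the bandwidths are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.normal(size=400)                       # sample

# several kernel estimators with different bandwidth factors
bandwidths = [0.1, 0.3, 0.9]
kdes = [gaussian_kde(x, bw_method=h) for h in bandwidths]

# ise(w) = w' g w - 2 w' b + const, with
#   g[i,j] = ∫ fi fj   (grid quadrature)
#   b[i]   = ∫ fi f    (crude plug-in sample mean; leave-one-out is better)
grid = np.linspace(x.min() - 3, x.max() + 3, 2000)
F = np.stack([k(grid) for k in kdes])
G = F @ F.T * (grid[1] - grid[0])
b = np.array([k(x).mean() for k in kdes])

w = np.linalg.solve(G, b)                      # unconstrained minimizer
f_avg = lambda t: w @ np.stack([k(t) for k in kdes])
print(w, f_avg(np.array([0.0])))
```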
abstract
this paper discusses the conceptual design and proof-of-concept flight demonstration of a novel variable pitch quadrotor
biplane unmanned aerial vehicle concept for payload delivery. the proposed design combines vertical takeoff and landing
(vtol), precise hover capabilities of a quadrotor helicopter and high range, endurance and high forward cruise speed
characteristics of a fixed wing aircraft. the proposed uav is designed for a mission requirement of carrying and delivering
6 kg payload to a destination at 16 km from the point of origin. first, the design of proprotors is carried out using a
physics-based modified blade element momentum theory (bemt) analysis, which is validated using experimental data
generated for this purpose. proprotors have conflicting requirements for optimal hover and forward flight performance. next,
the biplane wings are designed using simple lifting line theory. the airframe design is followed by power plant selection
and transmission design. finally, weight estimation is carried out to complete the design process. the proprotor design
with a 24° preset angle and −24° twist is designed based on 70% weightage to forward flight and 30% weightage to hovering
flight conditions. the operating rpm of the proprotors is reduced from 3200 during hover to 2000 during forward flight
to ensure optimal performance during cruise flight. the estimated power consumption during forward flight mode is 64%
less than that required for hover, establishing the benefit of this hybrid concept. a proof-of-concept scaled prototype is
fabricated using commercial-off-the-shelf parts. a pid controller is developed and implemented on the pixhawk board to
enable stable hovering flight and attitude tracking.
keywords
variable pitch, quadrotor tailsitter uav, uav design, blade element theory, payload delivery
| 3 |
abstract— reconstructing the states of the nodes of a dynamical network is a problem of fundamental importance in the
study of neuronal and genetic networks. an underlying related
problem is that of observability, i.e., identifying the conditions
under which such a reconstruction is possible. in this paper
we study observability of complex dynamical networks, where
we consider the effects of network symmetries on observability.
we present an efficient algorithm that returns a minimal set
of necessary sensor nodes for observability in the presence of
symmetries.
| 8 |
abstract—increasingly large document collections require
improved information processing methods for searching,
retrieving, and organizing text. central to these information
processing methods is document classification, which has become
an important application for supervised learning. recently the
performance of traditional supervised classifiers has degraded as
the number of documents has increased. this is because along
with growth in the number of documents has come an increase
in the number of categories. this paper approaches this problem
differently from current document classification methods that
view the problem as multi-class classification. instead we
perform hierarchical classification using an approach we call
hierarchical deep learning for text classification (hdltex).
hdltex employs stacks of deep learning architectures to
provide specialized understanding at each level of the document
hierarchy.
| 1 |
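the hierarchical scheme is easy to express: one classifier routes a document to a top-level category, and a per-category specialist assigns the final label. a runnable miniature, with linear models standing in for the paper's stacks of deep networks:

```python
# two-level hierarchical text classification, in the spirit of hdltex.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

class HierarchicalTextClassifier:
    def __init__(self):
        self.vec = TfidfVectorizer()
        self.parent = LogisticRegression(max_iter=1000)
        self.children = {}                       # parent label -> child model

    def fit(self, docs, parent_labels, child_labels):
        X = self.vec.fit_transform(docs)
        self.parent.fit(X, parent_labels)
        for p in set(parent_labels):             # one specialist per category
            idx = [i for i, y in enumerate(parent_labels) if y == p]
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[idx], [child_labels[i] for i in idx])
            self.children[p] = clf
        return self

    def predict(self, docs):
        X = self.vec.transform(docs)
        tops = self.parent.predict(X)
        return [(t, self.children[t].predict(X[i])[0])
                for i, t in enumerate(tops)]
```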
abstract
we show that existence of a global polynomial lyapunov function for a homogeneous polynomial vector field or a planar polynomial vector field (under a mild condition) implies existence
of a polynomial lyapunov function that is a sum of squares (sos) and that the negative of its
derivative is also a sum of squares. this result is extended to show that such sos-based certificates of stability are guaranteed to exist for all stable switched linear systems. for this class of
systems, we further show that if the derivative inequality of the lyapunov function has an sos
certificate, then the lyapunov function itself is automatically a sum of squares. these converse
results establish cases where semidefinite programming is guaranteed to succeed in finding proofs
of lyapunov inequalities. finally, we demonstrate some merits of replacing the sos requirement
on a polynomial lyapunov function with an sos requirement on its top homogeneous component.
in particular, we show that this is a weaker algebraic requirement in addition to being cheaper
to impose computationally.
| 3 |
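the abstract's converse results concern sos lyapunov certificates; as a runnable miniature, here is the simplest such certificate search: a common quadratic lyapunov function for a switched linear system, posed as a semidefinite program (a quadratic form is sos exactly when its matrix is psd). the example matrices are hypothetical, and higher-degree sos certificates would need an sos toolbox on top of the sdp solver:

```python
import cvxpy as cp
import numpy as np

# two stable modes of a switched linear system (illustrative matrices)
A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A2 = np.array([[-2.0, 0.0], [1.0, -1.0]])
n, eps = 2, 1e-6

# find p > 0 with a_i' p + p a_i < 0 for all modes: v(x) = x' p x
P = cp.Variable((n, n), symmetric=True)
cons = [P >> eps * np.eye(n)]
for A in (A1, A2):
    cons.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status, P.value)
```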
abstract
we present fashion-mnist, a new dataset comprising 28 × 28 grayscale
images of 70,000 fashion products from 10 categories, with 7,000 images
per category. the training set has 60,000 images and the test set has
10,000 images.
fashion-mnist is intended to serve as a direct drop-in replacement for the original mnist dataset for benchmarking machine
learning algorithms, as it shares the same image size, data format and the
structure of training and testing splits. the dataset is freely available at
https://github.com/zalandoresearch/fashion-mnist.
| 1 |
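since the dataset shares mnist's idx format, a minimal loader needs only gzip and numpy. a sketch, assuming the four .gz files from the repository have been downloaded to the working directory:

```python
import gzip
import numpy as np

def load_images(path):
    with gzip.open(path, 'rb') as f:
        data = np.frombuffer(f.read(), dtype=np.uint8, offset=16)  # skip idx header
    return data.reshape(-1, 28, 28)

def load_labels(path):
    with gzip.open(path, 'rb') as f:
        return np.frombuffer(f.read(), dtype=np.uint8, offset=8)   # skip idx header

x_train = load_images('train-images-idx3-ubyte.gz')   # (60000, 28, 28)
y_train = load_labels('train-labels-idx1-ubyte.gz')   # (60000,)
```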
abstract
the prevention of dangerous chemical accidents is a primary problem of industrial manufacturing. among accidents involving dangerous chemicals, oil gas explosions play an important
role. the essential task of explosion prevention is to estimate the explosion limit
of a given oil gas. in this paper, support vector machines (svm) and logistic regression
(lr) are used to predict the explosion of oil gas. lr yields an explicit probability formula
for explosion and the explosive range of oil gas concentrations as a function of the oxygen concentration. meanwhile,
svm gives higher prediction accuracy. furthermore,
considering practical requirements, the effects of the penalty parameter on the distribution
of the two types of errors are discussed.
keywords: explosion prediction, oil gas, svm, logistic regression, penalty parameter
| 5 |
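a minimal sklearn sketch of the comparison described above, with stand-in data; the penalty parameter c is the knob whose effect on the two error types the paper studies:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# stand-in data: [oil-gas concentration, oxygen concentration] -> explosive?
X = np.random.rand(200, 2)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)   # toy labeling rule

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

svm = SVC(C=10.0).fit(Xtr, ytr)          # larger c penalizes training errors harder
lr = LogisticRegression().fit(Xtr, ytr)  # gives an explicit p(explosion | x)

print('svm accuracy:', svm.score(Xte, yte))
print('lr p(explosion):', lr.predict_proba(Xte[:3])[:, 1])
```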
abstract—in this paper, the structural controllability of
the systems over f(z) is studied using a new mathematical
method, matroids. firstly, a vector matroid is defined over
f(z). secondly, the full rank conditions of [si − a | b]
| 3 |
abstract— we study planning problems where autonomous
agents operate inside environments that are subject to uncertainties and not fully observable. partially observable markov
decision processes (pomdps) are a natural formal model to
capture such problems. because of the potentially huge or
even infinite belief space in pomdps, synthesis with safety
guarantees is, in general, computationally intractable. we
propose an approach that aims to circumvent this difficulty:
in scenarios that can be partially or fully simulated in a virtual
environment, we actively integrate a human user to control
an agent. while the user repeatedly tries to safely guide the
agent in the simulation, we collect data from the human input.
via behavior cloning, we translate the data into a strategy
for the pomdp. the strategy resolves all nondeterminism and
non-observability of the pomdp, resulting in a discrete-time
markov chain (mc). the efficient verification of this mc gives
quantitative insights into the quality of the inferred human
strategy by proving or disproving given system specifications.
for the case that the quality of the strategy is not sufficient, we
propose a refinement method using counterexamples presented
to the human. experiments show that by including humans into
the pomdp verification loop we improve the state of the art
by orders of magnitude in terms of scalability.
| 2 |
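the behavior-cloning step of the pipeline above reduces to supervised learning on the collected human demonstrations; a minimal sketch, with hypothetical observation encodings and action names:

```python
from sklearn.tree import DecisionTreeClassifier

# (observation, action) pairs collected while a human guides the agent
obs =     [[0, 1], [1, 1], [2, 0], [0, 0], [2, 1]]   # observation features
actions = ['left', 'right', 'stop', 'left', 'stop']

strategy = DecisionTreeClassifier().fit(obs, actions)

def act(observation):
    """resolve all nondeterminism: one fixed action per observation."""
    return strategy.predict([observation])[0]

print(act([1, 1]))   # -> 'right'
```

applying this deterministic strategy to the pomdp is what induces the discrete-time markov chain that is then model-checked against the specifications.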
abstraction for describing feedforward (and potentially recurrent) neural network architectures is that of computational skeletons as
introduced in daniely et al. (2016). recall the following definition.
definition 3.1. a computational skeleton s is a directed acyclic graph whose non-input nodes are
labeled by activations.
daniely et al. (2016) provides an excellent account of how these graph structures abstract the many
neural network architectures we see in practice. we will give these skeletons "flesh and skin"
so to speak, and in doing so pursue a suitable generalization of neural networks which allows
intermediate mappings between possibly infinite dimensional topological vector spaces. dfms are
that generalization.
definition 3.2 (deep function machines). a deep function machine d is a computational skeleton
s indexed by i with the following properties:
• every vertex in s is a topological vector space x_ℓ where ℓ ∈ i.
• if nodes ℓ ∈ a ⊂ i feed into ℓ′ then the activation on ℓ′ is denoted y^ℓ′ ∈ x_ℓ′ and is defined as

y^ℓ′ = g( Σ_{ℓ∈a} t_ℓ y^ℓ )    (3.1)
| 1 |
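a small numerical sketch of eq. (3.1), with finite-dimensional numpy arrays standing in for the general topological vector spaces of definition 3.2; the dag, the operators t, and the choice g = tanh are illustrative:

```python
import numpy as np

g = np.tanh                                    # activation

# dag edges (u -> v) carry linear operators t[u, v]; node 0 is the input
T = {(0, 1): np.array([[1.0, 0.5], [0.2, 1.0]]),
     (0, 2): np.array([[0.3, 0.0], [0.0, 0.3]]),
     (1, 2): np.array([[1.0, -1.0], [0.5, 0.5]])}

def forward(x, n_nodes=3):
    y = {0: x}                                 # input node activation
    for v in range(1, n_nodes):                # topological order
        feeders = [u for (u, w) in T if w == v]
        y[v] = g(sum(T[(u, v)] @ y[u] for u in feeders))   # eq. (3.1)
    return y

print(forward(np.array([1.0, -1.0]))[2])
```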
abstract
the features of non-stationary multi-component signals are often difficult to extract for expert
systems. in this paper, a new method for feature extraction based on maximizing the local
gaussian correlation function between wavelet coefficients and the signal is presented. the effect of empirical
mode decomposition (emd), used to decompose multi-component signals into intrinsic mode functions
(imfs) before applying the local gaussian correlation, is discussed. the experimental vibration signals
from two gearbox systems are used to show the efficiency of the presented method. linear support
vector machine (svm) is utilized to classify feature sets extracted with the presented method. the
obtained results show that the features extracted in this method have excellent ability to classify
faults without any additional feature selection; it is also shown that emd can improve or degrade
features according to the utilized feature reduction method.
keywords: gear, gaussian-correlation, wavelet, emd, svm, fault detection
1. introduction
nowadays, vibration condition monitoring of industrial machines is used as a suitable tool for early
detection of a variety of faults. data acquisition, feature extraction, and classification are three general
parts of any expert monitoring system. one of the most difficult and important procedures in fault
diagnosis is feature extraction, which is done by signal processing methods. there are various
techniques in signal processing, which are usually categorized to time (e.g. [1, 2]), frequency (e.g.
[3]), and time-frequency (e.g. [4, 5]) domain analyses. among these, time-frequency analyses have
attracted more attention because these methods provide an energy distribution of the signal in the time-frequency plane, so the frequency intensity of non-stationary signals can be analyzed in
time domain.
continuous wavelet transform (cwt), as a time-frequency representation of signal, provides an
effective tool for vibration-based signal in fault detection. cwt provides a multi-resolution
capability in analyzing the transitory features of non-stationary signals. alongside the advantages of
cwt, there are some drawbacks; one of these is that cwt produces redundant data, which makes
feature extraction more complicated. due to this data redundancy, data mining and feature reduction
are extensively used, such as decision trees (dt) (e.g. [6]), principal component analysis (pca) (e.g.
[7]), independent component analysis (ica) (e.g. [8-10]), genetic algorithm with support vector
machines (ga-svm) (e.g. [1, 2]), genetic algorithm with artificial neural networks (ga-ann) (e.g.
[1, 2]), self-organizing maps (som) (e.g. [11]), etc.
selection of the wavelet basis is very important in order to achieve the maximal feature-extraction
capability for the desired faults. as an alternative, tse et al. [4] presented "exact wavelet analysis" for
selection of the best wavelet family member and reduction of data redundancy. in this method, for
| 5 |
abstract. convex polyhedral abstractions of logic programs have been found very
useful in deriving numeric relationships between program arguments in order to
prove program properties and in other areas such as termination and complexity
analysis. we present a tool for constructing polyhedral analyses of (constraint)
logic programs. the aim of the tool is to make available, with a convenient interface, state-of-the-art techniques for polyhedral analysis such as delayed widening, narrowing, “widening up-to”, and enhanced automatic selection of widening
points. the tool is accessible on the web, permits user programs to be uploaded
and analysed, and is integrated with related program transformations such as size
abstractions and query-answer transformation. we then report some experiments
using the tool, showing how it can be conveniently used to analyse transition
systems arising from models of embedded systems, and an emulator for a pic microcontroller which is used for example in wearable computing systems. we discuss
issues including scalability, tradeoffs of precision and computation time, and other
program transformations that can enhance the results of analysis.
| 6 |
abstract
notations in formulas: s(m), s, smax, λ(m), |λ|max, θ(.), θ0, θ̇, win, w, wout, ut, ū∞, xt, yt, ot, õt, d(.,.), f(.), fū∞, fcrit, qū∞,t, qt, φ1(), φk(), η, κ, γ, λū∞
| 9 |
abstract
we consider a multi-agent framework for distributed optimization where each agent in the
network has access to a local convex function and the collective goal is to achieve consensus
on the parameters that minimize the sum of the agents’ local functions. we propose an algorithm wherein each agent operates asynchronously and independently of the other agents in
the network. when the local functions are strongly-convex with lipschitz-continuous gradients,
we show that a subsequence of the iterates at each agent converges to a neighbourhood of the
global minimum, where the size of the neighbourhood depends on the degree of asynchrony in
the multi-agent network. when the agents work at the same rate, convergence to the global minimizer is achieved. numerical experiments demonstrate that asynchronous subgradient-push
can minimize the global objective faster than state-of-the-art synchronous first-order methods,
is more robust to failing or stalling agents, and scales better with the network size.
| 3 |
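for flavor, a sketch of the synchronous subgradient-push baseline on a directed ring with toy quadratic objectives; the paper's asynchronous variant additionally lets agents update at their own rates, which this sketch does not model:

```python
import numpy as np

targets = np.array([1.0, 3.0, 8.0])          # agent i holds f_i(x) = (x - t_i)^2
n = len(targets)
# column-stochastic mixing for a directed ring: i sends to itself and i+1
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = P[(i + 1) % n, i] = 0.5

x = np.zeros(n)                               # push-sum numerators
w = np.ones(n)                                # push-sum weights
for k in range(1, 2001):
    z = x / w                                 # de-biased local estimates
    grad = 2 * (z - targets)                  # local subgradients
    x = P @ (x - (1.0 / k) * grad)            # gradient step + push
    w = P @ w

print(x / w)                                  # estimates approach mean(targets) = 4
```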
abstract
we propose hilbert transform and analytic signal construction for signals over graphs.
this is motivated by the popularity of hilbert transform, analytic signal, and modulation analysis in conventional signal processing, and the observation that complementary insight is often obtained by viewing conventional signals in the graph setting.
our definitions of hilbert transform and analytic signal use a conjugate-symmetry-like
property exhibited by the graph fourier transform (gft), resulting in a 'one-sided'
spectrum for the graph analytic signal. the resulting graph hilbert transform is shown
to possess many interesting mathematical properties and also exhibit the ability to highlight anomalies/discontinuities in the graph signal and the nodes across which signal
discontinuities occur. using the graph analytic signal, we further define amplitude,
phase, and frequency modulations for a graph signal. we illustrate the proposed concepts by showing applications to synthesized and real-world signals. for example,
we show that the graph hilbert transform can indicate presence of anomalies and that
graph analytic signal, and associated amplitude and frequency modulations reveal complementary information in speech signals.
keywords: graph signal, analytic signal, hilbert transform, demodulation, anomaly
detection.
email addresses: arunv@kth.se (arun venkitaraman), sach@kth.se (saikat chatterjee),
ph@kth.se (peter händel)
| 7 |
abstract
| 1 |
abstract. draisma recently proved that polynomial representations of gl∞ are topologically noetherian. we generalize this result to algebraic representations of infinite rank
classical groups.
| 0 |
abstract—we propose the learned primal-dual algorithm for
tomographic reconstruction. the algorithm accounts for a (possibly non-linear) forward operator in a deep neural network by
unrolling a proximal primal-dual optimization method, but where
the proximal operators have been replaced with convolutional
neural networks. the algorithm is trained end-to-end, working
directly from raw measured data and it does not depend on any
initial reconstruction such as filtered back-projection (fbp).
we compare performance of the proposed method on low
dose computed tomography reconstruction against fbp, total
variation (tv), and deep learning based post-processing of fbp.
for the shepp-logan phantom we obtain > 6 db psnr improvement against all compared methods. for human phantoms
the corresponding improvement is 6.6 db over tv and 2.2 db
over learned post-processing along with a substantial improvement in the structural similarity index. finally, our algorithm
involves only ten forward-back-projection computations, making
the method feasible for time critical clinical applications.
index terms—inverse problems, tomography, deep learning,
primal-dual, optimization
| 9 |
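a compact pytorch sketch of the unrolled scheme described above. the forward operator t and its adjoint are passed in as callables (in ct they would be the ray transform and back-projection), and the small cnns stand in for the learned proximal operators; channel counts and the memory variables of the full method are simplified:

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, 32, 3, padding=1), nn.PReLU(),
                         nn.Conv2d(32, cout, 3, padding=1))

class LearnedPrimalDual(nn.Module):
    def __init__(self, T, T_adj, n_iter=10):
        super().__init__()
        self.T, self.T_adj, self.n_iter = T, T_adj, n_iter
        self.dual = nn.ModuleList(block(3, 1) for _ in range(n_iter))
        self.primal = nn.ModuleList(block(2, 1) for _ in range(n_iter))

    def forward(self, g):                       # g: raw measured data
        h = torch.zeros_like(g)                 # dual variable
        f = torch.zeros_like(self.T_adj(g))     # primal variable (image)
        for i in range(self.n_iter):            # ten unrolled iterations
            h = h + self.dual[i](torch.cat([h, self.T(f), g], dim=1))
            f = f + self.primal[i](torch.cat([f, self.T_adj(h)], dim=1))
        return f
```

training end-to-end then means back-propagating a reconstruction loss through all unrolled iterations, including the calls to t and its adjoint.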
abstract— in this work, we introduce a compositional framework for the construction of finite abstractions (a.k.a. symbolic
models) of interconnected discrete-time control systems. the
compositional scheme is based on the joint dissipativity-type
properties of discrete-time control subsystems and their finite
abstractions. in the first part of the paper, we use a notion
of so-called storage function as a relation between each subsystem and its finite abstraction to construct compositionally
a notion of so-called simulation function as a relation between
interconnected finite abstractions and that of control systems.
the derived simulation function is used to quantify the error
between the output behavior of the overall interconnected
concrete system and that of its finite abstraction. in the
second part of the paper, we propose a technique to construct
finite abstractions together with their corresponding storage
functions for a class of discrete-time control systems under some
incremental passivity property. we show that if a discrete-time
control system is so-called incrementally passivable, then one
can construct its finite abstraction by a suitable quantization
of the input and state sets together with the corresponding
storage function. finally, the proposed results are illustrated by
constructing a finite abstraction of a network of linear discretetime control systems and its corresponding simulation function
in a compositional way. the compositional conditions in this
example do not impose any restriction on the gains or the
number of the subsystems which, in particular, elucidates the
effectiveness of dissipativity-type compositional reasoning for
networks of systems.
| 3 |
abstract
a distributed discrete-time algorithm is proposed for multi-agent networks to achieve a
common least squares solution of a group of linear equations, in which each agent only knows
some of the equations and is only able to receive information from its nearby neighbors. for
fixed, connected, and undirected networks, the proposed discrete-time algorithm results in each
agent's solution estimate converging exponentially fast to the same least squares solution.
moreover, the convergence does not require careful choices of time-varying small step sizes.
| 3 |
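a gradient-tracking sketch of the setting above: each agent holds a few rows of an overdetermined system, mixes estimates with neighbors, and tracks the average gradient. gradient tracking is one standard way to get exact convergence with a fixed step size; the paper's specific update rule is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(9, 3)); b = rng.normal(size=9)
rows = np.array_split(np.arange(9), 3)        # 3 agents, 3 equations each
W = np.array([[.5, .25, .25], [.25, .5, .25], [.25, .25, .5]])  # doubly stochastic

grad = lambda i, xi: A[rows[i]].T @ (A[rows[i]] @ xi - b[rows[i]])

x = np.zeros((3, 3))                          # one estimate per agent
y = np.stack([grad(i, x[i]) for i in range(3)])   # gradient trackers
for _ in range(3000):
    x_new = W @ x - 0.02 * y                  # consensus + tracked gradient step
    y = W @ y + np.stack([grad(i, x_new[i]) - grad(i, x[i]) for i in range(3)])
    x = x_new

print(x[0])                                   # each agent's estimate ...
print(np.linalg.lstsq(A, b, rcond=None)[0])   # ... matches the global ls solution
```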
abstract
| 6 |
abstract
in this tool demonstration, we give an overview of the chameleon type debugger. the type debugger’s primary use is to identify locations within a source program which are involved in a type error. by further
examining these (potentially) problematic program locations, users gain a better understanding of their program and are able to work towards the actual mistake which was the cause of the type error. the debugger
is interactive, allowing the user to provide additional information to narrow down the search space. one
of the novel aspects of the debugger is the ability to explain erroneous-looking types. in the event that an
unexpected type is inferred, the debugger can highlight program locations which contributed to that result.
furthermore, due to the flexible constraint-based foundation that the debugger is built upon, it can naturally
handle advanced type system features such as haskell’s type classes and functional dependencies.
keywords:
| 6 |
abstract
a solution is provided in this note for the adaptive consensus problem of nonlinear multi-agent systems with unknown and non-identical
control directions assuming a strongly connected underlying graph
topology. this is achieved with the introduction of a novel variable
transformation called pi consensus error transformation. the new
variables include the position error of each agent from some arbitrary
fixed point along with an integral term of the weighted total displacement of the agent’s position from all neighbor positions. it is proven
that if these new variables are bounded and regulated to zero, then
asymptotic consensus among all agents is ensured. the important
feature of this transformation is that it provides input decoupling in
the dynamics of the new error variables making the consensus control design a simple and direct task. using classical nussbaum gain
based techniques, distributed controllers are designed to regulate the
pi consensus error variables to zero and ultimately solve the agreement
problem. the proposed approach also allows for a specific calculation
of the final consensus point based on the controller parameter selection and the associated graph topology. simulation results verify our
theoretical derivations.
| 3 |
abstract
current speech enhancement techniques operate on the spectral
domain and/or exploit some higher-level feature. the majority
of them tackle a limited number of noise conditions and rely on
first-order statistics. to circumvent these issues, deep networks
are being increasingly used, thanks to their ability to learn complex functions from large example sets. in this work, we propose the use of generative adversarial networks for speech enhancement. in contrast to current techniques, we operate at the
waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same
model, such that model parameters are shared across them. we
evaluate the proposed model using an independent, unseen test
set with two speakers and 20 alternative noise conditions. the
enhanced samples confirm the viability of the proposed model,
and both objective and subjective evaluations confirm the effectiveness of it. with that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to
improve their performance.
index terms: speech enhancement, deep learning, generative
adversarial networks, convolutional neural networks.
| 9 |
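a miniature adversarial training step at the waveform level in the spirit of the abstract. the tiny conv1d networks, the bce objective (gan variants differ in the exact loss; bce is used here for brevity) and the random batch are stand-ins, not the paper's configuration:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv1d(1, 16, 31, padding=15), nn.PReLU(),
                  nn.Conv1d(16, 1, 31, padding=15))            # enhancer
D = nn.Sequential(nn.Conv1d(2, 16, 31, stride=4), nn.PReLU(),
                  nn.Conv1d(16, 1, 31, stride=4), nn.Flatten(),
                  nn.LazyLinear(1))                            # critic on pairs

opt_g = torch.optim.Adam(G.parameters(), 1e-4)
opt_d = torch.optim.Adam(D.parameters(), 1e-4)
bce = nn.BCEWithLogitsLoss()

noisy, clean = torch.randn(8, 1, 4096), torch.randn(8, 1, 4096)  # stand-in batch

# discriminator step: real = (clean, noisy), fake = (g(noisy), noisy)
d_real = D(torch.cat([clean, noisy], 1))
d_fake = D(torch.cat([G(noisy).detach(), noisy], 1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# generator step: fool the discriminator (plus, in practice, a term towards clean)
d_fake = D(torch.cat([G(noisy), noisy], 1))
loss_g = bce(d_fake, torch.ones_like(d_fake))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```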
abstract
automatic summarisation is a popular approach to reduce a document to its main
arguments. recent research in the area has
focused on neural approaches to summarisation, which can be very data-hungry.
however, few large datasets exist and none
for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. in this
paper, we introduce a new dataset for
summarisation of computer science publications by exploiting a large resource
of author provided summaries and show
straightforward ways of extending it further. we develop models on the dataset
making use of both neural sentence encoding and traditionally used summarisation features and show that models which
encode sentences as well as their local and global context perform best, significantly outperforming well-established
baseline methods.
| 2 |
abstract. we give a new computer-assisted proof of the classification of maximal subgroups of the simple group 2e6(2) and its extensions by any subgroup
of the outer automorphism group s3. this is not a new result, but no earlier
proof exists in the literature. a large part of the proof consists of a computational analysis of subgroups generated by an element of order 2 and an element
of order 3. this method can be effectively automated, and via statistical analysis also provides a sanity check on results that may have been obtained by
delicate theoretical arguments.
| 4 |
abstract—recognizing objects in natural images is an
intricate problem involving multiple conflicting objectives.
deep convolutional neural networks, trained on large datasets,
achieve convincing results and are currently the state-of-the-art approach for this task. however, the long time needed to
train such deep networks is a major drawback. we tackled
this problem by reusing a previously trained network. for
this purpose, we first trained a deep convolutional network
on the ilsvrc-12 dataset. we then maintained the learned
convolution kernels and only retrained the classification part
on different datasets. using this approach, we achieved an
accuracy of 67.68% on cifar-100, compared to the previous
state-of-the-art result of 65.43%. furthermore, our findings
indicate that convolutional networks are able to learn generic
feature extractors that can be used for different tasks.
| 1 |
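the reuse recipe described above is straightforward to reproduce with any pretrained backbone; a sketch using torchvision's resnet18 as a stand-in for the paper's ilsvrc-12 network (the paper's exact architecture and training schedule are not reproduced):

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(weights='IMAGENET1K_V1')   # pretrained on imagenet
for p in model.parameters():
    p.requires_grad = False                        # keep the learned conv kernels

model.fc = nn.Linear(model.fc.in_features, 100)    # new head, e.g. for cifar-100

# only the new classification part is optimized:
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
```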
abstract
this paper presents a computational model for the cooperation of constraint domains
and an implementation for a particular case of practical importance. the computational
model supports declarative programming with lazy and possibly higher-order functions,
predicates, and the cooperation of different constraint domains equipped with their respective solvers, relying on a so-called constraint functional logic programming (cflp)
scheme. the implementation has been developed on top of the cflp system toy,
supporting the cooperation of the three domains h, r and fd, which supply equality and
disequality constraints over symbolic terms, arithmetic constraints over the real numbers,
and finite domain constraints over the integers, respectively. the computational model
has been proved sound and complete w.r.t. the declarative semantics provided by the
cflp scheme, while the implemented system has been tested with a set of benchmarks
and shown to behave quite efficiently in comparison to the closest related approach we are
aware of.
to appear in theory and practice of logic programming (tplp)
keywords: cooperating constraint domains, constraint functional logic programming, constrained lazy narrowing, implementation.
| 6 |
abstract
monte carlo (mc) simulations of transport in random porous networks indicate that for high variances of the lognormal permeability distribution, the transport of a passive tracer is non-fickian. here we model this non-fickian
dispersion in random porous networks using discrete temporal markov models. we show that such temporal models
capture the spreading behavior accurately. this is true despite the fact that the slow velocities are strongly correlated
in time, and some studies have suggested that the persistence of low velocities would render the temporal markovian
model inapplicable. compared to previously proposed temporal stochastic differential equations with case specific drift
and diffusion terms, the models presented here require fewer modeling assumptions. moreover, we show that discrete
temporal markov models can be used to represent dispersion in unstructured networks, which are widely used to model
porous media. a new method is proposed to extend the state space of temporal markov models to improve the model
predictions in the presence of extremely low velocities in particle trajectories and extend the applicability of the model
to higher temporal resolutions. finally, it is shown that by combining multiple transitions, temporal models are more
efficient for computing particle evolution compared to correlated ctrw with spatial increments that are equal to the
lengths of the links in the network.
keywords: anomalous transport, markov models, stochastic transport modeling, stencil method
1. introduction
modeling transport in porous media is highly important in various applications including water resources
management and extraction of fossil fuels. predicting
flow and transport in aquifers and reservoirs plays an
important role in managing these resources. a significant
factor influencing transport is the heterogeneity of the
flow field, which results from the underlying heterogeneity
of the conductivity field. transport in such heterogeneous
domains displays non-fickian characteristics such as
long tails for the first arrival time probability density
function (pdf) and non-gaussian spatial distributions
(berkowitz et al., 2006; bouchaud and georges, 1990;
edery et al., 2014). capturing this non-fickian behavior
is particularly important for predictions of contaminant
transport in water resources. for example, in water
resources management long tails of the arrival time pdf
can have a major impact on the contamination of drinking
water, and therefore efficient predictions of the spatial
extents of contaminant plumes is key (nowak et al., 2012;
moslehi and de barros, 2017; ghorbanidehno et al., 2015).
| 5 |
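a sketch of the discrete temporal markov idea on a synthetic velocity series: discretize velocities into equiprobable classes, estimate the one-step transition matrix, and evolve class distributions by matrix powers. the extended-state-space and unstructured-network aspects of the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.abs(np.cumsum(rng.normal(size=10000)) % 5)   # toy correlated velocity series

n_states = 8
edges = np.quantile(v, np.linspace(0, 1, n_states + 1))   # equiprobable bins
s = np.clip(np.searchsorted(edges, v, side='right') - 1, 0, n_states - 1)

# empirical one-step transition matrix between velocity classes
T = np.zeros((n_states, n_states))
for a, b in zip(s[:-1], s[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

p0 = np.eye(n_states)[0]                   # start in the slowest class
print(p0 @ np.linalg.matrix_power(T, 50))  # class distribution after 50 steps
```

combining multiple transitions this way (matrix powers instead of step-by-step particle tracking) is what makes the temporal model cheap to evolve.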
abstract. we classify conjugacy classes of involutions in the isometry groups
of nondegenerate, symmetric bilinear forms over the field f2. the new component of this work focuses on the case of an orthogonal form on an even-dimensional space. in this context we show that the involutions satisfy a
remarkable duality, and we investigate several numerical invariants.
| 4 |
abstract
deep convolutional networks (cnns) have exhibited
their potential in image inpainting for producing plausible results. however, in most existing methods, e.g., context encoder, the missing parts are predicted by propagating
the surrounding convolutional features through a fully connected layer, which tends to produce semantically plausible but blurry results. in this paper, we introduce a special shift-connection layer to the u-net architecture, namely
shift-net, for filling in missing regions of any shape with
sharp structures and fine-detailed textures. to this end, the
encoder feature of the known region is shifted to serve as
an estimation of the missing parts. a guidance loss is introduced on decoder feature to minimize the distance between the decoder feature after fully connected layer and
the ground truth encoder feature of the missing parts. with
such constraint, the decoder feature in missing region can
be used to guide the shift of encoder feature in known
region. an end-to-end learning algorithm is further developed to train the shift-net. experiments on the paris
streetview and places datasets demonstrate the efficiency
and effectiveness of our shift-net in producing sharper, fine-detailed, and visually plausible results.
| 1 |
abstract: we consider multivariate polynomials and investigate how many zeros of
multiplicity at least r they can have over a cartesian product of finite subsets of a field.
here r is any prescribed positive integer and the definition of multiplicity that we use is the
one related to hasse derivatives. as a generalization of material in [2, 5] a general version of
the schwartz-zippel was presented in [8] which from the leading monomial – with respect to
a lexicographic ordering – estimates the sum of zeros when counted with multiplicity. the
corresponding corollary on the number of zeros of multiplicity at least r is in general not
sharp and therefore in [8] a recursively defined function d was introduced using which one
can derive improved information. the recursive function being rather complicated, the only
known closed formula consequences of it are for the case of two variables [8]. in the present
paper we derive closed formula consequences for arbitrary many variables, but for the powers
in the leading monomial being not too large. our bound can be viewed as a generalization of
the footprint bound [10, 6] – the classical footprint bound not taking multiplicity into account.
| 0 |
abstract
| 1 |
abstract
in this paper an autoregressive time series model with conditional heteroscedasticity is
considered, where both conditional mean and conditional variance function are modeled
nonparametrically. a test for the model assumption of independence of innovations from
past time series values is suggested. the test is based on a weighted l2-distance of
empirical characteristic functions. the asymptotic distribution under the null hypothesis
of independence is derived and consistency against fixed alternatives is shown. a smooth
autoregressive residual bootstrap procedure is suggested and its performance is shown in
a simulation study.
| 10 |
abstract. recurrent neural networks and in particular long short-term memory (lstm) networks have demonstrated state-of-the-art accuracy in several emerging artificial intelligence tasks. however, the
models are becoming increasingly demanding in terms of computational
and memory load. emerging latency-sensitive applications including mobile robots and autonomous vehicles often operate under stringent computation time constraints. in this paper, we address the challenge of deploying computationally demanding lstms at a constrained time budget
by introducing an approximate computing scheme that combines iterative low-rank compression and pruning, along with a novel fpga-based
lstm architecture. combined in an end-to-end framework, the approximation method’s parameters are optimised and the architecture is configured to address the problem of high-performance lstm execution in
time-constrained applications. quantitative evaluation on a real-life image captioning application indicates that the proposed methods required
up to 6.5× less time to achieve the same application-level accuracy compared to a baseline method, while achieving an average of 25× higher
accuracy under the same computation time constraints.
keywords: lstm, low-rank approximation, pruning, fpgas
| 1 |
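a numpy sketch of the two approximations the framework combines: truncated-svd low-rank factorization followed by magnitude pruning of the factors. ranks and sparsity levels are illustrative, and the fpga architecture itself is out of scope here:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512))            # stand-in for an lstm gate weight matrix

# low-rank: w ≈ a b, storing two thin factors instead of w
r = 64
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]                       # 512 x r
B = Vt[:r]                                 # r x 512

# pruning: zero out the smallest-magnitude entries of the factors
def prune(M, sparsity=0.5):
    thresh = np.quantile(np.abs(M), sparsity)
    return np.where(np.abs(M) < thresh, 0.0, M)

A, B = prune(A), prune(B)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f'relative approximation error: {err:.3f}')
```

iterating this (re-training between compression steps, as the paper does) is what recovers application-level accuracy under the reduced compute budget.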
abstract. there has been substantial interest in estimating the value of a graph parameter, i.e.,
of a real-valued function defined on the set of finite graphs, by querying a randomly sampled
substructure whose size is independent of the size of the input. graph parameters that may be
successfully estimated in this way are said to be testable or estimable, and the sample complexity
qz = qz (ε) of an estimable parameter z is the size of a random sample of a graph g required to
ensure that the value of z(g) may be estimated within an error of ε with probability at least 2/3. in
this paper, for any fixed monotone graph property p = forb(f), we study the sample complexity
of estimating a bounded graph parameter zf that, for an input graph g, counts the number of
spanning subgraphs of g that satisfy p. to improve upon previous upper bounds on the sample
complexity, we show that the vertex set of any graph that satisfies a monotone property p may be
partitioned equitably into a constant number of classes in such a way that the cluster graph induced
by the partition is not far from satisfying a natural weighted graph generalization of p. properties
for which this holds are said to be recoverable, and the study of recoverable properties may be of
independent interest.
| 8 |
abstract
the next generation wireless networks (i.e. 5g and beyond), which would be extremely dynamic and
complex due to the ultra-dense deployment of heterogeneous networks (hetnets), pose many critical
challenges for network planning, operation, management and troubleshooting. at the same time, generation
and consumption of wireless data are becoming increasingly distributed with ongoing paradigm shift from
people-centric to machine-oriented communications, making the operation of future wireless networks even
more complex. in mitigating the complexity of future network operation, new approaches of intelligently
utilizing distributed computational resources with improved context-awareness becomes extremely important.
in this regard, the emerging fog (edge) computing architecture aiming to distribute computing, storage, control,
communication, and networking functions closer to end users, has great potential for enabling efficient
operation of future wireless networks. these promising architectures make the adoption of artificial intelligence
(ai) principles, which incorporate learning, reasoning and decision-making mechanisms, a natural choice
for designing a tightly integrated network. towards this end, this article provides a comprehensive survey
on the utilization of ai integrating machine learning, data analytics and natural language processing (nlp)
techniques for enhancing the efficiency of wireless network operation. in particular, we provide comprehensive
discussion on the utilization of these techniques for efficient data acquisition, knowledge discovery, network
planning, operation and management of the next generation wireless networks. a brief case study utilizing the
ai techniques for this network has also been provided.
keywords– 5g and beyond, artificial (machine) intelligence, context-aware-wireless, ml, nlp, ontology
| 7 |
abstract. in photoacoustic imaging (pa), delay-and-sum (das) beamformer is a common beamforming algorithm
having a simple implementation. however, it results in a poor resolution and high sidelobes. to address these challenges, a new algorithm namely delay-multiply-and-sum (dmas) was introduced having lower sidelobes compared
to das. to improve the resolution of dmas, a novel beamformer is introduced using minimum variance (mv) adaptive beamforming combined with dmas, so-called minimum variance-based dmas (mvb-dmas). it is shown
that expanding the dmas equation results in multiple terms representing a das algebra. it is proposed to use the
mv adaptive beamformer instead of the existing das. mvb-dmas is evaluated numerically and experimentally. in
particular, at the depth of 45 mm mvb-dmas results in about 31 db, 18 db and 8 db sidelobe reduction compared
to das, mv and dmas, respectively. the quantitative results of the simulations show that mvb-dmas leads to
improvements in full-width-half-maximum of about 96%, 94% and 45%, and in signal-to-noise ratio of about 89%, 15% and
35% compared to das, dmas and mv, respectively. in particular, at the depth of 33 mm of the experimental images,
mvb-dmas results in about 20 db sidelobe reduction in comparison with other beamformers.
keywords: photoacoustic imaging, beamforming, delay-multiply-and-sum, minimum variance, linear-array imaging.
*ali mahloojifar, mahlooji@modares.ac.ir
| 7 |
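a numpy sketch contrasting das with dmas on pre-delayed channel samples (delays already applied, so beamforming reduces to combining samples across the array); per the abstract, mvb-dmas then replaces the das terms that appear inside the expanded dmas algebra with minimum-variance weights, which this sketch does not include:

```python
import numpy as np

def das(x):
    """delay-and-sum: x holds the delayed samples of all elements for one pixel."""
    return x.sum()

def dmas(x):
    """delay-multiply-and-sum: pairwise products with signed square roots."""
    y = np.sign(x) * np.sqrt(np.abs(x))
    out = 0.0
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            out += y[i] * y[j]
    return out

coherent = np.array([0.9, 1.1, 1.0, 0.95])     # on-target: samples agree in sign
incoherent = np.array([0.8, -1.0, 0.3, -0.4])  # off-target: mixed signs
print(das(coherent), dmas(coherent))
print(das(incoherent), dmas(incoherent))
```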
abstract a markov decision process (mdp) framework is adopted to represent
ensemble control of devices with cyclic energy consumption patterns, e.g., thermostatically controlled loads. specifically we utilize and develop the class of mdp
models previously coined linearly solvable mdps, that describe optimal dynamics
of the probability distribution of an ensemble of many cycling devices. two principally different settings are discussed. first, we consider optimal strategy of the
ensemble aggregator balancing between minimization of the cost of operations and
minimization of the ensemble welfare penalty, where the latter is represented as a
kl-divergence between actual and normal probability distributions of the ensemble.
then, second, we shift to the demand response setting modeling the aggregator’s
task to minimize the welfare penalty under the condition that the aggregated consumption matches the targeted time-varying consumption requested by the system
operator. we discuss a modification of both settings aimed at encouraging or constraining the transitions between different states. the dynamic programming feature
of the resulting modified mdps is always preserved; however, ‘linear solvability’ is
lost fully or partially, depending on the type of modification. we also conducted
some (limited in scope) numerical experimentation using the formulations of the
first setting. we conclude by discussing future generalizations and applications.
| 3 |
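a sketch of the 'linearly solvable' structure mentioned above (todorov-style lmdp): with state cost q and passive ('normal') dynamics p, the exponentiated value function z solves a linear fixed-point equation, and the optimal controlled transition is the kl-tilted p. the 3-state chain here is a toy stand-in for an ensemble of cycling devices:

```python
import numpy as np

q = np.array([0.1, 1.0, 0.3])                 # state costs
P = np.array([[.8, .2, .0],                   # passive dynamics
              [.1, .8, .1],
              [.0, .2, .8]])

# desirability z satisfies (up to an eigenvalue) z = exp(-q) * (p @ z)
z = np.ones(3)
for _ in range(500):                          # power iteration on the linear map
    z_new = np.exp(-q) * (P @ z)
    z = z_new / np.linalg.norm(z_new)

u = P * z                                     # optimal policy: u(j|i) ∝ p(j|i) z(j)
u /= u.sum(axis=1, keepdims=True)
print(u)
```

the modifications discussed in the abstract (encouraging or constraining certain transitions, demand-response consumption constraints) perturb exactly this linear structure, which is why linear solvability is lost fully or partially.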
abstract. let k be a field of characteristic zero and a a k-algebra such that all the k-subalgebras generated by finitely many
elements of a are finite dimensional over k. a k-e-derivation of
a is a k-linear map of the form i − φ for some k-algebra endomorphism φ of a, where i denotes the identity map of a. in
this paper we first show that for all locally finite k-derivations
d and locally finite k-algebra automorphisms φ of a, the images of d and i − φ do not contain any nonzero idempotent of
a. we then use this result to show some cases of the lfed and
lned conjectures proposed in [z4]. more precisely, we show the
lned conjecture for a, and the lfed conjecture for all locally
finite k-derivations of a and all locally finite k-e-derivations of
the form δ = i − φ with φ being surjective. in particular, both
conjectures are proved for all finite dimensional k-algebras. furthermore, some finite extensions of derivations and automorphism
to inner derivations and inner automorphisms, respectively, have
also been established. this result is not only crucial in the proofs
of the results above, but also interesting on its own right.
| 0 |
abstract
a univariate polynomial f over a field is decomposable if f =
g ◦ h = g(h) for nonlinear polynomials g and h. in order to count the
decomposables, one wants to know, under a suitable normalization, the
number of equal-degree collisions of the form f = g ◦ h = g∗ ◦ h∗ with
(g, h) ≠ (g∗, h∗) and deg g = deg g∗. such collisions only occur in the
wild case, where the field characteristic p divides deg f . reasonable
bounds on the number of decomposables over a finite field are known,
but they are less sharp in the wild case, in particular for degree p2 .
we provide a classification of all polynomials of degree p2 with
a collision. it yields the exact number of decomposable polynomials
of degree p2 over a finite field of characteristic p. we also present
an efficient algorithm that determines whether a given polynomial of
degree p2 has a collision or not.
| 0 |
abstract
inhomogeneous random graph models encompass many network models such as stochastic block
models and latent position models. we consider the problem of statistical estimation of the matrix of
connection probabilities based on the observations of the adjacency matrix of the network. taking the
stochastic block model as an approximation, we construct estimators of network connection probabilities
– the ordinary block constant least squares estimator, and its restricted version. we show that they
satisfy oracle inequalities with respect to the block constant oracle. as a consequence, we derive optimal
rates of estimation of the probability matrix. our results cover the important setting of sparse networks.
another consequence consists in establishing upper bounds on the minimax risks for graphon estimation
in the l2 norm when the probability matrix is sampled according to a graphon model. these bounds
include an additional term accounting for the “agnostic” error induced by the variability of the latent
unobserved variables of the graphon model. in this setting, the optimal rates are influenced not only
by the bias and variance components as in usual nonparametric problems but also include the third
component, which is the agnostic error. the results shed light on the differences between estimation
under the empirical loss (the probability matrix estimation) and under the integrated loss (the graphon
estimation).
| 10 |
abstract
motivation: intimately tied to assembly quality is the complexity of the de bruijn graph built by the
assembler. thus, there have been many paradigms developed to decrease the complexity of the de bruijn
graph. one obvious combinatorial paradigm for this is to allow the value of k to vary; having a larger value
of k where the graph is more complex and a smaller value of k where the graph would likely contain fewer
spurious edges and vertices. one open problem that affects the practicality of this method is how to predict
the value of k prior to building the de bruijn graph. we show that optimal values of k can be predicted
prior to assembly by using the information contained in a phylogenetically close genome and, therefore, help
make the use of multiple values of k practical for genome assembly.
results: we present hyda-vista, which is a genome assembler that uses homology information to choose
a value of k for each read prior to the de bruijn graph construction. the chosen k is optimal if there are
no sequencing errors and the coverage is sufficient. fundamental to our method is the construction of the
maximal sequence landscape, which is a data structure that stores for each position in the input string, the
largest repeated substring containing that position. in particular, we show the maximal sequence landscape
can be constructed in o(n+n log n)-time and o(n)-space. hyda-vista first constructs the maximal sequence
landscape for a homologous genome. the reads are then aligned to this reference genome, and values of k are
assigned to each read using the maximal sequence landscape and the alignments. eventually, all the reads
are assembled by an iterative de bruijn graph construction method. our results and comparison to other
assemblers demonstrate that hyda-vista achieves the best assembly of e. coli before repeat resolution or
scaffolding.
availability: hyda-vista is freely available at https://sites.google.com/site/hydavista. the code
for constructing the maximal sequence landscape and for choosing the optimal value of k for each read is
also on the website and could be incorporated into any genome assembler.
contact: basir@cs.colostate.edu
| 5 |
abstract. euclidean functions with values in an arbitrary well-ordered set
were first considered in a 1949 work of motzkin and studied in more detail
in work of fletcher, samuel and nagata in the 1970’s and 1980’s. here these
results are revisited, simplified, and extended. the two main themes are (i)
consideration of ord-valued functions on an artinian poset and (ii) use of
ordinal arithmetic, including the hessenberg-brookfield ordinal sum. in particular, to any euclidean ring we associate an ordinal invariant, its euclidean
order type, and we initiate a study of this invariant. the main new result
gives upper and lower bounds on the euclidean order type of a finite product
of euclidean rings in terms of the euclidean order types of the factor rings.
| 0 |
abstract we study the problem of computing the maxima of a set of
n d-dimensional points. for dimensions 2 and 3, there are algorithms to
solve the problem with order-oblivious instance-optimal running time.
however, in higher dimensions there is still room for improvements. we
present an algorithm sensitive to the structural entropy of the input set,
which improves the running time, for large classes of instances, on the
best solution for maxima to date for d ≥ 4.
| 8 |
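for context, the plain quadratic-time maxima computation that the instance-sensitive algorithm improves upon; a point is maximal if no other point dominates it coordinatewise:

```python
import numpy as np

def maxima(points):
    """o(n^2 d) baseline: keep points not dominated by any other point."""
    pts = np.asarray(points)
    keep = []
    for p in pts:
        dominated = ((pts >= p).all(axis=1) & (pts > p).any(axis=1)).any()
        if not dominated:
            keep.append(p)
    return np.array(keep)

pts = [[1, 4], [3, 3], [2, 2], [4, 1], [0, 0]]
print(maxima(pts))   # [[1 4] [3 3] [4 1]]
```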
abstract
in [8] one of the authors constructed uncountable families of groups of type fp
and of n-dimensional poincaré duality groups for each n ≥ 4. we show that the
groups constructed in [8] comprise uncountably many quasi-isometry classes. we
deduce that for each n ≥ 4 there are uncountably many quasi-isometry classes of
acyclic n-manifolds admitting free cocompact properly discontinuous discrete group
actions.
| 4 |
abstract: this paper develops a novel approach to obtaining the optimal scheduling strategy in a multi-input multi-output (mimo)
multi-access channel (mac), where each transmitter is powered by an individual energy harvesting process. relying on state-of-the-art convex optimization tools, the proposed approach provides a low-complexity block coordinate ascent algorithm to obtain
the optimal transmission policy that maximizes the weighted sum-throughput for mimo mac. the proposed approach can provide
the optimal benchmarks for all practical schemes in energy-harvesting powered mimo mac transmissions. based on the revealed
structure of the optimal policy, we also propose an efficient online scheme, which requires only causal knowledge of energy arrival
realizations. numerical results are provided to demonstrate the merits of the proposed novel scheme.
| 7 |
abstract
this paper presents findings for training a q-learning reinforcement learning agent
using natural gradient techniques. we compare the original deep q-network (dqn)
algorithm to its natural gradient counterpart (ngdqn), measuring ngdqn and
dqn performance on classic control environments without target networks. we
find that ngdqn performs favorably relative to dqn, converging to significantly
better policies faster and more frequently. these results indicate that natural
gradient could be used for value function optimization in reinforcement learning to
accelerate and stabilize training.
| 2 |
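a generic natural-gradient step, the ingredient ngdqn adds to dqn: precondition the loss gradient with a damped empirical fisher matrix. this is an illustration of the principle, not the paper's ngdqn update:

```python
import numpy as np

def natural_gradient_step(per_sample_grads, lr=0.1, damping=1e-3):
    G = np.asarray(per_sample_grads)          # shape (batch, n_params)
    g = G.mean(axis=0)                        # plain gradient
    F = G.T @ G / len(G)                      # empirical fisher (outer-product avg)
    F += damping * np.eye(F.shape[0])         # damping keeps f invertible
    return -lr * np.linalg.solve(F, g)        # preconditioned update direction

rng = np.random.default_rng(0)
print(natural_gradient_step(rng.normal(size=(32, 4))))
```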
abstract
epileptic seizure activity shows complicated dynamics in both space and time. to understand
the evolution and propagation of seizures, spatially extended sets of data need to be analysed.
we have previously described an efficient filtering scheme using variational laplace that can be
used in the dynamic causal modelling (dcm)
framework (friston et al., 2003) to estimate the
temporal dynamics of seizures recorded using either invasive or non-invasive electrical recordings (eeg/ecog). spatiotemporal dynamics are
modelled using a partial differential equation –
in contrast to the ordinary differential equation
used in our previous work on temporal estimation
of seizure dynamics (cooray et al., 2016). we
provide the requisite theoretical background for
the method and test the ensuing scheme on simulated seizure activity data and empirical invasive
ecog data. the method provides a framework
to assimilate the spatial and temporal dynamics
of seizure activity, an aspect of great physiological and clinical importance.
| 9 |
abstract
this paper is concerned with the properties of gaussian random fields defined on a
riemannian homogeneous space, under the assumption that the probability distribution be invariant under the isometry group of the space. we first indicate, building
on early results of yaglom, how the available information on group-representation-theory-related special functions makes it possible to give completely explicit descriptions of these fields in many cases of interest. we then turn to the expected size of the
zero-set: extending two-dimensional results from optics and neuroscience, we show
that every invariant field comes with a natural unit of volume (defined in terms of the
geometrical redundancies in the field) with respect to which the average size of the
zero-set depends only on the dimension of the source and target spaces, and not on
the precise symmetry exhibited by the field. both the volume unit and the associated
density of zeroes can in principle be evaluated from a single sample of the field, and
our result provides a numerical signature for whether a given individual map is
a sample from an invariant gaussian field.
| 4 |
abstract
we consider a phase retrieval problem, where we want to reconstruct
an n-dimensional vector from its phaseless scalar products with m sensing
vectors, independently sampled from complex normal distributions. we
show that, with a suitable initialization procedure, the classical algorithm
of alternating projections succeeds with high probability when m ≥ cn,
for some c > 0. we conjecture that this result is still true when no special
initialization procedure is used, and present numerical experiments that
support this conjecture.
| 10 |
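a numpy sketch of the pipeline analyzed above — complex gaussian sensing vectors, a spectral initialization, then plain alternating projections; the sizes and iteration counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 8 * 32
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x_true = rng.normal(size=n) + 1j * rng.normal(size=n)
b = np.abs(A @ x_true)                        # phaseless measurements

# spectral initialization: top eigenvector of sum_i b_i^2 a_i a_i*
Y = (A.conj().T * b**2) @ A / m
w, V = np.linalg.eigh(Y)
x = V[:, -1] * np.linalg.norm(b) / np.sqrt(m)

Apinv = np.linalg.pinv(A)
for _ in range(200):                          # alternating projections
    x = Apinv @ (b * np.exp(1j * np.angle(A @ x)))

# recovery holds up to a global phase
phase = np.vdot(x, x_true) / abs(np.vdot(x, x_true))
print(np.linalg.norm(x * phase - x_true) / np.linalg.norm(x_true))
```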
abstract:
the antitubercular activity of a series of sulfathiazole derivatives was subjected
to quantitative structure–activity
relationship (qsar) analysis in an attempt to derive and understand a correlation between the biological
activity as dependent variable and various descriptors as independent variables. qsar models were generated using 28
compounds. several statistical regression expressions were obtained using partial least squares (pls) regression,
multiple linear regression (mlr) and principal component regression (pcr) methods. among these
methods, the partial least squares (pls) regression method has shown very promising results compared to the other two
methods. a qsar model was generated by a training set of 18 molecules with a correlation coefficient (r) of
0.9191, a significant cross-validated correlation coefficient of 0.8300, and an f-test value of 53.5783;
for the external test set, the coefficient of correlation of the predicted data set was −3.6132, using the
partial least squares regression method.
| 5 |
abstract
quantum computing is a promising approach to computation that is based on equations from quantum mechanics. a simulator for quantum algorithms must be capable of performing heavy mathematical matrix transforms. the design of the simulator itself takes one of three forms: a quantum turing machine, a network model of connected gates (a quantum circuit model), or a quantum programming language; some simulators are hybrids of these.
we studied previous simulators and adopted features from three simulators with different implementation languages, different paradigms, and different target platforms: quantum computing language (qcl), quasi, and the quantum optics toolbox for matlab 5. our simulator for quantum algorithms takes the form of a package or a programming library for quantum computing, with a case study showing its use in the circuit model.
.net is a promising platform for computing. vb.net is an easy, highly productive programming language with the full power and functionality provided by the .net framework. it is a highly readable, writeable, and flexible language, compared in many respects to other languages such as c#.net. we adopted vb.net despite its lack of built-in complex-number and matrix operations compared to matlab.
for the implementation, we first built a mathematical core of matrix operations. then we built a quantum core which contains basic qubit and register operations; basic 1d, 2d, and 3d quantum gates; and multi-view visualization of the quantum state, together with a demo window showing how to use and get the most out of the package.
keywords: quantum computing, quantum simulator, quantum programming language, q#, a quantum computation package, .net platform, turing machine, quantum circuit model, quantum gates.
| 6 |
abstract. we adapt the construction of the grothendieck group associated to a commutative monoïd to handle idempotent monoïds. our construction works for a restricted class of commutative monoïds; it agrees with the grothendieck group construction in many cases and yields a hypergroup which solves the universal problem for morphisms to hypergroups. it gives the expected non-trivial hypergroup construction in the case of idempotent monoïds.
| 0 |
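for context, the classical construction that the abstract adapts can be stated in two lines; the following is the standard grothendieck group, not the authors' hypergroup variant, together with the standard observation of why it degenerates on idempotent monoïds.

```latex
% the classical grothendieck group of a commutative monoid $(M,+)$, which
% the abstract adapts; this is the standard construction, not the authors'
% hypergroup variant.
\[
  K(M) \;=\; (M \times M)/\!\sim, \qquad
  (m_1, m_2) \sim (n_1, n_2) \iff \exists k \in M:\; m_1 + n_2 + k = n_1 + m_2 + k .
\]
% for an idempotent monoid ($m + m = m$ for all $m$) this collapses: taking
% $k = m_1 + m_2 + n_1 + n_2$ relates any two pairs, so $K(M)$ is trivial,
% which is why a hypergroup target is needed instead.
```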
abstract. this article deals with problems related to efficient sensor placement in linear time-invariant discrete-time systems with partial state observations. the output matrix is assumed to be constrained in the sense that
the set of states that each output can measure is pre-specified. two problems are addressed assuming purely structural conditions, where only the interconnection structure of the system is known. 1) we establish that
identifying the minimal number of sensors required to ensure a desired structural observability index is np-complete. 2) we propose an efficient greedy
strategy for selecting a fixed number of sensors from the given set of sensors in
order to maximize the number of states structurally observable in the system.
we identify a large class of systems for which both problems are solvable
in polynomial time using simple greedy algorithms to provide best approximate solutions. an illustration of the techniques developed here is given on
the benchmark ieee 118-bus power network, which has ∼ 400 states in its
linearized model.
| 3 |
abstract
biometric authentication is important for a large
range of systems, including but not limited to consumer electronic devices such as phones. understanding the limits of and attacks on such systems
is therefore crucial. this paper presents an attack on fingerprint recognition systems using masterprints, synthetic fingerprints that are capable
of spoofing multiple people’s fingerprints. the
method described is the first to generate complete
image-level masterprints, and further exceeds the
attack accuracy of previous methods that could
not produce complete images. the method, latent variable evolution, is based on training a
generative adversarial network on a set of real
fingerprint images. stochastic search in the form
of the covariance matrix adaptation evolution
strategy is then used to search for latent variables (inputs) to the generator network that optimize
the number of matches from a fingerprint recognizer. we find masterprints that a commercial
fingerprint system matches to 23% of all users in
a strict security setting, and 77% of all users at
a looser security setting. the underlying method
is likely to have broad usefulness for security research as well as in aesthetic domains.
| 1 |
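the search loop described above can be sketched generically with the pycma package; `generator` and `match_score` below are toy stand-ins for the paper's trained gan and fingerprint matcher, so this shows only the shape of latent variable evolution, not a working attack.

```python
# a generic sketch of "latent variable evolution": use cma-es (via the pycma
# package) to search the latent space of a generator for inputs that maximize
# a match score. the generator and scorer are toy stand-ins, assumptions made
# so the loop runs; they are not the paper's gan or commercial matcher.
import cma
import numpy as np

LATENT_DIM = 32
rng = np.random.default_rng(0)
_target = rng.standard_normal(LATENT_DIM)   # toy proxy for "many users matched"

def generator(z):
    """stand-in for a trained gan generator mapping a latent z to an image."""
    return np.tanh(z)

def match_score(image):
    """stand-in for the matcher: higher means more enrolled users matched."""
    return -np.linalg.norm(image - np.tanh(_target))

es = cma.CMAEvolutionStrategy(LATENT_DIM * [0.0], 0.5)  # mean 0, initial step 0.5
for _ in range(50):                                     # fixed generation budget
    candidates = es.ask()                               # sample latent vectors
    # cma-es minimizes, so negate the score
    es.tell(candidates, [-match_score(generator(z)) for z in candidates])
best_latent = es.result.xbest                           # best latent vector found
```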
abstract
we present new capacity upper bounds for the discrete-time poisson channel with no dark current
and an average-power constraint. these bounds are a simple consequence of techniques developed by one
of the authors for the seemingly unrelated problem of upper bounding the capacity of binary deletion and
repetition channels. previously, the best known capacity upper bound in the regime where the average-power constraint does not approach zero was due to martinez (josa b, 2007), which we re-derive as a
special case of our framework. furthermore, we instantiate our framework to obtain a closed-form bound
that noticeably improves the result of martinez everywhere.
| 7 |
abstract
we study the problem of 2-dimensional orthogonal range counting with additive error. given
a set p of n points drawn from an n × n grid and an error parameter ε, the goal is to build
a data structure, such that for any orthogonal range r, it can return the number of points in
p ∩ r with additive error εn. a well-known solution for this problem is the ε-approximation,
which is a subset a ⊆ p that can estimate the number of points in p ∩ r with the number of points in a ∩ r. it is known that an ε-approximation of size o((1/ε) log^2.5 (1/ε)) exists for any p with respect to orthogonal ranges, and the best lower bound is ω((1/ε) log (1/ε)).
the ε-approximation is a rather restricted data structure, as we are not allowed to store any information other than the coordinates of the points in p. in this paper, we explore what can be achieved without any restriction on the data structure. we first describe a simple data structure that uses o((1/ε)(log^2 (1/ε) + log n)) bits and answers queries with error εn. we then prove a lower bound that any data structure that answers queries with error εn must use ω((1/ε)(log^2 (1/ε) + log n)) bits. our lower bound is information-theoretic: we show that there is a collection of 2^ω(n log n) point sets that have large union combinatorial discrepancy, and thus are hard to distinguish unless ω(n log n) bits are used.
| 8 |
abstract
we present gradual type theory, a logic and type theory for call-by-name gradual typing. we define the
central constructions of gradual typing (the dynamic type, type casts and type error) in a novel way, by
universal properties relative to new judgments for gradual type and term dynamism, which were developed in the study of blame calculi and used to state the “gradual guarantee” theorem of gradual typing. combined with the
ordinary extensionality (η) principles that type theory provides, we show that most of the standard
operational behavior of casts is uniquely determined by the gradual guarantee. this provides a semantic
justification for the definitions of casts, and shows that non-standard definitions of casts must violate
these principles. our type theory is the internal language of a certain class of preorder categories called
equipments. we give a general construction of an equipment interpreting gradual type theory from a
2-category representing non-gradual types and programs, which is a semantic analogue of findler and
felleisen’s definitions of contracts, and use it to build some concrete domain-theoretic models of gradual
typing.
| 6 |
abstract
deep neural network (dnn) acoustic models have yielded
many state-of-the-art results in automatic speech recognition
(asr) tasks. more recently, recurrent neural network (rnn)
models have been shown to outperform their dnn counterparts.
however, state-of-the-art dnn and rnn models tend to be impractical to deploy on embedded systems with limited computational capacity. traditionally, the approach for embedded platforms is to either train a small dnn directly, or to train a small
dnn that learns the output distribution of a large dnn. in this
paper, we utilize a state-of-the-art rnn to transfer knowledge to a small dnn. we use the rnn model to generate soft alignments and minimize the kullback-leibler divergence against the small dnn. the small dnn trained on the soft rnn alignments achieved a wer of 3.93 on the wall street journal (wsj) eval92 task, compared to a baseline wer of 4.54, a relative improvement of more than 13%.
index terms: deep neural networks, recurrent neural networks, automatic speech recognition, model compression,
embedded platforms
| 9 |
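a minimal sketch of the distillation objective described above: the small dnn is trained to match the soft alignments of the rnn by minimizing a kl divergence. the batch size and output dimensionality below are illustrative assumptions.

```python
# kl-divergence knowledge transfer from a large rnn to a small dnn, as
# described in the abstract. shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      rnn_posteriors: torch.Tensor) -> torch.Tensor:
    """kl(rnn || student) per frame, averaged over the batch."""
    log_q = F.log_softmax(student_logits, dim=-1)   # student log-probs
    return F.kl_div(log_q, rnn_posteriors, reduction="batchmean")

# usage: logits from the small dnn, soft alignments from the large rnn
student_logits = torch.randn(32, 3000)              # batch x output states (illustrative)
rnn_posteriors = torch.softmax(torch.randn(32, 3000), dim=-1)
loss = distillation_loss(student_logits, rnn_posteriors)
loss_value = loss.item()
```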
abstract
small object detection is a challenging task in computer vision due to the limited resolution and information of small objects. in order to solve this problem, the majority of existing methods sacrifice speed for improvements in accuracy. in this paper, we aim to detect small objects at a fast speed, using the single shot multibox detector (ssd), the best object detector with respect to the accuracy-vs-speed trade-off, as the base architecture. we propose a multi-level feature fusion method for introducing contextual information in ssd, in order to improve the accuracy for small objects. for the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in the way contextual information is added. experimental results show that these two fusion modules obtain a higher map on pascal voc2007 than the baseline ssd by 1.6 and 1.7 points respectively, with improvements of 2-3 points on some small-object categories in particular. their testing speeds are 43 and 40 fps respectively, exceeding the state-of-the-art deconvolutional single shot detector (dssd) by 29.4 and 26.4 fps.
keywords: small object detection, feature fusion, real-time, single shot multi-box detector.
| 7 |
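a hedged pytorch sketch of the two fusion styles named in the abstract: a shallow, high-resolution feature map is fused with a deeper, contextual one either by concatenation or by element-wise sum. channel counts and the deconvolution used for upsampling are assumptions, not the paper's exact layers.

```python
# two feature-fusion styles for injecting deep context into a shallow map,
# as described in the abstract. channel counts and upsampling choice are
# illustrative assumptions.
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    def __init__(self, c_shallow=512, c_deep=1024, c_out=512):
        super().__init__()
        self.up = nn.ConvTranspose2d(c_deep, c_shallow, 2, stride=2)  # upsample deep map
        self.reduce = nn.Conv2d(c_shallow * 2, c_out, 1)              # mix after concat

    def forward(self, shallow, deep):
        return self.reduce(torch.cat([shallow, self.up(deep)], dim=1))

class ElementSumFusion(nn.Module):
    def __init__(self, c_shallow=512, c_deep=1024):
        super().__init__()
        self.up = nn.ConvTranspose2d(c_deep, c_shallow, 2, stride=2)  # match channels/size

    def forward(self, shallow, deep):
        return shallow + self.up(deep)                                # element-wise sum

shallow = torch.randn(1, 512, 38, 38)   # e.g. a conv4_3-like map
deep = torch.randn(1, 1024, 19, 19)     # e.g. an fc7-like map
out = ConcatFusion()(shallow, deep)     # (1, 512, 38, 38)
```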
abstract
non-orthogonal multiple access (noma) has attracted much recent attention owing to its capability
for improving the system spectral efficiency in wireless communications. deploying noma in heterogeneous networks can satisfy users’ explosive data traffic requirements, and noma will likely play an important role in the fifth-generation (5g) mobile communication networks. however, noma brings new technical challenges for resource allocation due to the mutual cross-tier interference in heterogeneous
networks. in this article, to study the tradeoff between data rate performance and energy consumption
in noma, we examine the problem of energy-efficient user scheduling and power optimization in 5g
noma heterogeneous networks. the energy-efficient user scheduling and power allocation schemes
are introduced for the downlink 5g noma heterogeneous network for perfect and imperfect channel
state information (csi) respectively. simulation results show that the resource allocation schemes can
significantly increase the energy efficiency of 5g noma heterogeneous network for both cases of
perfect csi and imperfect csi.
| 7 |
abstract. chest x-ray is the most common medical imaging exam used
to assess multiple pathologies. automated algorithms and tools have the
potential to support the reading workflow, improve efficiency, and reduce reading errors. with the availability of large scale data sets, several methods have been proposed to classify pathologies on chest x-ray
images. however, most methods report performance based on random image-based splitting, ignoring the high probability of the same patient appearing in both the training and test sets. in addition, most methods fail to
explicitly incorporate the spatial information of abnormalities or utilize
the high-resolution images. we propose a novel approach based on location-aware dense networks (dnetloc), whereby we incorporate both
high-resolution image data and spatial information for abnormality classification. we evaluate our method on the largest data set reported in the
community, containing a total of 86,876 patients and 297,541 chest x-ray
images. we achieve (i) the best average auc score for published training
and test splits on the single benchmarking data set (chestx-ray14 [1]),
and (ii) improved auc scores when the pathology location information
is explicitly used. to foster future research, we demonstrate the limitations of the current benchmarking setup [1] and provide new reference
patient-wise splits for the used data sets. this could support consistent
and meaningful benchmarking of future methods on the largest publicly
available data sets.
| 2 |
abstract. we investigate the action of outer automorphisms of finite groups of lie
type on their irreducible characters. we obtain a definite result for cuspidal characters.
as an application we verify the inductive mckay condition for some further infinite
families of simple groups at certain primes.
| 4 |
abstract
we propose an ecg denoising method based on a feed-forward neural network with three hidden layers. particularly useful for very noisy signals, this approach uses the available ecg channels to reconstruct a noisy channel. we tested the method on all the records from the physionet mit-bih arrhythmia database, adding electrode motion artifact noise. this denoising method improved the performance of publicly available ecg analysis programs on noisy ecg signals. this is an offline method that
can be used to remove noise from very corrupted holter records.
| 5 |
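a minimal sketch of the architecture described above, assuming 1-second windows at 360 hz (the mit-bih sampling rate) and hidden sizes chosen for illustration:

```python
# a feed-forward network with three hidden layers that maps a window from
# the available (noisy) channel to a reconstructed clean window. window
# length and hidden sizes are assumptions, not the paper's exact values.
import torch
import torch.nn as nn

class ECGDenoiser(nn.Module):
    def __init__(self, in_samples=360, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_samples, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),   # three hidden layers
            nn.Linear(hidden, in_samples),          # reconstructed clean window
        )

    def forward(self, noisy_window):
        return self.net(noisy_window)

model = ECGDenoiser()
noisy = torch.randn(8, 360)                 # batch of 1-second windows at 360 hz
clean_hat = model(noisy)
reference = torch.randn(8, 360)             # placeholder for the reference channel
loss = nn.functional.mse_loss(clean_hat, reference)
```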
abstract—network transfer and disk read are the most time
consuming operations in the repair process for node failures in
erasure-code-based distributed storage systems. recent developments on reed-solomon codes, the most widely used erasure
codes in practical storage systems, have shown that efficient repair
schemes specifically tailored to these codes can significantly reduce
the network bandwidth spent to recover single failures. however,
the i/o cost, that is, the number of disk reads performed in these repair schemes, remains largely unknown. we take the first step to
address this gap in the literature by investigating the i/o costs of
some existing repair schemes for full-length reed-solomon codes.
| 7 |
abstract. subset sum and k-sat are two of the most extensively studied problems in computer
science, and conjectures about their hardness are among the cornerstones of fine-grained complexity.
one of the most intriguing open problems in this area is to base the hardness of one of these problems
on the other.
our main result is a tight reduction from k-sat to subset sum on dense instances, proving that bellman’s 1962 pseudo-polynomial o*(t)-time algorithm for subset sum on n numbers and target t cannot be improved to time t^(1-ε) · 2^o(n) for any ε > 0, unless the strong exponential time hypothesis (seth) fails. this is one of the strongest known connections between any two of the core problems of
fine-grained complexity.
as a corollary, we prove a “direct-or” theorem for subset sum under seth, offering a new tool
for proving conditional lower bounds: it is now possible to assume that deciding whether one out of n given instances of subset sum is a yes instance requires time (nt)^(1-o(1)). as an application of this corollary, we prove a tight seth-based lower bound for the classical bicriteria s,t-path problem, which is extensively studied in operations research. we separate its complexity from that of subset sum: on graphs with m edges and edge lengths bounded by l, we show that the o(lm) pseudo-polynomial time algorithm by joksch from 1966 cannot be improved to õ(l + m), in contrast to a recent improvement for subset sum (bringmann, soda 2017).
| 8 |
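for reference, bellman's pseudo-polynomial dynamic program mentioned above (the o*(t)-time algorithm whose optimality the reduction establishes) fits in a few lines; the bitset formulation below keeps all reachable sums up to the target t in a single integer.

```python
# bellman's 1962 dynamic program for subset sum in o*(t) time. bit i of
# `reachable` is set iff some subset of the numbers seen so far sums to i.
def subset_sum(nums: list[int], t: int) -> bool:
    reachable = 1                       # only the empty sum 0 is reachable
    mask = (1 << (t + 1)) - 1           # discard sums larger than t
    for x in nums:
        reachable = (reachable | (reachable << x)) & mask
    return bool((reachable >> t) & 1)

assert subset_sum([3, 34, 4, 12, 5, 2], 9)        # 4 + 5 = 9
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)   # no subset sums to 30
```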
abstract
in evolutionary biology, the speciation history of living organisms is represented graphically by a phylogeny, that is, a rooted tree whose leaves correspond to current species and
branchings indicate past speciation events. phylogenies are commonly estimated from molecular sequences, such as dna sequences, collected from the species of interest. at a high level,
the idea behind this inference is simple: the further apart in the tree of life are two species,
the greater is the number of mutations to have accumulated in their genomes since their most
recent common ancestor. in order to obtain accurate estimates in phylogenetic analyses, it
is standard practice to employ statistical approaches based on stochastic models of sequence
evolution on a tree. for tractability, such models necessarily make simplifying assumptions
about the evolutionary mechanisms involved. in particular, commonly omitted are insertions
and deletions of nucleotides—also known as indels.
properly accounting for indels in statistical phylogenetic analyses remains a major challenge in computational evolutionary biology. here we consider the problem of reconstructing
ancestral sequences on a known phylogeny in a model of sequence evolution incorporating nucleotide substitutions, insertions and deletions, specifically the classical tkf91 process. we
focus on the case of dense phylogenies of bounded height, which we refer to as the taxon-rich
setting, where statistical consistency is achievable. we give the first polynomial-time ancestral reconstruction algorithm with provable guarantees under constant rates of mutation. our
algorithm succeeds when the phylogeny satisfies the “big bang” condition, a necessary and
sufficient condition for statistical consistency in this context.
| 10 |
abstract
over the years, many different indexing techniques and search algorithms have been proposed, including css-trees, csb+-trees, k-ary binary search, and fast architecture-sensitive tree search. there have also been papers on how best to set the many different parameters of these index structures, such as the node size of csb+-trees.
these indices have been proposed because cpu speeds have been increasing at a dramatically higher rate than memory speeds, giving rise to the von
neumann cpu–memory bottleneck. to hide the long latencies caused by
memory access, it has become very important to make good use of the features of modern cpus. in order to drive down the average number of cpu clock
cycles required to execute cpu instructions, and thus increase throughput, it
has become important to achieve a good utilization of cpu resources. some
of these are the data and instruction caches, and the translation lookaside
buffers. but it also has become important to avoid branch misprediction
penalties, and utilize vectorization provided by cpus in the form of simd
instructions.
while the layout of index structures has been heavily optimized for the
data cache of modern cpus, the instruction cache has been neglected so far.
in this paper, we present nitrogen, a framework for utilizing code generation
for speeding up index traversal in main memory database systems. by
bringing together data and code, we make index structures use the dormant
resource of the instruction cache. we show how to combine index compilation
with previous approaches, such as binary tree search, cache-sensitive tree
search, and the architecture-sensitive tree search presented by kim et al.
| 8 |
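a toy sketch of the index-compilation idea described above: emit specialized source code for binary search over a fixed, sorted key set, so that the keys live in the instruction stream rather than in a separate data structure. real systems like the one described emit optimized machine code; this python version only illustrates the principle.

```python
# "bring together data and code": generate a specialized lookup function in
# which the sorted keys are embedded as constants in an unrolled binary
# search, then compile it. a toy illustration, not the nitrogen framework.
def compile_index(keys: list[int]):
    def emit(lo: int, hi: int, depth: int) -> str:
        pad = "    " * depth
        if lo > hi:
            return f"{pad}return False\n"
        mid = (lo + hi) // 2
        return (f"{pad}if q == {keys[mid]}:\n{pad}    return True\n"
                f"{pad}elif q < {keys[mid]}:\n" + emit(lo, mid - 1, depth + 1) +
                f"{pad}else:\n" + emit(mid + 1, hi, depth + 1))

    src = "def lookup(q):\n" + emit(0, len(keys) - 1, 1)
    namespace: dict = {}
    exec(compile(src, "<index>", "exec"), namespace)  # jit-style compilation
    return namespace["lookup"]

lookup = compile_index([2, 3, 5, 7, 11, 13])
assert lookup(7) and not lookup(8)
```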
abstract
we consider a transportation system of heterogeneously connected vehicles, where not all vehicles are able to communicate. heterogeneous connectivity in transportation systems arises from practical constraints: (i) not all vehicles may be equipped with devices having communication interfaces, (ii) some vehicles may prefer not to communicate for privacy and security reasons, and (iii) communication links are not perfect, so packet losses and delays occur in practice. in this context, it is crucial to develop control
algorithms by taking into account the heterogeneity. in this
paper, we particularly focus on making traffic phase scheduling decisions. we develop a connectivity-aware traffic phase
scheduling algorithm for heterogeneously connected vehicles
that increases the intersection efficiency (in terms of the average number of vehicles that are allowed to pass the intersection) by taking into account the heterogeneity. the simulation results show that our algorithm significantly improves
the efficiency of intersections as compared to the baselines.
| 3 |
abstract
| 1 |
abstract
in this paper, we propose a single-agent logic of
goal-directed knowing how extending the standard
epistemic logic of knowing that with a new knowing how operator. the semantics of the new operator is based on the idea that knowing how to
achieve φ means that there exists a (uniform) strategy such that the agent knows that it can make sure
φ. we give an intuitive axiomatization of our logic
and prove the soundness, completeness and decidability of the logic. the crucial axioms relating
knowing that and knowing how illustrate our understanding of knowing how in this setting. this
logic can be used in representing both knowledgethat and knowledge-how.
| 2 |
abstract
surveys can be viewed as programs, complete with logic,
control flow, and bugs. word choice or the order in which
questions are asked can unintentionally bias responses. vague,
confusing, or intrusive questions can cause respondents to
abandon a survey. surveys can also have runtime errors: inattentive respondents can taint results. this effect is especially
problematic when deploying surveys in uncontrolled settings,
such as on the web or via crowdsourcing platforms. because
the results of surveys drive business decisions and inform scientific conclusions, it is crucial to make sure they are correct.
we present surveyman, a system for designing, deploying, and automatically debugging surveys. survey authors write their surveys in a lightweight domain-specific language aimed at end users. surveyman statically analyzes the survey to provide feedback to survey authors before deployment. it then compiles the survey into javascript and deploys it either to the web or a crowdsourcing platform. surveyman's dynamic analyses automatically find survey bugs, and control for the quality of responses. we evaluate surveyman's algorithms analytically and empirically, demonstrating its effectiveness with case studies of social science surveys conducted via amazon's mechanical turk.
| 6 |
abstract
in this paper, the performance of multiple-input multiple-output non-orthogonal multiple access
(mimo-noma) is investigated when multiple users are grouped into a cluster. the superiority of
mimo-noma over mimo orthogonal multiple access (mimo-oma) in terms of both sum channel
capacity and ergodic sum capacity is proved analytically. furthermore, it is demonstrated that the more
users are admitted to a cluster, the lower is the achieved sum rate, which illustrates the tradeoff between
the sum rate and maximum number of admitted users. on this basis, a user admission scheme is proposed,
which is optimal in terms of both sum rate and number of admitted users when the signal-to-interferenceplus-noise ratio thresholds of the users are equal. when these thresholds are different, the proposed
scheme still achieves good performance in balancing both criteria. moreover, under certain conditions,
it maximizes the number of admitted users. in addition, the complexity of the proposed scheme is linear in the number of users per cluster. simulation results verify the superiority of mimo-noma over
mimo-oma in terms of both sum rate and user fairness, as well as the effectiveness of the proposed
user admission scheme.
| 7 |
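for readers unfamiliar with noma, the standard two-user downlink rates (a textbook special case, not the paper's mimo cluster model) illustrate the power-domain multiplexing at work:

```latex
% standard two-user downlink noma rates, given here only for illustration.
% user 1 is the weak user, decoded directly; user 2 is the strong user,
% applying successive interference cancellation. $a_1 + a_2 = 1$ are
% power-allocation coefficients with $a_1 > a_2$.
\[
  R_1 = \log_2\!\Bigl(1 + \frac{a_1 P\,|h_1|^2}{a_2 P\,|h_1|^2 + \sigma^2}\Bigr),
  \qquad
  R_2 = \log_2\!\Bigl(1 + \frac{a_2 P\,|h_2|^2}{\sigma^2}\Bigr).
\]
% orthogonal access would instead time-share the channel, putting a factor
% $\tfrac{1}{2}$ in front of each user's log term.
```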
abstract
this paper introduces and approximately solves a multi-component
problem where small rectangular items are produced from large rectangular bins via guillotine cuts. an item is characterized by its width, height,
due date, and earliness and tardiness penalties per unit time. each item
induces a cost that is proportional to its earliness and tardiness. items cut
from the same bin form a batch, whose processing and completion times
depend on its assigned items. the items of a batch have the completion
time of their bin. the objective is to find a cutting plan that minimizes
the weighted sum of earliness and tardiness penalties. we address this
problem via a constraint programming (cp) based heuristic (cph) and
an agent-based modelling heuristic (abh). cph is an impact-based search
strategy, implemented in the general-purpose solver ibm cp optimizer.
abh is constructive. it builds a solution through repeated negotiations
between the set of agents representing the items and the set representing the bins. the agents cooperate to minimize the weighted earliness-tardiness penalties. the computational investigation shows that cph
outperforms abh on small-sized instances while the opposite prevails for
larger instances.
| 8 |
abstract view that our behavioral
types provide on osgi.
• a first implementation of a finite automata based behavioral type system for osgi that integrates
different tools and workflows into a framework.
• early versions of editors and related code for supporting adaption and checking.
• an exemplary integration of behavioral type checkers comprising minimization, normalization and comparison. one checker has been implemented in plain java. additionally, we have integrated
a checker and synthesis tool presented in [12] for deciding compatibility, deadlock freedom and
detecting conflicts in non-deterministic specifications at runtime and development time.
• usage scenarios (interaction protocols) of our behavioral types for osgi at runtime and development time.
• the modeling of an example system: a booking system to show different usage scenarios.
| 6 |
abstract. a principled approach to the design of program verification and construction tools is applied to separation logic. the control flow is modelled by
power series with convolution as separating conjunction. a generic construction
lifts resource monoids to assertion and predicate transformer quantales. the data
flow is captured by concrete store/heap models. these are linked to the separation
algebra by soundness proofs. verification conditions and transformation laws are
derived by equational reasoning within the predicate transformer quantale. this
separation of concerns makes an implementation in the isabelle/hol proof assistant simple and highly automatic. the resulting tool is correct by construction;
it is explained on the classical linked list reversal example.
| 6 |
abstract
knowledge representation is a long-standing and important topic in ai. a variety of models have been proposed for knowledge graph embedding, which projects symbolic entities and relations into a continuous vector space. however, most related methods merely focus on data fitting of the knowledge graph and ignore interpretable semantic expression. thus, traditional embedding methods are not friendly for applications that require semantic analysis, such as
question answering and entity retrieval. to this
end, this paper proposes a semantic representation
method for knowledge graph (ksr), which imposes a two-level hierarchical generative process
that globally extracts many aspects and then locally assigns a specific category in each aspect for
every triple. since both aspects and categories are
semantics-relevant, the collection of categories in
each aspect is treated as the semantic representation of this triple. extensive experiments show that
our model outperforms other state-of-the-art baselines substantially.
| 2 |
abstract—face aging has attracted considerable attention and interest from the computer vision community in recent years. numerous approaches, ranging from pure image processing techniques to deep learning structures, have been proposed in the literature. in this paper, we aim to review recent developments in modern deep learning based approaches, i.e. deep generative models, for the face aging task. their structures,
formulation, learning algorithms as well as synthesized results
are also provided with systematic discussions. moreover, the
aging databases used in most methods to learn the aging process
are also reviewed.
keywords-face aging, face age progression, deep generative models.
| 1 |
abstract
policy optimization methods have shown great promise in
solving complex reinforcement and imitation learning tasks.
while model-free methods are broadly applicable, they often require many samples to optimize complex policies. model-based methods greatly improve sample efficiency but at the cost of poor generalization, requiring a carefully handcrafted model of the system dynamics for each task. recently, hybrid methods have been successful in trading off applicability for improved sample complexity. however, these have been
limited to continuous action spaces. in this work, we present
a new hybrid method based on an approximation of the dynamics as an expectation over the next state under the current policy. this relaxation allows us to derive a novel hybrid policy gradient estimator, combining score function and
pathwise derivative estimators, that is applicable to discrete
action spaces. we show significant gains in sample complexity, ranging between 1.7 and 25×, when learning parameterized policies on cart pole, acrobot, mountain car and hand
mass. our method is applicable to both discrete and continuous action spaces, when competing pathwise methods are
limited to the latter.
| 2 |
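the two estimators that the hybrid method combines can be contrasted in a few lines of pytorch: the score-function (reinforce) estimator works for discrete actions, while the pathwise (reparameterization) estimator differentiates through the sample itself. the rewards below are stand-ins, not a real environment.

```python
# score-function vs. pathwise gradient estimators, the two ingredients the
# abstract's hybrid estimator combines. rewards are toy stand-ins.
import torch

logits = torch.randn(4, requires_grad=True)        # discrete policy parameters
mu = torch.zeros(1, requires_grad=True)            # continuous policy mean

# score-function estimator: grad E[r(a)] = E[r(a) * grad log pi(a)]
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
reward = torch.randn(())                            # stand-in reward r(a)
loss_sf = -(reward * dist.log_prob(action))         # works for discrete actions
loss_sf.backward()                                  # populates logits.grad

# pathwise estimator: a = mu + sigma * eps, differentiate through the sample
eps = torch.randn(1)
a = mu + 0.1 * eps                                  # reparameterized sample
loss_pw = (a ** 2).sum()                            # stand-in: maximize reward -a^2
loss_pw.backward()                                  # populates mu.grad
```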
abstract
motivated by the increasing need to understand the algorithmic foundations of distributed
large-scale graph computations, we study a number of fundamental graph problems in a message-passing model for distributed computing where k ≥ 2 machines jointly perform computations on graphs with n nodes (typically, n ≫ k). the input graph is assumed to be initially randomly partitioned among the k machines, a common implementation in many real-world systems.
communication is point-to-point, and the goal is to minimize the number of communication
rounds of the computation.
our main result is an (almost) optimal distributed randomized algorithm for graph connectivity. our algorithm runs in õ(n/k^2) rounds (the õ notation hides a polylog(n) factor and an additive polylog(n) term). this improves over the best previously known bound of õ(n/k) [klauck et al., soda 2015], and is optimal (up to a polylogarithmic factor) in view of an existing lower bound of ω̃(n/k^2). our improved algorithm uses several techniques, including linear
graph sketching, that prove useful in the design of efficient distributed graph algorithms. using
the connectivity algorithm as a building block, we then present fast randomized algorithms for
computing minimum spanning trees, (approximate) min-cuts, and for many graph verification
problems. all these algorithms take õ(n/k^2) rounds, and are optimal up to polylogarithmic factors. we also show an almost matching lower bound of ω̃(n/k^2) rounds for many graph verification problems by leveraging lower bounds in random-partition communication complexity.
| 8 |
abstract
we propose a framework that learns a representation transferable across different
domains and tasks in a label-efficient manner. our approach battles domain shift with a domain adversarial loss, and generalizes the embedding to novel tasks using
a metric learning-based approach. our model is simultaneously optimized on
labeled source data and unlabeled or sparsely labeled data in the target domain.
our method shows compelling results on novel classes within a new domain even
when only a few labeled examples per class are available, outperforming the
prevalent fine-tuning approach. in addition, we demonstrate the effectiveness of
our framework on the transfer learning task from image object recognition to video
action recognition.
| 1 |
abstract
| 7 |
abstract
we start with a simple introduction to topological data analysis, where the most popular tool is the persistence diagram. briefly, a persistence diagram is a multiset of points in the plane describing the persistence of topological features of a compact set when a scale parameter varies. since statistical methods are difficult
to apply directly on persistence diagrams, various alternative functional summary
statistics have been suggested, but either they do not contain the full information of
the persistence diagram or they are two-dimensional functions. we suggest a new
functional summary statistic that is one-dimensional and hence easier to handle,
and which under mild conditions contains the full information of the persistence diagram. its usefulness is illustrated in statistical settings concerned with point clouds
and brain artery trees. the appendix includes additional methods and examples,
together with technical details. the r-code used for all examples is available at
http://people.math.aau.dk/~christophe/rcode.zip.
| 10 |
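as a brief, hedged illustration of the object the summary statistic is built from, a persistence diagram of a point cloud can be computed with the gudhi library via a rips filtration:

```python
# computing a persistence diagram of a point cloud via a (vietoris-)rips
# filtration with the gudhi library; the paper's one-dimensional functional
# summary would then be derived from `diag`. parameters are illustrative.
import gudhi
import numpy as np

points = np.random.default_rng(0).random((100, 2))           # toy point cloud
rips = gudhi.RipsComplex(points=points, max_edge_length=0.5)
st = rips.create_simplex_tree(max_dimension=2)
diag = st.persistence()        # list of (dimension, (birth, death)) pairs
h1 = [(b, d) for dim, (b, d) in diag if dim == 1]             # loops (1-dim features)
```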
abstract. in this work, we present a deep learning framework for multiclass breast cancer image classification as our submission to the international conference on image analysis and recognition (iciar) 2018
grand challenge on breast cancer histology images (bach). as these
histology images are too large to fit into gpu memory, we first propose
using inception v3 to perform patch-level classification. the patch-level predictions are then passed through an ensemble fusion framework involving majority voting, a gradient boosting machine (gbm), and logistic regression to obtain the image-level prediction. we improve the sensitivity of the normal and benign predicted classes by designing a dual path
network (dpn) to be used as a feature extractor where these extracted
features are further sent to a second layer of ensemble prediction fusion
using gbm, logistic regression, and support vector machine (svm) to refine predictions. experimental results demonstrate that our framework achieves a 12.5% improvement over the state-of-the-art model.
| 1 |
abstract
segmentation of histopathology sections is a ubiquitous requirement in digital pathology, and due to the large variability of biological tissue, machine learning techniques have shown superior performance over standard image processing methods. as part of the glas@miccai2015 colon gland segmentation challenge, we present a learning-based algorithm to segment glands in tissue of benign and malignant colorectal cancer. images are preprocessed according to the hematoxylin-eosin staining protocol and two deep convolutional neural networks (cnn) are
trained as pixel classifiers. the cnn predictions are then regularized using a
figure-ground segmentation based on weighted total variation to produce the final
segmentation result. on two test sets, our approach achieves a tissue classification
accuracy of 98% and 94%, making use of the inherent capability of our system to
distinguish between benign and malignant tissue.
| 1 |
abstract— in this paper, a progressive learning technique for multi-class classification is proposed. this newly developed learning technique is independent of the number of class constraints, and it can learn new classes while still retaining the knowledge of previous classes. whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the parameters are calculated in such a way that it retains the knowledge learnt thus far. this technique is suitable for real-world applications where the number of classes is often unknown and online learning from real-time data is required. the consistency and the complexity of the progressive learning technique are analyzed. several standard datasets are used to evaluate the performance of the developed technique. a comparative study shows that the developed technique is superior.
| 9 |
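a minimal sketch of the remodeling step described above, under the assumption that growing the network amounts to adding an output neuron while copying the existing weights (the paper's parameter calculation is more elaborate):

```python
# grow a classifier head by one output neuron when a non-native class is
# encountered, copying old weights so previous knowledge is retained. a
# simplified illustration of the remodeling idea, not the paper's method.
import torch
import torch.nn as nn

def add_class(head: nn.Linear) -> nn.Linear:
    new_head = nn.Linear(head.in_features, head.out_features + 1)
    with torch.no_grad():
        new_head.weight[: head.out_features] = head.weight   # keep old classes
        new_head.bias[: head.out_features] = head.bias
        # the new row keeps its fresh random initialization
    return new_head

head = nn.Linear(64, 3)        # classifier currently knowing 3 classes
head = add_class(head)         # a 4th, non-native class was encountered
assert head.out_features == 4
```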
abstract
in our previous papers, we have described efficient and reliable methods for the generation of representative volume elements (rve), perfectly suitable for the analysis of composite materials via stochastic
homogenization.
in this paper we profit from these methods to analyze the influence of the morphology on the
effective mechanical properties of the samples. more precisely, we study the dependence of main
mechanical characteristics of a composite medium on various parameters of the mixture of inclusions
composed of spheres and cylinders. on top of that, we introduce various imperfections to the inclusions
and observe the evolution of effective properties related to that.
the main computational approach used throughout the work is the fft-based homogenization
technique, validated however by comparison with the direct finite elements method. we give details
on the features of the method and the validation campaign as well.
keywords: composite materials, cylindrical and spherical reinforcements, mechanical properties, stochastic
homogenization
| 5 |
abstract. in this article, we will prove a full topological version of popa’s measurable cocycle superrigidity theorem for full shifts [36]. more precisely, we prove
that every hölder continuous cocycle for the full shifts of every finitely generated
group g that has one end, undistorted elements and sub-exponential divergence
function is cohomologous to a group homomorphism via a hölder continuous transfer map if the target group is complete and admits a compatible bi-invariant metric. using the ideas of behrstock, druţu, mosher, mozes and sapir [4, 5, 17, 18],
we show that the class of our acting groups is large, including wide groups having undistorted elements and one-ended groups with strong thickness of finite order. as a consequence, irreducible uniform lattices of most higher rank connected semisimple lie groups; mapping class groups of genus-g surfaces with p punctures, g ≥ 2, p ≥ 0; richard thompson groups f, t, v; aut(fn), out(fn), n ≥ 3; certain
(2 dimensional)-coxeter groups; and one-ended right-angled artin groups are in
our class. this partially extends the main result in [12].
| 4 |
abstract
this technical note addresses the distributed fixed-time consensus protocol design problem for
multi-agent systems with general linear dynamics over directed communication graphs. by using motion
planning approaches, a class of distributed fixed-time consensus algorithms are developed, which rely
only on the sampling information at some sampling instants. for linear multi-agent systems, the proposed
algorithms solve the fixed-time consensus problem for any directed graph containing a directed spanning
tree. in particular, the settling time can be off-line pre-assigned according to task requirements. compared
with the existing results for multi-agent systems, to our best knowledge, it is the first-time to solve fixedtime consensus problems for general linear multi-agent systems over directed graphs having a directed
spanning tree. extensions to the fixed-time formation flying are further studied for multiple satellites
described by hill equations.
index terms
fixed-time consensus, linear multi-agent system, directed graph, pre-specified settling time, directed
spanning tree.
| 3 |
abstract
we study the problem of estimating multivariate log-concave probability density functions. we prove the first sample complexity upper bound for learning log-concave densities on r^d, for all d ≥ 1. prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of d > 3.
in more detail, we give an estimator that, for any d ≥ 1 and ε > 0, draws õ_d((1/ε)^((d+5)/2)) samples from an unknown target log-concave density on r^d, and outputs a hypothesis that (with high probability) is ε-close to the target, in total variation distance. our upper bound on the sample complexity comes close to the known lower bound of ω_d((1/ε)^((d+1)/2)) for this problem.
| 7 |
abstract
we design fast dynamic algorithms for proper vertex and edge colorings in a graph undergoing edge
insertions and deletions. in the static setting, there are simple linear time algorithms for (∆ + 1)- vertex
coloring and (2∆ − 1)-edge coloring in a graph with maximum degree ∆. it is natural to ask if we can
efficiently maintain such colorings in the dynamic setting as well. we get the following three results. (1)
we present a randomized algorithm which maintains a (∆ + 1)-vertex coloring with o(log ∆) expected
amortized update time. (2) we present a deterministic algorithm which maintains a (1 + o(1))∆-vertex
coloring with o(polylog ∆) amortized update time. (3) we present a simple, deterministic algorithm
which maintains a (2∆ − 1)-edge coloring with
o(log ∆) worst-case update time. this improves the recent o(∆)-edge coloring algorithm with õ(√∆) worst-case update time [bm17].
| 8 |
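to make the problem statement concrete, here is the naive way to maintain a proper (∆ + 1)-vertex coloring under edge insertions: greedily recolor a conflicting endpoint with a color unused in its neighborhood. the paper's contribution is achieving such maintenance in o(log ∆) expected amortized update time, which this sketch does not attempt.

```python
# naive dynamic (delta+1)-vertex coloring under edge insertions. a vertex of
# degree d has at most d forbidden colors, so a free color always exists in
# {0, ..., d}; hence the coloring never uses more than delta+1 colors.
from collections import defaultdict

adj = defaultdict(set)
color = defaultdict(int)          # every vertex starts with color 0

def insert_edge(u, v):
    adj[u].add(v)
    adj[v].add(u)
    if color[u] == color[v]:                       # conflict: recolor u
        used = {color[w] for w in adj[u]}
        color[u] = next(c for c in range(len(adj[u]) + 1) if c not in used)

insert_edge(1, 2); insert_edge(2, 3); insert_edge(1, 3)
assert all(color[a] != color[b] for a in adj for b in adj[a])
```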
abstracting the details of parallel implementation from the developer. most existing libraries provide
implementations of skeletons that are defined over flat data types such as lists or arrays. however,
skeleton-based parallel programming is still very challenging as it requires intricate analysis of the
underlying algorithm and often uses inefficient intermediate data structures. further, the algorithmic
structure of a given program may not match those of list-based skeletons. in this paper, we present
a method to automatically transform any given program to one that is defined over a list and is more
likely to contain instances of list-based skeletons. this facilitates the parallel execution of a transformed program using existing implementations of list-based parallel skeletons. further, by using an
existing transformation called distillation in conjunction with our method, we produce transformed
programs that contain fewer inefficient intermediate data structures.
| 6 |
abstract
in recent years, there has been tremendous progress in automated
synthesis techniques that are able to automatically generate code
based on some intent expressed by the programmer. a major challenge for the adoption of synthesis remains in having the programmer communicate their intent. when the expressed intent is coarse-grained (for example, a restriction on the expected type of an expression), the synthesizer often produces a long list of results for the
programmer to choose from, shifting the heavy-lifting to the user.
an alternative approach, successfully used in end-user synthesis is
programming by example (pbe), where the user leverages examples
to interactively and iteratively refine the intent. however, using only
examples is not expressive enough for programmers, who can observe the generated program and refine the intent by directly relating
to parts of the generated program.
we present a novel approach to interacting with a synthesizer
using a granular interaction model. our approach employs a rich
interaction model where (i) the synthesizer decorates a candidate
program with debug information that assists in understanding the
program and identifying good or bad parts, and (ii) the user is
allowed to provide feedback not only on the expected output of a
program, but also on the underlying program itself. that is, when the
user identifies a program as (partially) correct or incorrect, they can
also explicitly indicate the good or bad parts, to allow the synthesizer
to accept or discard parts of the program instead of discarding the
program as a whole.
we show the value of our approach in a controlled user study.
our study shows that participants have strong preference to using
granular feedback instead of examples, and are able to provide
granular feedback much faster.
| 6 |
abstract
we study bisimulation and context equivalence in a probabilistic λ-calculus. the contributions of this paper are threefold. firstly we show a technique for proving congruence
of probabilistic applicative bisimilarity. while the technique follows howe’s method, some
of the technicalities are quite different, relying on non-trivial “disentangling” properties for
sets of real numbers. secondly we show that, while bisimilarity is in general strictly finer
than context equivalence, coincidence between the two relations is attained on pure λ-terms.
the resulting equality is that induced by levy-longo trees, generally accepted as the finest
extensional equivalence on pure λ-terms under a lazy regime. finally, we derive a coinductive
characterisation of context equivalence on the whole probabilistic language, via an extension
in which terms akin to distributions may appear in redex position. another motivation for the
extension is that its operational semantics allows us to experiment with a different congruence
technique, namely that of logical bisimilarity.
| 6 |
abstract
in this paper, we conduct an empirical study on discovering
the ordered collective dynamics obtained by a population of
artificial intelligence (ai) agents. our intention is to put ai
agents into a simulated natural context, and then to understand their induced dynamics at the population level. in particular, we aim to verify if the principles developed in the
real world could also be used in understanding an artificiallycreated intelligent population. to achieve this, we simulate a
large-scale predator-prey world, where the laws of the world
are designed by only the findings or logical equivalence that
have been discovered in nature. we endow the agents with
the intelligence based on deep reinforcement learning, and
scale the population size up to millions. our results show that
the population dynamics of ai agents, driven only by each
agent’s individual self interest, reveals an ordered pattern that
is similar to the lotka-volterra model studied in population
biology. we further discover the emergent behaviors of collective adaptations in studying how the agents’ grouping behaviors will change with the environmental resources. both of
the findings can be explained by the self-organization theory in nature.
| 2 |
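for reference, the lotka-volterra dynamics that the emergent population pattern is compared against can be simulated directly; the parameter values below are illustrative, not fitted to the paper's experiments.

```python
# forward-euler simulation of the lotka-volterra predator-prey equations:
# d(prey)/dt = alpha*prey - beta*prey*pred
# d(pred)/dt = delta*prey*pred - gamma*pred
# parameter values are illustrative assumptions.
import numpy as np

def lotka_volterra(prey, pred, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
                   dt=0.01, steps=10_000):
    traj = []
    for _ in range(steps):
        d_prey = alpha * prey - beta * prey * pred      # growth minus predation
        d_pred = delta * prey * pred - gamma * pred     # predation minus death
        prey += dt * d_prey
        pred += dt * d_pred
        traj.append((prey, pred))
    return np.array(traj)   # oscillating population sizes over time

traj = lotka_volterra(prey=10.0, pred=10.0)
```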