text (stringlengths 8–3.91k) | label (int64 0–10)
---|---|
abstract
an element g of a finite group G is said to be vanishing in G if there exists an
irreducible character χ of G such that χ(g) = 0; in this case, g is also called a zero
of G. the aim of this paper is to obtain structural properties of a factorised group
G = AB when we impose some conditions on prime power order elements g ∈ A ∪ B
which are (non-)vanishing in G.
keywords finite groups · products of groups · irreducible characters · conjugacy
classes · vanishing elements
2010 msc 20d40 · 20c15 · 20e45
| 4 |
abstract. it is shown that the algebra h ∞ of bounded dirichlet series
is not a coherent ring, and has infinite bass stable rank. as corollaries
of the latter result, it is derived that h ∞ has infinite topological stable
rank and infinite krull dimension.
| 0 |
abstract
we show how to efficiently obtain the algebraic normal form of boolean
functions vanishing on hamming spheres centred at zero. by exploiting the
symmetry of the problem we obtain formulas for particular cases, and a
computational method to address the general case. a list of all the polynomials corresponding to spheres of radius up to 64 is provided. moreover,
we explicitly provide a connection to the binary möbius transform of the
elementary symmetric functions. we conclude by presenting a method based
on polynomial evaluation to compute the minimum distance of binary linear
codes.
keywords: binary polynomials, binary möbius transform, elementary
symmetric functions, minimum distance, linear codes
1. introduction
many computationally hard problems can be described by boolean polynomial systems, and the standard approach is the computation of the gröbner
basis of the corresponding ideal. since it is a quite common scenario, we will
restrict ourselves to ideals of f2[x1, . . . , xn] containing the entire set of field
equations {xi^2 + xi}i. to ease the notation, our work environment will therefore be the quotient ring r = f2[x1, . . . , xn]/(x1^2 + x1, . . . , xn^2 + xn). moreover,
most of our results do not depend on the number n of variables, and when
not otherwise specified we consider r to be defined in infinitely many variables. we denote with x the set of our variables.
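as a small illustration of working in r, the following sketch (plain python, with the hypothetical helper name `binary_mobius`; it assumes small n and is not the paper's optimized construction) computes the squarefree anf of the indicator of a hamming ball via the binary möbius transform mentioned in the abstract.

```python
def binary_mobius(coeffs, n):
    """binary möbius transform over f_2: an involution mapping a truth table
    (indexed by point masks) to anf coefficients (indexed by monomial masks)."""
    out = coeffs[:]
    for i in range(n):
        for x in range(1 << n):
            if x & (1 << i):
                out[x] ^= out[x ^ (1 << i)]
    return out

# indicator of the hamming ball of radius t - 1 (points of weight <= t - 1), here n = 4, t = 2
n, t = 4, 2
truth_table = [1 if bin(x).count("1") <= t - 1 else 0 for x in range(1 << n)]
anf = binary_mobius(truth_table, n)          # coefficients of the squarefree polynomial
support = [m for m in range(1 << n) if anf[m]]  # monomials actually present
```

since the transform is an involution, applying it twice recovers the truth table, which gives a cheap sanity check on any computed anf.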
in this work we characterise the vanishing ideal i_t of the set of binary vectors
contained in the hamming sphere of radius t − 1. this characterisation corresponds to the explicit construction of the square-free polynomial φ_t whose
roots are exactly the set of points of weight at most t − 1. it is worth mentioning that this polynomial corresponds to the algebraic normal form (anf) of
preprint submitted to elsevier
| 0 |
abstract: existing mathematical theory interprets the concept of standard deviation as the
dispersion degree. therefore, in measurement theory, both uncertainty concept and precision
concept, which are expressed with standard deviation or times standard deviation, are also defined
as the dispersion of measurement result, so that the concept logic is tangled. through comparative
analysis of the standard deviation concept and re-interpreting the measurement error evaluation
principle, this paper points out that the standard deviation is actually the probability
interval value of a single error rather than a degree of dispersion, and that an error with any
regularity can be evaluated by a standard deviation; it corrects this mathematical concept and
gives the direction for correcting the logic of measurement concepts. these results will bring
a global change to the measurement theory system.
keywords: measurement error; standard deviation; variance; covariance; probability theory
| 10 |
abstract
in this paper, we propose a very concise deep learning approach for
collaborative filtering that jointly models distributional representation for users and
items. the proposed framework obtains better performance than current
state-of-the-art algorithms, making the distributional representation
model a promising direction for further research in collaborative filtering.
| 9 |
abstract—the virtual network embedding problem (vnep)
captures the essence of many resource allocation problems of
today’s cloud providers, which offer their physical computation and networking resources to customers. customers request
resources in the form of virtual networks, i.e. as a directed
graph, specifying computational requirements at the nodes and
bandwidth requirements on the edges. an embedding of a
virtual network on the shared physical infrastructure is the joint
mapping of (virtual) nodes to suitable physical servers together
with the mapping of (virtual) edges onto paths in the physical
network connecting the respective servers. we study the offline
setting of the vnep in which multiple requests are given and
the task is to find the most profitable set of requests to embed
while not exceeding the physical resource capacities.
this paper initiates the study of approximation algorithms
for the vnep by employing randomized rounding of linear
programming solutions. we show that the standard linear
programming formulation exhibits an inherent structural deficit,
yielding large (or even infinite) integrality gaps. in turn, focusing
on the class of cactus graphs for virtual networks, we devise
a novel linear programming formulation together with an
algorithm to decompose fractional solutions into convex combinations of valid embeddings. applying randomized rounding,
we obtain the first tri-criteria approximation algorithm in the
classic resource augmentation model.
| 8 |
abstract. let a → b be a homomorphism of commutative rings. the squaring operation is a functor sqb/a from the derived category d(b) of complexes
of b-modules into itself. this operation is needed for the definition of rigid
complexes (in the sense of van den bergh), that in turn leads to a new approach
to grothendieck duality for rings, schemes and even dm stacks.
in our paper with j.j. zhang from 2008 we introduced the squaring operation, and explored some of its properties. unfortunately some of the proofs
in that paper had severe gaps in them.
in the present paper we reproduce the construction of the squaring operation. this is done in a more general context than in the first paper: here we
consider a homomorphism a → b of commutative dg rings. our first main
result is that the square sqb/a (m ) of a dg b-module m is independent of
the resolutions used to present it. our second main result is on the trace functoriality of the squaring operation. we give precise statements and complete
correct proofs.
in a subsequent paper we will reproduce the remaining parts of the 2008
paper that require fixing. this will allow us to proceed with the other papers,
mentioned in the bibliography, on the rigid approach to grothendieck duality.
the proofs of the main results require a substantial amount of foundational
work on commutative and noncommutative dg rings, including a study of
semi-free dg rings, their lifting properties, and their homotopies. this part
of the paper could be of independent interest.
| 0 |
abstract
the area of computation called artificial intelligence (ai) is falsified by describing a previous 1972
falsification of ai by british applied mathematician james lighthill. it is explained how lighthill’s
arguments continue to apply to current ai. it is argued that ai should use the popperian scientific method
in which it is the duty of every scientist to attempt to falsify theories and if theories are falsified to replace
or modify them. the paper describes the popperian method in detail and discusses paul nurse’s application
of the method to cell biology that also involves questions of mechanism and behavior. arguments used by
lighthill in his original 1972 report that falsified ai are discussed. the lighthill arguments are then shown
to apply to current ai. the argument uses recent scholarship to explain lighthill’s assumptions and to
show how the arguments based on those assumptions continue to falsify modern ai. an important focus
of the argument involves hilbert’s philosophical programme that defined knowledge and truth as provable
formal sentences. current ai takes the hilbert programme as dogma beyond criticism while lighthill as a
mid 20th century applied mathematician had abandoned it. the paper uses recent scholarship to explain
john von neumann’s criticism of ai that i claim was assumed by lighthill. the paper discusses computer
chess programs to show lighthill’s combinatorial explosion still applies to ai but not humans. an
argument showing that turing machines (tm) are not the correct description of computation is given. the
paper concludes by advocating studying computation as peter naur’s dataology.
| 2 |
abstract. we give a maximal independent set (mis) algorithm that runs in o(log log ∆) rounds
in the congested clique model, where ∆ is the maximum degree of the input graph. this improves
upon the o(log ∆/√(log(∆) · log log n) + log log ∆) rounds algorithm of [ghaffari, podc ’17], where n is the
number of vertices of the input graph.
in the first stage of our algorithm, we simulate the first o(n/poly log n) iterations of the sequential
random order greedy algorithm for mis in the congested clique model in o(log log ∆) rounds. this
thins out the input graph relatively quickly: after this stage, the maximum degree of the residual
graph is poly-logarithmic. in the second stage, we run the mis algorithm of [ghaffari, podc ’17]
on the residual graph, which completes in o(log log ∆) rounds on graphs of poly-logarithmic degree.
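the sequential baseline being simulated can be sketched in a few lines; a minimal python version (assuming an adjacency-set dictionary, not the congested-clique implementation):

```python
import random

def random_order_greedy_mis(adj):
    """sequential random-order greedy mis: scan the vertices in a uniformly
    random order, adding each vertex whose neighbourhood is still untouched."""
    order = list(adj)
    random.shuffle(order)
    mis, blocked = set(), set()
    for v in order:
        if v not in blocked:
            mis.add(v)
            blocked.add(v)
            blocked.update(adj[v])  # neighbours can no longer join
    return mis

# 5-cycle: every run returns a maximal independent set of size 2
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = random_order_greedy_mis(adj)
```

whatever the random order, the result is independent (no two chosen vertices are adjacent) and maximal (every vertex is chosen or has a chosen neighbour).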
| 8 |
abstract
analytical computation methods are proposed for evaluating the minimum dwell time and average dwell time guaranteeing the asymptotic
stability of a discrete-time switched linear system whose switchings are
assumed to respect a given directed graph. the minimum and average
dwell time can be found using the graph that governs the switchings, and
the associated weights. this approach, which was used in previous work
for continuous-time systems having non-defective subsystems, has been
adapted to discrete-time switched systems and generalized to allow defective subsystems. moreover, we present a method to improve the dwell
time estimation in the case of bimodal switched systems. in this method,
scaling algorithms to minimize the condition number are used to give
better minimum dwell time and average dwell time estimates.
keywords: switched systems, minimum dwell time, average dwell
time, optimum cycle ratio, asymptotic stability, switching graph.
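as a point of comparison, a crude sufficient-condition estimate of the minimum dwell time can be obtained from submultiplicative norms alone: if ||a_i^τ|| < 1 for every mode, any switching signal with dwell time at least τ is asymptotically stable. a sketch (frobenius norm, hypothetical name `min_dwell_estimate`; the paper's graph-weighted estimates are sharper than this):

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def fro(a):
    """frobenius norm, which upper-bounds the spectral norm."""
    return sum(x * x for row in a for x in row) ** 0.5

def min_dwell_estimate(modes, tau_max=50):
    """smallest tau with ||a_i^tau||_f < 1 for every mode: staying at least
    tau steps in each mode is then contractive (sufficient condition only)."""
    powers = list(modes)  # a_i^tau, starting at tau = 1
    for tau in range(1, tau_max + 1):
        if all(fro(p) < 1.0 for p in powers):
            return tau
        powers = [mat_mul(p, a) for p, a in zip(powers, modes)]
    return None

# defective (jordan-block) schur-stable mode: eigenvalue 0.5 repeated
tau = min_dwell_estimate([[[0.5, 1.0], [0.0, 0.5]]])
```

the example mode is defective, matching the generalization discussed above: the norm of its powers first grows before contracting, so the estimate exceeds 1.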
| 3 |
abstract – premature convergence is one of the important
issues while using genetic programming for data modeling. it
can be avoided by improving population diversity. intelligent
genetic operators can help to improve the population diversity.
crossover is an important operator in genetic programming.
so, we have analyzed a number of intelligent crossover operators
and proposed an algorithm with a modification of the soft brood
crossover operator. this helps to improve population
diversity and reduce premature convergence. we have
performed experiments on three different symbolic regression
problems. we then compared the performance of our
proposed crossover (modified soft brood crossover) with the
existing soft brood crossover and subtree crossover operators.
index terms – intelligent crossover, genetic programming,
soft brood crossover
| 9 |
abstract—1-bit digital-to-analog converters (dacs) and analog-to-digital
converters (adcs) are gaining more interest in massive mimo
systems for economical and computational efficiency. we present
a new precoding technique to mitigate the inter-user-interference
(iui) and the channel distortions in a 1-bit downlink mu-miso system with qpsk symbols. the transmit signal vector is
optimized taking into account the 1-bit quantization. we develop
a sort of mapping based on a look-up table (lut) between the
input signal and the transmit signal. the lut is updated for
each channel realization. simulation results show a significant
gain in terms of the uncoded bit-error-ratio (ber) compared to
the existing linear precoding techniques.
| 7 |
abstract
evolutionary algorithms are well suited for solving the knapsack problem. some empirical studies claim that evolutionary
algorithms can produce good solutions to the 0-1 knapsack problem. nonetheless, few rigorous investigations address the quality
of solutions that evolutionary algorithms may produce for the knapsack problem. the current paper focuses on a theoretical
investigation of three types of (n+1) evolutionary algorithms that exploit bitwise mutation, truncation selection, plus different
repair methods for the 0-1 knapsack problem. it assesses the solution quality in terms of the approximation ratio. our work
indicates that the solution produced by pure strategy and mixed strategy evolutionary algorithms is arbitrarily bad. nevertheless,
the evolutionary algorithm using helper objectives may produce 1/2-approximation solutions to the 0-1 knapsack problem.
index terms
evolutionary algorithm, approximation algorithm, knapsack problem, solution quality
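to make the setting concrete, here is a minimal (1+1)-style evolutionary algorithm with bitwise mutation and a greedy ratio-based repair method (a sketch with hypothetical parameter choices; it is not one of the three algorithms analyzed in the paper):

```python
import random

def ea_knapsack(weights, profits, capacity, generations=300, seed=1):
    """(1+1)-style ea: flip each bit with probability 1/n, repair infeasible
    offspring by dropping items of worst profit/weight ratio, keep if no worse."""
    rng = random.Random(seed)
    n = len(weights)

    def repair(x):
        load = sum(w for w, b in zip(weights, x) if b)
        for i in sorted(range(n), key=lambda i: profits[i] / weights[i]):
            if load <= capacity:
                break
            if x[i]:
                x[i] = 0
                load -= weights[i]
        return x

    x = repair([rng.randint(0, 1) for _ in range(n)])
    best = sum(p for p, b in zip(profits, x) if b)
    for _ in range(generations):
        y = repair([b ^ (rng.random() < 1.0 / n) for b in x])
        fy = sum(p for p, b in zip(profits, y) if b)
        if fy >= best:
            x, best = y, fy
    return best

best = ea_knapsack([2, 3, 4, 5], [3, 4, 5, 6], capacity=5)
```

repair guarantees every evaluated solution is feasible, so the returned profit never exceeds the knapsack optimum (here 7, from the first two items).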
| 9 |
abstract
school bus planning is usually divided into routing and scheduling due to the complexity of
solving them concurrently. however, the separation between these two steps may lead to worse
solutions with higher overall costs than those obtained by solving them together. when finding the
minimal number of trips in the routing problem, neglecting the importance of trip compatibility
may increase the number of buses actually needed in the scheduling problem. this paper
proposes a new formulation for the multi-school homogeneous fleet routing problem that
maximizes trip compatibility while minimizing total travel time. this incorporates the trip
compatibility for the scheduling problem in the routing problem. since the problem is inherently
just a routing problem, finding a good solution is not cumbersome. to compare the performance
of the model with traditional routing problems, we generate eight mid-size data sets. through
importing the generated trips of the routing problems into the bus scheduling (blocking) problem,
it is shown that the proposed model uses up to 13% fewer buses than the common traditional
routing models.
keywords: school bus routing, trip compatibility, school bus scheduling, bus blocking
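trip compatibility drives the bus count in the scheduling (blocking) step: trip j can follow trip i on the same bus if i ends early enough before j starts. a toy sketch of the greedy chaining (the `turnaround` parameter is a hypothetical stand-in for deadhead/turnaround time):

```python
def min_buses(trips, turnaround):
    """greedy chaining: process trips by start time; each trip goes to the
    first bus whose previous trip ended at least `turnaround` earlier."""
    bus_free = []  # end time of the last trip assigned to each bus
    for start, end in sorted(trips):
        for i, free in enumerate(bus_free):
            if free + turnaround <= start:  # compatible: trip can follow on this bus
                bus_free[i] = end
                break
        else:
            bus_free.append(end)  # no compatible bus: add one
    return len(bus_free)
```

with trips (0, 10), (5, 15), (12, 20) and a turnaround of 2, the first and third trips are compatible and share a bus, so two buses suffice instead of three.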
| 2 |
abstract. i describe an approach to compiling common idioms in r
code directly to native machine code and illustrate it with several examples. not only can this yield significant performance gains, but it
allows us to use new approaches to computing in r. importantly, the
compilation requires no changes to r itself, but is done entirely via r
packages. this allows others to experiment with different compilation
strategies and even to define new domain-specific languages within r.
we use the low-level virtual machine (llvm ) compiler toolkit to
create the native code and perform sophisticated optimizations on the
code. by adopting this widely used software within r, we leverage
its ability to generate code for different platforms such as cpus and
gpus, and will continue to benefit from its ongoing development. this
approach potentially allows us to develop high-level r code that is also
fast, that can be compiled to work with different data representations
and sources, and that could even be run outside of r. the approach
aims to both provide a compiler for a limited subset of the r language
and also to enable r programmers to write other compilers. this is
another approach to help us write high-level descriptions of what we
want to compute, not how.
key words and phrases: programming language, efficient computation, compilation, extensible compiler toolkit.
1. background & motivation
computing with data is in a very interesting period at present, and this has significant
implications for how we choose to go forward with our computing platforms and education in
statistics and related fields. we are simultaneously (i) leveraging higher-level, interpreted
languages such as r, matlab, python and recently julia, (ii) dealing with increasing volume
and complexity of data, and (iii) exploiting, and coming to terms with, technologies for
parallel computing including shared and non-shared multi-core processors and gpus (graphics
processing units). these challenge us to innovate and significantly enhance our existing
computing platforms and to develop new languages and systems so that we are able to meet not
just tomorrow’s needs, but those of the next decade.
statisticians play an important role in the “big data” surge, and therefore must pay
attention to logistical and performance details of statistical computations that we could
previously ignore. we need to think about how best to meet our own computing needs for the
near future and also how best to be able to participate in multi-disciplinary efforts that
require serious computing involving statistical ideas and methods. are we best served with our
own computing platform such as r (r core team (2013))? do we need our own system? can we
afford the luxury of our own system, given the limited resources
duncan temple lang is associate professor, department of statistics, university of california
at davis, 4210 math sciences building, davis, california 95616, usa. e-mail:
duncan@r-project.org.
this is an electronic reprint of the original article published by the institute of
mathematical statistics in statistical science, 2014, vol. 29, no. 2, 181–200. this reprint
differs from the original in pagination and typographic detail.
| 6 |
abstract
clustering analysis plays an important role in scientific research
and commercial application. k-means algorithm is a widely
used partition method in clustering. however, it is known that
the k-means algorithm may get stuck at suboptimal solutions,
depending on the choice of the initial cluster centers. in this
article, we propose a technique to handle large scale data, which
can select initial clustering center purposefully using genetic
algorithms (gas), reduce the sensitivity to isolated point, avoid
dissevering big cluster, and overcome deflexion of data in some
degree that caused by the disproportion in data partitioning
owing to adoption of multi-sampling.
we applied our method to some public datasets, which show the
advantages of the proposed approach; for example, the hepatitis c
dataset taken from the machine learning
repository of the university of california. our aim is to evaluate the
hepatitis dataset. in order to evaluate this dataset we performed some
preprocessing; the reason for preprocessing is to
summarize the data in the most suitable way for our
algorithm. missing values of the instances are adjusted using
the local mean method.
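the ga-based center selection itself is beyond a few lines, but the downstream iterations it feeds are simple; a sketch of plain lloyd iterations from user-supplied initial centers (points as tuples; all names hypothetical), which makes the sensitivity to initialization easy to experiment with:

```python
def kmeans(points, centers, iters=20):
    """plain lloyd iterations of k-means from the given initial centers."""
    centers = [tuple(c) for c in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign each point to its nearest center (squared euclidean distance)
            j = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # recompute centers as cluster means; keep old center if a cluster empties
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [(0.0,), (0.2,), (9.8,), (10.0,)]
good = kmeans(points, [(0.0,), (10.0,)])  # well-placed initial centers
```

with initial centers near the two groups, the iterations settle at the group means (0.1,) and (9.9,); poorly chosen centers are exactly the suboptimal-solution risk the abstract describes.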
| 9 |
abstract—the problem of multi-area interchange scheduling
in the presence of stochastic generation and load is considered.
a new interchange scheduling technique based on a two-stage
stochastic minimization of overall expected operating cost is
proposed. because directly solving the stochastic optimization is
intractable, an equivalent problem that maximizes the expected
social welfare is formulated. the proposed technique leverages
the operator’s capability of forecasting locational marginal prices
(lmps) and obtains the optimal interchange schedule without
iterations among operators.
index terms—inter-regional interchange scheduling, multi-area economic dispatch, seams issue.
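the intractable two-stage problem can at least be stated concretely: pick an interchange level minimizing the expected second-stage operating cost over load scenarios. a brute-force toy sketch (quadratic stand-in cost; all names hypothetical, and the paper's contribution is precisely to avoid this enumeration via the equivalent welfare formulation):

```python
def best_interchange(levels, scenarios, cost):
    """enumerate candidate interchange quantities and return the one with the
    smallest expected second-stage cost over the given (probability, load) scenarios."""
    def expected(q):
        return sum(p * cost(q, s) for p, s in scenarios)
    return min(levels, key=expected)

def cost(q, s):
    # toy quadratic redispatch cost for a two-area system with stochastic net load s
    return (q - s) ** 2 + 0.1 * abs(q)

scenarios = [(0.5, 10.0), (0.5, 14.0)]          # two equally likely load outcomes
q_star = best_interchange([i * 0.5 for i in range(41)], scenarios, cost)
```

for this toy cost the continuous optimum sits at 11.95, so the 0.5-step grid selects 12.0.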
| 3 |
abstract. we define the parametric closure problem, in which the input is a partially ordered set whose
elements have linearly varying weights and the goal is to compute the sequence of minimum-weight
downsets of the partial order as the weights vary. we give polynomial time solutions to many important
special cases of this problem including semiorders, reachability orders of bounded-treewidth graphs,
partial orders of bounded width, and series-parallel partial orders. our result for series-parallel orders
provides a significant generalization of a previous result of carlson and eppstein on bicriterion subtree
problems.
| 8 |
abstract
for many compiled languages, source-level types are erased
very early in the compilation process. as a result, further compiler passes may convert type-safe source into type-unsafe
machine code. type-unsafe idioms in the original source and
type-unsafe optimizations mean that type information in a
stripped binary is essentially nonexistent. the problem of recovering high-level types by performing type inference over
stripped machine code is called type reconstruction, and offers a useful capability in support of reverse engineering and
decompilation.
in this paper, we motivate and develop a novel type system and algorithm for machine-code type inference. the
features of this type system were developed by surveying a
wide collection of common source- and machine-code idioms,
building a catalog of challenging cases for type reconstruction. we found that these idioms place a sophisticated set
of requirements on the type system, inducing features such
as recursively-constrained polymorphic types. many of the
features we identify are often seen only in expressive and powerful type systems used by high-level functional languages.
using these type-system features as a guideline, we have
developed retypd: a novel static type-inference algorithm for
machine code that supports recursive types, polymorphism,
and subtyping. retypd yields more accurate inferred types
than existing algorithms, while also enabling new capabilities
| 6 |
abstract
we simulate the self-propulsion of devices in a fluid in the regime of low
reynolds numbers. each device consists of three bodies (spheres or capsules) connected with two damped harmonic springs. sinusoidal driving
forces compress the springs which are resolved within a rigid body physics
engine. the latter is consistently coupled to a 3d lattice boltzmann framework for the fluid dynamics. in simulations of three-sphere devices, we find
that the propulsion velocity agrees well with theoretical predictions. in simulations where some or all spheres are replaced by capsules, we find that the
asymmetry of the design strongly affects the propelling efficiency.
keywords: stokes flow, self-propelled microorganism, lattice boltzmann
method, numerical simulation
1. introduction
engineered micro-devices, developed in such a way that they are able to
move alone through a fluid and, simultaneously, emit a signal, can be of crucial
| 5 |
abstract
we study the problem of testing identity against a given distribution (a.k.a. goodness-of-fit) with a
focus on the high confidence regime. more precisely, given samples from an unknown distribution p
over n elements, an explicitly given distribution q, and parameters 0 < ε, δ < 1, we wish to distinguish,
with probability at least 1 − δ, whether the distributions are identical versus ε-far in total variation (or
statistical) distance. existing work has focused on the constant confidence regime, i.e., the case that
δ = ω(1), for which the sample complexity of identity testing is known to be θ(√n/ε^2).
typical applications of distribution property testing require small values of the confidence parameter
δ (which correspond to small “p-values” in the statistical hypothesis testing terminology). prior work
achieved arbitrarily small values of δ via black-box amplification, which multiplies the required number
of samples by θ(log(1/δ)). we show that this upper bound is suboptimal for any δ = o(1), and give a
new identity tester that achieves the optimal sample complexity. our new upper and lower bounds show
that the optimal sample complexity of identity testing is
θ( (1/ε^2) · ( √(n log(1/δ)) + log(1/δ) ) )
for any n, ε, and δ. for the special case of uniformity testing, where the given distribution is the uniform
distribution un over the domain, our new tester is surprisingly simple: to test whether p = un versus
dtv(p, un) ≥ ε, we simply threshold dtv(p̂, un), where p̂ is the empirical probability distribution. we
believe that our novel analysis techniques may be useful for other distribution testing problems as well.
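the uniformity tester described above reduces to a one-line threshold on the empirical distribution; a sketch (the threshold itself must be calibrated from n, ε and δ, which we leave as an input):

```python
from collections import Counter

def tv_to_uniform(samples, n):
    """empirical total variation distance between the sample distribution
    and the uniform distribution over {0, ..., n-1}."""
    m = len(samples)
    counts = Counter(samples)
    return 0.5 * sum(abs(counts.get(i, 0) / m - 1.0 / n) for i in range(n))

def uniformity_test(samples, n, threshold):
    """threshold tester sketch: report 'uniform' iff d_tv(p_hat, u_n) <= threshold."""
    return tv_to_uniform(samples, n) <= threshold
```

for intuition: a perfectly balanced sample over 10 elements has empirical distance 0, while 100 samples all equal to one element give distance 0.9.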
| 8 |
abstract—linear precoding has been widely studied in the context of massive multiple-input-multiple-output (mimo) together
with two common power normalization techniques, namely,
matrix normalization (mn) and vector normalization (vn).
despite this, their effect on the performance of massive mimo
systems has not been thoroughly studied yet. the aim of this
paper is to fill this gap by using large system analysis.
considering a system model that accounts for channel estimation,
pilot contamination, arbitrary pathloss, and per-user channel
correlation, we compute tight approximations for the signal-to-interference-plus-noise ratio and the rate of each user equipment
in the system while employing maximum ratio transmission
(mrt), zero forcing (zf), and regularized zf precoding under
both mn and vn techniques. such approximations are used
to analytically reveal how the choice of power normalization
affects the performance of mrt and zf under uncorrelated
fading channels. it turns out that zf with vn resembles a
sum rate maximizer while it provides a notion of fairness under
mn. numerical results are used to validate the accuracy of the
asymptotic analysis and to show that in massive mimo, noncoherent interference and noise, rather than pilot contamination,
are often the major limiting factors of the considered precoding
schemes.
index terms—massive mimo, linear precoding, power normalization techniques, large system analysis, pilot contamination.
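the two normalization schemes compared above differ only in how the mrt matrix w = h^h is scaled; a small sketch with lists of complex numbers (hypothetical function name, single channel realization, no estimation or pilot contamination):

```python
def mrt_precoders(H, P):
    """mrt precoder w = h^h under matrix normalization (one scalar so total
    transmit power is P) and vector normalization (per-user column power P/K)."""
    K, M = len(H), len(H[0])
    W = [[H[k][m].conjugate() for m in range(M)] for k in range(K)]  # column k serves user k
    tot = sum(abs(u) ** 2 for col in W for u in col)
    mn = [[(P / tot) ** 0.5 * u for u in col] for col in W]   # matrix normalization
    vn = []                                                   # vector normalization
    for col in W:
        cpow = sum(abs(u) ** 2 for u in col)
        vn.append([(P / (K * cpow)) ** 0.5 * u for u in col])
    return mn, vn

H = [[1 + 1j, 0.5j], [0.2, 1 - 0.5j]]   # K = 2 users, M = 2 antennas
mn, vn = mrt_precoders(H, 4.0)
```

both satisfy the total power budget, but vn additionally forces every user's column to carry exactly P/K, which is the source of the fairness-versus-sum-rate behaviour discussed above.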
| 7 |
abstract
| 5 |
abstract
mixed-integer mathematical programs are among the most commonly used models for a wide set of
problems in operations research and related fields. however, there is still very little known about what
can be expressed by small mixed-integer programs. in particular, prior to this work, it was open whether
some classical problems, like the minimum odd-cut problem, can be expressed by a compact mixed-integer program with few (even constantly many) integer variables. this is in stark contrast to linear
formulations, where recent breakthroughs in the field of extended formulations have shown that many
polytopes associated to classical combinatorial optimization problems do not even admit approximate
extended formulations of sub-exponential size.
we provide a general framework for lifting inapproximability results of extended formulations to the
setting of mixed-integer extended formulations, and obtain almost tight lower bounds on the number of
integer variables needed to describe a variety of classical combinatorial optimization problems. among
the implications we obtain, we show that any mixed-integer extended formulation of sub-exponential
size for the matching polytope, cut polytope, traveling salesman polytope or dominant of the odd-cut
polytope, needs ω(n/ log n) many integer variables, where n is the number of vertices of the underlying
graph. conversely, the above-mentioned polyhedra admit polynomial-size mixed-integer formulations
with only o(n) or o(n log n) (for the traveling salesman polytope) many integer variables.
our results build upon a new decomposition technique that, for any convex set c, allows for approximating any mixed-integer description of c by the intersection of c with the union of a small number of
affine subspaces.
keywords: extension complexity, mixed-integer programs, extended formulations
| 8 |
abstract. in this paper we present grammatic – a tool for textual
syntax definition. grammatic serves as a front-end for parser generators
(and other tools) and brings modularity and reuse to their development
artifacts. it adapts techniques for separation of concerns from aspect-oriented programming to grammars and uses templates for grammar
reuse. we illustrate usage of grammatic by describing a case study:
bringing separation of concerns to antlr parser generator, which is
achieved without a common time- and memory-consuming technique of
building an ast to separate semantic actions from a grammar definition.
| 6 |
abstract. let (r, m) be a relative cohen-macaulay local ring with respect to an ideal
a of r and set c := ht a. in this paper, we investigate some properties of the matlis
dual hca (r)∨ of the r-module hca (r) and we show that such modules behave like canonical
modules over cohen-macaulay local rings. also, we provide some duality and equivalence results with respect to the module hca (r)∨; these results lead to
generalizations of some known results, such as the local duality theorem, which have
been provided over a cohen-macaulay local ring which admits a canonical module.
| 0 |
abstract probability of the observation and is an elementary observer itself.
since information initially originates in a quantum process with conjugated probabilities, its study should focus not on
the physics of the observing process' interacting particles but on its information-theoretical essence.
the approach substantiates every step of the origin through the unified formalism of mathematics and logic.
such formalism allows one to understand and describe the regularity (law) of these informational processes.
preexisting physical law is irrelevant to the emerging regularities in this approach.
the approach's initial points are:
1. interaction of the objects or particles is a primary indicator of their origin. the field of probability is a source of
information and physics. the interactions are abstract "yes-no" actions of an impulse, probabilistic or real.
| 7 |
abstract
controlling resource usage in distributed systems is a challenging task given the dynamics
involved in access granting. consider, for instance, the setting of floating licenses where access
can be granted if the request originates in a licensed domain and the number of active users
is within the license limits, and where licenses can be interchanged. access granting in such
scenarios is given in terms of floating authorizations, addressed in this paper as first class entities
of a process calculus model, encompassing the notions of domain, accounting and delegation.
we present the operational semantics of the model in two equivalent alternative ways, each
informing on the specific nature of authorizations. we also introduce a typing discipline to
single out systems that never get stuck due to lacking authorizations, addressing configurations
where authorization assignment is not statically prescribed in the system specification.
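the ingredients above — domain, accounting, delegation — can be pictured operationally with a toy state machine (plain python, not the process calculus itself; `acquire` returning false corresponds to a configuration stuck on a missing authorization):

```python
class Domain:
    """floating-licence domain sketch: `capacity` authorizations float among
    users of the domain; delegation moves spare capacity to another domain."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = set()

    def acquire(self, user):
        """grant an authorization if the accounting allows it."""
        if len(self.active) < self.capacity:
            self.active.add(user)
            return True
        return False  # stuck: no authorization available

    def release(self, user):
        self.active.discard(user)

    def delegate(self, other, k=1):
        """move up to k unused authorizations to another domain."""
        moved = min(k, self.capacity - len(self.active))
        self.capacity -= moved
        other.capacity += moved
        return moved
```

a short run: a domain with capacity 2 grants two users, refuses a third, and can delegate only capacity it is not currently using.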
| 6 |
abstract
the purpose of this paper is to construct confidence intervals for the regression coefficients in the
fine-gray model for competing risks data with random censoring, where the number of covariates
can be larger than the sample size. despite strong motivation from biostatistics applications, the high-dimensional fine-gray model has attracted relatively little attention in the methodological or
theoretical literature. we fill this gap by proposing first a consistent regularized estimator
and then the confidence intervals based on the one-step bias-correcting estimator. we are able
to generalize the partial likelihood approach for the fine-gray model under random censoring
despite many technical difficulties. we lay down a methodological and theoretical framework for
the one-step bias-correcting estimator with the partial likelihood, which does not have independent
and identically distributed entries. our theory also handles the approximation error from
the inverse probability weighting (ipw), proposing novel concentration results for time-dependent
processes. in addition to the theoretical results and algorithms, we present extensive numerical
experiments and an application to a study of non-cancer mortality among prostate cancer patients
using the linked medicare-seer data.
key words: p-values, survival analysis, high-dimensional inference, one-step estimator.
| 10 |
abstract
visual question answering (or vqa) is a
new and exciting problem that combines
natural language processing and computer
vision techniques. we present a survey
of the various datasets and models that
have been used to tackle this task. the
first part of this survey details the various datasets for vqa and compares them
along some common factors. the second part of this survey details the different
approaches for vqa, classified into four
types: non-deep learning models, deep
learning models without attention, deep
learning models with attention, and other
models which do not fit into the first three.
finally, we compare the performances of
these approaches and provide some directions for future work.
| 2 |
abstract
| 1 |
abstract
this paper presents minimax rates for density estimation when the data dimension d is allowed
to grow with the number of observations n rather than remaining fixed as in previous analyses. we
prove a non-asymptotic lower bound which gives the worst-case rate over standard classes of smooth
densities, and we show that kernel density estimators achieve this rate. we also give oracle choices for
the bandwidth and derive the fastest rate d can grow with n to maintain estimation consistency.
| 10 |
abstract. daligault, rao and thomassé asked whether every hereditary
graph class that is well-quasi-ordered by the induced subgraph relation has
bounded clique-width. lozin, razgon and zamaraev (jctb 2017+) gave a negative answer to this question, but their counterexample is a class that can only
be characterised by infinitely many forbidden induced subgraphs. this raises
the issue of whether the question has a positive answer for finitely defined hereditary graph classes. apart from two stubborn cases, this has been confirmed
when at most two induced subgraphs h1 , h2 are forbidden. we confirm it for
one of the two stubborn cases, namely for the (h1 , h2 ) = (triangle, p2 + p4 )
case, by proving that the class of (triangle, p2 + p4 )-free graphs has bounded
clique-width and is well-quasi-ordered. our technique is based on a special decomposition of 3-partite graphs. we also use this technique to prove that the
class of (triangle, p1 + p5)-free graphs, which is known to have bounded clique-width, is well-quasi-ordered. our results enable us to complete the classification
of graphs h for which the class of (triangle, h)-free graphs is well-quasi-ordered.
| 8 |
abstract. we present the macaulay2 package numericalimplicitization, which allows for user-friendly computation of the basic invariants of the image of a polynomial map, such as dimension, degree, and hilbert function values.
this package relies on methods of numerical algebraic geometry, such as homotopy continuation and monodromy.
| 0 |
abstract. in this paper we give necessary and sufficient conditions for the
cohen-macaulayness of the tangent cone of a monomial curve in the 4-dimensional
affine space. we study in particular the case where c is a gorenstein non-complete intersection monomial curve.
| 0 |
abstract
a hallmark of human intelligence is the ability to ask rich, creative, and revealing
questions. here we introduce a cognitive model capable of constructing human-like questions. our approach treats questions as formal programs that, when executed on the state of the world, output an answer. the model specifies a probability
distribution over a complex, compositional space of programs, favoring concise
programs that help the agent learn in the current context. we evaluate our approach by modeling the types of open-ended questions generated by humans who
were attempting to learn about an ambiguous situation in a game. we find that our
model predicts what questions people will ask, and can creatively produce novel
questions that were not present in the training set. in addition, we compare a number of model variants, finding that both question informativeness and complexity
are important for producing human-like questions.
| 2 |
abstract sources,” arxiv preprint arxiv:1707.09567, july
2017.
| 7 |
abstract—in the area of computer vision, deep learning has
produced a variety of state-of-the-art models that rely on massive
labeled data. however, collecting and annotating images from
the real world has a great demand for labor and money
investments and is usually too passive to build datasets with
specific characteristics, such as small area of objects and high
occlusion level. under the framework of parallel vision, this
paper presents a purposeful way to design artificial scenes and
automatically generate virtual images with precise annotations.
a virtual dataset named paralleleye is built, which can be used
for several computer vision tasks. then, by training the dpm
(deformable parts model) and faster r-cnn detectors, we prove
that the performance of models can be significantly improved
by combining paralleleye with publicly available real-world
datasets during the training phase. in addition, we investigate
the potential of testing the trained models from a specific aspect
using intentionally designed virtual datasets, in order to discover
the flaws of trained models. from the experimental results, we
conclude that our virtual dataset is viable to train and test the
object detectors.
index terms—parallel vision, virtual dataset, object detection,
deep learning.
| 1 |
abstract. we propose a fragment of many-sorted second order logic
esmt and show that checking satisfiability of sentences in this fragment
is decidable. this logic has an ∃∗ ∀∗ quantifier prefix that is conducive
to modeling synthesis problems. moreover, it allows reasoning using a
combination of background theories provided that they have a decidable
satisfiability problem for the ∃∗ ∀∗ fo-fragment (e.g., linear arithmetic).
our decision procedure reduces the satisfiability of esmt formulae to
satisfiability queries of the background theories, allowing us to use existing efficient smt solvers for these theories; hence our procedure can be
seen as effectively smt (esmt) reasoning.
keywords: second order logic, synthesis, decidable fragment
| 6 |
abstract. functions between groups with the property that all function conjugates are inverse preserving are called sandwich morphisms. these maps preserve a structure within the group known as the sandwich structure. sandwich
structures are left-distributive, idempotent, left-involutory magmas. these provide a generalisation of groups which we call a sandwich. this paper explores
sandwiches and their relationship to groups.
| 4 |
abstract. based on the structure theory of pairs of skew-symmetric matrices,
we give a conjecture for the hilbert series of the exterior algebra modulo the
ideal generated by two generic quadratic forms. we show that the conjectured
series is an upper bound in the coefficient-wise sense, and we determine a
majority of the coefficients. we also conjecture that the series is equal to the
series of the squarefree polynomial ring modulo the ideal generated by the
squares of two generic linear forms.
| 0 |
abstract
background: frameshift translation is an important phenomenon that
contributes to the appearance of novel coding dna sequences (cds) and
functions in gene evolution, by allowing alternative amino acid translations of
gene coding regions.
frameshift translations can be identified by aligning two cds, from the same
gene or from homologous genes, while accounting for their codon structure. two
main classes of algorithms have been proposed to solve the problem of aligning
cds, either by amino acid sequence alignment back-translation, or by
simultaneously accounting for the nucleotide and amino acid levels. the former
cannot account for frameshift translations and, up to now, the latter accounts exclusively for frameshift translation initiation, without considering the length of the translation disruption caused by a frameshift.
results: we introduce a new scoring scheme with an algorithm for the pairwise
alignment of cds accounting for frameshift translation initiation and length,
while simultaneously considering nucleotide and amino acid sequences. the main
specificity of the scoring scheme is the introduction of a penalty cost accounting
for frameshift extension length to compute an adequate similarity score for a cds
alignment. the second specificity of the model is that the search space of the
problem solved is the set of all feasible alignments between two cds. previous
approaches have considered restricted search space or additional constraints on
the decomposition of an alignment into length-3 sub-alignments. the algorithm
described in this paper has the same asymptotic time complexity as the classical
needleman-wunsch algorithm.
conclusions: we compare the method to other cds alignment methods based
on an application to the comparison of pairs of cds from homologous human,
mouse and cow genes of ten mammalian gene families from the
ensembl-compara database. the results show that our method is particularly
robust to parameter changes as compared to existing methods. it also appears to
be a good compromise, performing well both in the presence and absence of
frameshift translations. an implementation of the method is available at
https://github.com/udes-cobius/fsepsa.
keywords: coding dna sequences; pairwise alignment; frameshifts; dynamic
programming.
| 8 |
abstract— this paper presents an adaptive high performance
control method for autonomous miniature race cars. racing
dynamics are notoriously hard to model from first principles,
which is addressed by means of a cautious nonlinear model
predictive control (nmpc) approach that learns to improve
its dynamics model from data and safely increases racing
performance. the approach makes use of a gaussian process
(gp) and takes residual model uncertainty into account through
a chance constrained formulation. we present a sparse gp
approximation with dynamically adjusting inducing inputs,
enabling a real-time implementable controller. the formulation
is demonstrated in simulations, which show significant improvement with respect to both lap time and constraint satisfaction
compared to an nmpc without model learning.
| 3 |
abstract—a key issue in the control of distributed discrete
systems modeled as markov decision processes is that often the
state of the system is not directly observable at any single location
in the system. the participants in the control scheme must share
information with one another regarding the state of the system
in order to collectively make informed control decisions, but this
information sharing can be costly. harnessing recent results from
information theory regarding distributed function computation,
in this paper we derive, for several information sharing model
structures, the minimum amount of control information that must
be exchanged to enable local participants to derive the same control decisions as an imaginary omniscient controller having full
knowledge of the global state. incorporating consideration for this
amount of information that must be exchanged into the reward
enables one to trade the competing objectives of minimizing this
control information exchange and maximizing the performance
of the controller. an alternating optimization framework is then
provided to help find the efficient controllers and messaging
schemes. a series of running examples from wireless resource
allocation illustrate the ideas and design tradeoffs.
| 3 |
abstract
we prove that if a group g = ab is the mutually permutable product of the supersoluble subgroups a and b, then the supersoluble residual of g coincides with the
nilpotent residual of the derived subgroup g′ .
keywords: finite groups, supersoluble subgroup, mutually permutable product.
msc2010: 20d20, 20e34
| 4 |
abstract. a century ago, camille jordan proved that the complex general linear
group gln (c) has the jordan property: there is a jordan constant cn such that every
finite subgroup h ≤ gln (c) has an abelian subgroup h1 of index [h : h1 ] ≤ cn . we
show that every connected algebraic group g (which is not necessarily linear) has the
jordan property with the jordan constant depending only on dim g, and that the full
automorphism group aut(x) of every projective variety x has the jordan property.
| 4 |
abstract
this thesis is concerned with studying the properties of gradings on several examples of cluster algebras, primarily of infinite type. we start by considering
two classes of finite type cluster algebras: those of type bn and cn . we give the
number of cluster variables of each occurring degree and verify that the grading
is balanced. these results complete a classification in [16] for coefficient-free finite
type cluster algebras.
we then consider gradings on cluster algebras generated by 3×3 skew-symmetric
matrices. we show that the mutation-cyclic matrices give rise to gradings in which
all occurring degrees are positive and have only finitely many associated cluster
variables (excepting one particular case). for the mutation-acyclic matrices, we
prove that all occurring degrees have infinitely many variables and give a direct
proof that the gradings are balanced.
we provide a condition for a graded cluster algebra generated by a quiver to
have infinitely many degrees, based on the presence of a subquiver in its mutation
class. we use this to study the gradings on cluster algebras that are (quantum)
coordinate rings of matrices and grassmannians and show that they contain cluster
variables of all degrees in n.
next we consider the finite list (given in [9]) of mutation-finite quivers that do
not correspond to triangulations of marked surfaces. we show that a(x7) has a grading in which there are only two degrees, with infinitely many cluster variables in both. we also show that the gradings arising from ẽ6, ẽ7 and ẽ8 have infinitely many variables in certain degrees.
finally, we study gradings arising from triangulations of marked bordered 2dimensional surfaces (see [10]). we adapt a definition from [24] to define the
space of valuation functions on such a surface and prove combinatorially that this
space is isomorphic to the space of gradings on the associated cluster algebra. we
illustrate this theory by applying it to a family of examples, namely, the annulus
with n + m marked points. we show that the standard grading is of mixed type,
| 0 |
abstract. feedforward neural networks have wide applicability in various
disciplines of science due to their universal approximation property. some authors have shown that single hidden layer feedforward neural networks (slfns)
with fixed weights still possess the universal approximation property provided
that approximated functions are univariate. but this phenomenon does not
lay any restrictions on the number of neurons in the hidden layer. the larger this number, the higher the probability that the network gives precise results. in this note, we constructively prove that slfns with the fixed
weight 1 and two neurons in the hidden layer can approximate any continuous
function on a compact subset of the real line. the applicability of this result
is demonstrated in various numerical examples. finally, we show that slfns
with fixed weights cannot approximate all continuous multivariate functions.
| 7 |
abstract. we consider the problem of inferring temporal specifications from
demonstrations by an agent interacting with an uncertain, stochastic environment.
such specifications are useful for correct-by-construction control of autonomous
systems operating in uncertain environments. some demonstrations may have errors, and the specification inference method must be robust to them. we provide
a novel formulation of the problem as a maximum a posteriori (map) probability
inference problem, and give an efficient approach to solve this problem, demonstrated by case studies inspired by robotics.
| 2 |
abstract. this paper introduces scavenger, the first theorem prover for
pure first-order logic without equality based on the new conflict resolution
calculus. conflict resolution has a restricted resolution inference rule that
resembles (a first-order generalization of) unit propagation as well as a
rule for assuming decision literals and a rule for deriving new clauses by
(a first-order generalization of) conflict-driven clause learning.
| 2 |
abstract
we consider a reinforcement learning (rl) setting in which the agent interacts with a
sequence of episodic mdps. at the start of each episode the agent has access to some
side-information or context that determines the dynamics of the mdp for that episode.
our setting is motivated by applications in healthcare where baseline measurements of a
patient at the start of a treatment episode form the context that may provide information
about how the patient might respond to treatment decisions.
we propose algorithms for learning in such contextual markov decision processes
(cmdps) under an assumption that the unobserved mdp parameters vary smoothly with
the observed context. we also give lower and upper pac bounds under the smoothness
assumption. because our lower bound has an exponential dependence on the dimension, we
consider a tractable linear setting where the context is used to create linear combinations
of a finite set of mdps. for the linear setting, we give a pac learning algorithm based on
kwik learning techniques.
keywords: reinforcement learning, pac bounds, kwik learning.
| 2 |
abstract— in this paper, we develop a robust efficient visual slam system that utilizes heterogeneous point and line
features. by leveraging orb-slam [1], the proposed system
consists of stereo matching, frame tracking, local mapping,
loop detection, and bundle adjustment of both point and line
features. in particular, as the main theoretical contributions
of this paper, we, for the first time, employ the orthonormal
representation as the minimal parameterization to model line
features along with point features in visual slam and analytically derive the jacobians of the re-projection errors with
respect to the line parameters, which significantly improves
the slam solution. the proposed slam has been extensively
tested in both synthetic and real-world experiments whose
results demonstrate that the proposed system outperforms the
state-of-the-art methods in various scenarios.
| 1 |
abstract
we consider the problem of non-parametric regression with a potentially large number of covariates. we propose a convex, penalized estimation framework that is particularly well-suited for high-dimensional sparse additive models. the proposed approach combines appealing features of finite basis
representation and smoothing penalties for non-parametric estimation. in particular, in the case of
additive models, a finite basis representation provides a parsimonious representation for fitted functions but is not adaptive when component functions possess different levels of complexity. on the other
hand, a smoothing spline type penalty on the component functions is adaptive but does not offer a parsimonious representation of the estimated function. the proposed approach simultaneously achieves
parsimony and adaptivity in a computationally efficient framework. we demonstrate these properties
through empirical studies on both real and simulated datasets. we show that our estimator converges
at the minimax rate for functions within a hierarchical class. we further establish minimax rates for a
large class of sparse additive models. the proposed method is implemented using an efficient algorithm
that scales similarly to the lasso with the number of covariates and the sample size.
| 10 |
abstract
a system with artificial intelligence usually relies on symbol manipulation, at least partly and implicitly. however, the interpretation of the symbols – what they represent and what they are about – is
ultimately left to humans, as designers and users of the system. how symbols can acquire meaning for
the system itself, independent of external interpretation, is an unsolved problem. some grounding of
symbols can be obtained by embodiment, that is, by causally connecting symbols (or sub-symbolic variables) to the physical environment, such as in a robot with sensors and effectors. however, a causal
connection as such does not produce representation and aboutness of the kind that symbols have for
humans. here i present a theory that explains how humans and other living organisms have acquired
the capability to have symbols and sub-symbolic variables that represent, refer to, and are about something else. the theory shows how reference can be to physical objects, but also to abstract objects, and
even how it can be misguided (errors in reference) or be about non-existing objects. i subsequently
abstract the primary components of the theory from their biological context, and discuss how and under
what conditions the theory could be implemented in artificial agents. a major component of the theory
is the strong nonlinearity associated with (potentially unlimited) self-reproduction. the latter is likely
not acceptable in artificial systems. it remains unclear if goals other than those inherently serving self-reproduction can have aboutness and if such goals could be stabilized.
| 9 |
abstractions and a natural separation of protocols
and computations. we describe a reo-to-java compiler and illustrate its use through examples.
| 6 |
abstract—the realisation of sensing modalities based on the
principles of compressed sensing is often hindered by discrepancies between the mathematical model of its sensing operator,
which is necessary during signal recovery, and its actual physical
implementation, which can differ substantially from the assumed
model. in this paper we tackle the bilinear inverse problem of
recovering a sparse input signal and some unknown, unstructured
multiplicative factors affecting the sensors that capture each
compressive measurement. our methodology relies on collecting
a few snapshots under new draws of the sensing operator, and
applying a greedy algorithm based on projected gradient descent
and the principles of iterative hard thresholding. we explore
empirically the sample complexity requirements of this algorithm
by testing its phase transition, and show in a practically relevant
instance of this problem for compressive imaging that the exact
solution can be obtained with only a few snapshots.
index terms—compressed sensing, blind calibration, iterative hard thresholding, non-convex optimisation, bilinear
inverse problems
| 7 |
abstract
trial-and-error based reinforcement learning
(rl) has seen rapid advancements in recent
times, especially with the advent of deep neural networks. however, the majority of autonomous rl algorithms require a large number of interactions with the environment. a
large number of interactions may be impractical in many real-world applications, such as
robotics, and many practical systems have to
obey limitations in the form of state space
or control constraints. to reduce the number
of system interactions while simultaneously
handling constraints, we propose a model-based rl framework based on probabilistic
model predictive control (mpc). in particular, we propose to learn a probabilistic transition model using gaussian processes (gps)
to incorporate model uncertainty into long-term predictions, thereby reducing the impact of model errors. we then use mpc to
find a control sequence that minimises the
expected long-term cost. we provide theoretical guarantees for first-order optimality in
the gp-based transition models with deterministic approximate inference for long-term
planning. we demonstrate that our approach
not only achieves state-of-the-art data efficiency, but is also a principled way for rl
in constrained environments.
| 3 |
abstract
current measures of machine intelligence are either difficult to evaluate or lack the ability to test
a robot’s problem-solving capacity in open worlds.
we propose a novel evaluation framework based on
the formal notion of macgyver test which provides
a practical way for assessing the resilience and resourcefulness of artificial agents.
| 2 |
abstract
| 9 |
abstract variety
in a complete variety. j. math. kyoto univ. 3 (1963), 89–102.
| 0 |
abstract—emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating
them on conventional computers, particularly in terms of speed
and energy consumption. however, this usually comes at the
cost of reduced control over the dynamics of the emulated
networks. in this paper, we demonstrate how iterative training
of a hardware-emulated network can compensate for anomalies
induced by the analog substrate. we first convert a deep
neural network trained in software to a spiking network on the
brainscales wafer-scale neuromorphic system, thereby enabling
an acceleration factor of 10 000 compared to the biological
time domain. this mapping is followed by the in-the-loop
training, where in each training step, the network activity is first
recorded in hardware and then used to compute the parameter
updates in software via backpropagation. an essential finding
is that the parameter updates do not have to be precise, but
only need to approximately follow the correct gradient, which
simplifies the computation of updates. using this approach,
after only several tens of iterations, the spiking network shows
an accuracy close to the ideal software-emulated prototype.
the presented techniques show that deep spiking networks
emulated on analog neuromorphic devices can attain good
computational performance despite the inherent variations of
the analog substrate.
| 9 |
abstract
we investigate the ihara zeta functions of the finite schreier graphs γn of the basilica group. we show that γ1+n is a 2-sheeted unramified normal covering of γn, ∀ n ≥ 1, with galois group z/2z. in fact, for any n > 1, r ≥ 1, the graph γn+r is a 2^n-sheeted unramified, non-normal covering of γr. in order to do this we give the definition of the generalized replacement product of schreier graphs. we also show the corresponding results for the zig-zag product of the schreier graphs γn with a 4-cycle.
| 4 |
abstract—state-of-the-art static analysis tools for verifying
finite-precision code compute worst-case absolute error bounds
on numerical errors. these are, however, often not a good estimate of accuracy as they do not take into account the magnitude
of the computed values. relative errors, which compute errors
relative to the value’s magnitude, are thus preferable. while
today’s tools do report relative error bounds, these are merely
computed via absolute errors and thus not necessarily tight or
more informative. furthermore, whenever the computed value
is close to zero on part of the domain, the tools do not report
any relative error estimate at all. surprisingly, the quality of
relative error bounds computed by today’s tools has not been
systematically studied or reported to date.
in this paper, we investigate how state-of-the-art static techniques for computing sound absolute error bounds can be
used, extended and combined for the computation of relative
errors. our experiments on a standard benchmark set show that
computing relative errors directly, as opposed to via absolute
errors, is often beneficial and can provide error estimates up
to six orders of magnitude tighter, i.e. more accurate. we also
show that interval subdivision, another commonly used technique
to reduce over-approximations, has less benefit when computing
relative errors directly, but it can help to alleviate the effects of
the inherent issue of relative error estimates close to zero.
| 6 |
abstract
| 9 |
abstract. we present a formal framework for repairing infinite-state,
imperative, sequential programs, with (possibly recursive) procedures
and multiple assertions; the framework can generate repaired programs
by modifying the original erroneous program in multiple program locations, and can ensure the readability of the repaired program using
user-defined expression templates; the framework also generates a set of
inductive assertions that serve as a proof of correctness of the repaired
program. as a step toward integrating programmer intent and intuition
in automated program repair, we present a cost-aware formulation —
given a cost function associated with permissible statement modifications, the goal is to ensure that the total program modification cost does
not exceed a given repair budget. as part of our predicate-abstraction-based solution framework, we present a sound and complete algorithm
for repair of boolean programs. we have developed a prototype tool
based on smt solving and used it successfully to repair diverse errors in
benchmark c programs.
| 6 |
abstract— swarm systems constitute a challenging problem
for reinforcement learning (rl) as the algorithm needs to learn
decentralized control policies that can cope with limited local
sensing and communication abilities of the agents. although
there have been recent advances of deep rl algorithms applied
to multi-agent systems, learning communication protocols while
simultaneously learning the behavior of the agents is still
beyond the reach of deep rl algorithms. however, while it
is often difficult to directly define the behavior of the agents,
simple communication protocols can be defined more easily
using prior knowledge about the given task. in this paper,
we propose a number of simple communication protocols that
can be exploited by deep reinforcement learning to find decentralized control policies in a multi-robot swarm environment.
the protocols are based on histograms that encode the local
neighborhood relations of the agents and can also transmit
task-specific information, such as the shortest distance and
direction to a desired target. in our framework, we use an
adaptation of trust region policy optimization to learn complex collaborative tasks, such as formation building, building a
communication link, and pushing an intruder. we evaluate our
findings in a simulated 2d-physics environment, and compare
the implications of different communication protocols.
| 2 |
abstract
in the modern era, abundant information is easily accessible from various sources; however, only a few of these sources are reliable, as most contain unverified content. we develop a system
to validate the truthfulness of a given statement together with underlying evidence. the proposed
system provides supporting evidence when the statement is tagged as false. our work relies on an
inference method on a knowledge graph (kg) to identify the truthfulness of statements. in order
to extract the evidence of falseness, the proposed algorithm takes into account combined
knowledge from kg and ontologies. the system shows very good results as it provides valid and
concise evidence. the quality of the kg plays a role in the performance of the inference method, which in turn affects the performance of our evidence-extracting algorithm.
| 2 |
abstract. we use the natural homeomorphism between a regular cw-complex x and its face poset px to establish a canonical
isomorphism between the cellular chain complex of x and the result of applying the poset construction of [cla10] to px . for a
monomial ideal whose free resolution is supported on a regular
cw-complex, this isomorphism allows the free resolution of the
ideal to be realized as a cw-poset resolution. conversely, any
cw-poset resolution of a monomial ideal gives rise to a resolution
supported on a regular cw-complex.
| 0 |
abstract
a connected path decomposition of a simple graph g is a path decomposition (x1 , . . . , xl )
such that the subgraph of g induced by x1 ∪ · · · ∪ xi is connected for each i ∈ {1, . . . , l}. the
connected pathwidth of g is then the minimum width over all connected path decompositions
of g. we prove that for each fixed k, the connected pathwidth of any input graph can
be computed in polynomial-time. this answers an open question raised by fedor v. fomin
during the grasta 2017 workshop, since connected pathwidth is equivalent to the connected
(monotone) node search game.
| 8 |
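the connectedness condition in the abstract above is easy to check mechanically. below is a small illustrative sketch (not the paper's polynomial-time algorithm) that verifies whether a given path decomposition is connected, i.e. whether every prefix union of bags induces a connected subgraph; the graph and decompositions are made-up examples.

```python
# Sketch: check the *connected* condition of a path decomposition
# (X1, ..., Xl): the subgraph induced by X1 ∪ ... ∪ Xi must be connected
# for each i.  (Validity of the bags as a path decomposition is assumed.)

def induced_connected(adj, verts):
    """True iff the subgraph of `adj` induced by `verts` is connected."""
    verts = set(verts)
    if not verts:
        return True
    start = next(iter(verts))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w in verts and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == verts

def is_connected_decomposition(adj, bags):
    """True iff every prefix union X1 ∪ ... ∪ Xi induces a connected subgraph."""
    prefix = set()
    for bag in bags:
        prefix |= set(bag)
        if not induced_connected(adj, prefix):
            return False
    return True

# Path graph 0-1-2-3 with one connected and one non-connected decomposition.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
good = [(0, 1), (1, 2), (2, 3)]
bad = [(0, 1), (3,), (1, 2, 3)]   # prefix {0, 1, 3} is disconnected
print(is_connected_decomposition(adj, good))  # True
print(is_connected_decomposition(adj, bad))   # False
```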
abstract
we consider interactive algorithms in the pool-based setting and in the stream-based setting. interactive algorithms observe suggested elements (representing actions or
queries), and interactively select some of them and receive responses. pool-based algorithms can select elements in any order, while stream-based algorithms observe elements
in sequence, and can only select elements immediately after observing them. we assume
that the suggested elements are generated independently from some source distribution,
and ask what is the stream size required for emulating a pool algorithm with a given pool
size. we provide algorithms and matching lower bounds for general pool algorithms,
and for utility-based pool algorithms. we further show that a maximal gap between the
two settings exists also in the special case of active learning for binary classification.
| 10 |
abstract. we study the prime graph question for integral group rings. this question can
be reduced to almost simple groups by a result of kimmerle and konovalov. we prove that
the prime graph question has an affirmative answer for all almost simple groups having a
socle isomorphic to psl(2, pf ) for f ≤ 2, establishing the prime graph question for all groups
where the only non-abelian composition factors are of the aforementioned form. using this, we
determine exactly how far the so-called help method can take us for (almost simple) groups
having an order divisible by at most 4 different primes.
| 4 |
abstract
various applications involve assigning discrete label values to a collection of objects based on some
pairwise noisy data. due to the discrete—and hence nonconvex—structure of the problem, computing the
optimal assignment (e.g. maximum likelihood assignment) becomes intractable at first sight. this paper
makes progress towards efficient computation by focusing on a concrete joint alignment problem—that
is, the problem of recovering n discrete variables xi ∈ {1, · · · , m}, 1 ≤ i ≤ n given noisy observations
of their modulo differences {xi − xj mod m}. we propose a low-complexity and model-free procedure,
which operates in a lifted space by representing distinct label values in orthogonal directions, and which
attempts to optimize quadratic functions over hypercubes. starting with a first guess computed via a
spectral method, the algorithm successively refines the iterates via projected power iterations. we prove
that for a broad class of statistical models, the proposed projected power method makes no error—and
hence converges to the maximum likelihood estimate—in a suitable regime. numerical experiments have
been carried out on both synthetic and real data to demonstrate the practicality of our algorithm. we
expect this algorithmic framework to be effective for a broad range of discrete assignment problems.
| 7 |
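the refinement step of the abstract above can be illustrated in a few lines. this is a toy, noise-free sketch: the paper's actual procedure uses a spectral initialization and handles noisy observations, whereas here we anchor at node 0 and only demonstrate the "project scores onto a label" iteration in the lifted representation.

```python
import numpy as np

# Toy, noise-free sketch of projected power refinement for joint
# alignment over Z/m: recover x_i from observations of (x_i - x_j) mod m.

rng = np.random.default_rng(0)
n, m = 6, 4
x_true = rng.integers(0, m, size=n)               # ground-truth labels
diff = (x_true[:, None] - x_true[None, :]) % m    # observed xi - xj mod m

# Anchored initialization (exact up to a global shift in the noise-free case);
# the paper uses a spectral method instead when the data is noisy.
x = diff[:, 0].copy()

for _ in range(5):                                # projected power refinements
    for i in range(n):
        scores = np.zeros(m)                      # votes for each candidate label
        for j in range(n):
            if j != i:
                scores[(diff[i, j] + x[j]) % m] += 1
        x[i] = int(np.argmax(scores))             # project onto the best label

# The labels are recovered up to a single global shift.
shift = (x[0] - x_true[0]) % m
print(((x - x_true) % m == shift).all())  # True
```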
abstract — brain-inspired learning mechanisms, e.g. spike
timing dependent plasticity (stdp), enable agile and fast on-the-fly adaptation capability in a spiking neural network. when
incorporating emerging nanoscale resistive non-volatile memory
(nvm) devices, with ultra-low power consumption and high-density integration capability, a spiking neural network hardware
would result in several orders of magnitude reduction in energy
consumption at a very small form factor and potentially herald
autonomous learning machines. however, actual memory devices
have been shown to be intrinsically binary with stochastic switching,
and thus impede the realization of ideal stdp with continuous
analog values. in this work, a dendritic-inspired processing
architecture is proposed in addition to novel cmos neuron
circuits. the utilization of spike attenuations and delays
transforms the traditionally undesired stochastic behavior of
binary nvms into useful leverage that enables biologically plausible stdp learning. as a result, this work paves a pathway
to adopt practical binary emerging nvm devices in brain-inspired
neuromorphic computing.
index terms— brain-inspired computing, crossbar,
neuromorphic computing, machine learning, memristor,
emerging non-volatile memory, rram, silicon neuron, spike-timing dependent plasticity (stdp), spiking neural network.
| 9 |
abstract. a theorem proved by dobrinskaya in 2006 shows that there is a
strong connection between the k(π, 1) conjecture for artin groups and the
classifying space of artin monoids. more recently ozornova obtained a different
proof of dobrinskaya’s theorem based on the application of discrete morse
theory to the standard cw model of the classifying space of an artin monoid.
in ozornova’s work there are hints at some deeper connections between the
above-mentioned cw model and the salvetti complex, a cw complex which
arises in the combinatorial study of artin groups. in this work we show that
such connections actually exist, and as a consequence we derive yet another
proof of dobrinskaya’s theorem.
| 4 |
abstract—the timely provision of traffic sign information
to drivers is essential for drivers to respond in time, ensure
safe driving, and avoid traffic accidents. we proposed a timely visual recognizability quantitative
evaluation method for traffic signs in large-scale transportation
environments. to achieve this goal, we first address the concept
of a visibility field to reflect the visible distribution of three-dimensional (3d) space and construct a traffic sign visibility
evaluation model (vem) to measure the traffic sign’s visibility
for a given viewpoint. then, based on the vem, we proposed the
concept of the visual recognizability field (vrf) to reflect the
visual recognizability distribution in 3d space and established a
visual recognizability evaluation model (vrem) to measure
a traffic sign’s visual recognizability for a given viewpoint.
next, we proposed a traffic sign timely visual recognizability
evaluation model (tstvrem) by combining vrem, the actual
maximum continuous visual recognizable distance, and traffic
big data to measure a traffic sign's visual recognizability in
different lanes. finally, we presented an automatic algorithm to
implement the tstvrem model through traffic sign and road
marking detection and classification, traffic sign environment
point cloud segmentation, viewpoints calculation, and tstvrem
model realization. the performance of our method for traffic sign
timely visual recognizability evaluation is tested on three road
point clouds acquired by a mobile laser scanning system (riegl
vmx-450) according to road traffic signs and markings (gb
5768-1999 in china), showing that our method is feasible and
efficient.
index terms—traffic sign, visibility, visibility field, visual
recognizability field, recognizability, mobile laser scanning, point
clouds.
| 1 |
abstract—deep learning methods can play a crucial role in
anomaly detection, prediction, and supporting decision making
for applications like personal health-care, pervasive body sensing,
etc. however, the current architecture of deep networks suffers from a
privacy issue: users need to give out their data to the
model (typically hosted on a server or a cloud cluster) for
training or prediction. this problem is getting more severe for
those sensitive health-care or medical data (e.g. fmri or body
sensor measurements like eeg signals). in addition to this, there
is also a security risk of leaking these data during the data
transmission from the user to the model (especially when it goes through the
internet). targeting these issues, in this paper we propose a
new architecture for deep networks in which users don't reveal
their original data to the model. in our method, feed-forward
propagation and data encryption are combined into one process:
we migrate the first layer of deep network to users’ local devices,
and apply the activation functions locally, and then use “dropping
activation output” method to make the output non-invertible.
the resulting approach is able to make model prediction without
accessing users' sensitive raw data. experiments conducted in this
paper show that our approach achieves the desired privacy
protection requirement, and demonstrated several advantages
over the traditional approach with encryption / decryption.
| 1 |
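the core mechanism of the abstract above (first layer on device, then non-invertible masking) can be sketched as follows. all names, shapes, and the drop rate are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

# Sketch: the first layer runs on the user's device; its activation
# output is made non-invertible by randomly dropping a fraction of the
# entries ("dropping activation output") before anything leaves the device.

rng = np.random.default_rng(1)

d_in, d_hidden = 32, 64
W1 = rng.normal(size=(d_hidden, d_in))   # first-layer weights, held on device
b1 = rng.normal(size=d_hidden)

def local_encode(x, drop_rate=0.5, rng=rng):
    """On-device: first layer + ReLU, then random dropping of outputs."""
    h = np.maximum(W1 @ x + b1, 0.0)          # feed-forward + activation
    mask = rng.random(d_hidden) >= drop_rate  # keep roughly (1 - drop_rate)
    return h * mask                           # dropped entries leave as 0

x_raw = rng.normal(size=d_in)    # sensitive raw input, never transmitted
h_sent = local_encode(x_raw)

# The server only ever sees h_sent; with many coordinates destroyed,
# recovering x_raw from h_sent is underdetermined.
print(h_sent.shape, (h_sent == 0.0).mean())
```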
abstract. in most constraint programming systems, a limited number
of search engines is offered, while the programming of user-customized
search algorithms requires low-level effort, which complicates the deployment of such algorithms. to alleviate this limitation, concepts such
as computation spaces have been developed. computation spaces provide
a coarse-grained restoration mechanism, because they store all information contained in a search tree node. other granularities are possible, and
in this paper we make the case for dynamically adapting the restoration
granularity during search. in order to elucidate programmable restoration granularity, we present restoration as an aspect of a constraint programming system, using the model of aspect-oriented programming. a
proof-of-concept implementation using gecode shows promising results.
| 6 |
abstract
deep learning approaches have made tremendous
progress in the field of semantic segmentation over the past
few years. however, most current approaches operate in
the 2d image space. direct semantic segmentation of unstructured 3d point clouds is still an open research problem. the recently proposed pointnet architecture presents
an interesting step ahead in that it can operate on unstructured point clouds, achieving encouraging segmentation results. however, it subdivides the input points into a grid of
blocks and processes each such block individually. in this
paper, we investigate the question how such an architecture
can be extended to incorporate larger-scale spatial context.
we build upon pointnet and propose two extensions that
enlarge the receptive field over the 3d scene. we evaluate
the proposed strategies on challenging indoor and outdoor
datasets and show improved results in both scenarios.
| 1 |
abstract
set-membership estimation is usually formulated in the context of set-valued calculus and no probabilistic calculations are necessary.
in this paper, we show that set-membership estimation can be equivalently formulated in the probabilistic setting by employing sets of
probability measures. inference in set-membership estimation is thus carried out by computing expectations with respect to the updated
set of probability measures p as in the probabilistic case. in particular, it is shown that inference can be performed by solving a particular
semi-infinite linear programming problem, which is a special case of the truncated moment problem in which only the zero-th order
moment is known (i.e., the support). by writing the dual of the above semi-infinite linear programming problem, it is shown that, if the
nonlinearities in the measurement and process equations are polynomial and if the bounding sets for initial state, process and measurement
noises are described by polynomial inequalities, then an approximation of this semi-infinite linear programming problem can efficiently be
obtained by using the theory of sum-of-squares polynomial optimization. we then derive a smart greedy procedure to compute a polytopic
outer-approximation of the true membership-set, by computing the minimum-volume polytope that outer-bounds the set that includes all
the means computed with respect to p.
key words: state estimation; filtering; set-membership estimation; set of probability measures; sum-of-squares polynomials.
| 3 |
abstract—evolutionary computation methods have been successfully applied to neural networks for two decades, but those
methods cannot scale well to modern deep neural networks due to their complicated architectures and large quantities
of connection weights. in this paper, we propose a new method
using genetic algorithms for evolving the architectures and
connection weight initialization values of a deep convolutional
neural network to address image classification problems. in the
proposed algorithm, an efficient variable-length gene encoding
strategy is designed to represent the different building blocks and
the unpredictable optimal depth in convolutional neural networks.
in addition, a new representation scheme is developed for effectively initializing connection weights of deep convolutional neural
networks, which is expected to keep networks from getting stuck in
local minima, typically a major issue in backward
gradient-based optimization. furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search
with substantially less computational resource. the proposed
algorithm is examined and compared with 22 existing algorithms
on nine widely used image classification tasks, including the state-of-the-art methods. the experimental results demonstrate the
remarkable superiority of the proposed algorithm over the state-of-the-art algorithms in terms of classification error rate and the
number of parameters (weights).
index terms—genetic algorithms, convolutional neural network, image classification, deep learning.
| 9 |
abstract. in this paper we classify the isomorphism classes of four dimensional nilpotent associative algebras over a field f, studying regular subgroups
of the affine group agl4 (f). in particular we provide explicit representatives
for such classes when f is a finite field, the real field r or an algebraically
closed field.
| 4 |
abstract
this paper attempts a more formal approach to the legibility of text-based
programming languages, presenting, with proof, minimum
possible ways of representing structure in text interleaved with
information. this presumes that a minimalist approach is best for
purposes of human readability, data storage and transmission, and
machine evaluation.
several proposals are given for improving the expression of interleaved hierarchical structure. for instance, a single colon can
replace a pair of brackets, and bracket types do not need to be repeated in both opening and closing symbols or words. historic and
customary uses of punctuation symbols guided the chosen form
and nature of the improvements.
| 6 |
abstract: the multilinear normal distribution is a widely used tool in tensor analysis
of magnetic resonance imaging (mri). diffusion tensor mri provides a statistical
estimate of a symmetric 2nd-order diffusion tensor for each voxel within an imaging
volume. in this article, tensor elliptical (te) distribution is introduced as an extension to the multilinear normal (mln) distribution. some properties including the
characteristic function and distribution of affine transformations are given. an integral representation connecting densities of te and mln distributions is exhibited
that is used in deriving the expectation of any measurable function of a te variate.
key words and phrases: characteristic generator; inverse laplace transform; stochastic representation; tensor; vectorial operator.
ams classification: primary: 62e15, 60e10 secondary: 53a45, 15a69
| 10 |
abstract—tracking with a pan-tilt-zoom (ptz) camera has
been a research topic in computer vision for many years.
compared to tracking with a still camera, the images captured
with a ptz camera are highly dynamic in nature because the
camera can perform large motion resulting in quickly changing
capture conditions. furthermore, tracking with a ptz camera
involves camera control to position the camera on the target. for
successful tracking and camera control, the tracker must be fast
enough, or be able to accurately predict the next position
of the target. therefore, standard benchmarks do not allow proper
assessment of the quality of a tracker for the ptz scenario. in
this work, we use a virtual ptz framework to evaluate different
tracking algorithms and compare their performances. we also
extend the framework to add target position prediction for the
next frame, accounting for camera motion and processing delays.
by doing this, we can assess whether prediction can make long-term
tracking more robust, as it may help slower algorithms keep
the target in the field of view of the camera. results confirm that
both speed and robustness are required for tracking under the
ptz scenario.
index terms—pan-tilt-zoom tracking, performance evaluation, tracking algorithms
| 1 |
abstract
we investigate the problem of language-based image editing (lbie) in this work. given a source
image and a natural language description, we want to generate a target image by editing the source image based on the description. we propose a generic modeling framework for two sub-tasks of lbie:
language-based image segmentation and image colorization. the framework uses recurrent attentive
models to fuse image and language features. instead of using a fixed step size, we introduce for each region of the image a termination gate to dynamically determine in each inference step whether to continue
extrapolating additional information from the textual description. the effectiveness of the framework has
been validated on three datasets. first, we introduce a synthetic dataset, called cosal, to evaluate the
end-to-end performance of our lbie system. second, we show that the framework leads to state-of-the-art performance on image segmentation on the referit dataset. third, we present the first language-based
colorization result on the oxford-102 flowers dataset, laying the foundation for future research.
| 1 |
abstract
we prove an ω(d/ log(sw/nd)) lower bound for the average-case cell-probe complexity of deterministic or las vegas randomized algorithms solving the approximate near-neighbor (ann) problem in
d-dimensional hamming space in the cell-probe model with w-bit cells, using a table of size s.
this lower bound matches the highest known worst-case cell-probe lower bounds for any static
data structure problems.
this average-case cell-probe lower bound is proved in a general framework which relates the
cell-probe complexity of ann to isoperimetric inequalities in the underlying metric space. a
tighter connection between ann lower bounds and isoperimetric inequalities is established by
a stronger richness lemma proved by cell-sampling techniques.
| 8 |
abstract
reduced-rank regression is a dimensionality reduction method with many applications. the asymptotic theory for reduced rank estimators of parameter matrices in multivariate linear models has been
studied extensively. in contrast, few theoretical results are available for reduced-rank multivariate generalised linear models. we develop m-estimation theory for concave criterion functions that are maximised
over parameters spaces that are neither convex nor closed. these results are used to derive the consistency
and asymptotic distribution of maximum likelihood estimators in reduced-rank multivariate generalised
linear models, when the response and predictor vectors have a joint distribution. we illustrate our results
in a real data classification problem with binary covariates.
| 10 |
abstract
recent results by alagic and russell have given some evidence that
the even-mansour cipher may be secure against quantum adversaries
with quantum queries, if considered over other groups than (z/2)n .
this prompts the question as to whether or not other classical schemes
may be generalized to arbitrary groups and whether classical results
still apply to those generalized schemes.
in this paper, we generalize the even-mansour cipher and the feistel cipher. we show that even and mansour's original notions of secrecy are obtained for a one-key, group variant of the even-mansour
cipher. we generalize the result by kilian and rogaway, that the
even-mansour cipher is pseudorandom, to super pseudorandomness,
also in the one-key, group case. using a slide attack we match the
bound found above. after generalizing the feistel cipher to arbitrary
groups we resolve an open problem of patel, ramzan, and sundaram
by showing that the 3-round feistel cipher over an arbitrary group is
not super pseudorandom.
finally, we generalize a result by gentry and ramzan showing that
the even-mansour cipher can be implemented using the feistel cipher
as the public permutation. in this last result, we also consider the
one-key case over a group and generalize their bound.
| 4 |
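a one-key even-mansour cipher over a group, as discussed in the abstract above, is simple to sketch. the group (Z/n with addition), the key placement, and the toy block size below are illustrative choices, not the paper's exact construction.

```python
import random

# Toy sketch of a one-key Even-Mansour cipher over the group (Z/n, +):
# E_k(x) = π(x + k) + k (mod n), with π a fixed public random permutation.

n = 256
rng = random.Random(42)
perm = list(range(n))
rng.shuffle(perm)                  # public random permutation π
perm_inv = [0] * n
for i, p in enumerate(perm):
    perm_inv[p] = i

key = rng.randrange(n)

def encrypt(x):
    return (perm[(x + key) % n] + key) % n

def decrypt(y):
    return (perm_inv[(y - key) % n] - key) % n

# Sanity check: decryption inverts encryption on the whole domain.
assert all(decrypt(encrypt(x)) == x for x in range(n))
print("round-trip ok")
```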
abstract– the liouville theorem states that bounded holomorphic complex functions
are necessarily constant. holomorphic functions fulfill the so-called cauchy-riemann
(cr) conditions. the cr conditions mean that a complex z-derivative is independent
of the direction. holomorphic functions are ideal for activation functions of complex
neural networks, but the liouville theorem makes them useless. yet recently the use
of hyperbolic numbers led to the construction of hyperbolic number neural networks.
we will describe the cauchy-riemann conditions for hyperbolic numbers and show that
there exists a new interesting type of bounded holomorphic functions of hyperbolic
numbers, which are not constant. we give examples of such functions. they therefore
substantially expand the available candidates for holomorphic activation functions for
hyperbolic number neural networks.
keywords: hyperbolic numbers, liouville theorem, cauchy-riemann conditions,
bounded holomorphic functions
| 9 |
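the kind of function the abstract above describes can be checked numerically. a standard construction (assumed here for illustration) extends a bounded real function f to hyperbolic numbers z = x + u*y (with u^2 = 1) via the idempotent basis e± = (1 ± u)/2, giving F(z) = f(x+y)e+ + f(x-y)e-; with f = tanh this F is bounded, non-constant, and satisfies the hyperbolic cauchy-riemann conditions U_x = V_y and U_y = V_x.

```python
import math

# Numerically verify the hyperbolic CR conditions and boundedness for
# the hyperbolic extension of tanh: F = U + u*V with
# U = (f(x+y) + f(x-y))/2,  V = (f(x+y) - f(x-y))/2.

f = math.tanh

def F(x, y):
    a, b = f(x + y), f(x - y)
    return (a + b) / 2.0, (a - b) / 2.0   # components (U, V)

def partials(g, x, y, h=1e-6):
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return gx, gy

U = lambda x, y: F(x, y)[0]
V = lambda x, y: F(x, y)[1]

ok = True
for (x, y) in [(0.3, -1.2), (2.0, 0.7), (-0.5, 0.1)]:
    Ux, Uy = partials(U, x, y)
    Vx, Vy = partials(V, x, y)
    ok &= abs(Ux - Vy) < 1e-6 and abs(Uy - Vx) < 1e-6   # hyperbolic CR
    ok &= abs(U(x, y)) + abs(V(x, y)) <= 1.0 + 1e-12    # bounded by 1

print(ok)  # True
```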
abstract
the social force model is one of the most prominent models of pedestrian dynamics. naturally,
much discussion and criticism has arisen around it, some of which concerns the existence of oscillations in the
movement of pedestrians. this contribution investigates under which circumstances, parameter choices, and
model variants oscillations do occur and how this can be prevented. it is shown that oscillations can be excluded
if the model parameters fulfill certain relations. the fact that with some parameter choices oscillations occur
and with some not is exploited to verify a specific computer implementation of the model.
| 5 |
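the oscillation phenomenon discussed in the abstract above can be reproduced in a toy simulation. the model variant, parameters, and the underdamped/overdamped criterion below are illustrative assumptions (from linearizing around the equilibrium distance), not the paper's analysis.

```python
import math

# Sketch: one pedestrian walks toward a standing obstacle under a simple
# social force model; we count velocity reversals (direction changes).
# Linearization suggests oscillations when B < 4*v0*tau (underdamped).

def simulate(v0, tau, A, B, wall=15.0, dt=0.001, T=30.0):
    x, v = 0.0, 0.0
    reversals, last_sign = 0, 0
    for _ in range(int(T / dt)):
        s = wall - x                               # distance to the obstacle
        a = (v0 - v) / tau - A * math.exp(-s / B)  # driving force + repulsion
        v += a * dt                                # semi-implicit Euler
        x += v * dt
        sign = 1 if v > 1e-3 else (-1 if v < -1e-3 else 0)
        if sign != 0:
            if last_sign != 0 and sign != last_sign:
                reversals += 1                     # pedestrian reversed direction
            last_sign = sign
    return reversals

rev_osc = simulate(v0=1.2, tau=0.5, A=200.0, B=0.1)    # underdamped regime
rev_smooth = simulate(v0=1.2, tau=0.5, A=20.0, B=5.0)  # overdamped regime
print(rev_osc, rev_smooth)
```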
abstract
conceptual ideas from ai safety, to bridge the gap
between practical contemporary challenges and
longer term concerns which are of an uncertain
time horizon. in addition to providing concrete
problems for researchers and engineers to tackle,
we hope this discussion will be a useful introduction to the concept of oracle ai for newcomers to
the subject. we state at the outset that within
the context of oracle ai, our analysis is limited
in scope to systems which perform mathematical computation, and not to oracles in general.
nonetheless, considering how little effort has been
directed at the superintelligence control problem,
we are confident that there is low-hanging fruit
in addressing these more general issues which are
awaiting discovery.
| 2 |
abstract
this is the second part of a two part work in which we prove that for every
finitely generated subgroup γ < out(fn ), either γ is virtually abelian or its
second bounded cohomology hb2 (γ; r) contains an embedding of ℓ1 . here in
part ii we focus on finite lamination subgroups γ — meaning that the set of
all attracting laminations of elements of γ is finite — and on the construction
of hyperbolic actions of those subgroups to which the general theory of part i
is applicable.
| 4 |
abstract. we propose a type-based resource usage analysis for the π-calculus extended
with resource creation/access primitives. the goal of the resource usage analysis is to
statically check that a program accesses resources such as files and memory in a valid
manner. our type system is an extension of previous behavioral type systems for the π-calculus. it can guarantee the safety property that no invalid access is performed, as well as
the property that necessary accesses (such as the close operation for a file) are eventually
performed unless the program diverges. a sound type inference algorithm for the type
system is also developed to free the programmer from the burden of writing complex type
annotations. based on our algorithm, we have implemented a prototype resource usage
analyzer for the π-calculus. to the authors’ knowledge, this is the first type-based resource
usage analysis that deals with an expressive concurrent language like the π-calculus.
| 6 |
abstract
| 10 |
abstract. we show that for each positive integer k there exist right-angled
artin groups containing free-by-cyclic subgroups whose monodromy automorphisms grow as n^k. as a consequence we produce examples of right-angled
artin groups containing finitely presented subgroups whose dehn functions
grow as n^{k+2}.
| 4 |
abstract
we compute a canonical circular-arc representation for a given circular-arc (ca) graph which
implies solving the isomorphism and recognition problem for this class. to accomplish this we
split the class of ca graphs into uniform and non-uniform ones and employ a generalized version
of the argument given by köbler et al. (2013) that has been used to show that the subclass of
helly ca graphs can be canonized in logspace. for uniform ca graphs our approach works
in logspace and in addition to that helly ca graphs are a strict subset of uniform ca graphs.
thus our result is a generalization of the canonization result for helly ca graphs. in the non-uniform case a specific set ω of ambiguous vertices arises. by choosing the parameter k to be the
cardinality of ω this obstacle can be solved by brute force. this leads to an o(k + log n) space
algorithm to compute a canonical representation for non-uniform and therefore all ca graphs.
1998 acm subject classification g.2.2 graph theory
keywords and phrases graph isomorphism, canonical representation, parameterized algorithm
| 8 |
abstract
in this paper, we consider a scenario where an unmanned aerial vehicle (uav) collects data from
a set of sensors on a straight line. the uav can either cruise or hover while communicating with the
sensors. the objective is to minimize the uav’s total aviation time from a starting point to a destination
while allowing each sensor to successfully upload a certain amount of data using a given amount of
energy. the whole trajectory is divided into non-overlapping data collection intervals, in each of which
one sensor is served by the uav. the data collection intervals, the uav’s navigation speed and the
sensors’ transmit powers are jointly optimized. the formulated aviation time minimization problem is
difficult to solve. we first show that when only one sensor node is present, the sensor’s transmit power
follows a water-filling policy and the uav aviation speed can be found efficiently by bisection search.
then we show that for the general case with multiple sensors, the aviation time minimization problem
can be equivalently reformulated as a dynamic programming (dp) problem. the subproblem involved
in each stage of the dp reduces to handle the case with only one sensor node. numerical results present
insightful behaviors of the uav and the sensors. specifically, it is observed that the uav’s optimal
speed is proportional to the given energy and the inter-sensor distance, but inversely proportional to the
data upload requirement.
| 7 |
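the single-sensor water-filling policy mentioned in the abstract above is easy to sketch numerically. the discretization, channel gains, and energy budget below are illustrative assumptions, not the paper's setup; the water level is found by bisection on the energy constraint.

```python
import numpy as np

# Sketch: water-filling transmit power over a discretized data-collection
# interval with channel gains g[t]: p[t] = max(0, mu - 1/g[t]), with the
# water level mu chosen so the total energy budget is exactly used.

def water_filling(g, energy, dt=1.0, iters=60):
    lo, hi = 0.0, energy / dt + (1.0 / g).max()   # bracket for the water level
    for _ in range(iters):
        mu = (lo + hi) / 2
        p = np.maximum(0.0, mu - 1.0 / g)
        if (p * dt).sum() > energy:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, (lo + hi) / 2 - 1.0 / g)

g = np.array([0.2, 0.5, 1.0, 0.5, 0.2])   # gain peaks when the UAV is closest
p = water_filling(g, energy=4.0)

# Power concentrates on the slots with the best channel.
print(np.round(p, 3), round(float(p.sum()), 3))
```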
abstract. the growth function is the generating function for sizes of spheres
around the identity in cayley graphs of groups. we present a novel method
to calculate growth functions for automatic groups with normal form recognizing automata that recognize a single normal form for each group element,
and are at most context-free in complexity: context-free grammars can be
translated into algebraic systems of equations, whose solutions represent generating functions of their corresponding non-terminal symbols. this approach
allows us to seamlessly introduce weightings on the growth function: assign
different or even distinct weights to each of the generators in an underlying
presentation, such that this weighting is reflected in the growth function. we
recover known growth functions for small braid groups, and calculate growth
functions that weight each generator in an automatic presentation of the braid
groups according to their lengths in braid generators.
| 4 |
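the idea of computing growth from a normal-form automaton, as in the abstract above, can be illustrated on the free group F2. this regular (not context-free) example is only a sketch of the principle: states remember the last letter so that a letter is never followed by its inverse, and sphere sizes are path counts.

```python
# Sketch: growth (sphere sizes) of the free group F2 from its
# normal-form automaton for reduced words.

GENS = ["a", "A", "b", "B"]            # a, a^-1, b, b^-1
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def sphere_sizes(n_max):
    """Number of reduced words of each length 0..n_max."""
    counts = [1]                        # the identity element
    state = {g: 1 for g in GENS}        # length-1 reduced words, by last letter
    counts.append(sum(state.values()))
    for _ in range(2, n_max + 1):
        # Extend each word by any letter except the inverse of its last letter.
        state = {g: sum(v for h, v in state.items() if INV[h] != g)
                 for g in GENS}
        counts.append(sum(state.values()))
    return counts

# Matches the closed form 4 * 3^(n-1) for n >= 1.
print(sphere_sizes(5))  # [1, 4, 12, 36, 108, 324]
```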
abstract. let g be a finite group and cd(g) denote the set of complex irreducible character degrees of g. in this paper, we prove that if g is a finite
group and h is an almost simple group whose socle is mathieu group such that
cd(g) = cd(h), then there exists an abelian subgroup a of g such that g/a
is isomorphic to h. this study is a step towards an extension of
huppert's conjecture (2000) for almost simple groups.
| 4 |
abstract
stochastic gradient descent algorithm has been successfully applied on support vector machines (called pegasos) for many classification problems.
in this paper, the stochastic gradient descent algorithm is applied to twin
support vector machines for classification. compared with pegasos, the
proposed stochastic gradient twin support vector machines (sgtsvm) is
insensitive to the stochastic sampling of the stochastic gradient descent algorithm.
in theory, we prove the convergence of sgtsvm instead of almost sure convergence of pegasos. for uniformly sampling, the approximation between
sgtsvm and twin support vector machines is also given, while pegasos
only has an opportunity to obtain an approximation of support vector machines. in addition, the nonlinear sgtsvm is derived directly from its linear
case. experimental results on both artificial datasets and large scale problems show the stable performance of sgtsvm with a fast learning speed.
keywords: classification, support vector machines, twin support vector
machines, stochastic gradient descent, large scale problem.
| 1 |
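the pegasos baseline named in the abstract above can be sketched in a few lines. this is a minimal pegasos-style stochastic subgradient descent for a single linear SVM on made-up toy data; the paper's SGTSVM applies the same idea to the two nonparallel hyperplanes of a twin SVM, which is not reproduced here.

```python
import numpy as np

# Minimal Pegasos-style SGD for a linear SVM: hinge loss + L2
# regularization, step size 1/(lambda * t), one random sample per step.

rng = np.random.default_rng(7)

# Toy data: two well-separated Gaussian blobs with labels ±1.
X = np.vstack([rng.normal(+2, 1, size=(100, 2)),
               rng.normal(-2, 1, size=(100, 2))])
y = np.hstack([np.ones(100), -np.ones(100)])

lam, T = 0.01, 2000
w = np.zeros(2)
for t in range(1, T + 1):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)                  # Pegasos step size
    if y[i] * (w @ X[i]) < 1:              # margin violated: hinge subgradient
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                                  # only the regularizer contributes
        w = (1 - eta * lam) * w

acc = (np.sign(X @ w) == y).mean()
print(acc)  # well above chance on these separable blobs
```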