id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0810.4812 | Robin Moser | Robin A. Moser | A constructive proof of the Lovasz Local Lemma | 11 pages; minor corrections | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Lovasz Local Lemma [EL75] is a powerful tool to prove the existence of
combinatorial objects meeting a prescribed collection of criteria. The
technique can directly be applied to the satisfiability problem, yielding that
a k-CNF formula in which each clause has common variables with at most 2^(k-2)
other clauses is always satisfiable. All hitherto known proofs of the Local
Lemma are non-constructive and thus do not provide a recipe for efficiently
finding a satisfying assignment to such a formula. In his
breakthrough paper [Bec91], Beck demonstrated that if the neighbourhood of each
clause is restricted to O(2^(k/48)), a polynomial-time algorithm for the search
problem exists. Alon simplified and randomized his procedure and improved the
bound to O(2^(k/8)) [Alo91]. Srinivasan presented in [Sri08] a variant that
achieves a bound of essentially O(2^(k/4)). In [Mos08], we improved this to
O(2^(k/2)). In the present paper, we give a randomized algorithm that finds a
satisfying assignment to every k-CNF formula in which each clause has a
neighbourhood of at most the asymptotic optimum of 2^(k-5)-1 other clauses and
that runs in expected time polynomial in the size of the formula, irrespective
of k. If k is considered a constant, we can also give a deterministic variant.
In contrast to all previous approaches, our analysis no longer invokes the
standard non-constructive versions of the Local Lemma and can therefore be
considered an alternative, constructive proof of it.
| [
{
"version": "v1",
"created": "Mon, 27 Oct 2008 14:02:48 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Oct 2008 14:35:58 GMT"
}
] | 2008-10-29T00:00:00 | [
[
"Moser",
"Robin A.",
""
]
] |
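To make the constructive claim concrete, here is a minimal sketch of the resampling idea this line of work leads to (closer to the later Moser-Tardos formulation than to the recursive procedure of this paper; the DIMACS-style clause encoding is our choice):

```python
import random

def resample_sat(clauses, num_vars, seed=0):
    """Resample the variables of a violated clause until the k-CNF formula
    is satisfied. Clauses are lists of non-zero ints, DIMACS style:
    +v is variable v, -v is its negation."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    while True:
        violated = next((c for c in clauses if not satisfied(c)), None)
        if violated is None:
            return assign
        for lit in violated:  # resample this clause's variables uniformly
            assign[abs(lit)] = rng.random() < 0.5
```

Under a neighbourhood bound like the one stated in the abstract, the expected number of resampling steps is polynomial in the formula size.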
0810.4934 | Lukasz Kowalik | Marek Cygan, Lukasz Kowalik, Marcin Pilipczuk and Mateusz Wykurz | Exponential-Time Approximation of Hard Problems | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study optimization problems that are neither approximable in polynomial
time (at least with a constant factor) nor fixed parameter tractable, under
widely believed complexity assumptions. Specifically, we focus on Maximum
Independent Set, Vertex Coloring, Set Cover, and Bandwidth.
In recent years, many researchers have designed exact exponential-time
algorithms for these and other hard problems. The goal is to keep the time
complexity of order $O(c^n)$, but with the constant $c$ as small as possible.
In this work we extend this line of research and investigate whether the
constant $c$ can be made even smaller when one allows constant-factor
approximation. In fact, we describe a kind of approximation scheme --
trade-offs between approximation factor and time complexity.
We study two natural approaches. The first approach consists of designing a
backtracking algorithm with a small search tree. We present one result of that
kind: a $(4r-1)$-approximation of Bandwidth in time $O^*(2^{n/r})$, for any
positive integer $r$.
The second approach uses general transformations from exponential-time exact
algorithms to approximations that are faster but still exponential-time. For
example, we show that for any reduction rate $r$, one can transform any
$O^*(c^n)$-time algorithm for Set Cover into a $(1+\ln r)$-approximation
algorithm running in time $O^*(c^{n/r})$. We believe that results of that kind
extend the applicability of exact algorithms for NP-hard problems.
| [
{
"version": "v1",
"created": "Mon, 27 Oct 2008 20:18:00 GMT"
}
] | 2008-10-29T00:00:00 | [
[
"Cygan",
"Marek",
""
],
[
"Kowalik",
"Lukasz",
""
],
[
"Pilipczuk",
"Marcin",
""
],
[
"Wykurz",
"Mateusz",
""
]
] |
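The Set Cover transformation mentioned at the end suggests a simple shape: spend exponential time only on a shrunken instance and pay an approximation factor for the shrinking. Below is a hedged sketch of one such greedy-then-exact combination; `exact_solver` is a hypothetical callable, and the paper's actual transformation may differ in its details:

```python
def approx_set_cover(universe, sets, exact_solver, r):
    """Greedy-then-exact trade-off sketch (assumes a feasible instance):
    run greedy until at most n/r elements remain uncovered, paying roughly
    a (1 + ln r) factor on that phase, then finish optimally with an
    exponential-time exact solver on the residual n/r-element instance."""
    uncovered = set(universe)
    chosen = []
    while len(uncovered) > len(universe) // r:
        best = max(sets, key=lambda s: len(uncovered & s))  # greedy pick
        chosen.append(best)
        uncovered -= best
    if uncovered:
        residual = [s & uncovered for s in sets if s & uncovered]
        chosen.extend(exact_solver(uncovered, residual))
    return chosen
```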
0810.4946 | Gregory Gutin | Jean Daligault, Gregory Gutin, Eun Jung Kim, Anders Yeo | FPT Algorithms and Kernels for the Directed $k$-Leaf Problem | null | null | null | null | cs.DS cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A subgraph $T$ of a digraph $D$ is an {\em out-branching} if $T$ is an
oriented spanning tree with only one vertex of in-degree zero (called the {\em
root}). The vertices of $T$ of out-degree zero are {\em leaves}. In the {\sc
Directed $k$-Leaf} Problem, we are given a digraph $D$ and an integral
parameter $k$, and we are to decide whether $D$ has an out-branching with at
least $k$ leaves. Recently, Kneis et al. (2008) obtained an algorithm for the
problem of running time $4^{k}\cdot n^{O(1)}$. We describe a new algorithm for
the problem of running time $3.72^{k}\cdot n^{O(1)}$. In the {\sc Rooted
Directed $k$-Leaf} Problem, apart from $D$ and $k$, we are given a vertex $r$
of $D$ and
we are to decide whether $D$ has an out-branching rooted at $r$ with at least
$k$ leaves. Very recently, Fernau et al. (2008) found an $O(k^3)$-size kernel
for {\sc Rooted Directed $k$-Leaf}. In this paper, we obtain an $O(k)$ kernel
for {\sc Rooted Directed $k$-Leaf} restricted to acyclic digraphs.
| [
{
"version": "v1",
"created": "Mon, 27 Oct 2008 21:44:42 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Oct 2008 17:41:51 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Aug 2009 07:52:06 GMT"
}
] | 2009-08-18T00:00:00 | [
[
"Daligault",
"Jean",
""
],
[
"Gutin",
"Gregory",
""
],
[
"Kim",
"Eun Jung",
""
],
[
"Yeo",
"Anders",
""
]
] |
0810.5064 | Travis Gagie | Travis Gagie | A New Algorithm for Building Alphabetic Minimax Trees | in preparation | null | null | null | cs.IT cs.DS math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show how to build an alphabetic minimax tree for a sequence $W = w_1,
\ldots, w_n$ of real weights in $O(nd \log \log n)$ time, where $d$ is the
number of distinct integers $\lceil w_i \rceil$. We apply this algorithm to
building an alphabetic prefix code given a sample.
| [
{
"version": "v1",
"created": "Tue, 28 Oct 2008 15:59:55 GMT"
}
] | 2008-10-29T00:00:00 | [
[
"Gagie",
"Travis",
""
]
] |
0810.5263 | Andrew Twigg | Rahul Sami, Andy Twigg | Lower bounds for distributed Markov chain problems | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the worst-case communication complexity of distributed algorithms
computing a path problem based on stationary distributions of random walks in a
network $G$ with the caveat that $G$ is also the communication network. The
problem is a natural generalization of shortest path lengths to expected path
lengths, and represents a model used in many practical applications such as
PageRank and EigenTrust, as well as in other problems involving Markov chains
defined by networks.
For the problem of computing a single stationary probability, we prove an
$\Omega(n^2 \log n)$ bits lower bound; the trivial centralized algorithm costs
$O(n^3)$ bits and no known algorithm beats this. We also prove lower bounds for
the related problems of approximately computing the stationary probabilities,
computing only the ranking of the nodes, and computing the node with maximal
rank. As a corollary, we obtain lower bounds for labelling schemes for the
hitting time between two nodes.
| [
{
"version": "v1",
"created": "Wed, 29 Oct 2008 12:52:59 GMT"
}
] | 2008-10-30T00:00:00 | [
[
"Sami",
"Rahul",
""
],
[
"Twigg",
"Andy",
""
]
] |
0810.5428 | Amitabha Bagchi | Amitabha Bagchi, Garima Lahoti | Relating Web pages to enable information-gathering tasks | In Proceedings of ACM Hypertext 2009 | null | null | null | cs.IR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We argue that relationships between Web pages are functions of the user's
intent. We identify a class of Web tasks - information-gathering - that can be
facilitated by a search engine that provides links to pages which are related
to the page the user is currently viewing. We define three kinds of intentional
relationships that correspond to whether the user is a) seeking sources of
information, b) reading pages which provide information, or c) surfing through
pages as part of an extended information-gathering process. We show that these
three relationships can be productively mined using a combination of textual
and link information and provide three scoring mechanisms that correspond to
them: {\em SeekRel}, {\em FactRel} and {\em SurfRel}. These scoring mechanisms
incorporate both textual and link information. We build a set of capacitated
subnetworks - each corresponding to a particular keyword - that mirror the
interconnection structure of the World Wide Web. The scores are computed via
flows on these subnetworks. The capacities of the links are derived
from the {\em hub} and {\em authority} values of the nodes they connect,
following the work of Kleinberg (1998) on assigning authority to pages in
hyperlinked environments. We evaluated our scoring mechanism by running
experiments on four data sets taken from the Web. We present user evaluations
of the relevance of the top results returned by our scoring mechanisms and
compare those to the top results returned by Google's Similar Pages feature,
and the {\em Companion} algorithm proposed by Dean and Henzinger (1999).
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2008 07:17:49 GMT"
},
{
"version": "v2",
"created": "Wed, 19 May 2010 11:43:29 GMT"
}
] | 2010-05-20T00:00:00 | [
[
"Bagchi",
"Amitabha",
""
],
[
"Lahoti",
"Garima",
""
]
] |
0810.5477 | Andrew Twigg | Andrew Twigg | Worst-case time decremental connectivity and k-edge witness | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a simple algorithm for decremental graph connectivity that handles
edge deletions in worst-case time $O(k \log n)$ and connectivity queries in
$O(\log k)$, where $k$ is the number of edges deleted so far, and uses
worst-case space $O(m^2)$. We use this to give an algorithm for $k$-edge
witness (``does the removal of a given set of $k$ edges disconnect two vertices
$u,v$?'') with worst-case time $O(k^2 \log n)$ and space $O(k^2 n^2)$. For $k =
o(\sqrt{n})$ these improve the worst-case $O(\sqrt{n})$ bound for deletion due
to Eppstein et al. We also give a decremental connectivity algorithm using
$O(n^2 \log n / \log \log n)$ space, whose time complexity depends on the
toughness and independence number of the input graph. Finally, we show how to
construct a distributed data structure for \kvw by giving a labeling scheme.
This is the first data structure for \kvw that can be efficiently distributed
without just giving each vertex a copy of the whole structure. Its complexity
depends on being able to construct a linear layout with good properties.
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2008 12:15:33 GMT"
}
] | 2008-10-31T00:00:00 | [
[
"Twigg",
"Andrew",
""
]
] |
0810.5573 | David Correa Martins Jr | Marcelo Ris, Junior Barrera, David C. Martins Jr | A branch-and-bound feature selection algorithm for U-shaped cost
functions | null | null | null | null | cs.CV cs.DS cs.LG | http://creativecommons.org/licenses/by/3.0/ | This paper presents the formulation of a combinatorial optimization problem
with the following characteristics: (i) the search space is the power set of a
finite set structured as a Boolean lattice; (ii) the cost function forms a
U-shaped curve when applied to any lattice chain. This formulation applies to
feature selection in the context of pattern recognition. The known approaches
for this problem are branch-and-bound algorithms and heuristics that explore
the search space only partially. Branch-and-bound algorithms are equivalent to
a full search, while heuristics are not. This paper presents a branch-and-bound
algorithm that differs from previously known ones by exploiting the lattice
structure and the U-shaped chain curves of the search space. The main
contribution of this paper is the architecture of this algorithm, which is
based on the representation and exploration of the search space via new lattice
properties proven here. Several experiments with well-known public data
indicate the superiority of the proposed method over SFFS, a popular heuristic
that gives good results in very short computational time. In all experiments,
the proposed method obtained better or equal results in similar or even smaller
computational time.
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2008 20:24:28 GMT"
}
] | 2008-11-03T00:00:00 | [
[
"Ris",
"Marcelo",
""
],
[
"Barrera",
"Junior",
""
],
[
"Martins",
"David C.",
"Jr"
]
] |
0810.5578 | Shubha Nabar | Tomas Feder, Shubha U. Nabar, Evimaria Terzi | Anonymizing Graphs | 15 pages, 5 figures | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by recently discovered privacy attacks on social networks, we study
the problem of anonymizing the underlying graph of interactions in a social
network. We call a graph (k,l)-anonymous if for every node in the graph there
exist at least k other nodes that share at least l of its neighbors. We
consider two combinatorial problems arising from this notion of anonymity in
graphs. More specifically, given an input graph we ask for the minimum number
of edges to be added so that the graph becomes (k,l)-anonymous. We define two
variants of this minimization problem and study their properties. We show that
for certain values of k and l the problems are polynomial-time solvable, while
for others they become NP-hard. Approximation algorithms for the latter cases
are also given.
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2008 21:12:25 GMT"
}
] | 2008-11-03T00:00:00 | [
[
"Feder",
"Tomas",
""
],
[
"Nabar",
"Shubha U.",
""
],
[
"Terzi",
"Evimaria",
""
]
] |
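The (k,l)-anonymity notion is easy to state operationally; the following is a direct checker for the definition in the abstract, not a sketch of the paper's minimization algorithms:

```python
import networkx as nx

def is_kl_anonymous(G, k, l):
    """True iff every node of G has at least k other nodes sharing at
    least l of its neighbors (the definition in the abstract)."""
    nbrs = {u: set(G.neighbors(u)) for u in G.nodes}
    return all(
        sum(1 for v, Nv in nbrs.items() if v != u and len(Nu & Nv) >= l) >= k
        for u, Nu in nbrs.items()
    )
```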
0810.5582 | Shubha Nabar | Rajeev Motwani, Shubha U. Nabar | Anonymizing Unstructured Data | 9 pages, 1 figure | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the problem of anonymizing datasets in which each
individual is associated with a set of items that constitute private
information about the individual. Illustrative datasets include market-basket
datasets and search engine query logs. We formalize the notion of k-anonymity
for set-valued data as a variant of the k-anonymity model for traditional
relational datasets. We define an optimization problem that arises from this
definition of anonymity and provide O(k log k)- and O(1)-approximation
algorithms for it. We demonstrate the applicability of our algorithms on the
America Online query log dataset.
| [
{
"version": "v1",
"created": "Fri, 31 Oct 2008 19:25:02 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Nov 2008 23:33:20 GMT"
}
] | 2008-11-04T00:00:00 | [
[
"Motwani",
"Rajeev",
""
],
[
"Nabar",
"Shubha U.",
""
]
] |
0810.5685 | Daniel Roche | Mark Giesbrecht and Daniel S. Roche | Interpolation of Shifted-Lacunary Polynomials | 22 pages, to appear in Computational Complexity | Computational Complexity, Vol. 19, No 3., pp. 333-354, 2010 | 10.1007/s00037-010-0294-0 | null | cs.SC cs.DS cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a "black box" function to evaluate an unknown rational polynomial f in
Q[x] at points modulo a prime p, we exhibit algorithms to compute the
representation of the polynomial in the sparsest shifted power basis. That is,
we determine the sparsity t, the shift s (a rational), the exponents 0 <= e1 <
e2 < ... < et, and the coefficients c1,...,ct in Q\{0} such that f(x) =
c1(x-s)^e1+c2(x-s)^e2+...+ct(x-s)^et. The computed sparsity t is absolutely
minimal over any shifted power basis. The novelty of our algorithm is that the
complexity is polynomial in the (sparse) representation size, and in particular
is logarithmic in deg(f). Our method combines previous celebrated results on
sparse interpolation and computing sparsest shifts, and provides a way to
handle polynomials with extremely high degree which are, in some sense, sparse
in information.
| [
{
"version": "v1",
"created": "Fri, 31 Oct 2008 13:35:08 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Nov 2008 00:39:28 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Nov 2008 18:33:23 GMT"
},
{
"version": "v4",
"created": "Thu, 11 Dec 2008 04:05:14 GMT"
},
{
"version": "v5",
"created": "Fri, 4 Dec 2009 06:10:58 GMT"
},
{
"version": "v6",
"created": "Mon, 23 Aug 2010 16:20:07 GMT"
}
] | 2010-12-06T00:00:00 | [
[
"Giesbrecht",
"Mark",
""
],
[
"Roche",
"Daniel S.",
""
]
] |
0811.0254 | Masud Hasan | Muhammad Abdullah Adnan and Masud Hasan | Characterizing Graphs of Zonohedra | 13 pages, 5 figures | null | null | null | cs.CG cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A classic theorem by Steinitz states that a graph G is realizable by a convex
polyhedron if and only if G is 3-connected planar. Zonohedra are an important
subclass of convex polyhedra having the property that the faces of a zonohedron
are parallelograms and are in parallel pairs. In this paper we give a
characterization of graphs of zonohedra. We also give a linear time algorithm
to recognize such a graph. In our quest for finding the algorithm, we prove
that in a zonohedron P both the number of zones and the number of faces in each
zone are O(\sqrt{n}), where n is the number of vertices of P.
| [
{
"version": "v1",
"created": "Mon, 3 Nov 2008 10:19:10 GMT"
}
] | 2008-11-04T00:00:00 | [
[
"Adnan",
"Muhammad Abdullah",
""
],
[
"Hasan",
"Masud",
""
]
] |
0811.0811 | Andreas Blass | Andreas Blass (University of Michigan), Nachum Dershowitz (Tel Aviv
University), and Yuri Gurevich (Microsoft Research) | When are two algorithms the same? | null | Bulletin of Symbolic Logic, vol. 15, no. 2, pp. 145-168, 2009 | null | null | cs.GL cs.DS cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People usually regard algorithms as more abstract than the programs that
implement them. The natural way to formalize this idea is that algorithms are
equivalence classes of programs with respect to a suitable equivalence
relation. We argue that no such equivalence relation exists.
| [
{
"version": "v1",
"created": "Wed, 5 Nov 2008 20:38:22 GMT"
}
] | 2020-06-11T00:00:00 | [
[
"Blass",
"Andreas",
"",
"University of Michigan"
],
[
"Dershowitz",
"Nachum",
"",
"Tel Aviv\n University"
],
[
"Gurevich",
"Yuri",
"",
"Microsoft Research"
]
] |
0811.1083 | George Fletcher | George H. L. Fletcher and Peter W. Beck | A role-free approach to indexing large RDF data sets in secondary memory
for efficient SPARQL evaluation | 12 pages, 5 figures, 2 tables | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Massive RDF data sets are becoming commonplace. RDF data is typically
generated in social semantic domains (such as personal information management)
wherein a fixed schema is often not available a priori. We propose a simple
Three-way Triple Tree (TripleT) secondary-memory indexing technique to
facilitate efficient SPARQL query evaluation on such data sets. The novelty of
TripleT is that (1) the index is built over the atoms occurring in the data
set, rather than at a coarser granularity, such as whole triples occurring in
the data set; and (2) the atoms are indexed regardless of the roles (i.e.,
subjects, predicates, or objects) they play in the triples of the data set. We
show through extensive empirical evaluation that TripleT exhibits multiple
orders of magnitude improvement over the state of the art on RDF indexing, in
terms of both storage and query processing costs.
| [
{
"version": "v1",
"created": "Fri, 7 Nov 2008 05:08:41 GMT"
}
] | 2008-11-20T00:00:00 | [
[
"Fletcher",
"George H. L.",
""
],
[
"Beck",
"Peter W.",
""
]
] |
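The role-free indexing idea is easy to illustrate in memory; the dictionary sketch below only shows what "indexing atoms regardless of role" means, not the paper's secondary-memory layout:

```python
def build_atom_index(triples):
    """One index over all atoms, each mapped to the triples containing it
    together with the role (s, p, or o) it plays there."""
    index = {}
    for s, p, o in triples:
        for role, atom in (("s", s), ("p", p), ("o", o)):
            index.setdefault(atom, []).append((role, (s, p, o)))
    return index

# Every occurrence of an atom, whatever its role:
#   build_atom_index(data)["alice"]
```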
0811.1301 | Amit Bhosle | Amit M. Bhosle and Teofilo F. Gonzalez | Distributed Algorithms for Computing Alternate Paths Avoiding Failed
Nodes and Links | 8 pages, 2 columns, 1 figure | null | null | null | cs.DC cs.DS cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent study characterizing failures in computer networks shows that
transient single element (node/link) failures are the dominant failures in
large communication networks like the Internet. Thus, having the routing paths
globally recomputed on a failure does not pay off since the failed element
recovers fairly quickly, and the recomputed routing paths need to be discarded.
In this paper, we present the first distributed algorithm that computes the
alternate paths required by some "proactive recovery schemes" for handling
transient failures. Our algorithm computes paths that avoid a failed node, and
provides an alternate path to a particular destination from an upstream
neighbor of the failed node. With minor modifications, we can have the
algorithm compute alternate paths that avoid a failed link as well. To the best
of our knowledge, all previous algorithms proposed for computing alternate
paths are centralized and need complete information about the network graph as
input.
| [
{
"version": "v1",
"created": "Sun, 9 Nov 2008 03:34:39 GMT"
}
] | 2008-11-11T00:00:00 | [
[
"Bhosle",
"Amit M.",
""
],
[
"Gonzalez",
"Teofilo F.",
""
]
] |
0811.1304 | Phuong Ha | Phuong Hoai Ha, Philippas Tsigas and Otto J. Anshus | NB-FEB: An Easy-to-Use and Scalable Universal Synchronization Primitive
for Parallel Programming | null | null | null | CS:2008-69 | cs.DC cs.AR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of universal synchronization primitives that
can support scalable thread synchronization for large-scale many-core
architectures. The universal synchronization primitives that have been deployed
widely in conventional architectures like CAS and LL/SC are expected to reach
their scalability limits in the evolution to many-core architectures with
thousands of cores. We introduce a non-blocking full/empty bit primitive, or
NB-FEB for short, as a promising synchronization primitive for parallel
programming on many-core architectures. We show that the NB-FEB primitive is
universal, scalable, feasible and convenient to use. NB-FEB, together with
registers, can solve the consensus problem for an arbitrary number of processes
(universality). NB-FEB is combinable, namely its memory requests to the same
memory location can be combined into only one memory request, which
consequently mitigates performance degradation due to synchronization "hot
spots" (scalability). Since NB-FEB is a variant of the original full/empty bit
that always returns a value instead of waiting for a conditional flag, it is as
feasible as the original full/empty bit, which has been implemented in many
computer systems (feasibility). The original full/empty bit is well-known as a
special-purpose primitive for fast producer-consumer synchronization and has
been used extensively in specific application domains. In this paper, we
show that NB-FEB can be deployed easily as a general-purpose primitive. Using
NB-FEB, we construct a non-blocking software transactional memory system called
NBFEB-STM, which can be used to handle concurrent threads conveniently.
NBFEB-STM is space efficient: the space complexity of each object updated by
$N$ concurrent threads/transactions is $\Theta(N)$, the optimal.
| [
{
"version": "v1",
"created": "Sun, 9 Nov 2008 00:41:07 GMT"
}
] | 2008-11-11T00:00:00 | [
[
"Ha",
"Phuong Hoai",
""
],
[
"Tsigas",
"Philippas",
""
],
[
"Anshus",
"Otto J.",
""
]
] |
0811.1305 | Ryan Williams | Ryan Williams | Applying Practice to Theory | 16 pages, 1 figure; ACM SIGACT News, December 2008 | null | null | null | cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can complexity theory and algorithms benefit from practical advances in
computing? We give a short overview of some prior work using practical
computing to attack problems in computational complexity and algorithms,
informally describe how linear program solvers may be used to help prove new
lower bounds for satisfiability, and suggest a research program for developing
new understanding in circuit complexity.
| [
{
"version": "v1",
"created": "Sun, 9 Nov 2008 00:49:41 GMT"
}
] | 2008-11-11T00:00:00 | [
[
"Williams",
"Ryan",
""
]
] |
0811.1335 | Mugurel Ionut Andreica | Mugurel Ionut Andreica | Algorithmic Techniques for Several Optimization Problems Regarding
Distributed Systems with Tree Topologies | The 16th International Conference on Applied and Industrial
Mathematics, Oradea, Romania, 9-11 October, 2008. ROMAI Journal, vol. 4,
2008. (ISSN: 1841-5512). In Press | ROMAI Journal, vol. 4, no. 1, pp. 1-25, 2008 (ISSN: 1841-5512) ;
http://www.romai.ro | null | null | cs.DS cs.DM cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the development of distributed systems progresses, more and more
challenges arise and the need for developing optimized systems and for
optimizing existing systems from multiple perspectives becomes more stringent.
In this paper I present novel algorithmic techniques for solving several
optimization problems regarding distributed systems with tree topologies. I
address topics like: reliability improvement, partitioning, coloring, content
delivery, optimal matchings, as well as some tree counting aspects. Some of the
presented techniques are only of theoretical interest, while others can be used
in practical settings.
| [
{
"version": "v1",
"created": "Sun, 9 Nov 2008 12:59:45 GMT"
}
] | 2009-03-21T00:00:00 | [
[
"Andreica",
"Mugurel Ionut",
""
]
] |
0811.1875 | Daniel Raible | Henning Fernau, Serge Gaspers, Daniel Raible | Exact Exponential Time Algorithms for Max Internal Spanning Tree | null | null | null | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the NP-hard problem of finding a spanning tree with a maximum
number of internal vertices. This problem is a generalization of the famous
Hamiltonian Path problem. Our dynamic-programming algorithms for general and
degree-bounded graphs have running times of the form O*(c^n) (c <= 3). The main
result, however, is a branching algorithm for graphs with maximum degree three.
It only needs polynomial space and has a running time of O*(1.8669^n) when
analyzed with respect to the number of vertices. We also show that its running
time is 2.1364^k n^O(1) when the goal is to find a spanning tree with at least
k internal vertices. Both running time bounds are obtained via a Measure &
Conquer analysis, the latter being a novel use of this kind of analysis for
parameterized algorithms.
| [
{
"version": "v1",
"created": "Wed, 12 Nov 2008 12:09:08 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Jun 2009 07:32:23 GMT"
},
{
"version": "v3",
"created": "Fri, 12 Jun 2009 06:57:33 GMT"
}
] | 2009-06-12T00:00:00 | [
[
"Fernau",
"Henning",
""
],
[
"Gaspers",
"Serge",
""
],
[
"Raible",
"Daniel",
""
]
] |
0811.2457 | Ashish Goel | Ashish Goel, Michael Kapralov, Sanjeev Khanna | Perfect Matchings via Uniform Sampling in Regular Bipartite Graphs | null | null | null | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we further investigate the well-studied problem of finding a
perfect matching in a regular bipartite graph. The first non-trivial algorithm,
with running time $O(mn)$, dates back to K\"{o}nig's work in 1916 (here $m=nd$
is the number of edges in the graph, $2n$ is the number of vertices, and $d$ is
the degree of each node). The currently most efficient algorithm takes time
$O(m)$, and is due to Cole, Ost, and Schirra. We improve this running time to
$O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$; this minimum can never be larger than
$O(n^{1.75}\sqrt{\ln n})$. We obtain this improvement by proving a uniform
sampling theorem: if we sample each edge in a $d$-regular bipartite graph
independently with a probability $p = O(\frac{n\ln n}{d^2})$ then the resulting
graph has a perfect matching with high probability. The proof involves a
decomposition of the graph into pieces which are guaranteed to have many
perfect matchings but do not have any small cuts. We then establish a
correspondence between potential witnesses to non-existence of a matching
(after sampling) in any piece and cuts of comparable size in that same piece.
Karger's sampling theorem for preserving cuts in a graph can now be adapted to
prove our uniform sampling theorem for preserving perfect matchings. Using the
$O(m\sqrt{n})$ algorithm (due to Hopcroft and Karp) for finding maximum
matchings in bipartite graphs on the sampled graph then yields the stated
running time. We also provide an infinite family of instances to show that our
uniform sampling result is tight up to poly-logarithmic factors (in fact, up to
$\ln^2 n$).
| [
{
"version": "v1",
"created": "Sat, 15 Nov 2008 05:49:17 GMT"
}
] | 2008-11-18T00:00:00 | [
[
"Goel",
"Ashish",
""
],
[
"Kapralov",
"Michael",
""
],
[
"Khanna",
"Sanjeev",
""
]
] |
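The algorithmic use of the sampling theorem is direct to sketch; the constant in the sampling probability below is our placeholder, not the paper's:

```python
import math
import random
import networkx as nx

def sampled_perfect_matching(G, left, d, c=4.0, seed=0):
    """Keep each edge of the d-regular bipartite graph G independently with
    probability p ~ c*n*ln(n)/d^2 (c is a guessed constant), then run
    Hopcroft-Karp on the much sparser sample; by the uniform sampling
    theorem the sample still has a perfect matching whp."""
    rng = random.Random(seed)
    n = len(left)  # the graph has 2n vertices, n on each side
    p = min(1.0, c * n * math.log(n) / d ** 2)
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(e for e in G.edges if rng.random() < p)
    return nx.bipartite.hopcroft_karp_matching(H, top_nodes=left)
```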
0811.2497 | Haris Aziz | Haris Aziz and Mike Paterson | Computing voting power in easy weighted voting games | 12 pages, Presented at the International Symposium on Combinatorial
Optimization 2008 | null | null | null | cs.GT cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted voting games are ubiquitous mathematical models which are used in
economics, political science, neuroscience, threshold logic, reliability theory
and distributed systems. They model situations where agents with variable
voting weight vote in favour of or against a decision. A coalition of agents is
winning if and only if the sum of weights of the coalition exceeds or equals a
specified quota. The Banzhaf index is a measure of voting power of an agent in
a weighted voting game. It depends on the number of coalitions in which the
agent makes the difference between the coalition winning and losing. It is well known
that computing Banzhaf indices in a weighted voting game is NP-hard. We give a
comprehensive classification of weighted voting games which can be solved in
polynomial time. Among other results, we provide a polynomial
($O(k{(\frac{n}{k})}^k)$) algorithm to compute the Banzhaf indices in weighted
voting games in which the number of weight values is bounded by $k$.
| [
{
"version": "v1",
"created": "Sat, 15 Nov 2008 14:55:51 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Feb 2010 22:27:38 GMT"
}
] | 2010-02-02T00:00:00 | [
[
"Aziz",
"Haris",
""
],
[
"Paterson",
"Mike",
""
]
] |
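For orientation, the quantity in question can be computed by the standard pseudo-polynomial dynamic program over coalition weights; this is the textbook method, not the paper's $O(k{(\frac{n}{k})}^k)$ algorithm for few distinct weight values:

```python
def banzhaf_indices(weights, quota):
    """Raw Banzhaf index of each voter: the number of coalitions of the
    other voters whose weight w satisfies quota - w_i <= w < quota, i.e.
    coalitions that win with voter i and lose without."""
    total = sum(weights)
    out = []
    for i, wi in enumerate(weights):
        # counts[w] = number of coalitions of the other voters with weight w
        counts = [0] * (total + 1)
        counts[0] = 1
        for j, wj in enumerate(weights):
            if j != i:
                for w in range(total - wj, -1, -1):
                    counts[w + wj] += counts[w]
        out.append(sum(counts[max(0, quota - wi):quota]))
    return out

# banzhaf_indices([4, 2, 1], quota=4) -> [4, 0, 0]: voter 0 is a dictator.
```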
0811.2546 | Andrei Bulatov | Andrei A. Bulatov, Evgeny S. Skvortsov | Phase transition for Local Search on planted SAT | 20 pages, 3 figures, submitted to a conference | null | null | null | cs.DS cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Local Search algorithm (or Hill Climbing, or Iterative Improvement) is
one of the simplest heuristics to solve the Satisfiability and
Max-Satisfiability problems. It is a part of many satisfiability and
max-satisfiability solvers, where it is used to find a good starting point for
a more sophisticated heuristics, and to improve a candidate solution. In this
paper we give an analysis of Local Search on random planted 3-CNF formulas. We
show that if there is k<7/6 such that the clause-to-variable ratio is less than
k ln(n) (n is the number of variables in a CNF) then Local Search whp does not
find a satisfying assignment, and if there is k>7/6 such that the
clause-to-variable ratio is greater than k ln(n), then Local Search whp
finds a satisfying assignment. As a byproduct we also show that for any
constant r there is g such that Local Search applied to a random (not
necessarily planted) 3-CNF with clause-to-variable ratio r produces an
assignment that satisfies at least gn fewer clauses than the maximum number of
satisfiable clauses.
| [
{
"version": "v1",
"created": "Sun, 16 Nov 2008 01:41:15 GMT"
}
] | 2008-11-18T00:00:00 | [
[
"Bulatov",
"Andrei A.",
""
],
[
"Skvortsov",
"Evgeny S.",
""
]
] |
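The algorithm under analysis is plain hill climbing on the number of satisfied clauses; a minimal sketch (DIMACS-style literals and first-improvement flips are our encoding choices):

```python
import random

def local_search(clauses, num_vars, seed=0):
    """Hill climbing on the number of unsatisfied clauses: start from a
    uniform random assignment and flip any variable that strictly helps,
    until no single flip does. Literals are DIMACS-style non-zero ints."""
    rng = random.Random(seed)
    assign = [None] + [rng.random() < 0.5 for _ in range(num_vars)]

    def unsat():
        return sum(not any((lit > 0) == assign[abs(lit)] for lit in c)
                   for c in clauses)

    current, improved = unsat(), True
    while improved:
        improved = False
        for v in range(1, num_vars + 1):
            assign[v] = not assign[v]
            new = unsat()
            if new < current:
                current, improved = new, True
            else:
                assign[v] = not assign[v]  # revert a non-improving flip
    return assign[1:], current
```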
0811.2572 | Gwena\"el Joret | Jean Cardinal, Samuel Fiorini, Gwena\"el Joret, Rapha\"el M. Jungers,
J. Ian Munro | An Efficient Algorithm for Partial Order Production | Referees' comments incorporated | SIAM J. Comput. Volume 39, Issue 7, pp. 2927-2940 (2010) | 10.1137/090759860 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of partial order production: arrange the elements of
an unknown totally ordered set T into a target partially ordered set S, by
comparing a minimum number of pairs in T. Special cases include sorting by
comparisons, selection, multiple selection, and heap construction.
We give an algorithm performing ITLB + o(ITLB) + O(n) comparisons in the
worst case. Here, n denotes the size of the ground set, and ITLB denotes a
natural information-theoretic lower bound on the number of comparisons needed
to produce the target partial order.
Our approach is to replace the target partial order by a weak order (that is,
a partial order with a layered structure) extending it, without increasing the
information-theoretic lower bound too much. We then solve the problem by
applying an efficient multiple selection algorithm. The overall complexity of
our algorithm is polynomial. This answers a question of Yao (SIAM J. Comput.
18, 1989).
We base our analysis on the entropy of the target partial order, a quantity
that can be efficiently computed and provides a good estimate of the
information-theoretic lower bound.
| [
{
"version": "v1",
"created": "Mon, 17 Nov 2008 16:23:45 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Dec 2009 14:29:48 GMT"
}
] | 2010-05-06T00:00:00 | [
[
"Cardinal",
"Jean",
""
],
[
"Fiorini",
"Samuel",
""
],
[
"Joret",
"Gwenaël",
""
],
[
"Jungers",
"Raphaël M.",
""
],
[
"Munro",
"J. Ian",
""
]
] |
0811.2853 | Mohsen Bayati | Mohsen Bayati, Andrea Montanari and Amin Saberi | Generating Random Networks Without Short Cycles | 36 pages, 1 figure, accepted to Operations Research | null | null | null | cs.DS cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random graph generation is an important tool for studying large complex
networks. Despite an abundance of random graph models, constructing models with
application-driven constraints is poorly understood. In order to advance the
state of the art in this area, we focus on random graphs without short cycles
as a stylized family of graphs, and propose the RandGraph algorithm for
randomly generating them. For any constant k, when m=O(n^{1+1/[2k(k+3)]}),
RandGraph generates an asymptotically uniform random graph with n vertices, m
edges, and no cycle of length at most k using O(n^2m) operations. We also
characterize the approximation error for finite values of n. To the best of our
knowledge, this is the first polynomial-time algorithm for the problem.
RandGraph works by sequentially adding $m$ edges to an empty graph with $n$
vertices. Recently, such sequential algorithms have been successful for random
sampling problems. Our main contributions to this line of research include
introducing a new approach for sequentially approximating edge-specific
probabilities at each step of the algorithm, and providing a new method for
analyzing such algorithms.
| [
{
"version": "v1",
"created": "Tue, 18 Nov 2008 08:05:26 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Dec 2017 17:52:07 GMT"
}
] | 2018-01-01T00:00:00 | [
[
"Bayati",
"Mohsen",
""
],
[
"Montanari",
"Andrea",
""
],
[
"Saberi",
"Amin",
""
]
] |
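The sequential framework is easy to illustrate with naive rejection sampling; RandGraph's actual contribution is adjusting the per-edge probabilities so the output is nearly uniform, which this sketch does not attempt:

```python
import random
import networkx as nx

def random_graph_no_short_cycles(n, m, k, seed=0):
    """Sequentially add m uniformly random edges, rejecting any edge that
    would close a cycle of length <= k; assumes m is small enough that
    addable edges keep existing. Edge (u, v) closes a cycle of length
    <= k exactly when dist(u, v) <= k - 1 in the current graph."""
    rng = random.Random(seed)
    G = nx.empty_graph(n)
    while G.number_of_edges() < m:
        u, v = rng.sample(range(n), 2)
        if G.has_edge(u, v):
            continue
        try:
            if nx.shortest_path_length(G, u, v) <= k - 1:
                continue
        except nx.NetworkXNoPath:
            pass
        G.add_edge(u, v)
    return G
```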
0811.2904 | Srinivasa Rao Satti | Rasmus Pagh and S. Srinivasa Rao | Secondary Indexing in One Dimension: Beyond B-trees and Bitmap Indexes | 16 pages | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Let S be a finite, ordered alphabet, and let x = x_1 x_2 ... x_n be a string
over S. A "secondary index" for x answers alphabet range queries of the form:
Given a range [a_l; a_r] over S, return the set I_{[a_l;a_r]} = {i | x_i \in
[a_l; a_r]}. Secondary indexes are heavily used in relational databases and
scientific data analysis. It is well-known that the obvious solution, storing a
dictionary for the position set associated with each character, does not always
give optimal query time. In this paper we give the first theoretically optimal
data structure for the secondary indexing problem. In the I/O model, the amount
of data read when answering a query is within a constant factor of the minimum
space needed to represent I_{[a_l;a_r]}, assuming that the size of internal
memory is (|S| log n)^{delta} blocks, for some constant delta > 0. The space
usage of the data structure is O(n log |S|) bits in the worst case, and we
further show how to bound the size of the data structure in terms of the 0-th
order entropy of x. We show how to support updates achieving various time-space
trade-offs.
We also consider an approximate version of the basic secondary indexing
problem where a query reports a superset of I_{[a_l;a_r]} containing each
element not in I_{[a_l;a_r]} with probability at most epsilon, where epsilon >
0 is the false positive probability. For this problem the amount of data that
needs to be read by the query algorithm is reduced to O(|I_{[a_l;a_r]}|
log(1/epsilon)) bits.
| [
{
"version": "v1",
"created": "Tue, 18 Nov 2008 13:31:05 GMT"
}
] | 2008-11-19T00:00:00 | [
[
"Pagh",
"Rasmus",
""
],
[
"Rao",
"S. Srinivasa",
""
]
] |
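The "obvious solution" the abstract refers to, one sorted position list per character, is worth seeing since the paper's contribution is precisely to beat it; a minimal in-memory sketch:

```python
from bisect import bisect_left, bisect_right
from collections import defaultdict

def build_index(x):
    """One sorted position list per character of the string x."""
    pos = defaultdict(list)
    for i, c in enumerate(x):
        pos[c].append(i)  # appended in increasing order of i
    return pos

def range_query(pos, alphabet, a_l, a_r):
    """Report {i : x_i in [a_l; a_r]}; alphabet is the sorted character set."""
    out = []
    for c in alphabet[bisect_left(alphabet, a_l):bisect_right(alphabet, a_r)]:
        out.extend(pos[c])
    return out

# idx = build_index("abracadabra"); range_query(idx, sorted(idx), "a", "b")
```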
0811.3055 | Ke Xu | Liang Li and Tian Liu and Ke Xu | Exact phase transition of backtrack-free search with implications on the
power of greedy algorithms | null | null | null | null | cs.AI cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backtracking is a basic strategy to solve constraint satisfaction problems
(CSPs). A satisfiable CSP instance is backtrack-free if a solution can be found
without encountering any dead-end during a backtracking search, implying that
the instance is easy to solve. We prove an exact phase transition of
backtrack-free search in some random CSPs, namely in Model RB and in Model RD.
This is the first time an exact phase transition of backtrack-free search has
been identified for some random CSPs. Our technical results also have interesting
implications on the power of greedy algorithms, on the width of random
hypergraphs and on the exact satisfiability threshold of random CSPs.
| [
{
"version": "v1",
"created": "Wed, 19 Nov 2008 06:33:39 GMT"
}
] | 2008-11-20T00:00:00 | [
[
"Li",
"Liang",
""
],
[
"Liu",
"Tian",
""
],
[
"Xu",
"Ke",
""
]
] |
0811.3062 | Qin Zhang | Zhewei Wei, Ke Yi, Qin Zhang | Dynamic External Hashing: The Limit of Buffering | 10 pages, 1 figure | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hash tables are one of the most fundamental data structures in computer
science, in both theory and practice. They are especially useful in external
memory, where their query performance approaches the ideal cost of just one
disk access. Knuth gave an elegant analysis showing that with some simple
collision resolution strategies such as linear probing or chaining, the
expected average number of disk I/Os of a lookup is merely $1+1/2^{\Omega(b)}$,
where each I/O can read a disk block containing $b$ items. Inserting a new item
into the hash table also costs $1+1/2^{\Omega(b)}$ I/Os, which is again almost
the best one can do if the hash table is entirely stored on disk. However, this
assumption is unrealistic since any algorithm operating on an external hash
table must have some internal memory (at least $\Omega(1)$ blocks) to work
with. The availability of a small internal memory buffer can dramatically
reduce the amortized insertion cost to $o(1)$ I/Os for many external memory
data structures. In this paper we study the inherent query-insertion tradeoff
of external hash tables in the presence of a memory buffer. In particular, we
show that for any constant $c>1$, if the query cost is targeted at
$1+O(1/b^{c})$ I/Os, then it is not possible to support insertions in less than
$1-O(1/b^{\frac{c-1}{4}})$ I/Os amortized, which means that the memory buffer
is essentially useless. In contrast, if the query cost is relaxed to
$1+O(1/b^{c})$ I/Os for any constant $c<1$, there is a simple dynamic hash
table with $o(1)$ insertion cost. These results also answer the open question
recently posed by
Jensen and Pagh.
| [
{
"version": "v1",
"created": "Wed, 19 Nov 2008 08:11:14 GMT"
}
] | 2008-11-20T00:00:00 | [
[
"Wei",
"Zhewei",
""
],
[
"Yi",
"Ke",
""
],
[
"Zhang",
"Qin",
""
]
] |
0811.3244 | Warren Schudy | Marek Karpinski, Warren Schudy | Linear Time Approximation Schemes for the Gale-Berlekamp Game and
Related Minimization Problems | 18 pages LaTeX, 2 figures | null | null | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We design a linear time approximation scheme for the Gale-Berlekamp Switching
Game and generalize it to a wider class of dense fragile minimization problems
including the Nearest Codeword Problem (NCP) and Unique Games Problem. Further
applications include, among other things, finding a constrained form of matrix
rigidity and maximum likelihood decoding of an error correcting code. As
another application of our method we give the first linear time approximation
schemes for correlation clustering with a fixed number of clusters and its
hierarchical generalization. Our results depend on a new technique for dealing
with small objective function values of optimization problems and could be of
independent interest.
| [
{
"version": "v1",
"created": "Thu, 20 Nov 2008 01:07:49 GMT"
}
] | 2008-11-21T00:00:00 | [
[
"Karpinski",
"Marek",
""
],
[
"Schudy",
"Warren",
""
]
] |
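For readers unfamiliar with the Gale-Berlekamp Switching Game (given an n x n 0/1 matrix of lit bulbs with one switch per row and per column, minimize the number of lit bulbs), a brute-force exact solver makes the objective concrete; it is exponential in n, whereas the paper's point is a linear-time approximation scheme:

```python
from itertools import product

def gale_berlekamp_opt(M):
    """Exact minimum number of lit bulbs for a 0/1 matrix M: enumerate all
    2^n row-switch settings; for fixed rows, each column switch is chosen
    independently to its better of the two positions."""
    n = len(M)
    best = n * n
    for rows in product((0, 1), repeat=n):
        lit = 0
        for j in range(n):
            col_lit = sum(M[i][j] ^ rows[i] for i in range(n))
            lit += min(col_lit, n - col_lit)  # flip column j iff it helps
        best = min(best, lit)
    return best

# gale_berlekamp_opt([[1, 0], [0, 1]]) -> 0
```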
0811.3247 | Marino Pagan | Bruno Codenotti, Stefano De Rossi, Marino Pagan | An experimental analysis of Lemke-Howson algorithm | 15 pages, 18 figures. The source code of our implementation can be
found at http://allievi.sssup.it/game/index.html | null | null | null | cs.DS cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an experimental investigation of the performance of the
Lemke-Howson algorithm, which is the most widely used algorithm for the
computation of a Nash equilibrium for bimatrix games. Lemke-Howson algorithm is
based upon a simple pivoting strategy, which corresponds to following a path
whose endpoint is a Nash equilibrium. We analyze both the basic Lemke-Howson
algorithm and a heuristic modification of it, which we designed to cope with
the effects of a 'bad' initial choice of the pivot. Our experimental findings
show that, on uniformly random games, the heuristics achieves a linear running
time, while the basic Lemke-Howson algorithm runs in time roughly proportional
to a polynomial of degree seven. To conduct the experiments, we have developed
our own implementation of Lemke-Howson algorithm, which turns out to be
significantly faster than state-of-the-art software. This allowed us to run the
algorithm on a much larger set of data, and on instances of much larger size,
compared with previous work.
| [
{
"version": "v1",
"created": "Thu, 20 Nov 2008 00:32:16 GMT"
}
] | 2008-11-21T00:00:00 | [
[
"Codenotti",
"Bruno",
""
],
[
"De Rossi",
"Stefano",
""
],
[
"Pagan",
"Marino",
""
]
] |
0811.3448 | William Gilreath | William F. Gilreath | Binar Sort: A Linear Generalized Sorting Algorithm | PDF from Word, 25-pages, 2-figures, 4-diagrams, version 2.0 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sorting is a common and ubiquitous activity for computers. It is not
surprising that there exists a plethora of sorting algorithms. It is an
accepted performance limit that comparison-based sorting algorithms are
linearithmic, i.e. O(N lg N). The linearithmic lower bound in performance stems
from the fact that such sorting algorithms use the ordering property of the
data: they use comparisons under the ordering property to arrange the data
elements from an initial permutation into a sorted permutation.
Linear O(N) sorting algorithms exist, but they use a priori knowledge to
exploit a specific property of the data and thus achieve greater performance.
In contrast, the linearithmic sorting algorithms are general because they use a
universal property of data, comparison, but they have a linearithmic
performance lower bound. The trade-off in sorting algorithms is thus generality
against performance, determined by the property chosen to sort the data
elements.
Given this trade-off of performance against generality, a general-purpose
linear sorting algorithm at first seems implausible. However, this rests on the
implicit assumption that ordering is the only universal property of data. As
will be discussed and examined, it is not. The binar sort is a general-purpose
sorting algorithm that uses this other universal property to sort linearly.
| [
{
"version": "v1",
"created": "Fri, 21 Nov 2008 01:38:09 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2011 04:19:05 GMT"
}
] | 2011-05-18T00:00:00 | [
[
"Gilreath",
"William F.",
""
]
] |
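The abstract does not name its "other universal property", but the title's "binar" suggests the binary encoding of the elements; under that assumption the idea resembles MSD binary radix sort, sketched below. This is our reading, not necessarily the paper's exact algorithm:

```python
def binar_sort(a, bits=32):
    """In-place MSD binary radix sort of non-negative integers: partition
    on the current bit, recurse on both halves with the next lower bit.
    Runs in O(N * bits), i.e. linear in N for a fixed word size."""
    def rec(lo, hi, bit):
        if hi - lo <= 1 or bit < 0:
            return
        i, j = lo, hi - 1
        while i <= j:
            if (a[i] >> bit) & 1 == 0:
                i += 1
            else:
                a[i], a[j] = a[j], a[i]
                j -= 1
        rec(lo, i, bit - 1)  # zero-bit half
        rec(i, hi, bit - 1)  # one-bit half
    rec(0, len(a), bits - 1)
    return a
```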
0811.3449 | William Gilreath | William F. Gilreath | Binar Shuffle Algorithm: Shuffling Bit by Bit | 27 pages, watermarked | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Randomly organized data is frequently needed to avoid anomalous operation
of other algorithms and computational processes. An analogy is that a deck of
cards is ordered within the pack, but before a game of poker or solitaire the
deck is shuffled to create a random permutation. Shuffling is used to assure
that an aggregate of data elements for a sequence S is randomly arranged, but
avoids an ordered or partially ordered permutation.
Shuffling is the process of arranging data elements into a random
permutation. For a sequence S of N data elements, there are N! possible
permutations, and two of them correspond to a sorted placement of the data
elements: the ascending and the descending sorted permutations. Shuffling must
avoid inadvertently creating either of them.
Shuffling is frequently coupled to another algorithmic function --
pseudo-random number generation. The efficiency and quality of the shuffle are
directly dependent upon the random number generation algorithm utilized. A more
effective and efficient method of shuffling is to use parameterization to
configure the shuffle, and to shuffle into sub-arrays by utilizing the encoding
of the data elements. The binar shuffle algorithm uses the encoding of the data
elements and parameterization to avoid any direct coupling to a random number
generation algorithm, but still remain a linear O(N) shuffle algorithm.
| [
{
"version": "v1",
"created": "Fri, 21 Nov 2008 01:45:50 GMT"
}
] | 2008-11-24T00:00:00 | [
[
"Gilreath",
"William F.",
""
]
] |
0811.3490 | Philip Bille | Philip Bille | Faster Approximate String Matching for Short Patterns | To appear in Theory of Computing Systems | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the classical approximate string matching problem, that is, given
strings $P$ and $Q$ and an error threshold $k$, find all ending positions of
substrings of $Q$ whose edit distance to $P$ is at most $k$. Let $P$ and $Q$
have lengths $m$ and $n$, respectively. On a standard unit-cost word RAM with
word size $w \geq \log n$, we present an algorithm using time $$ O(nk \cdot
\min(\frac{\log^2 m}{\log n},\frac{\log^2 m\log w}{w}) + n). $$ When $P$ is
short, namely $m = 2^{o(\sqrt{\log n})}$ or $m = 2^{o(\sqrt{w/\log w})}$, this
improves the previously best known time bounds for the problem. The result is
achieved using a novel implementation of the Landau-Vishkin algorithm based on
tabulation and word-level parallelism.
| [
{
"version": "v1",
"created": "Fri, 21 Nov 2008 08:52:59 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2011 21:11:16 GMT"
}
] | 2011-03-21T00:00:00 | [
[
"Bille",
"Philip",
""
]
] |
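For reference, the baseline implied by the problem statement is the classical O(mn) dynamic program (Sellers' algorithm); the paper's algorithm and the Landau-Vishkin O(nk) method it builds on both improve on this:

```python
def approx_matches(P, Q, k):
    """All positions j such that some substring of Q ending at j has edit
    distance at most k to P. Classical O(|P|*|Q|) dynamic program with
    D[0][j] = 0 (a match may start anywhere in Q)."""
    m = len(P)
    prev = list(range(m + 1))      # column for the empty prefix of Q
    ends = [0] if m <= k else []   # the empty substring at position 0
    for j, qc in enumerate(Q, 1):
        cur = [0]
        for i, pc in enumerate(P, 1):
            cur.append(min(prev[i] + 1,                # skip qc
                           cur[i - 1] + 1,             # skip pc
                           prev[i - 1] + (pc != qc)))  # substitute / match
        if cur[m] <= k:
            ends.append(j)
        prev = cur
    return ends
```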
0811.3602 | Yakov Nekrich | Travis Gagie, Marek Karpinski, Yakov Nekrich | Low-Memory Adaptive Prefix Coding | 10 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study the adaptive prefix coding problem in cases where the
size of the input alphabet is large. We present an online prefix coding
algorithm that uses $O(\sigma^{1 / \lambda + \epsilon})$ bits of space for any
constants $\epsilon>0$, $\lambda>1$, and encodes the string of symbols in $O(\log
\log \sigma)$ time per symbol \emph{in the worst case}, where $\sigma$ is the
size of the alphabet. The upper bound on the encoding length is $\lambda n H
(s) +(\lambda \ln 2 + 2 + \epsilon) n + O (\sigma^{1 / \lambda} \log^2 \sigma)$
bits.
| [
{
"version": "v1",
"created": "Fri, 21 Nov 2008 18:23:00 GMT"
}
] | 2008-11-24T00:00:00 | [
[
"Gagie",
"Travis",
""
],
[
"Karpinski",
"Marek",
""
],
[
"Nekrich",
"Yakov",
""
]
] |
0811.3648 | Jelani Nelson | Daniel M. Kane, Jelani Nelson, David P. Woodruff | Revisiting Norm Estimation in Data Streams | added content; modified L_0 algorithm -- ParityLogEstimator in
version 1 contained an error, and the new algorithm uses slightly more space | null | null | null | cs.DS cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of estimating the pth moment F_p (p nonnegative and real) in data
streams is as follows. There is a vector x which starts at 0, and many updates
of the form x_i <-- x_i + v come sequentially in a stream. The algorithm also
receives an error parameter 0 < eps < 1. The goal is then to output an
approximation with relative error at most eps to F_p = ||x||_p^p.
Previously, it was known that polylogarithmic space (in the vector length n)
was achievable if and only if p <= 2. We make several new contributions in this
regime, including:
(*) An optimal space algorithm for 0 < p < 2, which, unlike previous
algorithms which had optimal dependence on 1/eps but sub-optimal dependence on
n, does not rely on a generic pseudorandom generator.
(*) A near-optimal space algorithm for p = 0 with optimal update and query
time.
(*) A near-optimal space algorithm for the "distinct elements" problem (p = 0
and all updates have v = 1) with optimal update and query time.
(*) Improved L_2 --> L_2 dimensionality reduction in a stream.
(*) New 1-pass lower bounds to show optimality and near-optimality of our
algorithms, as well as of some previous algorithms (the "AMS sketch" for p = 2,
and the L_1-difference algorithm of Feigenbaum et al.).
As corollaries of our work, we also obtain a few separations in the
complexity of moment estimation problems: F_0 in 1 pass vs. 2 passes, p = 0 vs.
p > 0, and F_0 with strictly positive updates vs. arbitrary updates.
| [
{
"version": "v1",
"created": "Fri, 21 Nov 2008 22:55:07 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Apr 2009 02:45:30 GMT"
}
] | 2009-04-09T00:00:00 | [
[
"Kane",
"Daniel M.",
""
],
[
"Nelson",
"Jelani",
""
],
[
"Woodruff",
"David P.",
""
]
] |
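Since the abstract name-checks the AMS sketch for p = 2, a toy version conveys the flavor of these streaming estimators (full independence is used for simplicity where 4-wise independent hashing suffices):

```python
import random

class AMSSketch:
    """Toy AMS sketch for F_2 = ||x||_2^2 under updates x_i <- x_i + v.
    Keeps r counters z_t = sum_i s_t(i) * x_i with random signs s_t;
    each z_t^2 is an unbiased estimator of F_2."""

    def __init__(self, r=64, seed=0):
        self.rng = random.Random(seed)
        self.signs = [dict() for _ in range(r)]  # lazily drawn sign functions
        self.z = [0.0] * r

    def _sign(self, t, i):
        if i not in self.signs[t]:
            self.signs[t][i] = self.rng.choice((-1, 1))
        return self.signs[t][i]

    def update(self, i, v):
        for t in range(len(self.z)):
            self.z[t] += self._sign(t, i) * v

    def estimate(self):
        # Median of means: average groups of 8 squares, take the median.
        sq = [z * z for z in self.z]
        means = sorted(sum(sq[i:i + 8]) / len(sq[i:i + 8])
                       for i in range(0, len(sq), 8))
        return means[len(means) // 2]
```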
0811.3723 | Mingyu Xiao | Mingyu Xiao, Leizhen Cai and Andrew C. Yao | Tight Approximation Ratio of a General Greedy Splitting Algorithm for
the Minimum k-Way Cut Problem | 12 pages | null | null | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For an edge-weighted connected undirected graph, the minimum $k$-way cut
problem is to find a subset of edges of minimum total weight whose removal
separates the graph into $k$ connected components. The problem is NP-hard when
$k$ is part of the input and W[1]-hard when $k$ is taken as a parameter.
A simple algorithm for approximating a minimum $k$-way cut is to iteratively
increase the number of components of the graph by $h-1$, where $2 \le h \le k$,
until the graph has $k$ components. The approximation ratio of this algorithm
is known for $h \le 3$ but is open for $h \ge 4$.
In this paper, we consider a general algorithm that iteratively increases the
number of components of the graph by $h_i-1$, where $h_1 \le h_2 \le ... \le
h_q$ and $\sum_{i=1}^q (h_i-1) = k-1$. We prove that the approximation ratio of
this general algorithm is $2 - (\sum_{i=1}^q {h_i \choose 2})/{k \choose 2}$,
which is tight. Our result implies that the approximation ratio of the simple
algorithm is $2-h/k + O(h^2/k^2)$ in general and $2-h/k$ if $k-1$ is a multiple
of $h-1$.
| [
{
"version": "v1",
"created": "Sun, 23 Nov 2008 03:47:50 GMT"
}
] | 2008-11-25T00:00:00 | [
[
"Xiao",
"Mingyu",
""
],
[
"Cai",
"Leizhen",
""
],
[
"Yao",
"Andrew C.",
""
]
] |
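The simple greedy splitting scheme described in the abstract, specialized to h = 2, can be sketched with NetworkX's Stoer-Wagner minimum cut (our rendering, not the authors' code):

```python
import networkx as nx

def greedy_k_way_cut(G, k):
    """Greedy splitting, the h = 2 case: repeatedly remove the cheapest
    minimum cut among the current components until there are k components;
    returns the resulting vertex sets. Stoer-Wagner finds each min cut."""
    H = G.copy()
    while nx.number_connected_components(H) < k:
        best = None
        for comp in nx.connected_components(H):
            if len(comp) < 2:
                continue  # a singleton component cannot be split
            w, (side, _) = nx.stoer_wagner(H.subgraph(comp))
            if best is None or w < best[0]:
                best = (w, set(side), comp)
        _, side, comp = best
        H.remove_edges_from([(u, v) for u, v in H.subgraph(comp).edges
                             if (u in side) != (v in side)])
    return list(nx.connected_components(H))
```

For h = 2 this greedy is the classical (2 - 2/k)-approximation of Saran and Vazirani, which matches the abstract's formula with q = k-1 and all h_i = 2.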
0811.3760 | Sebastien Tixeuil | St\'ephane Devismes, Toshimitsu Masuzawa, S\'ebastien Tixeuil (LIP6) | Communication Efficiency in Self-stabilizing Silent Protocols | null | null | null | RR-6731 | cs.DS cs.CC cs.DC cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-stabilization is a general paradigm to provide forward recovery
capabilities to distributed systems and networks. Intuitively, a protocol is
self-stabilizing if it is able to recover without external intervention from
any catastrophic transient failure. In this paper, our focus is to lower the
communication complexity of self-stabilizing protocols \emph{below} the need of
checking every neighbor forever. In more detail, the contribution of the paper
is threefold: (i) We provide new complexity measures for communication
efficiency of self-stabilizing protocols, especially in the stabilized phase or
when there are no faults, (ii) On the negative side, we show that for
non-trivial problems such as coloring, maximal matching, and maximal
independent set, it is impossible to get (deterministic or probabilistic)
self-stabilizing solutions where every participant communicates with less than
every neighbor in the stabilized phase, and (iii) On the positive side, we
present protocols for coloring, maximal matching, and maximal independent set
such that a fraction of the participants communicates with exactly one neighbor
in the stabilized phase.
| [
{
"version": "v1",
"created": "Sun, 23 Nov 2008 17:29:25 GMT"
}
] | 2008-11-25T00:00:00 | [
[
"Devismes",
"Stéphane",
"",
"LIP6"
],
[
"Masuzawa",
"Toshimitsu",
"",
"LIP6"
],
[
"Tixeuil",
"Sébastien",
"",
"LIP6"
]
] |
0811.3779 | Reid Andersen | Reid Andersen and Yuval Peres | Finding Sparse Cuts Locally Using Evolving Sets | 20 pages, no figures | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A {\em local graph partitioning algorithm} finds a set of vertices with small
conductance (i.e. a sparse cut) by adaptively exploring part of a large graph
$G$, starting from a specified vertex. For the algorithm to be local, its
complexity must be bounded in terms of the size of the set that it outputs,
with at most a weak dependence on the number $n$ of vertices in $G$. Previous
local partitioning algorithms find sparse cuts using random walks and
personalized PageRank. In this paper, we introduce a randomized local
partitioning algorithm that finds a sparse cut by simulating the {\em
volume-biased evolving set process}, which is a Markov chain on sets of
vertices. We prove that for any set of vertices $A$ that has conductance at
most $\phi$, for at least half of the starting vertices in $A$ our algorithm
will output (with probability at least half) a set of conductance
$O(\phi^{1/2} \log^{1/2} n)$. We prove that for a given run of the algorithm,
the expected ratio between its computational complexity and the volume of the
set that it outputs is $O(\phi^{-1/2} polylog(n))$. In comparison, the best
previous local partitioning algorithm, due to Andersen, Chung, and Lang, has
the same approximation guarantee, but a larger ratio of $O(\phi^{-1}
polylog(n))$ between the complexity and output volume. Using our local
partitioning algorithm as a subroutine, we construct a fast algorithm for
finding balanced cuts. Given a fixed value of $\phi$, the resulting algorithm
has complexity $O((m+n\phi^{-1/2}) polylog(n))$ and returns a cut with
conductance $O(\phi^{1/2} \log^{1/2} n)$ and volume at least $v_{\phi}/2$,
where $v_{\phi}$ is the largest volume of any set with conductance at most
$\phi$.
| [
{
"version": "v1",
"created": "Sun, 23 Nov 2008 22:39:38 GMT"
}
] | 2008-11-25T00:00:00 | [
[
"Andersen",
"Reid",
""
],
[
"Peres",
"Yuval",
""
]
] |
0811.4007 | Krishnam Raju Jampani | Krishnam Raju Jampani and Anna Lubiw | The Simultaneous Membership Problem for Chordal, Comparability and
Permutation graphs | 15 pages, 1 figure | null | null | null | cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce the 'simultaneous membership problem', defined for
any graph class C characterized in terms of representations, e.g. any class of
intersection graphs. Two graphs G_1 and G_2, sharing some vertices X (and the
corresponding induced edges), are said to be 'simultaneous members' of graph
class C, if there exist representations R_1 and R_2 of G_1 and G_2 that are
"consistent" on X. Equivalently (for the classes C that we consider) there
exist edges E' between G_1-X and G_2-X such that G_1 \cup G_2 \cup E' belongs
to class C.
Simultaneous membership problems have applications in any situation where it
is desirable to consistently represent two related graphs, for example:
interval graphs capturing overlaps of DNA fragments of two similar organisms;
or graphs connected in time, where one is an updated version of the other.
Simultaneous membership problems are related to simultaneous planar embeddings,
graph sandwich problems and probe graph recognition problems.
In this paper we give efficient algorithms for the simultaneous membership
problem on chordal, comparability and permutation graphs. These results imply
that graph sandwich problems for the above classes are tractable for an
interesting special case: when the set of optional edges form a complete
bipartite graph. Our results complement the recent polynomial time recognition
algorithms for probe chordal, comparability, and permutation graphs, where the
set of optional edges form a clique.
| [
{
"version": "v1",
"created": "Tue, 25 Nov 2008 02:54:32 GMT"
}
] | 2008-11-26T00:00:00 | [
[
"Jampani",
"Krishnam Raju",
""
],
[
"Lubiw",
"Anna",
""
]
] |
0811.4186 | Aleksandar Bradic M | Aleksandar Bradic | Search Result Clustering via Randomized Partitioning of Query-Induced
Subgraphs | 16th Telecommunications Forum TELFOR 2008 | null | null | null | cs.IR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an approach to search result clustering based on
partitioning the underlying link graph. We define the notion of a
"query-induced subgraph" and formulate search result clustering as the problem
of efficiently partitioning such a subgraph into topic-related clusters. We
also propose a novel algorithm for approximate partitioning of such a graph,
which achieves cluster quality comparable to that of deterministic algorithms
while running in less computation time, making it suitable for practical
implementations. Finally, we present a practical clustering search engine
developed as part of this research and use it to evaluate the real-world
performance of the proposed concepts.
| [
{
"version": "v1",
"created": "Tue, 25 Nov 2008 23:11:55 GMT"
}
] | 2008-11-27T00:00:00 | [
[
"Bradic",
"Aleksandar",
""
]
] |
0811.4346 | Ke Yi | Ke Yi | Dynamic Indexability: The Query-Update Tradeoff for One-Dimensional
Range Queries | 13 pages | null | null | null | cs.DS cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The B-tree is a fundamental secondary index structure that is widely used for
answering one-dimensional range reporting queries. Given a set of $N$ keys, a
range query can be answered in $O(\log_B \frac{N}{M} + \frac{K}{B})$ I/Os, where $B$ is
the disk block size, $K$ the output size, and $M$ the size of the main memory
buffer. When keys are inserted or deleted, the B-tree is updated in $O(\log_B
N)$ I/Os, if we require the resulting changes to be committed to disk right
away. Otherwise, the memory buffer can be used to buffer the recent updates,
and changes can be written to disk in batches, which significantly lowers the
amortized update cost. A systematic way of batching up updates is to use the
logarithmic method, combined with fractional cascading, resulting in a dynamic
B-tree that supports insertions in $O(\frac{1}{B}\log\frac{N}{M})$ I/Os and queries in
$O(\log\frac{N}{M} + \frac{K}{B})$ I/Os. Such bounds have also been matched by several
known dynamic B-tree variants in the database literature.
In this paper, we prove that for any dynamic one-dimensional range query
index structure with query cost $O(q+\frac{K}{B})$ and amortized insertion cost
$O(u/B)$, the tradeoff $q\cdot \log(u/q) = \Omega(\log B)$ must hold if
$q=O(\log B)$. For most reasonable values of the parameters, we have $\frac{N}{M} =
B^{O(1)}$, in which case our query-insertion tradeoff implies that the bounds
mentioned above are already optimal. Our lower bounds hold in a dynamic version
of the {\em indexability model}, which is of independent interest.
| [
{
"version": "v1",
"created": "Wed, 26 Nov 2008 15:36:14 GMT"
}
] | 2008-11-27T00:00:00 | [
[
"Yi",
"Ke",
""
]
] |
0811.4376 | Soubhik Chakraborty | Suman Kumar Sourabh and Soubhik Chakraborty | How robust is quicksort average complexity? | 15 pages; 12 figures; 2 tables | null | null | null | cs.DS cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper questions the robustness of average case time complexity of the
fast and popular quicksort algorithm. Among the six standard probability
distributions examined in the paper, only continuous uniform, exponential and
standard normal support it, whereas the others support the worst
case complexity measure. To the question -- why do we get the worst case
complexity measure each time the average case measure is discredited? -- one
logical answer is that average case complexity under the universal distribution
equals worst case complexity. This answer, though hard to challenge,
gives no idea as to which of the standard probability distributions come under
the umbrella of universality. The moral is that average case complexity
measures, in cases where they differ from the worst case, should be
deemed robust only if they are supported by at least the standard
probability distributions, both discrete and continuous. Regretfully, this is
not the case with quicksort.
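The paper's question can be probed with a toy experiment: count quicksort comparisons on samples drawn from a few distributions. The sketch below uses a simplified three-way-partition comparison count and illustrative distributions, not the paper's exact setup:

```python
import random

def comparisons(a):
    """Simplified quicksort comparison count: middle pivot,
    three-way partition (ties are grouped with the pivot)."""
    if len(a) <= 1:
        return 0
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    greater = [x for x in a if x > pivot]
    return len(a) - 1 + comparisons(less) + comparisons(greater)

random.seed(1)
n = 2000
inputs = {
    "uniform":     [random.random() for _ in range(n)],
    "exponential": [random.expovariate(1.0) for _ in range(n)],
    "normal":      [random.gauss(0.0, 1.0) for _ in range(n)],
    # a discrete distribution, where heavy tie-breaking changes behavior
    "binomial":    [sum(random.random() < 0.5 for _ in range(20)) for _ in range(n)],
}
for name, data in inputs.items():
    print(f"{name:12s} {comparisons(data)} comparisons")
```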
| [
{
"version": "v1",
"created": "Wed, 26 Nov 2008 17:23:22 GMT"
}
] | 2016-11-27T00:00:00 | [
[
"Sourabh",
"Suman Kumar",
""
],
[
"Chakraborty",
"Soubhik",
""
]
] |
0811.4672 | Kui Wu | Emad Soroush, Kui Wu, Jian Pei | Fast and Quality-Guaranteed Data Streaming in Resource-Constrained
Sensor Networks | Published in ACM MobiHoc 2008 | null | null | null | cs.DS cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many emerging applications, data streams are monitored in a network
environment. Due to limited communication bandwidth and other resource
constraints, a critical and practical demand is to compress data streams online
and continuously with a quality guarantee. Although many data compression and digital
signal processing methods have been developed to reduce data volume, their
super-linear time and more-than-constant space complexity prevent them from
being applied directly on data streams, particularly over resource-constrained
sensor networks. In this paper, we tackle the problem of online quality
guaranteed compression of data streams using fast linear approximation (i.e.,
using line segments to approximate a time series). Technically, we address two
versions of the problem which explore quality guarantees in different forms. We
develop online algorithms with linear time complexity and constant cost in
space. Our algorithms are optimal in the sense that they generate the minimum number
of segments that approximate a time series with the required quality guarantee.
To meet the resource constraints in sensor networks, we also develop a fast
algorithm which creates connecting segments with very simple computation. The
low-cost nature of our methods gives them a unique edge in applications with
massive and fast streaming environments, low-bandwidth networks, and nodes
heavily constrained in computational power. We implement and evaluate our methods
in the application of an acoustic wireless sensor network.
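A minimal sketch of the "connecting segments" idea (each new segment starts where the previous one ends) using the standard feasible-slope-interval trick; it guarantees the pointwise error bound `eps` but is only an illustration, not the paper's optimal algorithm:

```python
def approximate_stream(points, eps):
    """One-pass approximation of a time series by *connected* line
    segments with pointwise (L_inf) error at most eps. points is a
    list of (t, y) pairs with strictly increasing t; returns the
    knots of the piecewise linear approximation."""
    (t0, y0) = points[0]
    knots = [(t0, y0)]
    lo, hi = float("-inf"), float("inf")   # feasible slopes for current segment
    last_t = t0
    for (t, y) in points[1:]:
        new_lo = max(lo, (y - eps - y0) / (t - t0))
        new_hi = min(hi, (y + eps - y0) / (t - t0))
        if new_lo <= new_hi:               # point fits the current segment
            lo, hi, last_t = new_lo, new_hi, t
        else:                              # close the segment, start a new one
            s = (lo + hi) / 2
            t0, y0 = last_t, y0 + s * (last_t - t0)
            knots.append((t0, y0))
            lo = (y - eps - y0) / (t - t0)
            hi = (y + eps - y0) / (t - t0)
            last_t = t
    if last_t != t0:                       # emit the final knot
        s = (lo + hi) / 2
        knots.append((last_t, y0 + s * (last_t - t0)))
    return knots
```

Each segment keeps the interval of slopes that leave every covered point within eps of the line; when the interval empties, the segment is closed at the last accepted point and a new one is anchored there, so consecutive segments connect.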
| [
{
"version": "v1",
"created": "Fri, 28 Nov 2008 20:59:55 GMT"
}
] | 2008-12-01T00:00:00 | [
[
"Soroush",
"Emad",
""
],
[
"Wu",
"Kui",
""
],
[
"Pei",
"Jian",
""
]
] |
0811.4713 | Mamadou Moustapha Kant\'e | Bruno Courcelle (LaBRI, IUF), Cyril Gavoille (LaBRI, INRIA Futurs),
Mamadou Moustapha Kant\'e (LaBRI) | Compact Labelings For Efficient First-Order Model-Checking | null | Journal of Combinatorial Optimisation 21(1):19-46(2011) | 10.1007/s10878-009-9260-7 | null | cs.DS cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider graph properties that can be checked from labels, i.e., bit
sequences, of logarithmic length attached to vertices. We prove that there
exists such a labeling for checking a first-order formula with free set
variables in the graphs of every class that is \emph{nicely locally
cwd-decomposable}. This notion generalizes that of a \emph{nicely locally
tree-decomposable} class. The graphs of such classes can be covered by graphs
of bounded \emph{clique-width} with limited overlaps. We also consider such
labelings for \emph{bounded} first-order formulas on graph classes of
\emph{bounded expansion}. Some of these results are extended to counting
queries.
| [
{
"version": "v1",
"created": "Fri, 28 Nov 2008 13:29:15 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Jul 2014 11:29:27 GMT"
}
] | 2014-07-09T00:00:00 | [
[
"Courcelle",
"Bruno",
"",
"LaBRI, IUF"
],
[
"Gavoille",
"Cyril",
"",
"LaBRI, INRIA Futurs"
],
[
"Kanté",
"Mamadou Moustapha",
"",
"LaBRI"
]
] |
0812.0146 | Vladimir Pestov | Vladimir Pestov | Lower Bounds on Performance of Metric Tree Indexing Schemes for Exact
Similarity Search in High Dimensions | 21 pages, revised submission to Algorithmica, an improved and
extended journal version of the conference paper arXiv:0812.0146v3 [cs.DS],
with lower bounds strengthened, and the proof of the main Theorem 4
simplified | Algorithmica 66 (2013), 310-328 | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within a mathematically rigorous model, we analyse the curse of
dimensionality for deterministic exact similarity search in the context of
popular indexing schemes: metric trees. The datasets $X$ are sampled randomly
from a domain $\Omega$, equipped with a distance, $\rho$, and an underlying
probability distribution, $\mu$. While performing an asymptotic analysis, we
send the intrinsic dimension $d$ of $\Omega$ to infinity, and assume that the
size of a dataset, $n$, grows superpolynomially yet subexponentially in $d$.
Exact similarity search refers to finding the nearest neighbour in the dataset
$X$ to a query point $\omega\in\Omega$, where the query points are subject to
the same probability distribution $\mu$ as datapoints. Let $\mathscr F$ denote
a class of all 1-Lipschitz functions on $\Omega$ that can be used as decision
functions in constructing a hierarchical metric tree indexing scheme. Suppose
the VC dimension of the class of all sets $\{\omega\colon f(\omega)\geq a\}$,
$a\in\mathbb{R}$, is $o(n^{1/4}/\log^2 n)$. (In view of a 1995 result of Goldberg and
Jerrum, even a stronger complexity assumption $d^{O(1)}$ is reasonable.) We
deduce the $\Omega(n^{1/4})$ lower bound on the expected average case
performance of hierarchical metric-tree based indexing schemes for exact
similarity search in $(\Omega,X)$. In particular, this bound is superpolynomial
in $d$.
| [
{
"version": "v1",
"created": "Sun, 30 Nov 2008 15:17:22 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Aug 2010 03:42:50 GMT"
},
{
"version": "v3",
"created": "Tue, 10 May 2011 16:17:39 GMT"
},
{
"version": "v4",
"created": "Fri, 24 Feb 2012 18:38:50 GMT"
}
] | 2013-03-27T00:00:00 | [
[
"Pestov",
"Vladimir",
""
]
] |
0812.0209 | Qin Zhang | Ke Yi, Qin Zhang | Optimal Tracking of Distributed Heavy Hitters and Quantiles | 10 pages, 1 figure | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of tracking heavy hitters and quantiles in the
distributed streaming model. The heavy hitters and quantiles are two important
statistics for characterizing a data distribution. Let $A$ be a multiset of
elements, drawn from the universe $U=\{1,...,u\}$. For a given $0 \le \phi \le
1$, the $\phi$-heavy hitters are those elements of $A$ whose frequency in $A$
is at least $\phi |A|$; the $\phi$-quantile of $A$ is an element $x$ of $U$
such that at most $\phi|A|$ elements of $A$ are smaller than $x$ and at most
$(1-\phi)|A|$ elements of $A$ are greater than $x$. Suppose the elements of $A$
are received at $k$ remote {\em sites} over time, and each of the sites has a
two-way communication channel to a designated {\em coordinator}, whose goal is
to track the set of $\phi$-heavy hitters and the $\phi$-quantile of $A$
approximately at all times with minimum communication. We give tracking
algorithms with worst-case communication cost $O(k/\epsilon \cdot \log n)$ for both
problems, where $n$ is the total number of items in $A$, and $\epsilon$ is the
approximation error. This substantially improves upon the previous known
algorithms. We also give matching lower bounds on the communication costs for
both problems, showing that our algorithms are optimal. We also consider a more
general version of the problem where we simultaneously track the
$\phi$-quantiles for all $0 \le \phi \le 1$.
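For context, a classical single-stream building block on the heavy hitters side is the Misra-Gries summary, sketched below; the paper's contribution is the distributed tracking protocol and its matching lower bounds, which this sketch does not capture:

```python
def misra_gries(stream, k):
    """Classical Misra-Gries summary with k-1 counters: every item
    with frequency > n/k survives; reported counts undercount by at
    most n/k. Deterministic, one pass, O(k) space."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            for key in list(counters):   # decrement-all step
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

print(misra_gries("abracadabra", k=3))   # 'a' dominates the stream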
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2008 03:51:12 GMT"
}
] | 2008-12-02T00:00:00 | [
[
"Yi",
"Ke",
""
],
[
"Zhang",
"Qin",
""
]
] |
0812.0320 | Gwena\"el Joret | Gwena\"el Joret | Stackelberg Network Pricing is Hard to Approximate | null | Networks, vol. 57, no. 2, pp. 117--120, 2011 | 10.1002/net.20391 | null | cs.DS cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the Stackelberg Network Pricing problem, one has to assign tariffs to a
certain subset of the arcs of a given transportation network. The aim is to
maximize the amount paid by the user of the network, knowing that the user will
take a shortest st-path once the tariffs are fixed. Roch, Savard, and Marcotte
(Networks, Vol. 46(1), 57-67, 2005) proved that this problem is NP-hard, and
gave an O(log m)-approximation algorithm, where m denotes the number of arcs to
be priced. In this note, we show that the problem is also APX-hard.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2008 16:15:58 GMT"
}
] | 2011-03-07T00:00:00 | [
[
"Joret",
"Gwenaël",
""
]
] |
0812.0382 | Andrea Vattani | Andrea Vattani | k-means requires exponentially many iterations even in the plane | Submitted to SoCG 2009 | null | null | null | cs.CG cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The k-means algorithm is a well-known method for partitioning n points that
lie in the d-dimensional space into k clusters. Its main features are
simplicity and speed in practice. Theoretically, however, the best known upper
bound on its running time (i.e. O(n^{kd})) can be exponential in the number of
points. Recently, Arthur and Vassilvitskii [3] showed a super-polynomial
worst-case analysis, improving the best known lower bound from \Omega(n) to
2^{\Omega(\sqrt{n})} with a construction in d=\Omega(\sqrt{n}) dimensions. In
[3] they also conjectured the existence of superpolynomial lower bounds for any
d >= 2.
Our contribution is twofold: we prove this conjecture and we improve the
lower bound, by presenting a simple construction in the plane that leads to the
exponential lower bound 2^{\Omega(n)}.
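The object of study is plain Lloyd-style k-means; a small harness like the following (a sketch, with illustrative random inputs) counts the iterations until the assignment stabilizes:

```python
import random

def lloyd_iterations(points, k, seed=0):
    """Run Lloyd's k-means on 2-d points and count the iterations
    until the cluster assignment stops changing."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assignment = None
    for it in range(10**6):
        new_assignment = [
            min(range(k), key=lambda j: (p[0] - centers[j][0]) ** 2
                                       + (p[1] - centers[j][1]) ** 2)
            for p in points
        ]
        if new_assignment == assignment:
            return it
        assignment = new_assignment
        for j in range(k):   # move each center to its cluster's mean
            cluster = [p for p, a in zip(points, assignment) if a == j]
            if cluster:
                centers[j] = (sum(x for x, _ in cluster) / len(cluster),
                              sum(y for _, y in cluster) / len(cluster))

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(300)]
print(lloyd_iterations(pts, k=4))
```

On random inputs like this the iteration count is tiny; the paper's point is that carefully placed planar inputs force exponentially many iterations.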
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2008 22:55:39 GMT"
}
] | 2008-12-03T00:00:00 | [
[
"Vattani",
"Andrea",
""
]
] |
0812.0387 | Kevin Buchin | Kevin Buchin | Delaunay Triangulations in Linear Time? (Part I) | 8 pages, no figures; added footnote about newer algorithm | null | null | null | cs.CG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new and simple randomized algorithm for constructing the
Delaunay triangulation using nearest neighbor graphs for point location. Under
suitable assumptions, it runs in linear expected time for points in the plane
with polynomially bounded spread, i.e., if the ratio between the largest and
smallest pointwise distance is polynomially bounded. This also holds for point
sets with bounded spread in higher dimensions as long as the expected
complexity of the Delaunay triangulation of a sample of the points is linear in
the sample size.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2008 23:09:13 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Feb 2009 08:58:17 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Dec 2009 21:54:48 GMT"
}
] | 2009-12-13T00:00:00 | [
[
"Buchin",
"Kevin",
""
]
] |
0812.0389 | Stefanie Jegelka | Stefanie Jegelka, Suvrit Sra, Arindam Banerjee | Approximation Algorithms for Bregman Co-clustering and Tensor Clustering | 18 pages; improved metric case | short version in ALT 2009 | null | null | cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the past few years powerful generalizations to the Euclidean k-means
problem have been made, such as Bregman clustering [7], co-clustering (i.e.,
simultaneous clustering of rows and columns of an input matrix) [9,18], and
tensor clustering [8,34]. Like k-means, these more general problems also suffer
from the NP-hardness of the associated optimization. Researchers have developed
approximation algorithms of varying degrees of sophistication for k-means,
k-medians, and more recently also for Bregman clustering [2]. However, there
seem to be no approximation algorithms for Bregman co- and tensor clustering.
In this paper we derive the first (to our knowledge) guaranteed methods for
these increasingly important clustering settings. Going beyond Bregman
divergences, we also prove an approximation factor for tensor clustering with
arbitrary separable metrics. Through extensive experiments we evaluate the
characteristics of our method, and show that it also has practical impact.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2008 23:17:35 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Feb 2009 12:40:18 GMT"
},
{
"version": "v3",
"created": "Fri, 15 May 2009 22:23:02 GMT"
},
{
"version": "v4",
"created": "Mon, 9 Nov 2009 15:50:32 GMT"
}
] | 2009-11-09T00:00:00 | [
[
"Jegelka",
"Stefanie",
""
],
[
"Sra",
"Suvrit",
""
],
[
"Banerjee",
"Arindam",
""
]
] |
0812.0598 | Laura Poplawski | Laura J. Poplawski, Rajmohan Rajaraman, Ravi Sundaram and Shang-Hua
Teng | Preference Games and Personalized Equilibria, with Applications to
Fractional BGP | 25 pages, 3 figures, v2: minor editorial changes | null | null | null | cs.GT cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the complexity of computing equilibria in two classes of network
games based on flows - fractional BGP (Border Gateway Protocol) games and
fractional BBC (Bounded Budget Connection) games. BGP is the glue that holds
the Internet together and hence its stability, i.e. the equilibria of
fractional BGP games (Haxell, Wilfong), is a matter of practical importance.
BBC games (Laoutaris et al) follow in the tradition of the large body of work
on network formation games and capture a variety of applications ranging from
social networks and overlay networks to peer-to-peer networks.
The central result of this paper is that there are no fully polynomial-time
approximation schemes (unless PPAD is in FP) for computing equilibria in both
fractional BGP games and fractional BBC games. We obtain this result by proving
the hardness for a new and surprisingly simple game, the fractional preference
game, which is reducible to both fractional BGP and BBC games.
We define a new flow-based notion of equilibrium for matrix games --
personalized equilibria -- generalizing both fractional BBC and fractional BGP
games. We prove not just the existence, but the existence of rational
personalized equilibria for all matrix games, which implies the existence of
rational equilibria for fractional BGP and BBC games. In particular, this
provides an alternative proof and strengthening of the main result in [Haxell,
Wilfong]. For k-player matrix games with k = 2, we provide a combinatorial
characterization leading to a polynomial-time algorithm for computing all
personalized equilibria. For k >= 5, we prove that personalized equilibria are
PPAD-hard to approximate in fully polynomial time. We believe that the concept
of personalized equilibria has potential for real-world significance.
| [
{
"version": "v1",
"created": "Tue, 2 Dec 2008 21:12:03 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Dec 2008 16:35:26 GMT"
}
] | 2008-12-05T00:00:00 | [
[
"Poplawski",
"Laura J.",
""
],
[
"Rajaraman",
"Rajmohan",
""
],
[
"Sundaram",
"Ravi",
""
],
[
"Teng",
"Shang-Hua",
""
]
] |
0812.0893 | Darren Strash | David Eppstein, Michael T. Goodrich and Darren Strash | Linear-Time Algorithms for Geometric Graphs with Sublinearly Many Edge
Crossings | Expanded version of a paper appearing at the 20th ACM-SIAM Symposium
on Discrete Algorithms (SODA09) | SIAM J. Computing 39(8): 3814-3829, 2010 | 10.1137/090759112 | null | cs.CG cs.DM cs.DS cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide linear-time algorithms for geometric graphs with sublinearly many
crossings. That is, we provide algorithms running in O(n) time on connected
geometric graphs having n vertices and k crossings, where k is smaller than n
by an iterated logarithmic factor. Specific problems we study include Voronoi
diagrams and single-source shortest paths. Our algorithms all run in linear
time in the standard comparison-based computational model; hence, we make no
assumptions about the distribution or bit complexities of edge weights, nor do
we utilize unusual bit-level operations on memory words. Instead, our
algorithms are based on a planarization method that "zeroes in" on edge
crossings, together with methods for extending planar separator decompositions
to geometric graphs with sublinearly many crossings. Incidentally, our
planarization algorithm also solves an open computational geometry problem of
Chazelle for triangulating a self-intersecting polygonal chain having n
segments and k crossings in linear time, for the case when k is sublinear in n
by an iterated logarithmic factor.
| [
{
"version": "v1",
"created": "Thu, 4 Dec 2008 10:29:00 GMT"
},
{
"version": "v2",
"created": "Thu, 14 May 2009 02:07:34 GMT"
}
] | 2010-12-16T00:00:00 | [
[
"Eppstein",
"David",
""
],
[
"Goodrich",
"Michael T.",
""
],
[
"Strash",
"Darren",
""
]
] |
0812.1012 | Kamesh Munagala | Sudipto Guha and Kamesh Munagala | Adaptive Uncertainty Resolution in Bayesian Combinatorial Optimization
Problems | Journal version of the paper "Model-driven Optimization using
Adaptive Probes" that appeared in the ACM-SIAM Symposium on Discrete
Algorithms (SODA), 2007 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In several applications such as databases, planning, and sensor networks,
parameters such as selectivity, load, or sensed values are known only with some
associated uncertainty. The performance of such a system (as captured by some
objective function over the parameters) is significantly improved if some of
these parameters can be probed or observed. In a resource constrained
situation, deciding which parameters to observe in order to optimize system
performance itself becomes an interesting and important optimization problem.
This general problem is the focus of this paper.
One of the most important considerations in this framework is whether
adaptivity is required for the observations. Adaptive observations introduce
blocking or sequential operations in the system whereas non-adaptive
observations can be performed in parallel. One of the important questions in
this regard is to characterize the benefit of adaptivity for probes and
observation.
We present general techniques for designing constant factor approximations to
the optimal observation schemes for several widely used scheduling and metric
objective functions. We show a unifying technique that relates this
optimization problem to the outlier version of the corresponding deterministic
optimization. By making this connection, our technique shows constant factor
upper bounds for the benefit of adaptivity of the observation schemes. We show
that while probing yields significant improvement in the objective function,
being adaptive about the probing is not beneficial beyond constant factors.
| [
{
"version": "v1",
"created": "Thu, 4 Dec 2008 19:48:16 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Sep 2009 14:17:22 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Jan 2010 15:08:30 GMT"
}
] | 2010-01-28T00:00:00 | [
[
"Guha",
"Sudipto",
""
],
[
"Munagala",
"Kamesh",
""
]
] |
0812.1123 | Jinshan Zhang | Jinshan Zhang | Improved Approximation for the Number of Hamiltonian Cycles in Dense
Digraphs | 20 pages | null | null | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an improved algorithm for counting the number of Hamiltonian
cycles in a directed graph. The basic idea of the method is sequential
acceptance/rejection, which is successfully used in approximating the number of
perfect matchings in dense bipartite graphs. As a consequence, a new bound on
the number of Hamiltonian cycles in a directed graph is proved, by using the
ratio of the number of 1-factors. Based on this bound, we prove that our
algorithm runs in expected time of $O(n^{8.5})$ for dense problems. This
improves on the Markov chain method, the most powerful existing method, by a factor
of at least $n^{4.5}(\log n)^{4}$ in running time. This class of dense problems
is shown to be nontrivial in counting, in the sense that it is $\#P$-complete.
| [
{
"version": "v1",
"created": "Fri, 5 Dec 2008 12:28:57 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Dec 2008 17:15:05 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Jan 2009 13:05:39 GMT"
},
{
"version": "v4",
"created": "Sat, 21 Nov 2009 08:13:36 GMT"
}
] | 2009-11-23T00:00:00 | [
[
"Zhang",
"Jinshan",
""
]
] |
0812.1126 | Dimitris Kalles | Dimitris Kalles, Alexis Kaporis | Emerge-Sort: Converging to Ordered Sequences by Simple Local Operators | Contains 16 pages, 17 figures, 1 table. Text updated as of March 10,
2009. Submitted to a journal | null | null | null | cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we examine sorting on the assumption that we do not know in
advance which way to sort a sequence of numbers and we set at work simple local
comparison and swap operators whose repeating application ends up in sorted
sequences. These are the basic elements of Emerge-Sort, our approach to
self-organizing sorting, which we then validate experimentally across a range
of samples. Observing an O(n^2) run-time behaviour, we note that the n/log n
delay coefficient that differentiates Emerge-Sort from the classical comparison
based algorithms is an instantiation of the price of anarchy we pay for not
imposing a sorting order and for letting that order emerge through the local
interactions.
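A toy rendition of the emergent-order idea (one possible local operator, not the paper's exact operator set): repeatedly make a random window of three adjacent items monotone in the direction its endpoints already suggest, stopping once either sorted order appears:

```python
import random

def emerge_sort(a, seed=0, max_steps=10**6):
    """Toy emergent sorter: no global direction is imposed; each step
    orders a random length-3 window the way its endpoints lean, and
    we stop when the whole list is sorted either way (step cap as a
    safety net). A sketch of the idea only."""
    rng = random.Random(seed)
    a = list(a)
    asc, desc = sorted(a), sorted(a, reverse=True)
    steps = 0
    while a != asc and a != desc and steps < max_steps:
        i = rng.randrange(len(a) - 2)
        w = a[i:i + 3]
        a[i:i + 3] = sorted(w) if w[0] <= w[2] else sorted(w, reverse=True)
        steps += 1
    return a, steps

print(emerge_sort([4, 1, 5, 2, 3], seed=1))
```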
| [
{
"version": "v1",
"created": "Fri, 5 Dec 2008 12:57:00 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Mar 2009 22:42:43 GMT"
}
] | 2009-03-11T00:00:00 | [
[
"Kalles",
"Dimitris",
""
],
[
"Kaporis",
"Alexis",
""
]
] |
0812.1321 | Aleksandrs Slivkins | Matthew Andrews and Aleksandrs Slivkins | Oscillations with TCP-like Flow Control in Networks of Queues | Preliminary version has appeared in IEEE INFOCOM 2006. The current
version is dated November 2005, with a minor revision in December 2008 | null | null | null | cs.NI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a set of flows passing through a set of servers. The injection
rate into each flow is governed by a flow control that increases the injection
rate when all the servers on the flow's path are empty and decreases the
injection rate when some server is congested. We show that if each server's
congestion is governed by the arriving traffic at the server then the system
can *oscillate*. This is in contrast to previous work on flow control where
congestion was modeled as a function of the flow injection rates and the system
was shown to converge to a steady state that maximizes an overall network
utility.
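The oscillation phenomenon can be seen even in a one-flow, one-queue caricature of this model, where congestion is driven by the traffic arriving at the server; all constants below are illustrative:

```python
def simulate(T=100):
    """Minimal discrete-time caricature: a single flow through one
    queue with unit service rate; the injection rate rises additively
    while the queue is empty and is halved once the queue is nonempty.
    The printed rates keep cycling instead of converging."""
    rate, queue = 0.1, 0.0
    for step in range(T):
        queue = max(0.0, queue + rate - 1.0)   # arrivals minus service
        if queue == 0.0:
            rate += 0.1    # increase: the path looks empty
        else:
            rate *= 0.5    # decrease: the server is congested
        if step % 5 == 0:
            print(f"step {step:3d}  rate {rate:.3f}  queue {queue:.3f}")

simulate()
```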
| [
{
"version": "v1",
"created": "Sat, 6 Dec 2008 23:57:44 GMT"
}
] | 2008-12-09T00:00:00 | [
[
"Andrews",
"Matthew",
""
],
[
"Slivkins",
"Aleksandrs",
""
]
] |
0812.1385 | Javaid Aslam | Javaid Aslam | An Extension of the Permutation Group Enumeration Technique (Collapse of
the Polynomial Hierarchy: $\mathbf{NP = P}$) | Revisions: Some re-organization-- created a new Section 5 and minor
revisions | null | null | null | cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The distinguishing result of this paper is a $\mathbf{P}$-time enumerable
partition of all the potential perfect matchings in a bipartite graph. This
partition is a set of equivalence classes induced by the missing edges in the
potential perfect matchings.
We capture the behavior of these missing edges in a polynomially bounded
representation of the exponentially many perfect matchings by a graph theoretic
structure, called MinSet Sequence, where MinSet is a P-time enumerable
structure derived from a graph theoretic counterpart of a generating set of the
symmetric group. This leads to a polynomially bounded generating set of all the
classes, enabling the enumeration of perfect matchings in polynomial time. The
sequential time complexity of this $\mathbf{\#P}$-complete problem is shown to
be $O(n^{45}\log n)$.
And thus we prove a result even more surprising than $\mathbf{NP = P}$, that
is, $\mathbf{\#P}=\mathbf{FP}$, where $\mathbf{FP}$ is the class of functions,
$f: \{0, 1\}^* \rightarrow \mathbb{N} $, computable in polynomial time on a
deterministic model of computation.
| [
{
"version": "v1",
"created": "Sun, 7 Dec 2008 19:47:28 GMT"
},
{
"version": "v10",
"created": "Mon, 30 Mar 2009 19:41:25 GMT"
},
{
"version": "v11",
"created": "Tue, 7 Apr 2009 18:35:11 GMT"
},
{
"version": "v12",
"created": "Mon, 19 Jan 2015 20:21:59 GMT"
},
{
"version": "v13",
"created": "Thu, 22 Jan 2015 20:45:26 GMT"
},
{
"version": "v14",
"created": "Thu, 5 Feb 2015 20:56:47 GMT"
},
{
"version": "v15",
"created": "Sun, 22 Feb 2015 20:36:42 GMT"
},
{
"version": "v16",
"created": "Wed, 15 Jul 2015 18:44:43 GMT"
},
{
"version": "v17",
"created": "Thu, 30 Jul 2015 19:48:56 GMT"
},
{
"version": "v18",
"created": "Thu, 8 Oct 2015 19:04:26 GMT"
},
{
"version": "v19",
"created": "Mon, 12 Oct 2015 19:57:44 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Dec 2008 19:30:19 GMT"
},
{
"version": "v20",
"created": "Thu, 15 Oct 2015 19:48:04 GMT"
},
{
"version": "v21",
"created": "Sun, 18 Oct 2015 19:20:04 GMT"
},
{
"version": "v22",
"created": "Sat, 2 Jan 2016 01:31:54 GMT"
},
{
"version": "v23",
"created": "Thu, 3 Mar 2016 20:53:32 GMT"
},
{
"version": "v24",
"created": "Sat, 26 Aug 2017 06:08:03 GMT"
},
{
"version": "v25",
"created": "Sun, 17 Sep 2017 22:52:14 GMT"
},
{
"version": "v26",
"created": "Mon, 30 Oct 2017 08:01:46 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Dec 2008 20:43:33 GMT"
},
{
"version": "v4",
"created": "Mon, 12 Jan 2009 17:03:53 GMT"
},
{
"version": "v5",
"created": "Tue, 20 Jan 2009 21:05:32 GMT"
},
{
"version": "v6",
"created": "Mon, 26 Jan 2009 20:56:54 GMT"
},
{
"version": "v7",
"created": "Wed, 28 Jan 2009 20:50:44 GMT"
},
{
"version": "v8",
"created": "Fri, 6 Feb 2009 20:43:25 GMT"
},
{
"version": "v9",
"created": "Mon, 9 Mar 2009 18:58:19 GMT"
}
] | 2017-10-31T00:00:00 | [
[
"Aslam",
"Javaid",
""
]
] |
0812.1587 | Radu Mihaescu | Radu Mihaescu, Cameron Hill, Satish Rao | Fast phylogeny reconstruction through learning of ancestral sequences | null | null | null | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given natural limitations on the length of DNA sequences, designing phylogenetic
reconstruction methods which are reliable under limited information is a
crucial endeavor. There have been two approaches to this problem:
reconstructing partial but reliable information about the tree (\cite{Mo07,
DMR08,DHJ06,GMS08}), and reaching "deeper" in the tree through reconstruction
of ancestral sequences. In the latter category, \cite{DMR06} settled an
important conjecture of M.Steel, showing that, under the CFN model of
evolution, all trees on $n$ leaves with edge lengths bounded by the Ising model
phase transition can be recovered with high probability from genomes of length
$O(\log n)$ with a polynomial time algorithm. Their methods had a running time
of $O(n^{10})$.
Here we enhance our methods from \cite{DHJ06} with the learning of ancestral
sequences and provide an algorithm for reconstructing a sub-forest of the tree
which is reliable given available data, without requiring a-priori known bounds
on the edge lengths of the tree. Our methods are based on an intuitive minimum
spanning tree approach and run in $O(n^3)$ time. For the case of full
reconstruction of trees with edges under the phase transition, we maintain the
same sequence length requirements as \cite{DMR06}, despite the considerably
faster running time.
| [
{
"version": "v1",
"created": "Mon, 8 Dec 2008 22:51:02 GMT"
}
] | 2008-12-10T00:00:00 | [
[
"Mihaescu",
"Radu",
""
],
[
"Hill",
"Cameron",
""
],
[
"Rao",
"Satish",
""
]
] |
0812.1595 | Aparna Das | Aparna Das, Claire Mathieu | A quasi-polynomial time approximation scheme for Euclidean capacitated
vehicle routing | null | null | null | null | cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the capacitated vehicle routing problem, introduced by Dantzig and Ramser
in 1959, we are given the locations of n customers and a depot, along with a
vehicle of capacity k, and wish to find a minimum length collection of tours,
each starting from the depot and visiting at most k customers, whose union
covers all the customers. We give a quasi-polynomial time approximation scheme
for the setting where the customers and the depot are on the plane, and
distances are given by the Euclidean metric.
| [
{
"version": "v1",
"created": "Mon, 8 Dec 2008 23:58:17 GMT"
}
] | 2008-12-10T00:00:00 | [
[
"Das",
"Aparna",
""
],
[
"Mathieu",
"Claire",
""
]
] |
0812.1628 | Masoud Farivar | Masoud Farivar, Behzad Mehrdad, Farid Ashtiani | Two Dimensional Connectivity for Vehicular Ad-Hoc Networks | 9 Pages, 10 figures,Submitted to INFOCOM 2009 | null | null | null | cs.NI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we focus on two-dimensional connectivity in sparse vehicular
ad hoc networks (VANETs). In this respect, we find thresholds for the arrival
rates of vehicles at entrances of a block of streets such that the connectivity
is guaranteed for any desired probability. To this end, we exploit a mobility
model recently proposed for sparse VANETs, based on BCMP open queuing networks
and solve the related traffic equations to find the traffic characteristics of
each street and use the results to compute the exact probability of
connectivity along these streets. Then, we use the results from percolation
theory and the proposed fast algorithms for evaluation of bond percolation
problem in a random graph corresponding to the block of the streets. We then
find sufficiently accurate two dimensional connectivity-related parameters,
such as the average number of intersections connected to each other and the
size of the largest set of inter-connected intersections. We have also proposed
lower bounds for the case of a heterogeneous network with two transmission
ranges. In the last part of the paper, we apply our method to several numerical
examples and confirm our results by simulations.
| [
{
"version": "v1",
"created": "Tue, 9 Dec 2008 07:16:10 GMT"
}
] | 2008-12-10T00:00:00 | [
[
"Farivar",
"Masoud",
""
],
[
"Mehrdad",
"Behzad",
""
],
[
"Ashtiani",
"Farid",
""
]
] |
0812.1915 | Marcel Marquardt | Wouter Gelade, Marcel Marquardt, Thomas Schwentick | Dynamic Complexity of Formal Languages | Contains the material presented at STACS 2009, extended with proofs
and examples which were omitted due to lack of space | null | null | null | cs.CC cs.DS cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper investigates the power of the dynamic complexity classes DynFO,
DynQF and DynPROP over string languages. The latter two classes contain
problems that can be maintained using quantifier-free first-order updates, with
and without auxiliary functions, respectively. It is shown that the languages
maintainable in DynPROP are exactly the regular languages, even when allowing
arbitrary precomputation. This enables lower bounds for DynPROP and separates
DynPROP from DynQF and DynFO. Further, it is shown that any context-free
language can be maintained in DynFO and a number of specific context-free
languages, for example all Dyck-languages, are maintainable in DynQF.
Furthermore, the dynamic complexity of regular tree languages is investigated
and some results concerning arbitrary structures are obtained: there exist
first-order definable properties which are not maintainable in DynPROP. On the
other hand any existential first-order property can be maintained in DynQF when
allowing precomputation.
| [
{
"version": "v1",
"created": "Wed, 10 Dec 2008 14:13:57 GMT"
}
] | 2008-12-11T00:00:00 | [
[
"Gelade",
"Wouter",
""
],
[
"Marquardt",
"Marcel",
""
],
[
"Schwentick",
"Thomas",
""
]
] |
0812.1951 | Jerome Leroux | Alain Finkel (LSV), J\'er\^ome Leroux (LaBRI) | The convex hull of a regular set of integer vectors is polyhedral and
effectively computable | null | Information Processing Letters 96, 1 (2005) 30 - 35 | 10.1016/j.ipl.2005.04.004 | null | cs.CG cs.DS cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Number Decision Diagrams (NDD) provide a natural finite symbolic
representation for regular sets of integer vectors encoded as strings of digit
vectors (least or most significant digit first). The convex hull of the set of
vectors represented by a NDD is proved to be an effectively computable convex
polyhedron.
| [
{
"version": "v1",
"created": "Wed, 10 Dec 2008 16:26:36 GMT"
}
] | 2008-12-13T00:00:00 | [
[
"Finkel",
"Alain",
"",
"LSV"
],
[
"Leroux",
"Jérôme",
"",
"LaBRI"
]
] |
0812.2011 | Jerome Leroux | J\'er\^ome Leroux (LaBRI), Gregoire Sutre (LaBRI) | Accelerated Data-Flow Analysis | null | Static Analysis, Kongens Lyngby : Danemark (2007) | 10.1007/978-3-540-74061-2_12 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Acceleration in symbolic verification consists in computing the exact effect
of some control-flow loops in order to speed up the iterative fix-point
computation of reachable states. Even if no termination guarantee is provided
in theory, successful results were obtained in practice by different tools
implementing this framework. In this paper, the acceleration framework is
extended to data-flow analysis. Compared to a classical
widening/narrowing-based abstract interpretation, the loss of precision is
controlled here by the choice of the abstract domain and does not depend on the
way the abstract value is computed. Our approach is geared towards precision,
but we do not lose efficiency along the way. Indeed, we provide a cubic-time
acceleration-based algorithm for solving interval constraints with full
multiplication.
| [
{
"version": "v1",
"created": "Wed, 10 Dec 2008 20:08:08 GMT"
}
] | 2008-12-11T00:00:00 | [
[
"Leroux",
"Jérôme",
"",
"LaBRI"
],
[
"Sutre",
"Gregoire",
"",
"LaBRI"
]
] |
0812.2014 | Jerome Leroux | J\'er\^ome Leroux (LaBRI) | Convex Hull of Arithmetic Automata | null | Static Analysis, Valencia : Espagne (2008) | 10.1007/978-3-540-69166-2_4 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Arithmetic automata recognize infinite words of digits denoting
decompositions of real and integer vectors. These automata are known to be expressive
and efficient enough to represent the whole set of solutions of complex linear
constraints combining both integral and real variables. In this paper, the
closed convex hull of arithmetic automata is proved rational polyhedral.
Moreover, an algorithm computing the linear constraints defining these convex
sets is provided. Such an algorithm is useful for effectively extracting
geometrical properties of the whole set of solutions of complex constraints
symbolically represented by arithmetic automata.
| [
{
"version": "v1",
"created": "Wed, 10 Dec 2008 20:33:27 GMT"
}
] | 2008-12-11T00:00:00 | [
[
"Leroux",
"Jérôme",
"",
"LaBRI"
]
] |
0812.2115 | Gabrio Curzio Caimi | Gabrio Caimi, Holger Flier, Martin Fuchsberger, Marc Nunkesser | Performance of a greedy algorithm for edge covering by cliques in
interval graphs | 8 pages, 3 pictures, technical report | null | null | null | cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper a greedy algorithm to detect conflict cliques in interval
graphs and circular-arc graphs is analyzed. In a graph, a stable set requires
that at most one vertex is chosen from each edge. This is equivalent to requiring
that at most one vertex is chosen from each maximal clique. We show that this
algorithm finds all maximal cliques for interval graphs, i.e. it can compute
the convex hull of the stable set polytope. In the case of circular-arc graphs, the
algorithm is not able to detect all maximal cliques, yet it remains correct.
This problem occurs in the context of railway scheduling. A train requests the
allocation of a railway infrastructure resource for a specific time interval.
As one is looking for conflict-free train schedules, the used resource
allocation intervals in a schedule must not overlap. The conflict-free choices
of used intervals for each resource correspond to stable sets in the interval
graph associated to the allocation time intervals.
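For reference, the standard sweep that enumerates all maximal cliques of an interval graph (the structure the greedy algorithm is shown to recover) can be sketched as follows; closed intervals given as (start, end) pairs are an assumed input format:

```python
def maximal_cliques(intervals):
    """Sweep-line enumeration of the maximal cliques of an interval
    graph: report the set of intervals alive just before a right
    endpoint, provided some interval started since the last report.
    intervals: list of closed (start, end) pairs."""
    events = []
    for idx, (s, e) in enumerate(intervals):
        events.append((s, 0, idx))   # starts sort before ends when equal
        events.append((e, 1, idx))
    events.sort()
    active, cliques, new_since_report = set(), [], False
    for _, kind, idx in events:
        if kind == 0:
            active.add(idx)
            new_since_report = True
        else:
            if new_since_report:
                cliques.append(set(active))
                new_since_report = False
            active.discard(idx)
    return cliques

print(maximal_cliques([(1, 3), (2, 5), (4, 6)]))   # -> [{0, 1}, {1, 2}]
```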
| [
{
"version": "v1",
"created": "Thu, 11 Dec 2008 15:35:45 GMT"
}
] | 2008-12-12T00:00:00 | [
[
"Caimi",
"Gabrio",
""
],
[
"Flier",
"Holger",
""
],
[
"Fuchsberger",
"Martin",
""
],
[
"Nunkesser",
"Marc",
""
]
] |
0812.2137 | Marek Karpinski | Piotr Berman, Marek Karpinski, Alex Zelikovsky | A Factor 3/2 Approximation for Generalized Steiner Tree Problem with
Distances One and Two | null | null | null | null | cs.CC cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We design a 3/2 approximation algorithm for the Generalized Steiner Tree
problem (GST) in metrics with distances 1 and 2. This is the first polynomial
time approximation algorithm for a wide class of non-geometric metric GST
instances with approximation factor below 2.
| [
{
"version": "v1",
"created": "Thu, 11 Dec 2008 12:50:54 GMT"
}
] | 2008-12-12T00:00:00 | [
[
"Berman",
"Piotr",
""
],
[
"Karpinski",
"Marek",
""
],
[
"Zelikovsky",
"Alex",
""
]
] |
0812.2291 | Aleksandrs Slivkins | Moshe Babaioff, Yogeshwer Sharma, Aleksandrs Slivkins | Characterizing Truthful Multi-Armed Bandit Mechanisms | This is the full version of a conference paper published in ACM EC
2009. This revision is re-focused to emphasize the results that do not rely
on the "IIA assumption" (see the paper for the definition) | null | null | null | cs.DS cs.GT cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a multi-round auction setting motivated by pay-per-click auctions
for Internet advertising. In each round the auctioneer selects an advertiser
and shows her ad, which is then either clicked or not. An advertiser derives
value from clicks; the value of a click is her private information. Initially,
neither the auctioneer nor the advertisers have any information about the
likelihood of clicks on the advertisements. The auctioneer's goal is to design
a (dominant strategies) truthful mechanism that (approximately) maximizes the
social welfare.
If the advertisers bid their true private values, our problem is equivalent
to the "multi-armed bandit problem", and thus can be viewed as a strategic
version of the latter. In particular, for both problems the quality of an
algorithm can be characterized by "regret", the difference in social welfare
between the algorithm and the benchmark which always selects the same "best"
advertisement. We investigate how the design of multi-armed bandit algorithms
is affected by the restriction that the resulting mechanism must be truthful.
We find that truthful mechanisms have certain strong structural properties --
essentially, they must separate exploration from exploitation -- and they incur
much higher regret than the optimal multi-armed bandit algorithms. Moreover, we
provide a truthful mechanism which (essentially) matches our lower bound on
regret.
| [
{
"version": "v1",
"created": "Fri, 12 Dec 2008 04:13:01 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jan 2009 01:56:08 GMT"
},
{
"version": "v3",
"created": "Fri, 20 Feb 2009 18:10:47 GMT"
},
{
"version": "v4",
"created": "Tue, 23 Jun 2009 02:21:56 GMT"
},
{
"version": "v5",
"created": "Fri, 18 Sep 2009 00:17:44 GMT"
},
{
"version": "v6",
"created": "Tue, 15 May 2012 22:57:53 GMT"
},
{
"version": "v7",
"created": "Mon, 3 Jun 2013 21:03:36 GMT"
}
] | 2013-06-05T00:00:00 | [
[
"Babaioff",
"Moshe",
""
],
[
"Sharma",
"Yogeshwer",
""
],
[
"Slivkins",
"Aleksandrs",
""
]
] |
0812.2298 | Francois Le Gall | Francois Le Gall | Efficient Isomorphism Testing for a Class of Group Extensions | 17 pages, accepted to the STACS 2009 conference | Proceedings of the 26th International Symposium on Theoretical
Aspects of Computer Science (STACS 2009), pp. 625-636, 2009 | 10.4230/LIPIcs.STACS.2009.1830 | null | cs.DS cs.CC math.GR quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The group isomorphism problem asks whether two given groups are isomorphic or
not. Whereas the case where both groups are abelian is well understood and can
be solved efficiently, very little is known about the complexity of isomorphism
testing for nonabelian groups. In this paper we study this problem for a class
of groups corresponding to one of the simplest ways of constructing nonabelian
groups from abelian groups: the groups that are extensions of an abelian group
A by a cyclic group of order m. We present an efficient algorithm solving the
group isomorphism problem for all the groups of this class such that the order
of A is coprime with m. More precisely, our algorithm runs in time almost
linear in the orders of the input groups and works in the general setting where
the groups are given as black-boxes.
| [
{
"version": "v1",
"created": "Fri, 12 Dec 2008 09:39:02 GMT"
}
] | 2021-10-05T00:00:00 | [
[
"Gall",
"Francois Le",
""
]
] |
0812.2599 | Sewoong Oh | Raghunandan H. Keshavan, Andrea Montanari, Sewoong Oh | Learning Low Rank Matrices from O(n) Entries | 8 pages, 11 figures, Forty-sixth Allerton Conference on
Communication, Control and Computing, invited paper | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How many random entries of an n by m, rank r matrix are necessary to
reconstruct the matrix within an accuracy d? We address this question in the
case of a random matrix with bounded rank, whereby the observed entries are
chosen uniformly at random. We prove that, for any d>0, C(r,d)n observations
are sufficient. Finally we discuss the question of reconstructing the matrix
efficiently, and demonstrate through extensive simulations that this task can
be accomplished in n Poly(log n) operations, for small rank.
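A naive version of the spectral step in this line of work can be sketched as follows (rescale the observed entries, then project to rank r); the full method additionally trims over-represented rows and columns and refines the estimate locally, which this toy (assuming numpy) omits:

```python
import numpy as np

def spectral_complete(M_obs, mask, r):
    """Naive spectral sketch of low-rank matrix completion: divide
    the zero-filled observed matrix by the sampling rate (making it
    an unbiased estimate of the full matrix), then keep only the
    top-r singular directions."""
    p = mask.mean()                      # fraction of observed entries
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# toy usage: random rank-2 matrix, 30% of entries revealed
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random((50, 40)) < 0.3
A_hat = spectral_complete(A * mask, mask, r=2)
print(np.linalg.norm(A_hat - A) / np.linalg.norm(A))  # relative error
```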
| [
{
"version": "v1",
"created": "Sun, 14 Dec 2008 18:30:44 GMT"
}
] | 2008-12-16T00:00:00 | [
[
"Keshavan",
"Raghunandan H.",
""
],
[
"Montanari",
"Andrea",
""
],
[
"Oh",
"Sewoong",
""
]
] |
0812.2636 | Tobias Friedrich | Karl Bringmann, Tobias Friedrich | Approximating the least hypervolume contributor: NP-hard in general, but
fast in practice | 22 pages, to appear in Theoretical Computer Science | null | 10.1016/j.tcs.2010.09.026 | null | cs.DS cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The hypervolume indicator is an increasingly popular set measure to compare
the quality of two Pareto sets. The basic ingredient of most hypervolume
indicator based optimization algorithms is the calculation of the hypervolume
contribution of single solutions regarding a Pareto set. We show that exact
calculation of the hypervolume contribution is #P-hard while its approximation
is NP-hard. The same holds for the calculation of the minimal contribution. We
also prove that it is NP-hard to decide whether a solution has the least
hypervolume contribution. Even deciding whether the contribution of a solution
is at most $(1+\epsilon)$ times the minimal contribution is NP-hard. This implies
that it is neither possible to efficiently find the least contributing solution
(unless $P = NP$) nor to approximate it (unless $NP = BPP$).
Nevertheless, in the second part of the paper we present a fast approximation
algorithm for this problem. We prove that for arbitrarily given $\epsilon,\delta>0$
it calculates a solution with contribution at most $(1+\epsilon)$ times the minimal
contribution with probability at least $(1-\delta)$. Though it cannot run in
polynomial time for all instances, it performs extremely fast on various
benchmark datasets. The algorithm solves very large problem instances which are
intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions)
within a few seconds.
| [
{
"version": "v1",
"created": "Sun, 14 Dec 2008 13:57:10 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Sep 2010 20:43:10 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Bringmann",
"Karl",
""
],
[
"Friedrich",
"Tobias",
""
]
] |
0812.2775 | Johannes Fischer | Johannes Fischer | Optimal Succinctness for Range Minimum Queries | 12 pages; to appear in Proc. LATIN'10 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a static array A of n ordered objects, a range minimum query asks for the
position of the minimum between two specified array indices. We show how to
preprocess A into a scheme of size 2n+o(n) bits that allows answering range
minimum queries on A in constant time. This space is asymptotically optimal in
the important setting where access to A is not permitted after the
preprocessing step. Our scheme can be computed in linear time, using only n +
o(n) additional bits at construction time. An interesting by-product is that we
also improve on LCA-computation in BPS- or DFUDS-encoded trees.
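For contrast with the 2n+o(n)-bit scheme, here is the classic sparse-table RMQ structure, which also answers queries in constant time but stores O(n log n) words and keeps access to A (a textbook baseline, not the paper's construction):

```python
class SparseTableRMQ:
    """Classic O(n log n)-word range-minimum structure with O(1)
    queries. table[j][i] holds the index of the minimum of the
    window A[i .. i + 2^j - 1]."""
    def __init__(self, A):
        self.A = A
        n = len(A)
        self.table = [list(range(n))]            # level 0: singletons
        j = 1
        while (1 << j) <= n:
            prev, half = self.table[-1], 1 << (j - 1)
            self.table.append([
                min(prev[i], prev[i + half], key=lambda k: A[k])
                for i in range(n - (1 << j) + 1)
            ])
            j += 1

    def rmq(self, l, r):
        """Index of a minimum of A[l..r], inclusive: cover the range
        with two overlapping power-of-two windows."""
        j = (r - l + 1).bit_length() - 1
        return min(self.table[j][l], self.table[j][r - (1 << j) + 1],
                   key=lambda k: self.A[k])

st = SparseTableRMQ([3, 1, 4, 1, 5, 9, 2, 6])
print(st.rmq(2, 6))   # -> 3, the index of the minimum of A[2..6]
```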
| [
{
"version": "v1",
"created": "Mon, 15 Dec 2008 12:03:31 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Apr 2009 07:35:41 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Dec 2009 09:22:49 GMT"
}
] | 2009-12-02T00:00:00 | [
[
"Fischer",
"Johannes",
""
]
] |
0812.2851 | Amr Elmasry | Amr Elmasry | The Violation Heap: A Relaxed Fibonacci-Like Heap | 10 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a priority queue that achieves the same amortized bounds as Fibonacci
heaps. Namely, find-min requires O(1) worst-case time, insert, meld and
decrease-key require O(1) amortized time, and delete-min requires $O(\log n)$
amortized time. Our structure is simple and promises an efficient practical
behavior when compared to other known Fibonacci-like heaps. The main idea
behind our construction is to propagate rank updates instead of performing
cascaded cuts following a decrease-key operation, allowing for a relaxed
structure.
| [
{
"version": "v1",
"created": "Mon, 15 Dec 2008 16:16:58 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Feb 2010 12:24:07 GMT"
}
] | 2010-02-11T00:00:00 | [
[
"Elmasry",
"Amr",
""
]
] |
0812.2868 | Travis Gagie | Pawel Gawrychowski and Travis Gagie | Minimax Trees in Linear Time | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A minimax tree is similar to a Huffman tree except that, instead of
minimizing the weighted average of the leaves' depths, it minimizes the maximum
of any leaf's weight plus its depth. Golumbic (1976) introduced minimax trees
and gave a Huffman-like, $O(n \log n)$-time algorithm for building them.
Drmota and Szpankowski (2002) gave another $O(n \log n)$-time algorithm,
which checks the Kraft Inequality in each step of a binary search. In this
paper we show how Drmota and Szpankowski's algorithm can be made to run in
linear time on a word RAM with $\Omega(\log n)$-bit words. We also discuss how
our solution applies to problems in data compression, group testing and circuit
design.
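Golumbic's Huffman-like construction is short enough to sketch: repeatedly merge the two smallest weights into a parent of weight max(w1, w2) + 1; the surviving weight is the minimax value min over trees of the maximum of w_i + depth_i. The heap makes this O(n log n); the paper's point is removing the log factor, which this sketch does not attempt:

```python
import heapq

def minimax_cost(weights):
    """Golumbic-style construction of the optimal minimax tree value:
    combining the two smallest weights w1, w2 into max(w1, w2) + 1
    mirrors making them siblings one level deeper in the tree."""
    heap = list(weights)
    heapq.heapify(heap)
    while len(heap) > 1:
        w1 = heapq.heappop(heap)
        w2 = heapq.heappop(heap)
        heapq.heappush(heap, max(w1, w2) + 1)
    return heap[0]

print(minimax_cost([1, 2, 3, 4]))   # -> 5
```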
| [
{
"version": "v1",
"created": "Mon, 15 Dec 2008 17:15:51 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jan 2009 13:45:39 GMT"
}
] | 2009-01-28T00:00:00 | [
[
"Gawrychowski",
"Pawel",
""
],
[
"Gagie",
"Travis",
""
]
] |
0812.3137 | Olga Holtz | Olga Holtz | Compressive sensing: a paradigm shift in signal processing | A short survey of compressive sensing | null | null | null | math.HO cs.DS cs.NA math.NA math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We survey a new paradigm in signal processing known as "compressive sensing".
Contrary to old practices of data acquisition and reconstruction based on the
Shannon-Nyquist sampling principle, the new theory shows that it is possible to
reconstruct images or signals of scientific interest accurately and even
exactly from a number of samples which is far smaller than the desired
resolution of the image/signal, e.g., the number of pixels in the image. This
new technique draws from results in several fields of mathematics, including
algebra, optimization, probability theory, and harmonic analysis. We will
discuss some of the key mathematical ideas behind compressive sensing, as well
as its implications to other fields: numerical analysis, information theory,
theoretical computer science, and engineering.
| [
{
"version": "v1",
"created": "Tue, 16 Dec 2008 19:53:30 GMT"
}
] | 2009-03-13T00:00:00 | [
[
"Holtz",
"Olga",
""
]
] |
0812.3702 | Michael Mahoney | Michael W. Mahoney, Lek-Heng Lim, and Gunnar E. Carlsson | Algorithmic and Statistical Challenges in Modern Large-Scale Data
Analysis are the Focus of MMDS 2008 | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The 2008 Workshop on Algorithms for Modern Massive Data Sets (MMDS 2008),
sponsored by the NSF, DARPA, LinkedIn, and Yahoo!, was held at Stanford
University, June 25--28. The goals of MMDS 2008 were (1) to explore novel
techniques for modeling and analyzing massive, high-dimensional, and
nonlinearly-structured scientific and internet data sets; and (2) to bring
together computer scientists, statisticians, mathematicians, and data analysis
practitioners to promote cross-fertilization of ideas.
| [
{
"version": "v1",
"created": "Fri, 19 Dec 2008 03:53:03 GMT"
}
] | 2008-12-22T00:00:00 | [
[
"Mahoney",
"Michael W.",
""
],
[
"Lim",
"Lek-Heng",
""
],
[
"Carlsson",
"Gunnar E.",
""
]
] |
0812.3933 | Masud Hasan | Masud Hasan, Atif Rahman, M. Sohel Rahman, Mahfuza Sharmin, and
Rukhsana Yeasmin | Pancake Flipping with Two Spatulas | 10 pages, 3 figures | null | null | null | cs.DS cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study several variations of the \emph{pancake flipping
problem}, which is also well known as the problem of \emph{sorting by prefix
reversals}. We consider the variations in the sorting process by adding with
prefix reversals other similar operations such as prefix transpositions and
prefix transreversals. These types of sorting problems have applications in
interconnection networks and computational biology. We first study the problem
of sorting unsigned permutations by prefix reversals and prefix transpositions
and present a 3-approximation algorithm for this problem. Then we give a
2-approximation algorithm for sorting by prefix reversals and prefix
transreversals. We also provide a 3-approximation algorithm for sorting by
prefix reversals and prefix transpositions where the operations are always
applied at the unsorted suffix of the permutation. We further analyze the
problem in a more practical way and show quantitatively how the approximation ratios
of our algorithms improve with the number of prefix reversals
applied by optimal algorithms. Finally, we present experimental results to
support our analysis.
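For orientation, the classic single-operation baseline (sorting by prefix reversals alone) that these variants extend can be sketched as follows:

```python
def pancake_sort(a):
    """Baseline sort by prefix reversals only: flip the maximum of
    the unsorted prefix to the front, then flip it into place at the
    end of that prefix. Uses at most 2n - 3 flips; returns the sorted
    list and the lengths of the flips performed."""
    a, flips = list(a), []
    for size in range(len(a), 1, -1):
        i = max(range(size), key=lambda k: a[k])   # position of the max
        if i == size - 1:
            continue                               # already in place
        if i != 0:
            a[:i + 1] = a[i::-1]                   # bring max to the front
            flips.append(i + 1)
        a[:size] = a[size - 1::-1]                 # flip max into place
        flips.append(size)
    return a, flips

print(pancake_sort([3, 1, 4, 1, 5, 9, 2, 6]))
```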
| [
{
"version": "v1",
"created": "Sat, 20 Dec 2008 03:10:11 GMT"
},
{
"version": "v2",
"created": "Mon, 4 May 2009 12:06:02 GMT"
}
] | 2009-05-04T00:00:00 | [
[
"Hasan",
"Masud",
""
],
[
"Rahman",
"Atif",
""
],
[
"Rahman",
"M. Sohel",
""
],
[
"Sharmin",
"Mahfuza",
""
],
[
"Yeasmin",
"Rukhsana",
""
]
] |
0812.3946 | Stephane Vialette | Guillaume Blin (IGM), Sylvie Hamel (DIRO), St\'ephane Vialette (IGM) | Comparing RNA structures using a full set of biologically relevant edit
operations is intractable | 7 pages | null | null | null | cs.DS q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Arc-annotated sequences are useful for representing structural information of
RNAs and have been extensively used for comparing RNA structures in both terms
of sequence and structural similarities. Among the many paradigms referring to
arc-annotated sequences and RNA structures comparison (see
\cite{IGMA_BliDenDul08} for more details), the most important one is the
general edit distance. The problem of computing an edit distance between two
non-crossing arc-annotated sequences was introduced in \cite{Evans99}. The
introduced model uses edit operations that involve either single letters or
pairs of letters (never considered separately) and is solvable in
polynomial-time \cite{ZhangShasha:1989}. To account for other possible RNA
structural evolutionary events, new edit operations, which allow the letters of
a pair to be considered either simultaneously or separately, were introduced in
\cite{jiangli}, unfortunately at the cost of computational tractability. It has
been proved that comparing two RNA secondary structures using a full set of
biologically relevant edit operations is NP-complete. Nevertheless, in
\cite{DBLP:conf/spire/GuignonCH05}, the authors used a strong
combinatorial restriction in order to compare two RNA stem-loops with a full
set of biologically relevant edit operations, which allowed them to design
a polynomial-time and polynomial-space algorithm for comparing general secondary RNA
structures. In this paper we prove theoretically that comparing two RNA
structures using a full set of biologically relevant edit operations cannot be
done without strong combinatorial restrictions.
| [
{
"version": "v1",
"created": "Sat, 20 Dec 2008 08:04:25 GMT"
}
] | 2008-12-23T00:00:00 | [
[
"Blin",
"Guillaume",
"",
"IGM"
],
[
"Hamel",
"Sylvie",
"",
"DIRO"
],
[
"Vialette",
"Stéphane",
"",
"IGM"
]
] |
0812.4073 | Andreas Noack | Andreas Noack, Randolf Rotta | Multi-level algorithms for modularity clustering | 12 pages, 10 figures, see
http://www.informatik.tu-cottbus.de/~rrotta/ for downloading the graph
clustering software | Proceedings of the 8th International Symposium on Experimental
Algorithms (SEA 2009). Lecture Notes in Computer Science 5526, Springer
(2009) 257-268 | null | null | cs.DS cond-mat.stat-mech cs.DM physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modularity is one of the most widely used quality measures for graph
clusterings. Maximizing modularity is NP-hard, and the runtime of exact
algorithms is prohibitive for large graphs. A simple and effective class of
heuristics coarsens the graph by iteratively merging clusters (starting from
singletons), and optionally refines the resulting clustering by iteratively
moving individual vertices between clusters. Several heuristics of this type
have been proposed in the literature, but little is known about their relative
performance.
This paper experimentally compares existing and new coarsening- and
refinement-based heuristics with respect to their effectiveness (achieved
modularity) and efficiency (runtime). Concerning coarsening, it turns out that
the most widely used criterion for merging clusters (modularity increase) is
outperformed by other simple criteria, and that a recent algorithm by Schuetz
and Caflisch is no improvement over simple greedy coarsening for these
criteria. Concerning refinement, a new multi-level algorithm is shown to
produce significantly better clusterings than conventional single-level
algorithms. A comparison with published benchmark results and algorithm
implementations shows that combinations of coarsening and multi-level
refinement are competitive with the best algorithms in the literature.
| [
{
"version": "v1",
"created": "Mon, 22 Dec 2008 15:32:10 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Dec 2008 21:56:37 GMT"
}
] | 2009-09-22T00:00:00 | [
[
"Noack",
"Andreas",
""
],
[
"Rotta",
"Randolf",
""
]
] |
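As a point of reference for the coarsening heuristics compared in this abstract, a pure-Python sketch of the plain greedy criterion (repeatedly merge the connected cluster pair with the largest modularity increase, starting from singletons). The paper's alternative merge criteria and multi-level refinement are not reproduced; this is the baseline they are measured against.

```python
def greedy_modularity(edges):
    """Greedy modularity coarsening on an unweighted graph.

    Starts from singleton clusters and merges the connected pair with the
    largest modularity gain e_ij/m - d_i*d_j/(2*m^2) until no merge helps.
    """
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    cluster = {u: u for u in deg}                  # node -> cluster label
    while True:
        dc, between = {}, {}
        for u in deg:                              # total degree per cluster
            dc[cluster[u]] = dc.get(cluster[u], 0) + deg[u]
        for u, v in edges:                         # edge counts between clusters
            cu, cv = cluster[u], cluster[v]
            if cu != cv:
                key = (min(cu, cv), max(cu, cv))
                between[key] = between.get(key, 0) + 1
        if not between:
            break
        (i, j), gain = max(
            ((k, e / m - dc[k[0]] * dc[k[1]] / (2 * m * m))
             for k, e in between.items()),
            key=lambda kv: kv[1])
        if gain <= 0:
            break
        for u in cluster:                          # merge cluster j into i
            if cluster[u] == j:
                cluster[u] = i
    return cluster

# Two triangles joined by a bridge split into two communities.
print(greedy_modularity([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]))
```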
0812.4293 | Michael Mahoney | Christos Boutsidis, Michael W. Mahoney, and Petros Drineas | An Improved Approximation Algorithm for the Column Subset Selection
Problem | 17 pages; corrected a bug in the spectral norm bound of the previous
version | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of selecting the best subset of exactly $k$ columns
from an $m \times n$ matrix $A$. We present and analyze a novel two-stage
algorithm that runs in $O(\min\{mn^2,m^2n\})$ time and returns as output an $m
\times k$ matrix $C$ consisting of exactly $k$ columns of $A$. In the first
(randomized) stage, the algorithm randomly selects $\Theta(k \log k)$ columns
according to a judiciously-chosen probability distribution that depends on
information in the top-$k$ right singular subspace of $A$. In the second
(deterministic) stage, the algorithm applies a deterministic column-selection
procedure to select and return exactly $k$ columns from the set of columns
selected in the first stage. Let $C$ be the $m \times k$ matrix containing
those $k$ columns, let $P_C$ denote the projection matrix onto the span of
those columns, and let $A_k$ denote the best rank-$k$ approximation to the
matrix $A$. Then, we prove that, with probability at least 0.8, $$ \|A -
P_C A\|_F \leq \Theta(k \log^{1/2} k) \|A-A_k\|_F. $$ This Frobenius norm bound
is only a factor of $\sqrt{k \log k}$ worse than the best previously existing
existential result and is roughly $O(\sqrt{k!})$ better than the best previous
algorithmic result for the Frobenius norm version of this Column Subset
Selection Problem (CSSP). We also prove that, with probability at least 0.8, $$
\|A - P_C A\|_2 \leq \Theta(k \log^{1/2} k)\|A-A_k\|_2 +
\Theta(k^{3/4}\log^{1/4}k)\|A-A_k\|_F. $$ This spectral norm bound is not
directly comparable to the best previously existing bounds for the spectral
norm version of this CSSP. Our bound depends on $\|A-A_k\|_F$, whereas
previous results depend on $\sqrt{n-k}\,\|A-A_k\|_2$; if these two quantities
are comparable, then our bound is asymptotically worse by a $(k \log k)^{1/4}$
factor.
| [
{
"version": "v1",
"created": "Mon, 22 Dec 2008 21:16:55 GMT"
},
{
"version": "v2",
"created": "Wed, 12 May 2010 02:44:25 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Boutsidis",
"Christos",
""
],
[
"Mahoney",
"Michael W.",
""
],
[
"Drineas",
"Petros",
""
]
] |
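A rough numpy/scipy sketch in the spirit of the two-stage algorithm above: leverage-score sampling in the top-k right singular subspace, followed by a deterministic pivoted-QR selection of exactly k columns. The paper's exact sampling probabilities, sample size, and deterministic subroutine differ, so treat this as illustrative only.

```python
import numpy as np
from scipy.linalg import qr

def two_stage_css(A, k, seed=0):
    """Illustrative two-stage column subset selection.

    Stage 1 samples Theta(k log k) columns with probability proportional
    to their leverage scores in the top-k right singular subspace; stage 2
    keeps exactly k of them via column-pivoted QR (a stand-in for the
    paper's deterministic procedure).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    c = min(n, int(np.ceil(2 * k * np.log(k + 1))) + k)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:k, :] ** 2, axis=0) / k         # leverage scores, sum to 1
    cand = rng.choice(n, size=c, replace=False, p=lev)
    _, _, piv = qr(Vt[:k, cand], pivoting=True)      # deterministic stage
    return np.sort(cand[piv[:k]])

A = np.random.default_rng(1).standard_normal((50, 30))
print(two_stage_css(A, k=5))   # indices of 5 selected columns
```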
0812.4442 | Sanjeev Khanna | Julia Chuzhoy and Sanjeev Khanna | An $O(k^{3} \log n)$-Approximation Algorithm for Vertex-Connectivity
Survivable Network Design | 8 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the Survivable Network Design problem (SNDP), we are given an undirected
graph $G(V,E)$ with costs on edges, along with a connectivity requirement
$r(u,v)$ for each pair $u,v$ of vertices. The goal is to find a minimum-cost
subset $E^*$ of edges, that satisfies the given set of pairwise connectivity
requirements. In the edge-connectivity version we need to ensure that there are
$r(u,v)$ edge-disjoint paths for every pair $u, v$ of vertices, while in the
vertex-connectivity version the paths are required to be vertex-disjoint. The
edge-connectivity version of SNDP is known to have a 2-approximation. However,
no non-trivial approximation algorithm has been known so far for the vertex
version of SNDP, except for special cases of the problem. We present an
extremely simple algorithm to achieve an $O(k^3 \log n)$-approximation for this
problem, where $k$ denotes the maximum connectivity requirement, and $n$
denotes the number of vertices. We also give a simple proof of the recently
discovered $O(k^2 \log n)$-approximation result for the single-source version
of vertex-connectivity SNDP. We note that in both cases, our analysis in fact
yields slightly better guarantees in that the $\log n$ term in the
approximation guarantee can be replaced with a $\log \tau$ term where $\tau$
denotes the number of distinct vertices that participate in one or more pairs
with a positive connectivity requirement.
| [
{
"version": "v1",
"created": "Tue, 23 Dec 2008 19:04:25 GMT"
}
] | 2008-12-24T00:00:00 | [
[
"Chuzhoy",
"Julia",
""
],
[
"Khanna",
"Sanjeev",
""
]
] |
0812.4547 | Christos Boutsidis | Christos Boutsidis, Petros Drineas | Random Projections for the Nonnegative Least-Squares Problem | to appear in Linear Algebra and its Applications | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constrained least-squares regression problems, such as the Nonnegative Least
Squares (NNLS) problem, where the variables are restricted to take only
nonnegative values, often arise in applications. Motivated by the recent
development of the fast Johnson-Lindenstrauss transform, we present a fast
random projection type approximation algorithm for the NNLS problem. Our
algorithm employs a randomized Hadamard transform to construct a much smaller
NNLS problem and solves this smaller problem using a standard NNLS solver. We
prove that our approach finds a nonnegative solution vector that, with high
probability, is close to the optimum nonnegative solution in a relative error
approximation sense. We experimentally evaluate our approach on a large
collection of term-document data and verify that it does offer considerable
speedups without a significant loss in accuracy. Our analysis is based on a
novel random projection type result that might be of independent interest. In
particular, given a tall and thin matrix $\Phi \in \mathbb{R}^{n \times d}$ ($n
\gg d$) and a vector $y \in \mathbb{R}^d$, we prove that the Euclidean length
of $\Phi y$ can be estimated very accurately by the Euclidean length of
$\tilde{\Phi}y$, where $\tilde{\Phi}$ consists of a small subset of
(appropriately rescaled) rows of $\Phi$.
| [
{
"version": "v1",
"created": "Wed, 24 Dec 2008 16:43:22 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Mar 2009 20:17:36 GMT"
}
] | 2009-03-13T00:00:00 | [
[
"Boutsidis",
"Christos",
""
],
[
"Drineas",
"Petros",
""
]
] |
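A numpy/scipy sketch of the approach described above: randomize signs, apply a normalized Hadamard transform, subsample rows, and hand the smaller problem to a standard NNLS solver. The dense O(n^2) transform, the sample size r, and the use of scipy's solver are simplifications; the paper uses a fast transform and derives the r required for its guarantees.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.optimize import nnls

def srht_nnls(A, b, r, seed=0):
    """Solve min ||Ax - b|| s.t. x >= 0 approximately via a subsampled
    randomized Hadamard transform. Assumes the row count is a power of two."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    D = rng.choice([-1.0, 1.0], size=n)           # random sign flips
    H = hadamard(n) / np.sqrt(n)                  # normalized Hadamard matrix
    rows = rng.choice(n, size=r, replace=False)   # uniform row sample
    scale = np.sqrt(n / r)
    A_small = scale * (H @ (D[:, None] * A))[rows]
    b_small = scale * (H @ (D * b))[rows]
    x, _ = nnls(A_small, b_small)                 # exact solver, small problem
    return x

rng = np.random.default_rng(2)
A = rng.random((256, 10))
b = A @ rng.random(10)
print(np.round(srht_nnls(A, b, r=80), 3))
```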
0812.4893 | Jukka Suomela | Patrik Flor\'een, Petteri Kaski, Valentin Polishchuk, Jukka Suomela | Almost stable matchings in constant time | 20 pages | Algorithmica 58 (2010) 102-118 | 10.1007/s00453-009-9353-9 | null | cs.DS cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that the ratio of matched individuals to blocking pairs grows
linearly with the number of propose--accept rounds executed by the
Gale--Shapley algorithm for the stable marriage problem. Consequently, the
participants can arrive at an almost stable matching even without full
information about the problem instance; for each participant, knowing only its
local neighbourhood is enough. In distributed-systems parlance, this means that
if each person has only a constant number of acceptable partners, an almost
stable matching emerges after a constant number of synchronous communication
rounds. This holds even if ties are present in the preference lists.
We apply our results to give a distributed $(2+\epsilon)$-approximation
algorithm for maximum-weight matching in bicoloured graphs and a centralised
randomised constant-time approximation scheme for estimating the size of a
stable matching.
| [
{
"version": "v1",
"created": "Mon, 29 Dec 2008 11:04:46 GMT"
}
] | 2012-05-15T00:00:00 | [
[
"Floréen",
"Patrik",
""
],
[
"Kaski",
"Petteri",
""
],
[
"Polishchuk",
"Valentin",
""
],
[
"Suomela",
"Jukka",
""
]
] |
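A small Python sketch of the synchronous, truncated Gale-Shapley process studied above, with a blocking-pair counter; preference lists are assumed complete and tie-free, a simplification of the paper's setting (bounded-length lists, possibly with ties).

```python
def truncated_gale_shapley(men_pref, women_pref, rounds):
    """Run men-proposing Gale-Shapley for a fixed number of synchronous
    rounds and return the (possibly unstable) matching as {man: woman}."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    nxt = {m: 0 for m in men_pref}            # next list index to propose to
    engaged = {}                              # woman -> man
    for _ in range(rounds):
        proposals = {}
        for m in men_pref:                    # every free man proposes once
            if m not in engaged.values() and nxt[m] < len(men_pref[m]):
                w = men_pref[m][nxt[m]]
                nxt[m] += 1
                proposals.setdefault(w, []).append(m)
        for w, suitors in proposals.items():  # each woman keeps her best
            cands = suitors + ([engaged[w]] if w in engaged else [])
            engaged[w] = min(cands, key=lambda m: rank[w][m])
    return {m: w for w, m in engaged.items()}

def blocking_pairs(match, men_pref, women_pref):
    """Count pairs that both strictly prefer each other to their status quo."""
    back = {w: m for m, w in match.items()}
    def better(pref, new, cur):
        return cur is None or pref.index(new) < pref.index(cur)
    return sum(better(men_pref[m], w, match.get(m))
               and better(women_pref[w], m, back.get(w))
               for m in men_pref for w in women_pref)

men = {"a": ["X", "Y"], "b": ["X", "Y"]}
women = {"X": ["a", "b"], "Y": ["a", "b"]}
mt = truncated_gale_shapley(men, women, rounds=2)
print(mt, blocking_pairs(mt, men, women))   # {'a': 'X', 'b': 'Y'} 0
```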
0812.4905 | Jure Leskovec | Jure Leskovec, Deepayan Chakrabarti, Jon Kleinberg, Christos Faloutsos
and Zoubin Ghahramani | Kronecker Graphs: An Approach to Modeling Networks | null | null | null | null | stat.ML cs.DS physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we model networks with a mathematically tractable model that allows
for rigorous analysis of network properties? Networks exhibit a long list of
surprising properties: heavy tails for the degree distribution; small
diameters; and densification and shrinking diameters over time. Most present
network models either fail to match several of the above properties, are
complicated to analyze mathematically, or both. In this paper we propose a
generative model for networks that is both mathematically tractable and can
generate networks that have the above mentioned properties. Our main idea is to
use the Kronecker product to generate graphs that we refer to as "Kronecker
graphs".
First, we prove that Kronecker graphs naturally obey common network
properties. We also provide empirical evidence showing that Kronecker graphs
can effectively model the structure of real networks.
We then present KronFit, a fast and scalable algorithm for fitting the
Kronecker graph generation model to large real networks. A naive approach to
fitting would take super-exponential time. In contrast, KronFit takes linear
time, by exploiting the structure of Kronecker matrix multiplication and by
using statistical simulation techniques.
Experiments on large real and synthetic networks show that KronFit finds
accurate parameters that indeed mimic the properties of the target networks
very well. Once fitted, the model parameters can be used to gain insights about
the network structure, and the resulting synthetic graphs can be used for null
models, anonymization, extrapolations, and graph summarization.
| [
{
"version": "v1",
"created": "Mon, 29 Dec 2008 13:22:23 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Aug 2009 21:52:11 GMT"
}
] | 2009-08-22T00:00:00 | [
[
"Leskovec",
"Jure",
""
],
[
"Chakrabarti",
"Deepayan",
""
],
[
"Kleinberg",
"Jon",
""
],
[
"Faloutsos",
"Christos",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] |
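A direct numpy sketch of the generative model above: Kronecker-power a small initiator probability matrix and flip one coin per potential edge. This quadratic-time sampler is only illustrative (the initiator values are made up), and neither KronFit nor the linear-time fitting machinery is shown.

```python
import numpy as np

def stochastic_kronecker(theta, k, seed=0):
    """Sample an adjacency matrix from the k-th Kronecker power of the
    initiator probability matrix theta (one independent coin per edge)."""
    rng = np.random.default_rng(seed)
    P = np.array(theta, dtype=float)
    for _ in range(k - 1):
        P = np.kron(P, theta)          # edge probabilities for N^k vertices
    return (rng.random(P.shape) < P).astype(int)

theta = [[0.9, 0.5], [0.5, 0.1]]       # illustrative 2x2 initiator
A = stochastic_kronecker(theta, k=4)   # 16x16 adjacency matrix
print(A.shape, int(A.sum()), "edges")
```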
0812.4919 | Ildik\'o Schlotter | D\'aniel Marx, Ildik\'o Schlotter | Obtaining a Planar Graph by Vertex Deletion | 16 pages, 4 figures. A preliminary version of this paper appeared in
the proceedings of WG 2007 (33rd International Workshop on Graph-Theoretic
Concepts in Computer Science). The paper has been submitted to Algorithmica | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the k-Apex problem the task is to find at most k vertices whose deletion
makes the given graph planar. The graphs for which there exists a solution form
a minor-closed class of graphs; hence, by the deep results of Robertson and
Seymour, there is an O(n^3) time algorithm for every fixed value of k. However,
the proof is extremely complicated and the constants hidden by the big-O
notation are huge. Here we give a much simpler algorithm for this problem with
quadratic running time, by iteratively reducing the input graph and then
applying techniques for graphs of bounded treewidth.
| [
{
"version": "v1",
"created": "Mon, 29 Dec 2008 14:57:14 GMT"
}
] | 2008-12-31T00:00:00 | [
[
"Marx",
"Dániel",
""
],
[
"Schlotter",
"Ildikó",
""
]
] |
0812.5101 | Katarzyna Paluch | Katarzyna Paluch, Marcin Mucha, Aleksander Madry | A 7/9 - Approximation Algorithm for the Maximum Traveling Salesman
Problem | 6 figures | null | 10.1007/978-3-642-03685-9_23 | null | cs.GT cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a 7/9 - Approximation Algorithm for the Maximum Traveling Salesman
Problem.
| [
{
"version": "v1",
"created": "Tue, 30 Dec 2008 19:11:48 GMT"
}
] | 2015-05-13T00:00:00 | [
[
"Paluch",
"Katarzyna",
""
],
[
"Mucha",
"Marcin",
""
],
[
"Madry",
"Aleksander",
""
]
] |
0901.0205 | Julia Chuzhoy | Deeparnab Chakrabarty, Julia Chuzhoy, Sanjeev Khanna | On Allocating Goods to Maximize Fairness | 35 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a set of $m$ agents and a set of $n$ items, where agent $A$ has utility
$u_{A,i}$ for item $i$, our goal is to allocate items to agents to maximize
fairness. Specifically, the utility of an agent is the sum of its utilities for
items it receives, and we seek to maximize the minimum utility of any agent.
While this problem has received much attention recently, its approximability
has not been well-understood thus far: the best known approximation algorithm
achieves an $\tilde{O}(\sqrt{m})$-approximation, and in contrast, the best
known hardness of approximation stands at 2.
Our main result is an approximation algorithm that achieves an
$\tilde{O}(n^{\epsilon})$ approximation for any $\epsilon=\Omega(\log\log n/\log n)$ in
time $n^{O(1/\epsilon)}$. In particular, we obtain poly-logarithmic approximation
in quasi-polynomial time, and for any constant $\epsilon > 0$, we obtain
$O(n^{\epsilon})$ approximation in polynomial time. An interesting aspect of our
algorithm is that we use as a building block a linear program whose integrality
gap is $\Omega(\sqrt m)$. We bypass this obstacle by iteratively using the
solutions produced by the LP to construct new instances with significantly
smaller integrality gaps, eventually obtaining the desired approximation.
We also investigate the special case of the problem, where every item has a
non-zero utility for at most two agents. We show that even in this restricted
setting the problem is hard to approximate up to any factor better than 2, and
we give a factor $(2+\epsilon)$-approximation algorithm running in time
$\mathrm{poly}(n,1/\epsilon)$ for any $\epsilon>0$. This special case can be cast as a graph
edge orientation problem, and our algorithm can be viewed as a generalization
of Eulerian orientations to weighted graphs.
| [
{
"version": "v1",
"created": "Fri, 2 Jan 2009 01:24:26 GMT"
}
] | 2009-01-05T00:00:00 | [
[
"Chakrabarty",
"Deeparnab",
""
],
[
"Chuzhoy",
"Julia",
""
],
[
"Khanna",
"Sanjeev",
""
]
] |
0901.0290 | Mugurel Ionut Andreica | Mugurel Ionut Andreica, Nicolae Tapus | Offline Algorithmic Techniques for Several Content Delivery Problems in
Some Restricted Types of Distributed Systems | Proceedings of the International Workshop on High Performance Grid
Middleware (HiPerGrid), pp. 65-72, Bucharest, Romania, 21-22 November, 2008.
(ISSN: 2065-0701) | Proceedings of the International Workshop on High Performance Grid
Middleware (HiPerGrid), pp. 65-72, Bucharest, Romania, 2008. (ISSN:
2065-0701) | null | null | cs.DS cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider several content delivery problems (broadcast and
multicast, in particular) in some restricted types of distributed systems (e.g.
optical Grids and wireless sensor networks with tree-like topologies). For each
problem we provide efficient algorithmic techniques for computing optimal
content delivery strategies. The techniques we present are offline, which means
that they can be used only when full information is available and the problem
parameters do not fluctuate too much.
| [
{
"version": "v1",
"created": "Fri, 2 Jan 2009 21:53:57 GMT"
}
] | 2009-01-06T00:00:00 | [
[
"Andreica",
"Mugurel Ionut",
""
],
[
"Tapus",
"Nicolae",
""
]
] |
0901.0291 | Mugurel Ionut Andreica | Alexandra Carpen-Amarie, Mugurel Ionut Andreica, Valentin Cristea | An Algorithm for File Transfer Scheduling in Grid Environments | Proceedings of the International Workshop on High Performance Grid
Middleware (HiPerGrid), pp. 33-40, Bucharest, Romania, 21-22 November, 2008.
(ISSN: 2065-0701) | Proceedings of the International Workshop on High Performance Grid
Middleware (HiPerGrid), pp. 33-40, Bucharest, Romania, 2008. (ISSN:
2065-0701) | null | null | cs.NI cs.DC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the data transfer scheduling problem for Grid
environments, presenting a centralized scheduler developed with dynamic and
adaptive features. The algorithm offers a reservation system for user transfer
requests that allocates them transfer times and bandwidth, according to the
network topology and the constraints the user specified for the requests. This
paper presents the projects related to the data transfer field, the design of
the framework for which the scheduler was built, the main features of the
scheduler, the steps for transfer requests rescheduling and two tests that
illustrate the system's behavior for different types of transfer requests.
| [
{
"version": "v1",
"created": "Fri, 2 Jan 2009 22:03:02 GMT"
}
] | 2009-01-06T00:00:00 | [
[
"Carpen-Amarie",
"Alexandra",
""
],
[
"Andreica",
"Mugurel Ionut",
""
],
[
"Cristea",
"Valentin",
""
]
] |
0901.0501 | Stefan Kiefer | Morten K\"uhnrich, Stefan Schwoon, Ji\v{r}\'i Srba, Stefan Kiefer | Interprocedural Dataflow Analysis over Weight Domains with Infinite
Descending Chains | technical report for a FOSSACS'09 publication | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study generalized fixed-point equations over idempotent semirings and
provide an efficient algorithm for detecting whether a sequence of Kleene
iterations stabilizes after a finite number of steps. Previously known
approaches considered only bounded semirings where there are no infinite
descending chains. The main novelty of our work is that we deal with semirings
without the boundedness restriction. Our study is motivated by several
applications from interprocedural dataflow analysis. We demonstrate how the
reachability problem for weighted pushdown automata can be reduced to solving
equations in the framework mentioned above and we describe a few applications
to demonstrate its usability.
| [
{
"version": "v1",
"created": "Mon, 5 Jan 2009 16:47:21 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jan 2009 16:00:09 GMT"
}
] | 2009-01-06T00:00:00 | [
[
"Kühnrich",
"Morten",
""
],
[
"Schwoon",
"Stefan",
""
],
[
"Srba",
"Jiří",
""
],
[
"Kiefer",
"Stefan",
""
]
] |
0901.0930 | Jan Tusch | Marc M\"orig, Dieter Rautenbach, Michiel Smid, Jan Tusch | An $\Omega(n \log n)$ lower bound for computing the sum of even-ranked
elements | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a sequence A of 2n real numbers, the Even-Rank-Sum problem asks for the
sum of the n values that are at the even positions in the sorted order of the
elements in A. We prove that, in the algebraic computation-tree model, this
problem has time complexity $\Theta(n \log n)$. This solves an open problem posed
by Michael Shamos at the Canadian Conference on Computational Geometry in 2008.
| [
{
"version": "v1",
"created": "Wed, 7 Jan 2009 21:55:59 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Mar 2009 09:53:31 GMT"
}
] | 2009-03-23T00:00:00 | [
[
"Mörig",
"Marc",
""
],
[
"Rautenbach",
"Dieter",
""
],
[
"Smid",
"Michiel",
""
],
[
"Tusch",
"Jan",
""
]
] |
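The upper bound side of this result is a one-liner (sort, then sum every second element); the paper's contribution is the matching lower bound, which makes this naive Python solution asymptotically optimal in the algebraic computation-tree model.

```python
def even_rank_sum(a):
    """Sum the elements at even positions (2nd, 4th, ...) of the sorted
    order -- the obvious O(n log n) algorithm the lower bound matches."""
    return sum(sorted(a)[1::2])

print(even_rank_sum([5.0, 1.0, 4.0, 2.0]))  # sorted 1,2,4,5 -> 2 + 5 = 7.0
```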
0901.1140 | Khaled Elbassioni | Khaled Elbassioni, Rajiv Raman, Saurabh Ray, Rene Sitters | On Profit-Maximizing Pricing for the Highway and Tollbooth Problems | null | null | 10.1007/978-3-642-04645-2_25 | null | cs.DS cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the \emph{tollbooth problem}, we are given a tree $T=(V,E)$ with $n$
edges, and a set of $m$ customers, each of whom is interested in purchasing a
path on the tree. Each customer has a fixed budget, and the objective is to
price the edges of $T$ such that the total revenue made by selling the paths
to the customers that can afford them is maximized. An important special case
of this problem, known as the \emph{highway problem}, is when $T$ is
restricted to be a line.
For the tollbooth problem, we present a randomized $O(\log n)$-approximation,
improving on the current best $O(\log m)$-approximation. We also study a
special case of the tollbooth problem, when all the paths that customers are
interested in purchasing go towards a fixed root of $T$. In this case, we
present an algorithm that returns a $(1-\epsilon)$-approximation, for any
$\epsilon > 0$, and runs in quasi-polynomial time. On the other hand, we rule
out the existence of an FPTAS by showing that even for the line case, the
problem is strongly NP-hard. Finally, we show that in the \emph{coupon model},
when we allow some items to be priced below zero to improve the overall profit,
the problem becomes even APX-hard.
| [
{
"version": "v1",
"created": "Thu, 8 Jan 2009 21:23:37 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Feb 2009 14:06:17 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Mar 2009 16:35:48 GMT"
}
] | 2015-05-13T00:00:00 | [
[
"Elbassioni",
"Khaled",
""
],
[
"Raman",
"Rajiv",
""
],
[
"Ray",
"Saurabh",
""
],
[
"Sitters",
"Rene",
""
]
] |
0901.1155 | Itai Benjamini | Itai Benjamini, Yury Makarychev | Balanced allocation: Memory performance tradeoffs | Published in at http://dx.doi.org/10.1214/11-AAP804 the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org) | Annals of Applied Probability 2012, Vol. 22, No. 4, 1642-1649 | 10.1214/11-AAP804 | IMS-AAP-AAP804 | cs.DS cs.DM math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Suppose we sequentially put $n$ balls into $n$ bins. If we put each ball into
a random bin then the heaviest bin will contain ${\sim}\log n/\log\log n$ balls
with high probability. However, Azar, Broder, Karlin and Upfal [SIAM J. Comput.
29 (1999) 180--200] showed that if each time we choose two bins at random and
put the ball in the least loaded bin among the two, then the heaviest bin will
contain only ${\sim}\log\log n$ balls with high probability. How much memory do
we need to implement this scheme? We need roughly $\log\log\log n$ bits per
bin, and $n\log\log\log n$ bits in total. Let us now assume that we have a
limited amount of memory. For each ball, we are given two random bins and we
have to put the ball into one of them. Our goal is to minimize the load of the
heaviest bin. We prove that if we have $n^{1-\delta}$ bits then the heaviest
bin will contain at least $\Omega(\delta\log n/\log\log n)$ balls with high
probability. The bound is tight in the communication complexity model.
| [
{
"version": "v1",
"created": "Fri, 9 Jan 2009 00:23:33 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Sep 2012 07:22:48 GMT"
}
] | 2012-09-13T00:00:00 | [
[
"Benjamini",
"Itai",
""
],
[
"Makarychev",
"Yury",
""
]
] |
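A quick Python simulation of the two schemes contrasted above: one random bin per ball versus the less loaded of two random bins. The memory restriction that is the paper's actual subject is not modeled here.

```python
import random
from collections import Counter

def max_load(n, choices, seed=0):
    """Throw n balls into n bins, placing each ball in the least loaded of
    `choices` uniformly random bins, and return the heaviest bin's load."""
    rng = random.Random(seed)
    load = Counter()
    for _ in range(n):
        bins = [rng.randrange(n) for _ in range(choices)]
        load[min(bins, key=lambda b: load[b])] += 1
    return max(load.values())

n = 100_000
print("one choice :", max_load(n, 1))   # ~ log n / log log n
print("two choices:", max_load(n, 2))   # ~ log log n
```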
0901.1427 | Sourav Chakraborty | Sourav Chakraborty, Nikhil Devanur | An Online Multi-unit Auction with Improved Competitive Ratio | null | null | null | null | cs.GT cs.CC cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We improve the best known competitive ratio (from 1/4 to 1/2), for the online
multi-unit allocation problem, where the objective is to maximize the
single-price revenue. Moreover, the competitive ratio of our algorithm tends to
1, as the bid-profile tends to ``smoothen''. This algorithm is used as a
subroutine in designing truthful auctions for the same setting: the allocation
has to be done online, while the payments can be decided at the end of the day.
Earlier, a reduction from the auction design problem to the allocation problem
was known only for the unit-demand case. We give a reduction for the general
case when the bidders have decreasing marginal utilities. The problem is
inspired by sponsored search auctions.
| [
{
"version": "v1",
"created": "Sun, 11 Jan 2009 10:00:38 GMT"
}
] | 2009-01-13T00:00:00 | [
[
"Chakraborty",
"Sourav",
""
],
[
"Devanur",
"Nikhil",
""
]
] |
0901.1563 | Bruno Escoffier | Nicolas Bourgeois, Bruno Escoffier, Vangelis Th. Paschos, Johan M.M
van Rooij | Fast Algorithms for Max Independent Set in Graphs of Small Average
Degree | null | null | null | null | cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Max Independent Set (MIS) is a paradigmatic problem in theoretical computer
science and numerous studies tackle its resolution by exact algorithms with
non-trivial worst-case complexity. The best such complexity is, to our
knowledge, the $O^*(1.1889^n)$ algorithm claimed by J.M. Robson (T.R. 1251-01,
LaBRI, Univ. Bordeaux I, 2001) in his unpublished technical report. We also
quote the $O^*(1.2210^n)$ algorithm by Fomin et al. (in Proc. SODA'06, pages
18-25, 2006), that is the best published result about MIS.
In this paper we settle MIS in (connected) graphs with "small" average
degree, more precisely with average degree at most 3, 4, 5 and 6. Dealing with
graphs of average degree at most 3, the best bound known is the recent
$O^*(1.0977^n)$ bound by N. Bourgeois et al. (in Proc. IWPEC'08, pages 55-65,
2008). Here we improve this result down to $O^*(1.0854^n)$ by proposing finer
and more powerful reduction rules.
We then propose a generic method showing how improvement of the worst-case
complexity for MIS in graphs of average degree $d$ entails improvement of it in
any graph of average degree greater than $d$ and, based upon it, we tackle MIS
in graphs of average degree 4, 5 and 6.
For MIS in graphs with average degree 4, we provide an upper complexity bound
of $O^*(1.1571^n)$ that outperforms the best known bound of $O^*(1.1713^n)$ by
R. Beigel (Proc. SODA'99, pages 856-857, 1999).
For MIS in graphs of average degree at most 5 and 6, we provide bounds of
$O^*(1.1969^n)$ and $O^*(1.2149^n)$, respectively, that improve upon the
corresponding bounds of $O^*(1.2023^n)$ and $O^*(1.2172^n)$ in graphs of
maximum degree 5 and 6 by (Fomin et al., 2006).
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2009 12:40:32 GMT"
}
] | 2009-01-13T00:00:00 | [
[
"Bourgeois",
"Nicolas",
""
],
[
"Escoffier",
"Bruno",
""
],
[
"Paschos",
"Vangelis Th.",
""
],
[
"van Rooij",
"Johan M. M",
""
]
] |
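For contrast with the finely tuned reduction rules discussed above, the textbook branch-and-reduce skeleton for Max Independent Set in Python; it only conveys the branching structure and has a far worse exponential base than the bounds in the abstract.

```python
def mis_size(adj):
    """Size of a maximum independent set; adj maps vertex -> set of
    neighbours (symmetric). Branches on a maximum-degree vertex."""
    if not adj:
        return 0
    v = max(adj, key=lambda u: len(adj[u]))
    def remove(vs):
        return {u: nbrs - vs for u, nbrs in adj.items() if u not in vs}
    if not adj[v]:                       # isolated vertex: always take it
        return 1 + mis_size(remove({v}))
    # branch: either v is excluded, or v is taken and its neighbours drop out
    return max(mis_size(remove({v})),
               1 + mis_size(remove({v} | adj[v])))

# triangle plus an isolated vertex: optimum is 2
print(mis_size({0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}))
```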
0901.1684 | Riccardo Zecchina | M. Bayati, A. Braunstein, R. Zecchina | A rigorous analysis of the cavity equations for the minimum spanning
tree | 5 pages, 1 figure | J. Math. Phys. 49, 125206 (2008) | 10.1063/1.2982805 | null | cond-mat.stat-mech cond-mat.dis-nn cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze a new general representation for the Minimum Weight Steiner Tree
(MST) problem which translates the topological connectivity constraint into a
set of local conditions which can be analyzed by the so called cavity equations
techniques. For the limit case of the spanning tree, we prove that the fixed
point of the algorithm arising from the cavity equations leads to the global
optimum.
| [
{
"version": "v1",
"created": "Mon, 12 Jan 2009 23:07:59 GMT"
}
] | 2009-01-14T00:00:00 | [
[
"Bayati",
"M.",
""
],
[
"Braunstein",
"A.",
""
],
[
"Zecchina",
"R.",
""
]
] |
0901.1696 | Julien Langou | Fred G. Gustavson, Jerzy Wasniewski, Jack J. Dongarra and Julien
Langou | Rectangular Full Packed Format for Cholesky's Algorithm: Factorization,
Solution and Inversion | null | null | null | null | cs.MS cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a new data format for storing triangular, symmetric, and
Hermitian matrices called RFPF (Rectangular Full Packed Format). The standard
two dimensional arrays of Fortran and C (also known as full format) that are
used to represent triangular and symmetric matrices waste nearly half of the
storage space but provide high performance via the use of Level 3 BLAS.
Standard packed format arrays fully utilize storage (array space) but provide
low performance as there is no Level 3 packed BLAS. We combine the good
features of packed and full storage using RFPF to obtain high performance via
using Level 3 BLAS as RFPF is a standard full format representation. Also, RFPF
requires exactly the same minimal storage as packed format. Each LAPACK full
and/or packed triangular, symmetric, and Hermitian routine becomes a single new
RFPF routine based on eight possible data layouts of RFPF. This new RFPF
routine usually consists of two calls to the corresponding LAPACK full format
routine and two calls to Level 3 BLAS routines. This means \emph{no} new
software is required. As examples, we present LAPACK routines for Cholesky
factorization, Cholesky solution and Cholesky inverse computation in RFPF to
illustrate this new work and to describe its performance on several commonly
used computer platforms. Performance of LAPACK full routines using RFPF versus
LAPACK full routines using standard format for both serial and SMP parallel
processing is about the same while using half the storage. Performance gains
are roughly one to a factor of 43 for serial and one to a factor of 97 for SMP
parallel times faster using vendor LAPACK full routines with RFPF than with
using vendor and/or reference packed routines.
| [
{
"version": "v1",
"created": "Tue, 13 Jan 2009 01:08:27 GMT"
}
] | 2009-01-14T00:00:00 | [
[
"Gustavson",
"Fred G.",
""
],
[
"Wasniewski",
"Jerzy",
""
],
[
"Dongarra",
"Jack J.",
""
],
[
"Langou",
"Julien",
""
]
] |
0901.1761 | Beat Gfeller | Beat Gfeller, Peter Sanders | Towards Optimal Range Medians | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the following problem: given an unsorted array of $n$ elements,
and a sequence of intervals in the array, compute the median in each of the
subarrays defined by the intervals. We describe a simple algorithm which uses
$O(n)$ space and needs $O(n\log k + k\log n)$ time to answer the first $k$
queries. This improves previous algorithms by a logarithmic factor and matches
a lower bound for $k=O(n)$.
Since the algorithm decomposes the range of element values rather than the
array, it has natural generalizations to higher dimensional problems -- it
reduces a range median query to a logarithmic number of range counting queries.
| [
{
"version": "v1",
"created": "Tue, 13 Jan 2009 14:46:50 GMT"
}
] | 2009-01-14T00:00:00 | [
[
"Gfeller",
"Beat",
""
],
[
"Sanders",
"Peter",
""
]
] |
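The reduction mentioned in this abstract is easy to make concrete: binary search the value space, issuing a range-counting query at each step. In the Python sketch below the counting step is deliberately naive (O(n) per count); substituting a genuine O(log n) range-counting structure yields the stated bounds.

```python
def range_median(a, queries):
    """Answer lower-median queries on subarrays a[lo..hi] (inclusive) by
    binary searching the sorted value space with range-counting probes."""
    vals = sorted(set(a))
    out = []
    for lo, hi in queries:
        need = (hi - lo + 2) // 2              # rank of the lower median
        left, right = 0, len(vals) - 1
        while left < right:
            mid = (left + right) // 2
            # naive range count: elements of a[lo..hi] that are <= vals[mid]
            cnt = sum(1 for x in a[lo:hi + 1] if x <= vals[mid])
            if cnt >= need:
                right = mid
            else:
                left = mid + 1
        out.append(vals[left])
    return out

print(range_median([3, 1, 4, 1, 5, 9, 2, 6], [(0, 3), (2, 6)]))  # [1, 4]
```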
0901.1849 | David Doty | David Doty | Randomized Self-Assembly for Exact Shapes | Conference version accepted to FOCS 2009. Present version accepted to
SIAM Journal on Computing, which adds new sections on arbitrary scaled
shapes, smooth trade-off between specifying bits of n through concentrations
versus hardcoded tile types, and construction that uses concentrations
arbitrarily close to uniform to fix potential thermodynamic problems with
model | null | null | null | cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Working in Winfree's abstract tile assembly model, we show that a
constant-size tile assembly system can be programmed through relative tile
concentrations to build an n x n square with high probability, for any
sufficiently large n. This answers an open question of Kao and Schweller
(Randomized Self-Assembly for Approximate Shapes, ICALP 2008), who showed how
to build an approximately n x n square using tile concentration programming,
and asked whether the approximation could be made exact with high probability.
We show how this technique can be modified to answer another question of Kao
and Schweller, by showing that a constant-size tile assembly system can be
programmed through tile concentrations to assemble arbitrary finite *scaled
shapes*, which are shapes modified by replacing each point with a c x c block
of points, for some integer c. Furthermore, we exhibit a smooth tradeoff
between specifying bits of n via tile concentrations versus specifying them via
hard-coded tile types, which allows tile concentration programming to be
employed for specifying a fraction of the bits of "input" to a tile assembly
system, under the constraint that concentrations can only be specified to a
limited precision. Finally, to account for some unrealistic aspects of the tile
concentration programming model, we show how to modify the construction to use
only concentrations that are arbitrarily close to uniform.
| [
{
"version": "v1",
"created": "Tue, 13 Jan 2009 20:55:01 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jan 2009 20:06:49 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Feb 2009 05:57:06 GMT"
},
{
"version": "v4",
"created": "Mon, 5 Oct 2009 18:10:14 GMT"
},
{
"version": "v5",
"created": "Mon, 5 Oct 2009 20:16:03 GMT"
},
{
"version": "v6",
"created": "Fri, 16 Jul 2010 18:24:31 GMT"
}
] | 2015-03-13T00:00:00 | [
[
"Doty",
"David",
""
]
] |
0901.1886 | Frederic Didier | Frederic Didier | Efficient erasure decoding of Reed-Solomon codes | 4 pages, submitted to ISIT 2009 | null | null | null | cs.IT cs.DS math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a practical algorithm to decode erasures of Reed-Solomon codes
over the binary extension field with q elements in $O(q \log_2^2 q)$ time,
where the constant implied by the O-notation is very small. Asymptotically fast
algorithms based on fast polynomial arithmetic were already known, but even
though their complexity is similar, they are mostly impractical. By comparison, our algorithm uses only
a few Walsh transforms and has been easily implemented.
| [
{
"version": "v1",
"created": "Wed, 14 Jan 2009 17:05:50 GMT"
}
] | 2009-01-15T00:00:00 | [
[
"Didier",
"Frederic",
""
]
] |
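The decoder above is assembled from a few Walsh transforms; as background, the standard in-place fast Walsh-Hadamard transform runs in O(q log q) additions, sketched below in Python. The Reed-Solomon-specific steps of the paper are not reproduced.

```python
def fwht(a):
    """Fast Walsh-Hadamard transform (unnormalized), O(q log q) for q a
    power of two; only the transform, not the paper's erasure decoder."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):          # butterfly on (j, j+h)
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

print(fwht([1, 0, 1, 0]))  # [2, 2, 0, 0]
```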
0901.1908 | Pat Morin | Sebastien Collette, Vida Dujmovic, John Iacono, Stefan Langerman, and
Pat Morin | Entropy, Triangulation, and Point Location in Planar Subdivisions | 19 pages, 4 figures, lots of formulas | ACM Transactions on Algorithms (TALG), Volume 8 Issue 3, July 2012
Article No. 29 | 10.1145/2229163.2229173 | null | cs.CG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A data structure is presented for point location in connected planar
subdivisions when the distribution of queries is known in advance. The data
structure has an expected query time that is within a constant factor of
optimal. More specifically, an algorithm is presented that preprocesses a
connected planar subdivision G of size n and a query distribution D to produce
a point location data structure for G. The expected number of point-line
comparisons performed by this data structure, when the queries are distributed
according to D, is H + O(H^{2/3}+1) where H=H(G,D) is a lower bound on the
expected number of point-line comparisons performed by any linear decision tree
for point location in G under the query distribution D. The preprocessing
algorithm runs in O(n log n) time and produces a data structure of size O(n).
These results are obtained by creating a Steiner triangulation of G that has
near-minimum entropy.
| [
{
"version": "v1",
"created": "Tue, 13 Jan 2009 23:39:46 GMT"
}
] | 2013-03-12T00:00:00 | [
[
"Collette",
"Sebastien",
""
],
[
"Dujmovic",
"Vida",
""
],
[
"Iacono",
"John",
""
],
[
"Langerman",
"Stefan",
""
],
[
"Morin",
"Pat",
""
]
] |
0901.2151 | Bogdan Danila | Yudong Sun, Bogdan Danila, Kresimir Josic, and Kevin E. Bassler | Improved community structure detection using a modified fine tuning
strategy | 6 pages, 3 figures, 1 table | null | 10.1209/0295-5075/86/28004 | null | cs.CY cond-mat.stat-mech cs.DS physics.comp-ph physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The community structure of a complex network can be determined by finding the
partitioning of its nodes that maximizes modularity. Many of the proposed
algorithms for doing this work by recursively bisecting the network. We show
that this unduly constrains their results, leading to a bias in the size of
the communities they find and limiting their effectiveness. To solve this
problem, we propose adding a step to the existing algorithms that does not
increase the order of their computational complexity. We show that, if this
step is combined with a commonly used method, the identified constraint and
resulting bias are removed, and its ability to find the optimal partitioning is
improved. The effectiveness of this combined algorithm is also demonstrated by
using it on real-world example networks. For a number of these examples, it
achieves the best results of any known algorithm.
| [
{
"version": "v1",
"created": "Thu, 15 Jan 2009 00:40:26 GMT"
}
] | 2015-05-13T00:00:00 | [
[
"Sun",
"Yudong",
""
],
[
"Danila",
"Bogdan",
""
],
[
"Josic",
"Kresimir",
""
],
[
"Bassler",
"Kevin E.",
""
]
] |