Dataset schema (one record per paper, fields in order):
id: string (9-16 chars)
submitter: string (4-52 chars)
authors: string (4-937 chars)
title: string (7-243 chars)
comments: string (1-472 chars)
journal-ref: string (4-244 chars)
doi: string (14-55 chars)
report-no: string (3-125 chars)
categories: string (5-97 chars)
license: string (9 distinct values)
abstract: string (33-2.95k chars)
versions: list
update_date: timestamp[s]
authors_parsed: sequence
0710.3642
Florent Bouchez
Florent Bouchez (LIP), Alain Darte (LIP), Fabrice Rastello (LIP)
On the Complexity of Spill Everywhere under SSA Form
10 pages
ACM SIGPLAN Notices Issue 7, Volume 42 (2007) 103 - 112
10.1145/1254766.1254782
null
cs.DS cs.CC
null
Compilation for embedded processors can be either aggressive (time-consuming cross-compilation) or just in time (embedded and usually dynamic). The heuristics used in dynamic compilation are highly constrained by limited resources, time and memory in particular. Recent results on the SSA form open promising directions for the design of new register allocation heuristics for embedded systems and especially for embedded compilation. In particular, heuristics based on tree scan with two separate phases -- one for spilling, then one for coloring/coalescing -- seem good candidates for designing memory-friendly, fast, and competitive register allocators. Still, also because of its side effect on power consumption, minimizing the overhead of loads and stores (the spilling problem) is an important issue. This paper provides an exhaustive study of the complexity of the ``spill everywhere'' problem in the context of the SSA form. Unfortunately, contrary to our initial hopes, many of the questions we raised lead to NP-completeness results. We identify some polynomial cases, but they are impractical in a JIT context. Nevertheless, they can give hints for simplifying the formulations used in the design of aggressive allocators.
[ { "version": "v1", "created": "Fri, 19 Oct 2007 07:24:58 GMT" } ]
2009-09-18T00:00:00
[ [ "Bouchez", "Florent", "", "LIP" ], [ "Darte", "Alain", "", "LIP" ], [ "Rastello", "Fabrice", "", "LIP" ] ]
0710.3824
Sebastien Tixeuil
Sylvie Dela\"et (LRI), Partha Sarathi Mandal (INRIA Futurs), Mariusz Rokicki (LRI), S\'ebastien Tixeuil (INRIA Futurs, LIP6)
Deterministic Secure Positioning in Wireless Sensor Networks
null
null
null
null
cs.CR cs.DC cs.DS cs.NI
null
Properly locating sensor nodes is an important building block for a large subset of wireless sensor networks (WSN) applications. As a result, the performance of the WSN degrades significantly when misbehaving nodes report false location and distance information in order to fake their actual location. In this paper we propose a general distributed deterministic protocol for accurate identification of faking sensors in a WSN. Our scheme does \emph{not} rely on a subset of \emph{trusted} nodes that are not allowed to misbehave and are known to every node in the network. Thus, any subset of nodes is allowed to try faking its position. As in previous approaches, our protocol is based on distance evaluation techniques developed for WSN. On the positive side, we show that when the received signal strength (RSS) technique is used, our protocol handles at most $\lfloor \frac{n}{2} \rfloor-2$ faking sensors. Also, when the time of flight (ToF) technique is used, our protocol manages at most $\lfloor \frac{n}{2} \rfloor - 3$ misbehaving sensors. On the negative side, we prove that no deterministic protocol can identify faking sensors if their number is $\lceil \frac{n}{2}\rceil -1$. Thus our scheme is almost optimal with respect to the number of faking sensors. We discuss application of our technique in the trusted sensor model. More precisely our results can be used to minimize the number of trusted sensors that are needed to defeat faking ones.
[ { "version": "v1", "created": "Mon, 22 Oct 2007 07:29:13 GMT" } ]
2007-10-23T00:00:00
[ [ "Delaët", "Sylvie", "", "LRI" ], [ "Mandal", "Partha Sarathi", "", "INRIA Futurs" ], [ "Rokicki", "Mariusz", "", "LRI" ], [ "Tixeuil", "Sébastien", "", "INRIA Futurs, LIP6" ] ]
0710.4410
Paul Zimmermann
Richard Brent, Paul Zimmermann (INRIA Lorraine - LORIA)
A Multi-level Blocking Distinct Degree Factorization Algorithm
null
Contemporary Mathematics 461 (2008) 47-58
null
INRIA Tech. Report RR-6331, Oct. 2007
cs.DS
null
We give a new algorithm for performing the distinct-degree factorization of a polynomial P(x) over GF(2), using a multi-level blocking strategy. The coarsest level of blocking replaces GCD computations by multiplications, as suggested by Pollard (1975), von zur Gathen and Shoup (1992), and others. The novelty of our approach is that a finer level of blocking replaces multiplications by squarings, which speeds up the computation in GF(2)[x]/P(x) of certain interval polynomials when P(x) is sparse. As an application we give a fast algorithm to search for all irreducible trinomials x^r + x^s + 1 of degree r over GF(2), while producing a certificate that can be checked in less time than the full search. Naive algorithms cost O(r^2) per trinomial, thus O(r^3) to search over all trinomials of given degree r. Under a plausible assumption about the distribution of factors of trinomials, the new algorithm has complexity O(r^2 (log r)^{3/2}(log log r)^{1/2}) for the search over all trinomials of degree r. Our implementation achieves a speedup of greater than a factor of 560 over the naive algorithm in the case r = 24036583 (a Mersenne exponent). Using our program, we have found two new primitive trinomials of degree 24036583 over GF(2) (the previous record degree was 6972593).
[ { "version": "v1", "created": "Wed, 24 Oct 2007 09:18:33 GMT" } ]
2010-04-20T00:00:00
[ [ "Brent", "Richard", "", "INRIA Lorraine - LORIA" ], [ "Zimmermann", "Paul", "", "INRIA Lorraine - LORIA" ] ]
0710.5547
Miguel Angel Miron C.E.
M. Miron Bernal, H. Coyote Estrada, J. Figueroa Nazuno
Code Similarity on High Level Programs
Proceedings of the 18th Autumn Meeting on Communications, Computers, Electronics and Industrial Exposition. (IEEE - ROCC07). Acapulco, Guerrero, Mexico. 2007
null
null
null
cs.CV cs.DS
null
This paper presents a new approach to code similarity for high-level programs. Our technique is based on Fast Dynamic Time Warping, which builds a warp path, i.e., a relation between points, under local restrictions. The source code is represented as a time series using the operators of the programming language, which makes the comparison possible and enables the detection of subsequences that represent similar code instructions. In contrast with other code similarity algorithms, we do not perform feature extraction. The experiments show that two source codes are similar when their respective time series are similar.
[ { "version": "v1", "created": "Mon, 29 Oct 2007 22:39:21 GMT" } ]
2007-10-31T00:00:00
[ [ "Bernal", "M. Miron", "" ], [ "Estrada", "H. Coyote", "" ], [ "Nazuno", "J. Figueroa", "" ] ]
0711.0086
Sergey Gubin
Sergey Gubin
Convex and linear models of NP-problems
In part, the results were presented at WCECS 2007/ICCSA 2007. V2 edited
null
null
null
cs.DM cs.CC cs.DS math.CO
null
We reduce NP problems to convex/linear analysis on the Birkhoff polytope.
[ { "version": "v1", "created": "Thu, 1 Nov 2007 08:33:07 GMT" }, { "version": "v2", "created": "Sun, 4 Nov 2007 06:11:22 GMT" } ]
2007-11-04T00:00:00
[ [ "Gubin", "Sergey", "" ] ]
0711.0189
Ulrike von Luxburg
Ulrike von Luxburg
A Tutorial on Spectral Clustering
null
Statistics and Computing 17(4), 2007
null
null
cs.DS cs.LG
null
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
[ { "version": "v1", "created": "Thu, 1 Nov 2007 19:04:43 GMT" } ]
2007-11-02T00:00:00
[ [ "von Luxburg", "Ulrike", "" ] ]
0711.0251
Rogers Mathew
Telikepalli Kavitha and Rogers Mathew
Faster Algorithms for Online Topological Ordering
null
null
null
IISC-CSA-TR-2007-12
cs.DS
null
We present two algorithms for maintaining the topological order of a directed acyclic graph with n vertices, under an online edge insertion sequence of m edges. Efficient algorithms for online topological ordering have many applications, including online cycle detection, which is to discover the first edge that introduces a cycle under an arbitrary sequence of edge insertions in a directed graph. In this paper we present efficient algorithms for the online topological ordering problem. We first present a simple algorithm with running time O(n^{5/2}) for the online topological ordering problem. This is the current fastest algorithm for this problem on dense graphs, i.e., when m > n^{5/3}. We then present an algorithm with running time O((m + n log n)\sqrt{m}); this is more efficient for sparse graphs. Our results yield an improved upper bound of O(min(n^{5/2}, (m + n log n)\sqrt{m})) for the online topological ordering problem.
[ { "version": "v1", "created": "Fri, 2 Nov 2007 06:42:43 GMT" } ]
2007-11-05T00:00:00
[ [ "Kavitha", "Telikepalli", "" ], [ "Mathew", "Rogers", "" ] ]
0711.0311
H. Georg Buesching
H. Georg Buesching
Improving the LP bound of a MILP by branching concurrently
21 pages of a possibly new theory submitted by a hobby researcher. Uses algorithmic.sty
null
null
null
cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We measure the differences of the dual variables and the gain of the objective function when creating new problems, each of which has one inequality more than the starting LP instance. These differences of the dual variables are naturally connected to the branches. We then choose those differences of dual variables so that, for all combinations of choices at the connected branches, all dual inequalities are guaranteed to hold. By adding up the gain of each chosen branching, we get a total gain, which gives a better bound for the original problem. This technique can also be used to create cuts.
[ { "version": "v1", "created": "Fri, 2 Nov 2007 13:57:41 GMT" }, { "version": "v2", "created": "Fri, 21 Nov 2008 20:33:30 GMT" } ]
2008-11-21T00:00:00
[ [ "Buesching", "H. Georg", "" ] ]
0711.1055
Klas Olof Daniel Andersson
Daniel Andersson, Kristoffer Arnsfelt Hansen, Peter Bro Miltersen, Troels Bjerre Sorensen
Simple Recursive Games
null
null
null
null
cs.GT cs.DS
null
We define the class of "simple recursive games". A simple recursive game is defined as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving simple recursive games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
[ { "version": "v1", "created": "Wed, 7 Nov 2007 10:23:47 GMT" } ]
2007-11-08T00:00:00
[ [ "Andersson", "Daniel", "" ], [ "Hansen", "Kristoffer Arnsfelt", "" ], [ "Miltersen", "Peter Bro", "" ], [ "Sorensen", "Troels Bjerre", "" ] ]
0711.1682
Loukas Georgiadis
Loukas Georgiadis, Haim Kaplan, Nira Shafrir, Robert E. Tarjan, Renato F. Werneck
Data Structures for Mergeable Trees
null
null
null
null
cs.DS
null
Motivated by an application in computational topology, we consider a novel variant of the problem of efficiently maintaining dynamic rooted trees. This variant requires merging two paths in a single operation. In contrast to the standard problem, in which only one tree arc changes at a time, a single merge operation can change many arcs. In spite of this, we develop a data structure that supports merges on an n-node forest in O(log^2 n) amortized time and all other standard tree operations in O(log n) time (amortized, worst-case, or randomized depending on the underlying data structure). For the special case that occurs in the motivating application, in which arbitrary arc deletions (cuts) are not allowed, we give a data structure with an O(log n) time bound per operation. This is asymptotically optimal under certain assumptions. For the even-more special case in which both cuts and parent queries are disallowed, we give an alternative O(log n)-time solution that uses standard dynamic trees as a black box. This solution also applies to the motivating application. Our methods use previous work on dynamic trees in various ways, but the analysis of each algorithm requires novel ideas. We also investigate lower bounds for the problem under various assumptions.
[ { "version": "v1", "created": "Sun, 11 Nov 2007 21:28:20 GMT" } ]
2007-11-13T00:00:00
[ [ "Georgiadis", "Loukas", "" ], [ "Kaplan", "Haim", "" ], [ "Shafrir", "Nira", "" ], [ "Tarjan", "Robert E.", "" ], [ "Werneck", "Renato F.", "" ] ]
0711.2157
Bodo Manthey
Bodo Manthey
On Approximating Multi-Criteria TSP
Preliminary version at STACS 2009. This paper is a revised full version, where some proofs are simplified
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + eps.
[ { "version": "v1", "created": "Wed, 14 Nov 2007 10:53:49 GMT" }, { "version": "v2", "created": "Wed, 19 Nov 2008 09:20:10 GMT" }, { "version": "v3", "created": "Wed, 13 Jul 2011 12:29:45 GMT" } ]
2011-07-14T00:00:00
[ [ "Manthey", "Bodo", "" ] ]
0711.2399
Alexander Tiskin
Vladimir Deineko and Alexander Tiskin
Minimum-weight double-tree shortcutting for Metric TSP: Bounding the approximation ratio
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Metric Traveling Salesman Problem (TSP) is a classical NP-hard optimization problem. The double-tree shortcutting method for Metric TSP yields an exponentially-sized space of TSP tours, each of which approximates the optimal solution within at most a factor of 2. We consider the problem of finding among these tours the one that gives the closest approximation, i.e.\ the \emph{minimum-weight double-tree shortcutting}. Previously, we gave an efficient algorithm for this problem, and carried out its experimental analysis. In this paper, we address the related question of the worst-case approximation ratio for the minimum-weight double-tree shortcutting method. In particular, we give lower bounds on the approximation ratio in some specific metric spaces: the ratio of 2 in the discrete shortest path metric, 1.622 in the planar Euclidean metric, and 1.666 in the planar Minkowski metric. The first of these lower bounds is tight; we conjecture that the other two bounds are also tight, and in particular that the minimum-weight double-tree method provides a 1.622-approximation for planar Euclidean TSP.
[ { "version": "v1", "created": "Thu, 15 Nov 2007 13:19:01 GMT" }, { "version": "v2", "created": "Tue, 16 Dec 2008 11:58:25 GMT" }, { "version": "v3", "created": "Sun, 28 Dec 2008 17:28:18 GMT" } ]
2008-12-30T00:00:00
[ [ "Deineko", "Vladimir", "" ], [ "Tiskin", "Alexander", "" ] ]
0711.2585
Petteri Kaski
Andreas Bj\"orklund, Thore Husfeldt, Petteri Kaski, Mikko Koivisto
Computing the Tutte polynomial in vertex-exponential time
null
null
null
null
cs.DS cond-mat.stat-mech math.CO
null
The deletion--contraction algorithm is perhaps the most popular method for computing a host of fundamental graph invariants such as the chromatic, flow, and reliability polynomials in graph theory, the Jones polynomial of an alternating link in knot theory, and the partition functions of the models of Ising, Potts, and Fortuin--Kasteleyn in statistical physics. Prior to this work, deletion--contraction was also the fastest known general-purpose algorithm for these invariants, running in time roughly proportional to the number of spanning trees in the input graph. Here, we give a substantially faster algorithm that computes the Tutte polynomial--and hence, all the aforementioned invariants and more--of an arbitrary graph in time within a polynomial factor of the number of connected vertex sets. The algorithm actually evaluates a multivariate generalization of the Tutte polynomial by making use of an identity due to Fortuin and Kasteleyn. We also provide a polynomial-space variant of the algorithm and give an analogous result for Chung and Graham's cover polynomial. An implementation of the algorithm outperforms deletion--contraction also in practice.
[ { "version": "v1", "created": "Fri, 16 Nov 2007 10:51:10 GMT" }, { "version": "v2", "created": "Mon, 19 Nov 2007 10:41:46 GMT" }, { "version": "v3", "created": "Mon, 14 Jan 2008 16:06:31 GMT" }, { "version": "v4", "created": "Mon, 14 Apr 2008 10:31:54 GMT" } ]
2008-04-14T00:00:00
[ [ "Björklund", "Andreas", "" ], [ "Husfeldt", "Thore", "" ], [ "Kaski", "Petteri", "" ], [ "Koivisto", "Mikko", "" ] ]
0711.2710
Bernhard Haeupler
Bernhard Haeupler and Robert E. Tarjan
Finding a Feasible Flow in a Strongly Connected Network
4 pages, submitted to Operations Research Letters, minor updates: typos corrected, speed-up = improvement of the worst-case time bound
null
null
null
cs.DS
null
We consider the problem of finding a feasible single-commodity flow in a strongly connected network with fixed supplies and demands, provided that the sum of supplies equals the sum of demands and the minimum arc capacity is at least this sum. A fast algorithm for this problem improves the worst-case time bound of the Goldberg-Rao maximum flow method by a constant factor. Erlebach and Hagerup gave a linear-time feasible flow algorithm. We give an arguably simpler one.
[ { "version": "v1", "created": "Sat, 17 Nov 2007 01:59:53 GMT" }, { "version": "v2", "created": "Mon, 3 Dec 2007 15:34:37 GMT" } ]
2007-12-03T00:00:00
[ [ "Haeupler", "Bernhard", "" ], [ "Tarjan", "Robert E.", "" ] ]
0711.3250
Venkata Seshu Kumar Kurapati Mr
Venkata Seshu Kumar Kurapati
Improved Fully Dynamic Reachability Algorithm for Directed Graph
null
null
null
null
cs.DS
null
We propose a fully dynamic algorithm for maintaining reachability information in directed graphs. The proposed deterministic dynamic algorithm has an update time of $O((ins*n^{2}) + (del * (m+n*log(n))))$ where $m$ is the current number of edges, $n$ is the number of vertices in the graph, $ins$ is the number of edge insertions and $del$ is the number of edge deletions. Each query can be answered in O(1) time after each update. The proposed algorithm combines an existing fully dynamic reachability algorithm with the well-known witness counting technique to improve the efficiency of maintaining reachability information when edges are deleted. The proposed algorithm improves by a factor of $O(\frac{n^2}{m+n*log(n)})$ for edge deletion over the best existing fully dynamic algorithm for maintaining reachability information.
[ { "version": "v1", "created": "Wed, 21 Nov 2007 03:22:12 GMT" } ]
2007-11-22T00:00:00
[ [ "Kurapati", "Venkata Seshu Kumar", "" ] ]
0711.3672
Sebastien Tixeuil
St\'ephane Devismes (LRI), S\'ebastien Tixeuil (INRIA Futurs, LIP6), Masafumi Yamashita (TCSG)
Weak vs. Self vs. Probabilistic Stabilization
null
null
null
null
cs.DC cs.DS cs.NI
null
Self-stabilization is a strong property that guarantees that a network always resumes correct behavior starting from an arbitrary initial state. Weaker guarantees have later been introduced to cope with impossibility results: probabilistic stabilization only gives probabilistic convergence to a correct behavior. Also, weak stabilization only gives the possibility of convergence. In this paper, we investigate the relative power of weak, self, and probabilistic stabilization, with respect to the set of problems that can be solved. We formally prove that in that sense, weak stabilization is strictly stronger than self-stabilization. Also, we refine previous results on weak stabilization to prove that, for practical schedule instances, a deterministic weak-stabilizing protocol can be turned into a probabilistic self-stabilizing one. This latter result hints at a more practical use of weak stabilization, as such algorithms are easier to design and prove than their (probabilistic) self-stabilizing counterparts.
[ { "version": "v1", "created": "Fri, 23 Nov 2007 07:17:25 GMT" }, { "version": "v2", "created": "Mon, 26 Nov 2007 11:08:34 GMT" } ]
2009-09-29T00:00:00
[ [ "Devismes", "Stéphane", "", "LRI" ], [ "Tixeuil", "Sébastien", "", "INRIA Futurs, LIP6" ], [ "Yamashita", "Masafumi", "", "TCSG" ] ]
0711.3861
Kamesh Munagala
Sudipto Guha, Kamesh Munagala and Peng Shi
Approximation Algorithms for Restless Bandit Problems
Merges two papers appearing in the FOCS '07 and SODA '09 conferences. This final version has been submitted for journal publication
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The restless bandit problem is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit problem in decision theory. In its ultimate generality, the restless bandit problem is known to be PSPACE-Hard to approximate to any non-trivial factor, and little progress has been made despite its importance in modeling activity allocation under uncertainty. We consider a special case that we call Feedback MAB, where the reward obtained by playing each of n independent arms varies according to an underlying on/off Markov process whose exact state is only revealed when the arm is played. The goal is to design a policy for playing the arms in order to maximize the infinite horizon time average expected reward. This problem is also an instance of a Partially Observable Markov Decision Process (POMDP), and is widely studied in wireless scheduling and unmanned aerial vehicle (UAV) routing. Unlike the stochastic MAB problem, the Feedback MAB problem does not admit to greedy index-based optimal policies. We develop a novel and general duality-based algorithmic technique that yields a surprisingly simple and intuitive 2+epsilon-approximate greedy policy to this problem. We then define a general sub-class of restless bandit problems that we term Monotone bandits, for which our policy is a 2-approximation. Our technique is robust enough to handle generalizations of these problems to incorporate various side-constraints such as blocking plays and switching costs. This technique is also of independent interest for other restless bandit problems. By presenting the first (and efficient) O(1) approximations for non-trivial instances of restless bandits as well as of POMDPs, our work initiates the study of approximation algorithms in both these contexts.
[ { "version": "v1", "created": "Sun, 25 Nov 2007 18:01:35 GMT" }, { "version": "v2", "created": "Fri, 11 Apr 2008 13:42:55 GMT" }, { "version": "v3", "created": "Sat, 12 Jul 2008 09:16:54 GMT" }, { "version": "v4", "created": "Tue, 27 Jan 2009 17:07:14 GMT" }, { "version": "v5", "created": "Tue, 3 Feb 2009 17:39:36 GMT" } ]
2009-02-03T00:00:00
[ [ "Guha", "Sudipto", "" ], [ "Munagala", "Kamesh", "" ], [ "Shi", "Peng", "" ] ]
0711.4052
Paul Bonsma
Paul Bonsma and Frederic Dorn
An FPT Algorithm for Directed Spanning k-Leaf
17 pages, 8 figures
null
null
2007-046
cs.DS cs.DM
null
An out-branching of a directed graph is a rooted spanning tree with all arcs directed outwards from the root. We consider the problem of deciding whether a given directed graph D has an out-branching with at least k leaves (Directed Spanning k-Leaf). We prove that this problem is fixed parameter tractable, when k is chosen as the parameter. Previously this was only known for restricted classes of directed graphs. The main new ingredient in our approach is a lemma that shows that given a locally optimal out-branching of a directed graph in which every arc is part of at least one out-branching, either an out-branching with at least k leaves exists, or a path decomposition with width O(k^3) can be found. This enables a dynamic programming based algorithm of running time 2^{O(k^3 \log k)} n^{O(1)}, where n=|V(D)|.
[ { "version": "v1", "created": "Mon, 26 Nov 2007 17:05:38 GMT" } ]
2007-11-27T00:00:00
[ [ "Bonsma", "Paul", "" ], [ "Dorn", "Frederic", "" ] ]
0711.4573
Mathieu Raffinot
Pierre Charbit (LIAFA), Michel Habib (LIAFA), Vincent Limouzy (LIAFA), Fabien De Montgolfier (LIAFA), Mathieu Raffinot (LIAFA), Micha\"el Rao (LIRMM)
A Note On Computing Set Overlap Classes
null
null
null
null
cs.DS
null
Let ${\cal V}$ be a finite set of $n$ elements and ${\cal F}=\{X_1,X_2, ..., X_m\}$ a family of $m$ subsets of ${\cal V}.$ Two sets $X_i$ and $X_j$ of ${\cal F}$ overlap if $X_i \cap X_j \neq \emptyset,$ $X_j \setminus X_i \neq \emptyset,$ and $X_i \setminus X_j \neq \emptyset.$ Two sets $X,Y\in {\cal F}$ are in the same overlap class if there is a series $X=X_1,X_2, ..., X_k=Y$ of sets of ${\cal F}$ in which $X_i$ and $X_{i+1}$ overlap for each $i$. In this note, we focus on efficiently identifying all overlap classes in $O(n+\sum_{i=1}^m |X_i|)$ time. We thus revisit the clever algorithm of Dahlhaus, of which we give a clear presentation and which we simplify to make it practical and implementable within its real worst-case complexity. A useful variant of Dahlhaus's approach is also explained.
[ { "version": "v1", "created": "Wed, 28 Nov 2007 20:07:46 GMT" } ]
2007-11-29T00:00:00
[ [ "Charbit", "Pierre", "", "LIAFA" ], [ "Habib", "Michel", "", "LIAFA" ], [ "Limouzy", "Vincent", "", "LIAFA" ], [ "De Montgolfier", "Fabien", "", "LIAFA" ], [ "Raffinot", "Mathieu", "", "LIAFA" ], [ "Rao", "Michaël", "", "LIRMM" ] ]
0711.4825
Nitish Korula
Chandra Chekuri, Nitish Korula
Approximation Algorithms for Orienteering with Time Windows
10 pages, 2 figures
null
null
null
cs.DS
null
Orienteering is the following optimization problem: given an edge-weighted graph (directed or undirected), two nodes s,t and a time limit T, find an s-t walk of total length at most T that maximizes the number of distinct nodes visited by the walk. One obtains a generalization, namely orienteering with time-windows (also referred to as TSP with time-windows), if each node v has a specified time-window [R(v), D(v)] and a node v is counted as visited by the walk only if v is visited during its time-window. For the time-window problem, an O(\log \opt) approximation can be achieved even for directed graphs if the algorithm is allowed quasi-polynomial time. However, the best known polynomial time approximation ratios are O(\log^2 \opt) for undirected graphs and O(\log^4 \opt) in directed graphs. In this paper we make some progress towards closing this discrepancy, and in the process obtain improved approximation ratios in several natural settings. Let L(v) = D(v) - R(v) denote the length of the time-window for v and let \lmax = \max_v L(v) and \lmin = \min_v L(v). Our results are given below with \alpha denoting the known approximation ratio for orienteering (without time-windows). Currently \alpha = (2+\eps) for undirected graphs and \alpha = O(\log^2 \opt) in directed graphs. 1. An O(\alpha \log \lmax) approximation when R(v) and D(v) are integer valued for each v. 2. An O(\alpha \max\{\log \opt, \log \frac{\lmax}{\lmin}\}) approximation. 3. An O(\alpha \log \frac{\lmax}{\lmin}) approximation when no start and end points are specified. In particular, if \frac{\lmax}{\lmin} is poly-bounded, we obtain an O(\log n) approximation for the time-window problem in undirected graphs.
[ { "version": "v1", "created": "Thu, 29 Nov 2007 21:10:48 GMT" } ]
2007-12-03T00:00:00
[ [ "Chekuri", "Chandra", "" ], [ "Korula", "Nitish", "" ] ]
0711.4902
Mikko Alava
Mikko Alava, John Ardelius, Erik Aurell, Petteri Kaski, Supriya Krishnamurthy, Pekka Orponen, and Sakari Seitz
Circumspect descent prevails in solving random constraint satisfaction problems
6 figures, about 17 pages
null
10.1073/pnas.0712263105
null
cs.DS cond-mat.stat-mech cs.AI
null
We study the performance of stochastic local search algorithms for random instances of the $K$-satisfiability ($K$-SAT) problem. We introduce a new stochastic local search algorithm, ChainSAT, which moves in the energy landscape of a problem instance by {\em never going upwards} in energy. ChainSAT is a \emph{focused} algorithm in the sense that it considers only variables occurring in unsatisfied clauses. We show by extensive numerical investigations that ChainSAT and other focused algorithms solve large $K$-SAT instances almost surely in linear time, up to high clause-to-variable ratios $\alpha$; for example, for K=4 we observe linear-time performance well beyond the recently postulated clustering and condensation transitions in the solution space. The performance of ChainSAT is a surprise given that by design the algorithm gets trapped into the first local energy minimum it encounters, yet no such minima are encountered. We also study the geometry of the solution space as accessed by stochastic local search algorithms.
[ { "version": "v1", "created": "Fri, 30 Nov 2007 11:01:40 GMT" } ]
2009-11-13T00:00:00
[ [ "Alava", "Mikko", "" ], [ "Ardelius", "John", "" ], [ "Aurell", "Erik", "" ], [ "Kaski", "Petteri", "" ], [ "Krishnamurthy", "Supriya", "" ], [ "Orponen", "Pekka", "" ], [ "Seitz", "Sakari", "" ] ]
0711.4990
Narad Rampersad
Dalia Krieger, Narad Rampersad, Jeffrey Shallit
Finding the growth rate of a regular language in polynomial time
null
null
null
null
cs.DM cs.DS
null
We give an O(n^3+n^2 t) time algorithm to determine whether an NFA with n states and t transitions accepts a language of polynomial or exponential growth. We also show that given a DFA accepting a language of polynomial growth, we can determine the order of polynomial growth in quadratic time.
[ { "version": "v1", "created": "Fri, 30 Nov 2007 17:48:00 GMT" } ]
2007-12-03T00:00:00
[ [ "Krieger", "Dalia", "" ], [ "Rampersad", "Narad", "" ], [ "Shallit", "Jeffrey", "" ] ]
0712.1097
Joao Marques-Silva
Joao Marques-Silva, Jordi Planes
On Using Unsatisfiability for Solving Maximum Satisfiability
null
null
null
null
cs.AI cs.DS
null
Maximum Satisfiability (MaxSAT) is a well-known optimization problem with several practical applications. The most widely known MaxSAT algorithms are ineffective at solving hard problem instances from practical application domains. Recent work proposed using efficient Boolean Satisfiability (SAT) solvers for solving the MaxSAT problem, based on identifying and eliminating unsatisfiable subformulas. However, these algorithms do not scale in practice. This paper analyzes existing MaxSAT algorithms based on unsatisfiable subformula identification. Moreover, the paper proposes a number of key optimizations to these MaxSAT algorithms and a new alternative algorithm. The proposed optimizations and the new algorithm provide significant performance improvements on MaxSAT instances from practical applications. Moreover, the efficiency of the new generation of unsatisfiability-based MaxSAT solvers becomes effectively tied to the ability of modern SAT solvers to prove unsatisfiability and to identify unsatisfiable subformulas.
[ { "version": "v1", "created": "Fri, 7 Dec 2007 09:21:58 GMT" } ]
2007-12-10T00:00:00
[ [ "Marques-Silva", "Joao", "" ], [ "Planes", "Jordi", "" ] ]
0712.1163
Philipp Schuetz
Philipp Schuetz and Amedeo Caflisch
Efficient modularity optimization by multistep greedy algorithm and vertex mover refinement
7 pages, parts of text rewritten, illustrations and pseudocode representation of algorithms added
Phys. Rev. E 77,046112 (2008)
10.1103/PhysRevE.77.046112
null
cs.DS cond-mat.dis-nn cs.DM physics.soc-ph
null
Identifying strongly connected substructures in large networks provides insight into their coarse-grained organization. Several approaches based on the optimization of a quality function, e.g., the modularity, have been proposed. We present here a multistep extension of the greedy algorithm (MSG) that allows the merging of more than one pair of communities at each iteration step. The essential idea is to prevent the premature condensation into few large communities. Upon convergence of the MSG a simple refinement procedure called "vertex mover" (VM) is used for reassigning vertices to neighboring communities to improve the final modularity value. With an appropriate choice of the step width, the combined MSG-VM algorithm is able to find solutions of higher modularity than those reported previously. The multistep extension does not alter the scaling of computational cost of the greedy algorithm.
[ { "version": "v1", "created": "Fri, 7 Dec 2007 15:48:31 GMT" }, { "version": "v2", "created": "Fri, 2 May 2008 10:16:35 GMT" } ]
2008-05-02T00:00:00
[ [ "Schuetz", "Philipp", "" ], [ "Caflisch", "Amedeo", "" ] ]
0712.1959
Tamal Dey
Siu-Wing Cheng and Tamal K. Dey
Delaunay Edge Flips in Dense Surface Triangulations
This paper is a prelude to "Maintaining Deforming Surface Meshes" by Cheng and Dey in SODA 2008
null
null
null
cs.CG cs.DS
null
Delaunay flip is an elegant, simple tool to convert a triangulation of a point set to its Delaunay triangulation. The technique has been researched extensively for full dimensional triangulations of point sets. However, an important case of triangulations which are not full dimensional is surface triangulations in three dimensions. In this paper we address the question of converting a surface triangulation to a subcomplex of the Delaunay triangulation with edge flips. We show that the surface triangulations which closely approximate a smooth surface with uniform density can be transformed to a Delaunay triangulation with a simple edge flip algorithm. The condition on uniformity becomes less stringent with increasing density of the triangulation. If the condition is dropped completely, the flip algorithm still terminates although the output surface triangulation becomes "almost Delaunay" instead of exactly Delaunay.
[ { "version": "v1", "created": "Wed, 12 Dec 2007 15:45:53 GMT" } ]
2007-12-13T00:00:00
[ [ "Cheng", "Siu-Wing", "" ], [ "Dey", "Tamal K.", "" ] ]
0712.2629
Toshiya Itoh
Ryoso Hamane, Toshiya Itoh, and Kouhei Tomita
Approximation Algorithms for the Highway Problem under the Coupon Model
13 pages, 5 figures
IEICE Trans. on Fundamentals, E92-A(8), pp.1779-1786, 2009
10.1587/transfun.E92.A.1779
null
cs.DS
null
When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide on the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i \in V has production cost d_i and each customer e_j \in E has valuation v_j on the bundle e_j \subseteq V of items. When the store sells an item i \in V at price r_i, the profit for item i is p_i=r_i-d_i. The goal of the store is to decide the price of each item to maximize its total profit. In most of the previous works, the item pricing problem was considered under the assumption that p_i \geq 0 for each i \in V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of a loss-leader, and showed that the seller can obtain more total profit when p_i < 0 is allowed than when it is not. In this paper, we consider the line and cycle highway problems, and give approximation algorithms for the cases where the smallest valuation is s and the largest valuation is \ell, or where all valuations are identical.
[ { "version": "v1", "created": "Mon, 17 Dec 2007 04:47:38 GMT" }, { "version": "v2", "created": "Fri, 4 Jan 2008 05:54:40 GMT" } ]
2011-09-29T00:00:00
[ [ "Hamane", "Ryoso", "" ], [ "Itoh", "Toshiya", "" ], [ "Tomita", "Kouhei", "" ] ]
0712.2661
Gregory Gutin
P. Balister, S. Gerke, G. Gutin, A. Johnstone, J. Reddington, E. Scott, A. Soleimanfallah, A. Yeo
Algorithms for Generating Convex Sets in Acyclic Digraphs
null
null
null
null
cs.DM cs.DS
null
A set $X$ of vertices of an acyclic digraph $D$ is convex if $X\neq \emptyset$ and there is no directed path between vertices of $X$ which contains a vertex not in $X$. A set $X$ is connected if $X\neq \emptyset$ and the underlying undirected graph of the subgraph of $D$ induced by $X$ is connected. Connected convex sets and convex sets of acyclic digraphs are of interest in the area of modern embedded processor technology. We construct an algorithm $\cal A$ for enumeration of all connected convex sets of an acyclic digraph $D$ of order $n$. The time complexity of $\cal A$ is $O(n\cdot cc(D))$, where $cc(D)$ is the number of connected convex sets in $D$. We also give an optimal algorithm for enumeration of all (not just connected) convex sets of an acyclic digraph $D$ of order $n$. In computational experiments we demonstrate that our algorithms outperform the best algorithms in the literature. Using the same approach as for $\cal A$, we design an algorithm for generating all connected sets of a connected undirected graph $G$. The complexity of the algorithm is $O(n\cdot c(G)),$ where $n$ is the order of $G$ and $c(G)$ is the number of connected sets of $G.$ The previously reported algorithm for connected set enumeration is of running time $O(mn\cdot c(G))$, where $m$ is the number of edges in $G.$
[ { "version": "v1", "created": "Mon, 17 Dec 2007 09:18:57 GMT" } ]
2007-12-18T00:00:00
[ [ "Balister", "P.", "" ], [ "Gerke", "S.", "" ], [ "Gutin", "G.", "" ], [ "Johnstone", "A.", "" ], [ "Reddington", "J.", "" ], [ "Scott", "E.", "" ], [ "Soleimanfallah", "A.", "" ], [ "Yeo", "A.", "" ] ]
0712.2682
Kai Puolamaki
Kai Puolam\"aki, Sami Hanhij\"arvi, Gemma C. Garriga
An Approximation Ratio for Biclustering
9 pages, 2 figures; presentation clarified, replaced to match the version to be published in IPL
Information Processing Letters 108 (2008) 45-49
10.1016/j.ipl.2008.03.013
Publications in Computer and Information Science E13
cs.DS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+sqrt(2) under L1-norm for 0-1 valued matrices, and of 2 under L2-norm for real valued matrices.
[ { "version": "v1", "created": "Mon, 17 Dec 2007 11:45:42 GMT" }, { "version": "v2", "created": "Fri, 22 Aug 2008 07:01:26 GMT" } ]
2008-08-22T00:00:00
[ [ "Puolamäki", "Kai", "" ], [ "Hanhijärvi", "Sami", "" ], [ "Garriga", "Gemma C.", "" ] ]
0712.3203
Wan ChangLin
Changlin Wan, Zhongzhi Shi
Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach
11 pages, 1 figure
Changlin Wan, Zhongzhi Shi: Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach. FAW 2008: 300-310
10.1007/978-3-540-69311-6_31
null
cs.DS cs.CC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The subset sum problem (SSP) can be briefly stated as: given a target integer $E$ and a set $A$ containing $n$ positive integers $a_j$, find a subset of $A$ summing to $E$. The \textit{density} $d$ of an SSP instance is defined by the ratio of $n$ to $m$, where $m$ is the logarithm of the largest integer within $A$. Based on the structural and statistical properties of subset sums, we present an improved enumeration scheme for SSP, and implement it as a complete and exact algorithm (EnumPlus). The algorithm always equivalently reduces an instance to a low-density one, and then solves it by enumeration. Through this approach, we show the possibility of designing a single algorithm that can efficiently solve instances of arbitrary density in a uniform way. Furthermore, our algorithm has a considerable performance advantage over previous algorithms. Firstly, it extends the density range in which SSP can be solved in expected polynomial time. Specifically, it solves SSP in expected $O(n\log{n})$ time when the density $d \geq c\cdot \sqrt{n}/\log{n}$, while the previously best density range is $d \geq c\cdot n/(\log{n})^{2}$. In addition, the overall expected time and space requirements in the average case are proven to be $O(n^5\log n)$ and $O(n^5)$ respectively. Secondly, in the worst case, it slightly improves the previously best time complexity of exact algorithms for SSP. Specifically, the worst-case time complexity of our algorithm is proved to be $O((n-6)2^{n/2}+n)$, while the previously best result is $O(n2^{n/2})$.
[ { "version": "v1", "created": "Wed, 19 Dec 2007 14:43:50 GMT" }, { "version": "v2", "created": "Mon, 23 Jun 2008 02:00:12 GMT" } ]
2008-06-23T00:00:00
[ [ "Wan", "Changlin", "" ], [ "Shi", "Zhongzhi", "" ] ]
0712.3333
Abraham Punnen
Qiaoming Han and Abraham P. Punnen
On the approximability of the vertex cover and related problems
null
null
null
null
cs.DS cs.DM
null
In this paper we show that the problem of identifying an edge $(i,j)$ in a graph $G$ such that there exists an optimal vertex cover $S$ of $G$ containing exactly one of the nodes $i$ and $j$ is NP-hard. Such an edge is called a weak edge. We then develop a polynomial time approximation algorithm for the vertex cover problem with performance guarantee $2-\frac{1}{1+\sigma}$, where $\sigma$ is an upper bound on a measure related to a weak edge of a graph. Further, we discuss a new relaxation of the vertex cover problem which is used in our approximation algorithm to obtain smaller values of $\sigma$. We also obtain linear programming representations of the vertex cover problem for special graphs. Our results provide new insights into the approximability of the vertex cover problem - a long standing open problem.
[ { "version": "v1", "created": "Thu, 20 Dec 2007 06:35:05 GMT" } ]
2007-12-21T00:00:00
[ [ "Han", "Qiaoming", "" ], [ "Punnen", "Abraham P.", "" ] ]
0712.3335
Abraham Punnen
Qiaoming Han, Abraham P. Punnen, and Yinyu Ye
A polynomial time $\frac 3 2$ -approximation algorithm for the vertex cover problem on a class of graphs
null
null
null
null
cs.DS cs.DM
null
We develop a polynomial time 3/2-approximation algorithm to solve the vertex cover problem on a class of graphs satisfying a property called ``active edge hypothesis''. The algorithm also guarantees an optimal solution on specially structured graphs. Further, we give an extended algorithm which guarantees a vertex cover $S_1$ on an arbitrary graph such that $|S_1|\leq {3/2} |S^*|+\xi$ where $S^*$ is an optimal vertex cover and $\xi$ is an error bound identified by the algorithm. We obtained $\xi = 0$ for all the test problems we have considered which include specially constructed instances that were expected to be hard. So far we could not construct a graph that gives $\xi \not= 0$.
[ { "version": "v1", "created": "Thu, 20 Dec 2007 06:53:30 GMT" } ]
2007-12-21T00:00:00
[ [ "Han", "Qiaoming", "" ], [ "Punnen", "Abraham P.", "" ], [ "Ye", "Yinyu", "" ] ]
0712.3360
Rossano Venturini
Paolo Ferragina (1), Rodrigo Gonzalez (2), Gonzalo Navarro (2), Rossano Venturini (2) ((1) Dept. of Computer Science, University of Pisa, (2) Dept. of Computer Science, University of Chile)
Compressed Text Indexes: From Theory to Practice!
null
null
null
null
cs.DS
null
A compressed full-text self-index represents a text in a compressed form and still answers queries efficiently. This technology represents a breakthrough over the text indexing techniques of the previous decade, whose indexes required several times the size of the text. Although it is relatively new, this technology has matured up to a point where theoretical research is giving way to practical developments. Nonetheless this requires significant programming skills, a deep engineering effort, and a strong algorithmic background to dig into the research results. To date only isolated implementations and focused comparisons of compressed indexes have been reported, and they missed a common API, which prevented their re-use or deployment within other applications. The goal of this paper is to fill this gap. First, we present the existing implementations of compressed indexes from a practitioner's point of view. Second, we introduce the Pizza&Chili site, which offers tuned implementations and a standardized API for the most successful compressed full-text self-indexes, together with effective testbeds and scripts for their automatic validation and test. Third, we show the results of our extensive experiments on these codes with the aim of demonstrating the practical relevance of this novel and exciting technology.
[ { "version": "v1", "created": "Thu, 20 Dec 2007 10:42:54 GMT" } ]
2007-12-21T00:00:00
[ [ "Ferragina", "Paolo", "" ], [ "Gonzalez", "Rodrigo", "" ], [ "Navarro", "Gonzalo", "" ], [ "Venturini", "Rossano", "" ] ]
0712.3568
David Pritchard
Jochen Konemann, David Pritchard, Kunlun Tan
A Partition-Based Relaxation For Steiner Trees
Submitted to Math. Prog
null
null
null
cs.DS
null
The Steiner tree problem is a classical NP-hard optimization problem with a wide range of practical applications. In an instance of this problem, we are given an undirected graph G=(V,E), a set of terminals R, and non-negative costs c_e for all edges e in E. Any tree that contains all terminals is called a Steiner tree; the goal is to find a minimum-cost Steiner tree. The nodes V \ R are called Steiner nodes. The best approximation algorithm known for the Steiner tree problem is due to Robins and Zelikovsky (SIAM J. Discrete Math, 2005); their greedy algorithm achieves a performance guarantee of 1+(ln 3)/2 ~ 1.55. The best known linear programming (LP)-based algorithm, on the other hand, is due to Goemans and Bertsimas (Math. Programming, 1993) and achieves an approximation ratio of 2-2/|R|. In this paper we establish a link between greedy and LP-based approaches by showing that Robins and Zelikovsky's algorithm has a natural primal-dual interpretation with respect to a novel partition-based linear programming relaxation. We also exhibit surprising connections between the new formulation and existing LPs and we show that the new LP is stronger than the bidirected cut formulation. An instance is b-quasi-bipartite if each connected component of G \ R has at most b vertices. We show that Robins' and Zelikovsky's algorithm has an approximation ratio better than 1+(ln 3)/2 for such instances, and we prove that the integrality gap of our LP is between 8/7 and (2b+1)/(b+1).
[ { "version": "v1", "created": "Thu, 20 Dec 2007 21:06:35 GMT" } ]
2007-12-24T00:00:00
[ [ "Konemann", "Jochen", "" ], [ "Pritchard", "David", "" ], [ "Tan", "Kunlun", "" ] ]
0712.3829
Francois Le Gall
Yoshifumi Inui and Francois Le Gall
Quantum Property Testing of Group Solvability
11 pages; supersedes arXiv:quant-ph/0610013
Algorithmica 59(1): 35-47 (2011)
10.1007/s00453-009-9338-8
null
quant-ph cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Testing efficiently whether a finite set with a binary operation over it, given as an oracle, is a group is a well-known open problem in the field of property testing. Recently, Friedl, Ivanyos and Santha have made a significant step in the direction of solving this problem by showing that it is possible to test efficiently whether the input is an Abelian group or is far, with respect to some distance, from any Abelian group. In this paper, we go a step further and construct an efficient quantum algorithm that tests whether the input is a solvable group, or is far from any solvable group. More precisely, the number of queries used by our algorithm is polylogarithmic in the size of the set.
[ { "version": "v1", "created": "Sat, 22 Dec 2007 04:47:03 GMT" }, { "version": "v2", "created": "Sun, 3 Jan 2010 08:29:30 GMT" } ]
2021-10-05T00:00:00
[ [ "Inui", "Yoshifumi", "" ], [ "Gall", "Francois Le", "" ] ]
0712.3858
Abraham Punnen
Abraham P. Punnen and Ruonan Zhang
Bottleneck flows in networks
null
null
null
null
cs.DS
null
The bottleneck network flow problem (BNFP) is a generalization of several well-studied bottleneck problems such as the bottleneck transportation problem (BTP), bottleneck assignment problem (BAP), bottleneck path problem (BPP), and so on. In this paper we provide a review of important results on this topic and its various special cases. We observe that the BNFP can be solved as a sequence of $O(\log n)$ maximum flow problems. However, special augmenting path based algorithms for the maximum flow problem can be modified to obtain algorithms for the BNFP with the property that these variations and the corresponding maximum flow algorithms have identical worst-case time complexity. On unit capacity networks we show that the BNFP can be solved in $O(\min \{{m(n\log n)}^{{2/3}}, m^{{3/2}}\sqrt{\log n}\})$ time. This improves the best available algorithm by a factor of $\sqrt{\log n}$. On unit capacity simple graphs, we show that the BNFP can be solved in $O(m \sqrt {n \log n})$ time. As a consequence we have an $O(m \sqrt {n \log n})$ algorithm for the BTP with unit arc capacities.
[ { "version": "v1", "created": "Sat, 22 Dec 2007 13:49:45 GMT" } ]
2007-12-27T00:00:00
[ [ "Punnen", "Abraham P.", "" ], [ "Zhang", "Ruonan", "" ] ]
0712.3876
Amir Rothschild
Ely Porat and Amir Rothschild
Explicit Non-Adaptive Combinatorial Group Testing Schemes
15 pages, accepted to ICALP 2008
null
null
null
cs.DS
null
Group testing is a long-studied problem in combinatorics: a small set of $r$ ill people should be identified out of a whole population of $n$ people by using only queries (tests) of the form "Does the set X contain an ill person?". In this paper we provide an explicit construction of a testing scheme which is better (smaller) than any known explicit construction. This scheme has $\bigT{\min[r^2 \ln n,n]}$ tests, which is as many as the best non-explicit schemes have. In our construction we use a fact that may be of value in its own right: linear error-correcting codes with parameters $[m,k,\delta m]_q$ meeting the Gilbert-Varshamov bound may be constructed quite efficiently, in $\bigT{q^km}$ time.
[ { "version": "v1", "created": "Sat, 22 Dec 2007 21:04:34 GMT" }, { "version": "v2", "created": "Wed, 23 Jan 2008 22:30:43 GMT" }, { "version": "v3", "created": "Sun, 27 Apr 2008 20:32:13 GMT" }, { "version": "v4", "created": "Tue, 29 Apr 2008 19:55:32 GMT" }, { "version": "v5", "created": "Tue, 29 Apr 2008 20:02:41 GMT" } ]
2008-04-29T00:00:00
[ [ "Porat", "Ely", "" ], [ "Rothschild", "Amir", "" ] ]
0712.3936
Juli\'an Mestre
Juli\'an Mestre
Lagrangian Relaxation and Partial Cover
20 pages, extended abstract appeared in STACS 2008
null
null
null
cs.DS cs.DM
null
Lagrangian relaxation has been used extensively in the design of approximation algorithms. This paper studies its strengths and limitations when applied to Partial Cover.
[ { "version": "v1", "created": "Sun, 23 Dec 2007 18:33:36 GMT" } ]
2007-12-27T00:00:00
[ [ "Mestre", "Julián", "" ] ]
0712.4027
Olga Holtz
James Demmel, Ioana Dumitriu, Olga Holtz, Plamen Koev
Accurate and Efficient Expression Evaluation and Linear Algebra
49 pages, 6 figures, 1 table
Acta Numerica, Volume 17, May 2008, pp 87-145
10.1017/S0962492906350015
null
math.NA cs.CC cs.DS math.RA
null
We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By "accurate" we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: Most of our results will use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as $x+y+z$, dot products, or indeed any enumerable set which could then be used to build further accurate algorithms. We show how our accurate algorithms and decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings.
[ { "version": "v1", "created": "Mon, 24 Dec 2007 20:14:50 GMT" } ]
2008-05-21T00:00:00
[ [ "Demmel", "James", "" ], [ "Dumitriu", "Ioana", "" ], [ "Holtz", "Olga", "" ], [ "Koev", "Plamen", "" ] ]
0712.4046
David Harvey
David Harvey
Faster polynomial multiplication via multipoint Kronecker substitution
14 pages, 4 figures
null
null
null
cs.SC cs.DS
null
We give several new algorithms for dense polynomial multiplication based on the Kronecker substitution method. For moderately sized input polynomials, the new algorithms improve on the performance of the standard Kronecker substitution by a sizeable constant, both in theory and in empirical tests.
[ { "version": "v1", "created": "Tue, 25 Dec 2007 04:57:04 GMT" } ]
2007-12-27T00:00:00
[ [ "Harvey", "David", "" ] ]
0712.4213
Seiichiro Tani
Seiichiro Tani, Hirotada Kobayashi, Keiji Matsumoto
Exact Quantum Algorithms for the Leader Election Problem
47 pages, preliminary version in Proceedings of STACS 2005
ACM TOCT 4 (2012): Article 1; IEEE TPDS 23 (2012): 255 - 262
null
null
quant-ph cs.DC cs.DS
null
This paper gives the first separation of quantum and classical pure (i.e., non-cryptographic) computing abilities with no restriction on the amount of available computing resources, by considering the exact solvability of a celebrated unsolvable problem in classical distributed computing, the ``leader election problem'' on anonymous networks. The goal of the leader election problem is to elect a unique leader from among distributed parties. The paper considers this problem for anonymous networks, in which each party has the same identifier. It is well-known that no classical algorithm can solve exactly (i.e., in bounded time without error) the leader election problem in anonymous networks, even if it is given the number of parties. This paper gives two quantum algorithms that, given the number of parties, can exactly solve the problem for any network topology in polynomial rounds and polynomial communication/time complexity with respect to the number of parties, when the parties are connected by quantum communication links.
[ { "version": "v1", "created": "Thu, 27 Dec 2007 10:52:52 GMT" } ]
2012-10-10T00:00:00
[ [ "Tani", "Seiichiro", "" ], [ "Kobayashi", "Hirotada", "" ], [ "Matsumoto", "Keiji", "" ] ]
0801.0102
Michael Baer
Michael B. Baer
Reserved-Length Prefix Coding
5 pages, submitted to ISIT 2008
null
null
null
cs.IT cs.DS math.IT
null
Huffman coding finds an optimal prefix code for a given probability mass function. Consider situations in which one wishes to find an optimal code with the restriction that all codewords have lengths that lie in a user-specified set of lengths (or, equivalently, no codewords have lengths that lie in a complementary set). This paper introduces a polynomial-time dynamic programming algorithm that finds optimal codes for this reserved-length prefix coding problem. This has applications to quickly encoding and decoding lossless codes. In addition, one modification of the approach solves any quasiarithmetic prefix coding problem, while another finds optimal codes restricted to the set of codes with g codeword lengths for user-specified g (e.g., g=2).
[ { "version": "v1", "created": "Sun, 30 Dec 2007 00:14:24 GMT" } ]
2008-01-03T00:00:00
[ [ "Baer", "Michael B.", "" ] ]
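For context on the abstract above: the unrestricted baseline is ordinary Huffman coding. The sketch below is a plain Huffman construction of optimal codeword lengths, not the paper's dynamic program for reserved-length codes; the example weights are arbitrary.

```python
import heapq
from itertools import count

def huffman_code_lengths(weights):
    """Optimal prefix-code codeword lengths (classical Huffman) for positive weights."""
    if len(weights) == 1:
        return [1]                      # a single symbol still needs one bit
    tiebreak = count()
    heap = [(w, next(tiebreak), i) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tiebreak), (a, b)))
    lengths = [0] * len(weights)
    stack = [(heap[0][2], 0)]           # (node, depth); internal nodes are tuples
    while stack:
        node, depth = stack.pop()
        if isinstance(node, tuple):
            stack.append((node[0], depth + 1))
            stack.append((node[1], depth + 1))
        else:
            lengths[node] = depth
    return lengths

if __name__ == "__main__":
    print(huffman_code_lengths([0.4, 0.3, 0.2, 0.1]))  # e.g. [1, 2, 3, 3]
```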
0801.0590
Kettani Omar
Omar Kettani
An algorithm for finding the Independence Number of a graph
15 pages; a corrected proof for the second method is added
null
null
null
cs.DM cs.DS
null
In this paper, we prove that for every connected graph G, there exists a split graph H with the same independence number and the same order. Then we propose a first algorithm for finding this graph, given the degree sequence of the input graph G. Further, we propose a second algorithm for finding the independence number of G, given the adjacency matrix of G.
[ { "version": "v1", "created": "Thu, 3 Jan 2008 20:51:38 GMT" }, { "version": "v2", "created": "Fri, 4 Jan 2008 16:00:32 GMT" }, { "version": "v3", "created": "Sun, 6 Jan 2008 20:50:20 GMT" }, { "version": "v4", "created": "Wed, 9 Jan 2008 13:51:59 GMT" } ]
2008-01-09T00:00:00
[ [ "Kettani", "Omar", "" ] ]
0801.1300
Igor Razgon
Igor Razgon and Barry O'Sullivan
Almost 2-SAT is Fixed-Parameter Tractable
This new version fixes the bug found by Somnath Sikdar in the proof of Claim 8. In the repaired version the modification of the Almost 2-SAT problem called 2-SLASAT is no longer needed and only the modification called 2-ASLASAT remains relevant. Hence the whole manuscript is updated so that the 2-SLASAT problem is not mentioned there anymore
null
null
null
cs.DS cs.CG cs.LO
null
We consider the following problem. Given a 2-CNF formula, is it possible to remove at most $k$ clauses so that the resulting 2-CNF formula is satisfiable? This problem is known to different research communities in Theoretical Computer Science under the names 'Almost 2-SAT', 'All-but-$k$ 2-SAT', '2-CNF deletion', '2-SAT deletion'. The status of fixed-parameter tractability of this problem is a long-standing open question in the area of Parameterized Complexity. We resolve this open question by proposing an algorithm which solves this problem in $O(15^k \cdot k \cdot m^3)$ time and thus we show that this problem is fixed-parameter tractable.
[ { "version": "v1", "created": "Tue, 8 Jan 2008 19:04:14 GMT" }, { "version": "v2", "created": "Wed, 9 Jan 2008 19:24:05 GMT" }, { "version": "v3", "created": "Mon, 18 Feb 2008 15:14:49 GMT" }, { "version": "v4", "created": "Fri, 18 Apr 2008 14:07:04 GMT" } ]
2008-04-18T00:00:00
[ [ "Razgon", "Igor", "" ], [ "O'Sullivan", "Barry", "" ] ]
0801.1416
Piyush Kurur
Anindya De, Piyush P Kurur, Chandan Saha and Ramprasad Saptharishi
Fast Integer Multiplication using Modular Arithmetic
fixed some typos and references
null
null
null
cs.SC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give an $O(N\cdot \log N\cdot 2^{O(\log^*N)})$ algorithm for multiplying two $N$-bit integers that improves the $O(N\cdot \log N\cdot \log\log N)$ algorithm by Sch\"{o}nhage-Strassen. Both these algorithms use modular arithmetic. Recently, F\"{u}rer gave an $O(N\cdot \log N\cdot 2^{O(\log^*N)})$ algorithm which however uses arithmetic over complex numbers as opposed to modular arithmetic. In this paper, we use multivariate polynomial multiplication along with ideas from F\"{u}rer's algorithm to achieve this improvement in the modular setting. Our algorithm can also be viewed as a $p$-adic version of F\"{u}rer's algorithm. Thus, we show that the two seemingly different approaches to integer multiplication, modular and complex arithmetic, are similar.
[ { "version": "v1", "created": "Wed, 9 Jan 2008 12:44:55 GMT" }, { "version": "v2", "created": "Tue, 6 May 2008 07:05:09 GMT" }, { "version": "v3", "created": "Fri, 19 Sep 2008 06:45:16 GMT" } ]
2008-09-19T00:00:00
[ [ "De", "Anindya", "" ], [ "Kurur", "Piyush P", "" ], [ "Saha", "Chandan", "" ], [ "Saptharishi", "Ramprasad", "" ] ]
0801.1979
Gregory Gutin
G. Gutin, I. Razgon, E.J. Kim
Minimum Leaf Out-branching and Related Problems
The main change is a quadratic kernel derivation
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a digraph $D$, the Minimum Leaf Out-Branching problem (MinLOB) is the problem of finding in $D$ an out-branching with the minimum possible number of leaves, i.e., vertices of out-degree 0. We prove that MinLOB is polynomial-time solvable for acyclic digraphs. In general, MinLOB is NP-hard and we consider three parameterizations of MinLOB. We prove that two of them are NP-complete for every value of the parameter, but the third one is fixed-parameter tractable (FPT). The FPT parametrization is as follows: given a digraph $D$ of order $n$ and a positive integral parameter $k$, check whether $D$ contains an out-branching with at most $n-k$ leaves (and find such an out-branching if it exists). We find a problem kernel of order $O(k^2)$ and construct an algorithm of running time $O(2^{O(k\log k)}+n^6),$ which is an `additive' FPT algorithm. We also consider transformations from two related problems, the minimum path covering and the maximum internal out-tree problems into MinLOB, which imply that some parameterizations of the two problems are FPT as well.
[ { "version": "v1", "created": "Sun, 13 Jan 2008 19:33:29 GMT" }, { "version": "v2", "created": "Mon, 28 Jan 2008 17:41:09 GMT" }, { "version": "v3", "created": "Tue, 14 Oct 2008 20:51:12 GMT" } ]
2008-10-14T00:00:00
[ [ "Gutin", "G.", "" ], [ "Razgon", "I.", "" ], [ "Kim", "E. J.", "" ] ]
0801.1987
Neal E. Young
Christos Koufogiannakis and Neal E. Young
A Nearly Linear-Time PTAS for Explicit Fractional Packing and Covering Linear Programs
corrected version of FOCS 2007 paper: 10.1109/FOCS.2007.62. Accepted to Algorithmica, 2013
Algorithmica 70(4):648-674(2014)
10.1007/s00453-013-9771-6
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give an approximation algorithm for packing and covering linear programs (linear programs with non-negative coefficients). Given a constraint matrix with n non-zeros, r rows, and c columns, the algorithm computes feasible primal and dual solutions whose costs are within a factor of 1+eps of the optimal cost in time O((r+c)log(n)/eps^2 + n).
[ { "version": "v1", "created": "Sun, 13 Jan 2008 22:04:49 GMT" }, { "version": "v2", "created": "Wed, 13 Mar 2013 16:03:10 GMT" } ]
2015-06-02T00:00:00
[ [ "Koufogiannakis", "Christos", "" ], [ "Young", "Neal E.", "" ] ]
0801.2284
Kettani Omar
Omar Kettani
Le probleme de l'isomorphisme de graphes est dans P (The graph isomorphism problem is in P)
This paper has been withdrawn
null
null
null
cs.DM cs.DS
null
This paper has been withdrawn by the author, due to possible counter-examples.
[ { "version": "v1", "created": "Tue, 15 Jan 2008 13:06:41 GMT" }, { "version": "v2", "created": "Sat, 19 Jan 2008 13:58:28 GMT" } ]
2008-01-19T00:00:00
[ [ "Kettani", "Omar", "" ] ]
0801.2378
Paolo Ferragina
Paolo Ferragina
String algorithms and data structures
null
null
null
null
cs.DS cs.IR
null
The string-matching field has grown to such a complicated stage that various issues come into play when studying it: data structure and algorithmic design, database principles, compression techniques, architectural features, cache and prefetching policies. The expertise nowadays required to design good string data structures and algorithms is therefore transversal to many computer science fields, and much more study on the orchestration of known, or novel, techniques is needed to make progress in this fascinating topic. This survey is aimed at illustrating the key ideas which should constitute, in our opinion, the current background of every index designer. We also discuss the positive features and drawbacks of known indexing schemes and algorithms, and devote much attention to detailing research issues and open problems on both the theoretical and the experimental side.
[ { "version": "v1", "created": "Tue, 15 Jan 2008 20:54:18 GMT" } ]
2008-01-16T00:00:00
[ [ "Ferragina", "Paolo", "" ] ]
0801.2931
Jon Feldman
Jon Feldman, S. Muthukrishnan, Evdokia Nikolova, Martin Pal
A Truthful Mechanism for Offline Ad Slot Scheduling
null
null
null
null
cs.GT cs.DS
null
We consider the "Offline Ad Slot Scheduling" problem, where advertisers must be scheduled to "sponsored search" slots during a given period of time. Advertisers specify a budget constraint, as well as a maximum cost per click, and may not be assigned to more than one slot for a particular search. We give a truthful mechanism under the utility model where bidders try to maximize their clicks, subject to their personal constraints. In addition, we show that the revenue-maximizing mechanism is not truthful, but has a Nash equilibrium whose outcome is identical to our mechanism. As far as we can tell, this is the first treatment of sponsored search that directly incorporates both multiple slots and budget constraints into an analysis of incentives. Our mechanism employs a descending-price auction that maintains a solution to a certain machine scheduling problem whose job lengths depend on the price, and hence is variable over the auction. The price stops when the set of bidders that can afford that price pack exactly into a block of ad slots, at which point the mechanism allocates that block and continues on the remaining slots. To prove our result on the equilibrium of the revenue-maximizing mechanism, we first show that a greedy algorithm suffices to solve the revenue-maximizing linear program; we then use this insight to prove that bidders allocated in the same block of our mechanism have no incentive to deviate from bidding the fixed price of that block.
[ { "version": "v1", "created": "Fri, 18 Jan 2008 16:34:30 GMT" } ]
2008-01-21T00:00:00
[ [ "Feldman", "Jon", "" ], [ "Muthukrishnan", "S.", "" ], [ "Nikolova", "Evdokia", "" ], [ "Pal", "Martin", "" ] ]
0801.3147
Ke Xu
Liang Li, Xin Li, Tian Liu, Ke Xu
From k-SAT to k-CSP: Two Generalized Algorithms
null
null
null
null
cs.DS cs.AI cs.CC
null
Constraint satisfaction problems (CSPs) model many important intractable NP-hard problems such as the propositional satisfiability problem (SAT). Algorithms with non-trivial upper bounds on running time for restricted SAT with bounded clause length k (k-SAT) can be classified into three styles: DPLL-like, PPSZ-like and Local Search, with local search algorithms having already been generalized to CSP with bounded constraint arity k (k-CSP). We generalize a DPLL-like algorithm in its simplest form and a PPSZ-like algorithm from k-SAT to k-CSP. As far as we know, this is the first attempt to use a PPSZ-like strategy to solve k-CSP, and little previous work has focused on DPLL-like or PPSZ-like strategies for k-CSP.
[ { "version": "v1", "created": "Mon, 21 Jan 2008 08:07:33 GMT" } ]
2008-01-22T00:00:00
[ [ "Li", "Liang", "" ], [ "Li", "Xin", "" ], [ "Liu", "Tian", "" ], [ "Xu", "Ke", "" ] ]
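As background for the abstract above, here is a minimal DPLL-style branching solver for CNF formulas (unit propagation plus branching). It illustrates the "DPLL-like" style being generalized to k-CSP, and is a textbook sketch rather than the algorithm from the paper.

```python
def dpll(clauses):
    """clauses: list of lists of non-zero ints (negative = negated variable).
    Returns a satisfying assignment dict {var: bool} or None if unsatisfiable."""

    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:
                continue                    # clause satisfied
            reduced = [x for x in c if x != -lit]
            if not reduced:
                return None                 # empty clause: conflict
            out.append(reduced)
        return out

    def solve(cls, assignment):
        if not cls:
            return assignment
        # Unit propagation: a unit clause forces its literal.
        for c in cls:
            if len(c) == 1:
                lit = c[0]
                nxt = simplify(cls, lit)
                if nxt is None:
                    return None
                return solve(nxt, {**assignment, abs(lit): lit > 0})
        # Branch on the first literal of the first clause.
        lit = cls[0][0]
        for choice in (lit, -lit):
            nxt = simplify(cls, choice)
            if nxt is not None:
                result = solve(nxt, {**assignment, abs(choice): choice > 0})
                if result is not None:
                    return result
        return None

    return solve(clauses, {})

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```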
0801.3581
Shay Solomon
Yefim Dinitz, Michael Elkin, Shay Solomon
Shallow, Low, and Light Trees, and Tight Lower Bounds for Euclidean Spanners
41 pages, 11 figures
null
null
null
cs.CG cs.DS
null
We show that for every $n$-point metric space $M$ there exists a spanning tree $T$ with unweighted diameter $O(\log n)$ and weight $\omega(T) = O(\log n) \cdot \omega(MST(M))$. Moreover, there is a designated point $rt$ such that for every point $v$, $dist_T(rt,v) \le (1+\epsilon) \cdot dist_M(rt,v)$, for an arbitrarily small constant $\epsilon > 0$. We extend this result, and provide a tradeoff between unweighted diameter and weight, and prove that this tradeoff is \emph{tight up to constant factors} in the entire range of parameters. These results enable us to settle a long-standing open question in Computational Geometry. In STOC'95 Arya et al. devised a construction of Euclidean Spanners with unweighted diameter $O(\log n)$ and weight $O(\log n) \cdot \omega(MST(M))$. Ten years later in SODA'05 Agarwal et al. showed that this result is tight up to a factor of $O(\log \log n)$. We close this gap and show that the result of Arya et al. is tight up to constant factors.
[ { "version": "v1", "created": "Wed, 23 Jan 2008 13:57:00 GMT" } ]
2011-08-31T00:00:00
[ [ "Dinitz", "Yefim", "" ], [ "Elkin", "Michael", "" ], [ "Solomon", "Shay", "" ] ]
0801.3710
Amitabh Trehan
Jared Saia, Amitabh Trehan
Picking up the Pieces: Self-Healing in Reconfigurable Networks
To be presented at IPDPS (IEEE International Parallel & Distributed Processing Symposium) 2008
null
10.1109/IPDPS.2008.4536326
null
cs.DS cs.DC cs.NI
null
We consider the problem of self-healing in networks that are reconfigurable in the sense that they can change their topology during an attack. Our goal is to maintain connectivity in these networks, even in the presence of repeated adversarial node deletion, by carefully adding edges after each attack. We present a new algorithm, DASH, that provably ensures that: 1) the network stays connected even if an adversary deletes up to all nodes in the network; and 2) no node ever increases its degree by more than 2 log n, where n is the number of nodes initially in the network. DASH is fully distributed; adds new edges only among neighbors of deleted nodes; and has average latency and bandwidth costs that are at most logarithmic in n. DASH has these properties irrespective of the topology of the initial network, and is thus orthogonal and complementary to traditional topology-based approaches to defending against attack. We also prove lower-bounds showing that DASH is asymptotically optimal in terms of minimizing maximum degree increase over multiple attacks. Finally, we present empirical results on power-law graphs that show that DASH performs well in practice, and that it significantly outperforms naive algorithms in reducing maximum degree increase. We also present empirical results on performance of our algorithms and a new heuristic with regard to stretch (increase in shortest path lengths).
[ { "version": "v1", "created": "Thu, 24 Jan 2008 07:46:50 GMT" } ]
2016-11-17T00:00:00
[ [ "Saia", "Jared", "" ], [ "Trehan", "Amitabh", "" ] ]
0801.4130
Klas Olof Daniel Andersson
Daniel Andersson
Solving Min-Max Problems with Applications to Games
null
null
null
null
cs.GT cs.DS
null
We refine existing general network optimization techniques, give new characterizations for the class of problems to which they can be applied, and show that they can also be used to solve various two-player games in almost linear time. Among these is a new variant of the network interdiction problem, where the interdictor wants to destroy high-capacity paths from the source to the destination using a vertex-wise limited budget of arc removals. We also show that replacing the limit average in mean payoff games by the maximum weight results in a class of games amenable to these techniques.
[ { "version": "v1", "created": "Sun, 27 Jan 2008 13:28:43 GMT" } ]
2008-01-29T00:00:00
[ [ "Andersson", "Daniel", "" ] ]
0801.4190
Sebastian Roch
Constantinos Daskalakis, Elchanan Mossel, Sebastien Roch
Phylogenies without Branch Bounds: Contracting the Short, Pruning the Deep
null
null
null
null
q-bio.PE cs.CE cs.DS math.PR math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new phylogenetic reconstruction algorithm which, unlike most previous rigorous inference techniques, does not rely on assumptions regarding the branch lengths or the depth of the tree. The algorithm returns a forest which is guaranteed to contain all edges that are: 1) sufficiently long and 2) sufficiently close to the leaves. How much of the true tree is recovered depends on the sequence length provided. The algorithm is distance-based and runs in polynomial time.
[ { "version": "v1", "created": "Mon, 28 Jan 2008 05:10:22 GMT" }, { "version": "v2", "created": "Tue, 28 Jul 2009 01:48:27 GMT" } ]
2011-09-30T00:00:00
[ [ "Daskalakis", "Constantinos", "" ], [ "Mossel", "Elchanan", "" ], [ "Roch", "Sebastien", "" ] ]
0801.4238
Christoph Durr
Marek Chrobak, Christoph Durr, Mathilde Hurand and Julien Robert
Algorithms for Temperature-Aware Task Scheduling in Microprocessor Systems
null
null
null
null
cs.DS
null
We study scheduling problems motivated by recently developed techniques for microprocessor thermal management at the operating systems level. The general scenario can be described as follows. The microprocessor's temperature is controlled by the hardware thermal management system that continuously monitors the chip temperature and automatically reduces the processor's speed as soon as the thermal threshold is exceeded. Some tasks are more CPU-intensive than others and thus generate more heat during execution. The cooling system operates non-stop, reducing (at an exponential rate) the deviation of the processor's temperature from the ambient temperature. As a result, the processor's temperature, and thus the performance as well, depends on the order of the task execution. Given a variety of possible underlying architectures, models for cooling and for hardware thermal management, as well as types of tasks, this scenario gives rise to a plethora of interesting and previously unstudied scheduling problems. We focus on scheduling real-time jobs in a simplified model for cooling and thermal management. A collection of unit-length jobs is given, each job specified by its release time, deadline and heat contribution. If, at some time step, the temperature of the system is t and the processor executes a job with heat contribution h, then the temperature at the next step is (t+h)/2. The temperature cannot exceed the given thermal threshold T. The objective is to maximize the throughput, that is, the number of tasks that meet their deadlines. We prove that, in the offline case, computing the optimum schedule is NP-hard, even if all jobs are released at the same time. In the online case, we show a 2-competitive deterministic algorithm and a matching lower bound.
[ { "version": "v1", "created": "Mon, 28 Jan 2008 10:47:42 GMT" } ]
2008-01-29T00:00:00
[ [ "Chrobak", "Marek", "" ], [ "Durr", "Christoph", "" ], [ "Hurand", "Mathilde", "" ], [ "Robert", "Julien", "" ] ]
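The thermal model in the abstract above is simple enough to simulate directly: unit jobs with release time, deadline and heat contribution, temperature update (t+h)/2, and a hard threshold T. The sketch below runs an illustrative "coolest feasible job first" heuristic; this rule is an assumption made for demonstration and is not claimed to be the paper's 2-competitive algorithm. The example instance also shows why execution order matters: the greedy choice at step 0 causes the hottest job to miss its deadline, while running it first would let all three jobs complete.

```python
def simulate(jobs, T, horizon):
    """jobs: list of (release, deadline, heat). Returns (#completed, schedule)."""
    temp = 0.0
    done = [False] * len(jobs)
    schedule = []
    for step in range(horizon):
        # Jobs that are released, not yet done, still meet their deadline,
        # and would not push the temperature above the threshold.
        feasible = [i for i, (r, d, h) in enumerate(jobs)
                    if not done[i] and r <= step < d and (temp + h) / 2 <= T]
        if feasible:
            i = min(feasible, key=lambda j: jobs[j][2])   # coolest feasible job first
            done[i] = True
            temp = (temp + jobs[i][2]) / 2
            schedule.append((step, i))
        else:
            temp = temp / 2                               # idle step: processor cools down
    return sum(done), schedule

if __name__ == "__main__":
    jobs = [(0, 2, 1.9), (0, 2, 0.4), (1, 3, 1.0)]
    print(simulate(jobs, T=1.0, horizon=3))
```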
0801.4851
Rajgopal Kannan
Costas Busch and Rajgopal Kannan
Bicriteria Optimization in Routing Games
15 pages, submitted to SPAA
null
null
null
cs.GT cs.DS
null
Two important metrics for measuring the quality of routing paths are the maximum edge congestion $C$ and maximum path length $D$. Here, we study bicriteria in routing games where each player $i$ selfishly selects a path that simultaneously minimizes its maximum edge congestion $C_i$ and path length $D_i$. We study the stability and price of anarchy of two bicriteria games: - {\em Max games}, where the social cost is $\max(C,D)$ and the player cost is $\max(C_i, D_i)$. We prove that max games are stable and convergent under best-response dynamics, and that the price of anarchy is bounded above by the maximum path length in the players' strategy sets. We also show that this bound is tight in worst-case scenarios. - {\em Sum games}, where the social cost is $C+D$ and the player cost is $C_i+D_i$. For sum games, we first show the negative result that there are game instances that have no Nash-equilibria. Therefore, we examine an approximate game called the {\em sum-bucket game} that is always convergent (and therefore stable). We show that the price of anarchy in sum-bucket games is bounded above by $C^* \cdot D^* / (C^* + D^*)$ (with a poly-log factor), where $C^*$ and $D^*$ are the optimal coordinated congestion and path length. Thus, the sum-bucket game typically has better price of anarchy bounds than the max game. In fact, when either $C^*$ or $D^*$ is small (e.g. constant) the social cost of the Nash-equilibria is very close to the coordinated optimal $C^* + D^*$ (within a poly-log factor). We also show that the price of anarchy bound is tight for cases where both $C^*$ and $D^*$ are large.
[ { "version": "v1", "created": "Thu, 31 Jan 2008 19:29:13 GMT" } ]
2008-02-01T00:00:00
[ [ "Busch", "Costas", "" ], [ "Kannan", "Rajgopal", "" ] ]
0802.0017
Amir Rothschild
Amihood Amir and Klim Efremenko and Oren Kapah and Ely Porat and Amir Rothschild
Improved Deterministic Length Reduction
7 pages
null
null
null
cs.DS
null
This paper presents a new technique for deterministic length reduction. This technique improves the running time of the algorithm presented in \cite{LR07} for performing fast convolution in sparse data. While the regular fast convolution of vectors $V_1,V_2$ whose sizes are $N_1,N_2$ respectively, takes $O(N_1 \log N_2)$ using FFT, using the new technique for length reduction, the algorithm proposed in \cite{LR07} performs the convolution in $O(n_1 \log^3 n_1)$, where $n_1$ is the number of non-zero values in $V_1$. The algorithm assumes that $V_1$ is given in advance, and $V_2$ is given at running time. The novel technique presented in this paper improves the convolution time to $O(n_1 \log^2 n_1)$ {\sl deterministically}, which equals the best running time achieved by a {\sl randomized} algorithm. The preprocessing time of the new technique remains the same as the preprocessing time of \cite{LR07}, which is $O(n_1^2)$. This assumes and deals with the case where $N_1$ is polynomial in $n_1$. In the case where $N_1$ is exponential in $n_1$, a reduction to a polynomial case can be used. In this paper we also improve the preprocessing time of this reduction from $O(n_1^4)$ to $O(n_1^3{\rm polylog}(n_1))$.
[ { "version": "v1", "created": "Thu, 31 Jan 2008 21:59:33 GMT" } ]
2008-02-04T00:00:00
[ [ "Amir", "Amihood", "" ], [ "Efremenko", "Klim", "" ], [ "Kapah", "Oren", "" ], [ "Porat", "Ely", "" ], [ "Rothschild", "Amir", "" ] ]
0802.0802
Ping Li
Ping Li
On Approximating Frequency Moments of Data Streams with Skewed Projections
null
null
null
null
cs.DS cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose skewed stable random projections for approximating the pth frequency moments of dynamic data streams (0<p<=2), which have been frequently studied in theoretical computer science and database communities. Our method significantly (or even infinitely when p->1) improves previous methods based on (symmetric) stable random projections. Our proposed method is applicable to data streams that are (a) insertion only (the cash-register model); or (b) always non-negative (the strict Turnstile model), or (c) eventually non-negative at check points. This is only a minor restriction for practical applications. Our method works particularly well when p = 1+/- \Delta and \Delta is small, which is a practically important scenario. For example, \Delta may be the decay rate or interest rate, which are usually small. Of course, when \Delta = 0, one can compute the 1st frequency moment (i.e., the sum) essentially error-free using a simple counter. Our method may be viewed as a ``generalized counter'' in that it can count the total value in the future, taking into account the effect of decay or interest accrual. In summary, our contributions are two-fold. (A) This is the first proposal of skewed stable random projections. (B) Based on first principles, we develop various statistical estimators for skewed stable distributions, including their variances and error (tail) probability bounds, and consequently the sample complexity bounds.
[ { "version": "v1", "created": "Wed, 6 Feb 2008 13:56:51 GMT" } ]
2008-02-07T00:00:00
[ [ "Li", "Ping", "" ] ]
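For readers unfamiliar with the baseline being improved upon: symmetric stable random projections for p = 1 project the data onto i.i.d. Cauchy variables and recover the L1 norm from the sample median of the absolute projections. The sketch below is that symmetric baseline only, not the paper's skewed projections; the number of projections k and the seed are arbitrary illustration choices.

```python
import math
import random
import statistics

def l1_estimate(vec, k=400, seed=1):
    """Estimate sum(|v_i|) from k Cauchy projections via the median estimator."""
    rng = random.Random(seed)
    projs = []
    for _ in range(k):
        s = 0.0
        for v in vec:
            c = math.tan(math.pi * (rng.random() - 0.5))   # standard Cauchy draw
            s += v * c
        projs.append(abs(s))
    # The projection is Cauchy with scale ||v||_1, and the median of the
    # absolute value of such a variable equals its scale parameter.
    return statistics.median(projs)

if __name__ == "__main__":
    vec = [3.0, -1.0, 4.0, 1.0, -5.0]
    print(sum(abs(v) for v in vec), l1_estimate(vec))
```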
0802.0835
Rossano Venturini
Paolo Ferragina, Igor Nitto and Rossano Venturini
Bit-Optimal Lempel-Ziv compression
null
null
null
null
cs.DS cs.IT math.IT
null
One of the most famous and investigated lossless data-compression schemes is the one introduced by Lempel and Ziv about 40 years ago. This compression scheme is known as "dictionary-based compression" and consists of squeezing an input string by replacing some of its substrings with (shorter) codewords which are actually pointers to a dictionary of phrases built as the string is processed. Surprisingly enough, although many fundamental results are nowadays known about upper bounds on the speed and effectiveness of this compression process, ``we are not aware of any parsing scheme that achieves optimality when the LZ77-dictionary is in use under any constraint on the codewords other than being of equal length'' [N. Rajpoot and C. Sahinalp. Handbook of Lossless Data Compression, chapter Dictionary-based data compression. Academic Press, 2002. p. 159]. Here optimality means to achieve the minimum number of bits in compressing each individual input string, without any assumption on its generating source. In this paper we provide the first LZ-based compressor which computes the bit-optimal parsing of any input string in efficient time and optimal space, for a general class of variable-length codeword encodings which encompasses most of the ones typically used in data compression and in the design of search engines and compressed indexes.
[ { "version": "v1", "created": "Wed, 6 Feb 2008 16:31:54 GMT" } ]
2008-02-07T00:00:00
[ [ "Ferragina", "Paolo", "" ], [ "Nitto", "Igor", "" ], [ "Venturini", "Rossano", "" ] ]
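To make the phrase/codeword terminology of the abstract above concrete, here is a naive greedy LZ77-style parser that always takes the longest previous match (allowing overlapping copies). It is only the classical greedy parsing shown for illustration; the paper's contribution, a bit-optimal parser, is not what this sketch implements.

```python
def greedy_lz77_parse(s):
    """Greedy longest-match parsing. Phrases are ('lit', ch) or ('copy', distance, length);
    copies may overlap the current position (self-referential matches)."""
    i, phrases = 0, []
    while i < len(s):
        best_len, best_dist = 0, 0
        for j in range(i):                       # try every earlier start position
            length = 0
            while i + length < len(s) and s[j + length] == s[i + length]:
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        if best_len >= 2:
            phrases.append(('copy', best_dist, best_len))
            i += best_len
        else:
            phrases.append(('lit', s[i]))
            i += 1
    return phrases

if __name__ == "__main__":
    print(greedy_lz77_parse("abababbbbb"))
    # [('lit', 'a'), ('lit', 'b'), ('copy', 2, 4), ('copy', 1, 4)]
```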
0802.1026
Benjamin Sach Mr
Benjamin Sach and Rapha\"el Clifford
An Empirical Study of Cache-Oblivious Priority Queues and their Application to the Shortest Path Problem
null
null
null
null
cs.DS cs.SE
null
In recent years the Cache-Oblivious model of external memory computation has provided an attractive theoretical basis for the analysis of algorithms on massive datasets. Much progress has been made in discovering algorithms that are asymptotically optimal or near optimal. However, to date there are still relatively few successful experimental studies. In this paper we compare two different Cache-Oblivious priority queues based on the Funnel and Bucket Heap and apply them to the single source shortest path problem on graphs with positive edge weights. Our results show that when RAM is limited and data is swapping to external storage, the Cache-Oblivious priority queues achieve orders of magnitude speedups over standard internal memory techniques. However, for the single source shortest path problem both on simulated and real world graph data, these speedups are markedly lower due to the time required to access the graph adjacency list itself.
[ { "version": "v1", "created": "Thu, 7 Feb 2008 18:02:11 GMT" } ]
2008-02-08T00:00:00
[ [ "Sach", "Benjamin", "" ], [ "Clifford", "Raphaël", "" ] ]
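For context on the abstract above: the "standard internal memory technique" that such studies compare against is essentially Dijkstra's algorithm driven by a binary heap. A minimal sketch of that baseline follows (it is not one of the cache-oblivious funnel or bucket-heap structures being benchmarked).

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, weight), ...]} with non-negative weights. Returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

if __name__ == "__main__":
    graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
    print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```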
0802.1059
Tobias Friedrich
Deepak Ajwani, Tobias Friedrich
Average-Case Analysis of Online Topological Ordering
22 pages, long version of ISAAC'07 paper
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of online topological ordering algorithms. We prove an expected runtime of O(n^2 polylog(n)) under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (SODA, 1990), Katriel and Bodlaender (TALG, 2006), and Pearce and Kelly (JEA, 2006). This is much less than the best known worst-case bound O(n^{2.75}) for this problem.
[ { "version": "v1", "created": "Thu, 7 Feb 2008 20:27:17 GMT" } ]
2008-02-08T00:00:00
[ [ "Ajwani", "Deepak", "" ], [ "Friedrich", "Tobias", "" ] ]
0802.1237
Gwena\"el Joret
Jean Cardinal, Samuel Fiorini, and Gwena\"el Joret
Minimum Entropy Orientations
Referees' comments incorporated
Operations Research Letters 36 (2008), pp. 680-683
10.1016/j.orl.2008.06.010
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study graph orientations that minimize the entropy of the in-degree sequence. The problem of finding such an orientation is an interesting special case of the minimum entropy set cover problem previously studied by Halperin and Karp [Theoret. Comput. Sci., 2005] and by the current authors [Algorithmica, to appear]. We prove that the minimum entropy orientation problem is NP-hard even if the graph is planar, and that there exists a simple linear-time algorithm that returns an approximate solution with an additive error guarantee of 1 bit. This improves on the only previously known algorithm which has an additive error guarantee of log_2 e bits (approx. 1.4427 bits).
[ { "version": "v1", "created": "Sat, 9 Feb 2008 01:38:06 GMT" }, { "version": "v2", "created": "Mon, 22 Sep 2008 14:43:52 GMT" } ]
2008-10-28T00:00:00
[ [ "Cardinal", "Jean", "" ], [ "Fiorini", "Samuel", "" ], [ "Joret", "Gwenaël", "" ] ]
0802.1338
Shai Gutner
Shai Gutner and Michael Tarsi
Some results on (a:b)-choosability
null
null
null
null
cs.DM cs.CC cs.DS
null
A solution to a problem of Erd\H{o}s, Rubin and Taylor is obtained by showing that if a graph $G$ is $(a:b)$-choosable, and $c/d > a/b$, then $G$ is not necessarily $(c:d)$-choosable. Applying probabilistic methods, an upper bound for the $k^{th}$ choice number of a graph is given. We also prove that a directed graph with maximum outdegree $d$ and no odd directed cycle is $(k(d+1):k)$-choosable for every $k \geq 1$. Other results presented in this article are related to the strong choice number of graphs (a generalization of the strong chromatic number). We conclude with complexity analysis of some decision problems related to graph choosability.
[ { "version": "v1", "created": "Sun, 10 Feb 2008 17:46:54 GMT" } ]
2008-02-12T00:00:00
[ [ "Gutner", "Shai", "" ], [ "Tarsi", "Michael", "" ] ]
0802.1427
Klim Efremenko
Klim Efremenko, Ely Porat
Approximating General Metric Distances Between a Pattern and a Text
This is an updated version of the paper that appeared in SODA 2008
SODA 2008
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $T=t_0 ... t_{n-1}$ be a text and $P = p_0 ... p_{m-1}$ a pattern taken from some finite alphabet set $\Sigma$, and let $\mathrm{dist}$ be a metric on $\Sigma$. We consider the problem of calculating the sum of distances between the symbols of $P$ and the symbols of substrings of $T$ of length $m$ for all possible offsets. We present an $\epsilon$-approximation algorithm for this problem which runs in time $O(\frac{1}{\epsilon^2}n\cdot \mathrm{polylog}(n,|\Sigma|))$.
[ { "version": "v1", "created": "Mon, 11 Feb 2008 12:36:31 GMT" } ]
2008-02-12T00:00:00
[ [ "Efremenko", "Klim", "" ], [ "Porat", "Ely", "" ] ]
0802.1471
Ronald de Wolf
Ronald de Wolf (CWI Amsterdam)
Error-Correcting Data Structures
15 pages LaTeX; an abridged version will appear in the Proceedings of the STACS 2009 conference
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This new model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.
[ { "version": "v1", "created": "Mon, 11 Feb 2008 16:35:49 GMT" }, { "version": "v2", "created": "Mon, 1 Dec 2008 14:25:48 GMT" } ]
2008-12-01T00:00:00
[ [ "de Wolf", "Ronald", "", "CWI Amsterdam" ] ]
0802.1685
Christoph Durr
Marcin Bienkowski, Marek Chrobak, Christoph Durr, Mathilde Hurand, Artur Jez, Lukasz Jez, Jakub Lopuszanski, Grzegorz Stachowiak
Generalized Whac-a-Mole
null
null
null
null
cs.DS
null
We consider online competitive algorithms for the problem of collecting weighted items from a dynamic set S, when items are added to or deleted from S over time. The objective is to maximize the total weight of collected items. We study the general version, as well as variants with various restrictions, including the following: the uniform case, when all items have the same weight, the decremental sets, when all items are present at the beginning and only deletion operations are allowed, and dynamic queues, where the dynamic set is ordered and only its prefixes can be deleted (with no restriction on insertions). The dynamic queue case is a generalization of bounded-delay packet scheduling (also referred to as buffer management). We present several upper and lower bounds on the competitive ratio for these variants.
[ { "version": "v1", "created": "Tue, 12 Feb 2008 18:41:46 GMT" }, { "version": "v2", "created": "Sun, 17 Feb 2008 00:09:51 GMT" } ]
2016-09-08T00:00:00
[ [ "Bienkowski", "Marcin", "" ], [ "Chrobak", "Marek", "" ], [ "Durr", "Christoph", "" ], [ "Hurand", "Mathilde", "" ], [ "Jez", "Artur", "" ], [ "Jez", "Lukasz", "" ], [ "Lopuszanski", "Jakub", "" ], [ "Stachowiak", "Grzegorz", "" ] ]
0802.1722
Saket Saurabh
Omid Amini, Fedor V. Fomin and Saket Saurabh
Parameterized Algorithms for Partial Cover Problems
20 pages, 1 figure
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Covering problems are fundamental classical problems in optimization, computer science and complexity theory. Typically an input to these problems is a family of sets over a finite universe and the goal is to cover the elements of the universe with as few sets of the family as possible. The variations of covering problems include well known problems like Set Cover, Vertex Cover, Dominating Set and Facility Location to name a few. Recently there has been a lot of study on partial covering problems, a natural generalization of covering problems. Here, the goal is not to cover all the elements but to cover a specified number of elements with the minimum number of sets. In this paper we study partial covering problems in graphs in the realm of parameterized complexity. Classical (non-partial) versions of all these problems have been intensively studied in planar graphs and in graphs excluding a fixed graph $H$ as a minor. However, the techniques developed for parameterized versions of non-partial covering problems cannot be applied directly to their partial counterparts. The approach we use, to show that various partial covering problems are fixed parameter tractable on planar graphs, graphs of bounded local treewidth and graphs excluding some graph as a minor, is quite different from previously known techniques. The main idea behind our approach is the concept of implicit branching. We find the implicit branching technique to be interesting on its own and believe that it can be used for some other problems.
[ { "version": "v1", "created": "Tue, 12 Feb 2008 21:19:40 GMT" } ]
2008-02-14T00:00:00
[ [ "Amini", "Omid", "" ], [ "Fomin", "Fedor V.", "" ], [ "Saurabh", "Saket", "" ] ]
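As plain background for the abstract above, partial covering can be illustrated by the classical greedy rule: repeatedly pick the set covering the most uncovered elements until at least k elements are covered. The sketch below is only that simple heuristic, not the implicit-branching FPT algorithms developed in the paper.

```python
def greedy_partial_cover(sets, k):
    """sets: list of Python sets. Greedily choose sets until >= k elements are covered.
    Returns (chosen indices, covered elements), or None if k cannot be reached."""
    covered, chosen = set(), []
    while len(covered) < k:
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            return None                    # no set adds new elements, so k is unreachable
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

if __name__ == "__main__":
    S = [{0, 1, 2, 3}, {3, 4, 5}, {6, 7}, {0, 8, 9}]
    print(greedy_partial_cover(S, k=7))  # e.g. ([0, 1, 2], {0, 1, 2, 3, 4, 5, 6, 7})
```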
0802.1957
Sudhir Singh
Sudhir Kumar Singh, Vwani P. Roychowdhury
To Broad-Match or Not to Broad-Match : An Auctioneer's Dilemma ?
33 pages, 10 figures, new results added, substantially revised
null
null
null
cs.GT cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We initiate the study of an interesting aspect of sponsored search advertising, namely the consequences of broad match-a feature where an ad of an advertiser can be mapped to a broader range of relevant queries, and not necessarily to the particular keyword(s) that ad is associated with. Starting with a very natural setting for strategies available to the advertisers, and via a careful look through the algorithmic lens, we first propose solution concepts for the game originating from the strategic behavior of advertisers as they try to optimize their budget allocation across various keywords. Next, we consider two broad match scenarios based on factors such as information asymmetry between advertisers and the auctioneer, and the extent of auctioneer's control on the budget splitting. In the first scenario, the advertisers have the full information about broad match and relevant parameters, and can reapportion their own budgets to utilize the extra information; in particular, the auctioneer has no direct control over budget splitting. We show that, the same broad match may lead to different equilibria, one leading to a revenue improvement, whereas another to a revenue loss. This leaves the auctioneer in a dilemma - whether to broad-match or not. This motivates us to consider another broad match scenario, where the advertisers have information only about the current scenario, and the allocation of the budgets unspent in the current scenario is in the control of the auctioneer. We observe that the auctioneer can always improve his revenue by judiciously using broad match. Thus, information seems to be a double-edged sword for the auctioneer.
[ { "version": "v1", "created": "Thu, 14 Feb 2008 03:45:07 GMT" }, { "version": "v2", "created": "Mon, 21 Jul 2008 19:40:28 GMT" } ]
2008-07-21T00:00:00
[ [ "Singh", "Sudhir Kumar", "" ], [ "Roychowdhury", "Vwani P.", "" ] ]
0802.2015
Steven de Rooij
Wouter Koolen and Steven de Rooij
Combining Expert Advice Efficiently
50 pages
null
null
null
cs.LG cs.DS cs.IT math.IT
null
We show how models for prediction with expert advice can be defined concisely and clearly using hidden Markov models (HMMs); standard HMM algorithms can then be used to efficiently calculate, among other things, how the expert predictions should be weighted according to the model. We cast many existing models as HMMs and recover the best known running times in each case. We also describe two new models: the switch distribution, which was recently developed to improve Bayesian/Minimum Description Length model selection, and a new generalisation of the fixed share algorithm based on run-length coding. We give loss bounds for all models and shed new light on their relationships.
[ { "version": "v1", "created": "Thu, 14 Feb 2008 14:54:57 GMT" }, { "version": "v2", "created": "Fri, 15 Feb 2008 10:59:15 GMT" } ]
2008-02-15T00:00:00
[ [ "Koolen", "Wouter", "" ], [ "de Rooij", "Steven", "" ] ]
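Background sketch for the abstract above: the simplest expert-advice weighting scheme is exponentially weighted averaging (Hedge). The paper's HMM view recovers this and richer models (fixed share, the switch distribution) as special cases; the snippet below is only that baseline, with an illustrative learning rate eta.

```python
import math

def hedge(expert_losses, eta=0.5):
    """expert_losses: list of rounds, each a list with one loss per expert.
    Returns the sequence of normalised weight vectors used in each round."""
    n = len(expert_losses[0])
    w = [1.0] * n
    history = []
    for losses in expert_losses:
        total = sum(w)
        history.append([wi / total for wi in w])          # weights used this round
        w = [wi * math.exp(-eta * l) for wi, l in zip(w, losses)]  # multiplicative update
    return history

if __name__ == "__main__":
    # Two experts; the second one is consistently better.
    rounds = [[1.0, 0.0], [1.0, 0.2], [0.8, 0.0], [1.0, 0.1]]
    for weights in hedge(rounds):
        print([round(x, 3) for x in weights])
```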
0802.2130
Ashkan Aazami
Ashkan Aazami
Domination in graphs with bounded propagation: algorithms, formulations and hardness results
24 pages
null
null
null
cs.DS cs.CC
null
We introduce a hierarchy of problems between the \textsc{Dominating Set} problem and the \textsc{Power Dominating Set} (PDS) problem called the $\ell$-round power dominating set ($\ell$-round PDS, for short) problem. For $\ell=1$, this is the \textsc{Dominating Set} problem, and for $\ell\geq n-1$, this is the PDS problem; here $n$ denotes the number of nodes in the input graph. In PDS the goal is to find a minimum size set of nodes $S$ that power dominates all the nodes, where a node $v$ is power dominated if (1) $v$ is in $S$ or it has a neighbor in $S$, or (2) $v$ has a neighbor $u$ such that $u$ and all of its neighbors except $v$ are power dominated. Note that rule (1) is the same as for the \textsc{Dominating Set} problem, and that rule (2) is a type of propagation rule that applies iteratively. The $\ell$-round PDS problem has the same set of rules as PDS, except we apply rule (2) in ``parallel'' in at most $\ell-1$ rounds. We prove that $\ell$-round PDS cannot be approximated better than $2^{\log^{1-\epsilon}{n}}$ even for $\ell=4$ in general graphs. We provide a dynamic programming algorithm to solve $\ell$-round PDS optimally in polynomial time on graphs of bounded tree-width. We present a PTAS (polynomial time approximation scheme) for $\ell$-round PDS on planar graphs for $\ell=O(\tfrac{\log{n}}{\log{\log{n}}})$. Finally, we give integer programming formulations for $\ell$-round PDS.
[ { "version": "v1", "created": "Fri, 15 Feb 2008 02:55:52 GMT" } ]
2008-02-18T00:00:00
[ [ "Aazami", "Ashkan", "" ] ]
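The domination and propagation rules in the abstract above can be simulated directly, which may help in parsing the definition. Given a candidate set S, the sketch below applies rule (1) once and then rule (2) round by round; the optional round limit mirrors (up to off-by-one conventions) the l-round restriction. This is only a definition checker, not an algorithm from the paper.

```python
def power_dominated(adj, S, max_rounds=None):
    """adj: {v: set of neighbours}. Returns the set of power-dominated nodes."""
    dominated = set()
    for v in S:                            # rule (1): S and its neighbours
        dominated.add(v)
        dominated |= adj[v]
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        newly = set()
        for u in dominated:                # rule (2): u has exactly one undominated neighbour
            undom = [w for w in adj[u] if w not in dominated]
            if len(undom) == 1:
                newly.add(undom[0])
        if not newly:
            break
        dominated |= newly                 # apply the round "in parallel"
        rounds += 1
    return dominated

if __name__ == "__main__":
    # A path 0-1-2-3-4: S = {0} power dominates the whole path via propagation.
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(sorted(power_dominated(path, {0})))  # [0, 1, 2, 3, 4]
```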
0802.2157
Shai Gutner
Shai Gutner
Choice numbers of graphs
null
null
null
null
cs.DM cs.CC cs.DS
null
A solution to a problem of Erd\H{o}s, Rubin and Taylor is obtained by showing that if a graph $G$ is $(a:b)$-choosable, and $c/d > a/b$, then $G$ is not necessarily $(c:d)$-choosable. The simplest case of another problem, stated by the same authors, is settled, proving that every 2-choosable graph is also $(4:2)$-choosable. Applying probabilistic methods, an upper bound for the $k^{th}$ choice number of a graph is given. We also prove that a directed graph with maximum outdegree $d$ and no odd directed cycle is $(k(d+1):k)$-choosable for every $k \geq 1$. Other results presented in this article are related to the strong choice number of graphs (a generalization of the strong chromatic number). We conclude with complexity analysis of some decision problems related to graph choosability.
[ { "version": "v1", "created": "Fri, 15 Feb 2008 09:05:54 GMT" } ]
2008-02-18T00:00:00
[ [ "Gutner", "Shai", "" ] ]
0802.2184
Jean Cardinal
Jean Cardinal, Christophe Dumeunier
Set Covering Problems with General Objective Functions
14 pages, 1 figure
null
null
null
cs.DS
null
We introduce a parameterized version of set cover that generalizes several previously studied problems. Given a ground set V and a collection of subsets S_i of V, a feasible solution is a partition of V such that each subset of the partition is included in one of the S_i. The problem involves maximizing the mean subset size of the partition, where the mean is the generalized mean of parameter p, taken over the elements. For p=-1, the problem is equivalent to the classical minimum set cover problem. For p=0, it is equivalent to the minimum entropy set cover problem, introduced by Halperin and Karp. For p=1, the problem includes the maximum-edge clique partition problem as a special case. We prove that the greedy algorithm simultaneously approximates the problem within a factor of (p+1)^1/p for any p in R^+, and that this is the best possible unless P=NP. These results both generalize and simplify previous results for special cases. We also consider the corresponding graph coloring problem, and prove several tractability and inapproximability results. Finally, we consider a further generalization of the set cover problem in which we aim at minimizing the sum of some concave function of the part sizes. As an application, we derive an approximation ratio for a Rent-or-Buy set cover problem.
[ { "version": "v1", "created": "Fri, 15 Feb 2008 11:56:28 GMT" } ]
2008-02-18T00:00:00
[ [ "Cardinal", "Jean", "" ], [ "Dumeunier", "Christophe", "" ] ]
0802.2228
Sebastian Ordyniak
Stephan Kreutzer, Sebastian Ordyniak
Digraph Decompositions and Monotonicity in Digraph Searching
null
null
null
null
cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider monotonicity problems for graph searching games. Variants of these games - defined by the type of moves allowed for the players - have been found to be closely connected to graph decompositions and associated width measures such as path- or tree-width. Of particular interest is the question whether these games are monotone, i.e. whether the cops can catch a robber without ever allowing the robber to reach positions that have been cleared before. The monotonicity problem for graph searching games has intensely been studied in the literature, but for two types of games the problem was left unresolved. These are the games on digraphs where the robber is invisible and lazy or visible and fast. In this paper, we solve the problems by giving examples showing that both types of games are non-monotone. Graph searching games on digraphs are closely related to recent proposals for digraph decompositions generalising tree-width to directed graphs. These proposals have partly been motivated by attempts to develop a structure theory for digraphs similar to the graph minor theory developed by Robertson and Seymour for undirected graphs, and partly by the immense number of algorithmic results using tree-width of undirected graphs and the hope that part of this success might be reproducible on digraphs using a directed tree-width. Unfortunately the number of applications for the digraphs measures introduced so far is still small. We therefore explore the limits of the algorithmic applicability of digraph decompositions. In particular, we show that various natural candidates for problems that might benefit from digraphs having small directed tree-width remain NP-complete even on almost acyclic graphs.
[ { "version": "v1", "created": "Fri, 15 Feb 2008 15:44:34 GMT" } ]
2008-02-18T00:00:00
[ [ "Kreutzer", "Stephan", "" ], [ "Ordyniak", "Sebastian", "" ] ]
0802.2305
Ping Li
Ping Li
Compressed Counting
null
null
null
null
cs.IT cs.CC cs.DM cs.DS cs.LG math.IT
null
Counting is among the most fundamental operations in computing. For example, counting the pth frequency moment has been a very active area of research, in theoretical computer science, databases, and data mining. When p=1, the task (i.e., counting the sum) can be accomplished using a simple counter. Compressed Counting (CC) is proposed for efficiently computing the pth frequency moment of a data stream signal A_t, where 0<p<=2. CC is applicable if the streaming data follow the Turnstile model, with the restriction that at the time t for the evaluation, A_t[i]>= 0, which includes the strict Turnstile model as a special case. For natural data streams encountered in practice, this restriction is minor. The underlying technique for CC is what we call skewed stable random projections, which captures the intuition that, when p=1 a simple counter suffices, and when p = 1 +/- \Delta with small \Delta, the sample complexity of a counter system should be low (continuously as a function of \Delta). We show that at small \Delta the sample complexity (number of projections) k = O(1/\epsilon) instead of O(1/\epsilon^2). Compressed Counting can serve as a basic building block for other tasks in statistics and computing, for example, estimating entropies of data streams and parameter estimation using the method of moments and maximum likelihood. Finally, another contribution is an algorithm for approximating the logarithmic norm, \sum_{i=1}^D\log A_t[i], and logarithmic distance. The logarithmic distance is useful in machine learning practice with heavy-tailed data.
[ { "version": "v1", "created": "Sun, 17 Feb 2008 16:42:52 GMT" }, { "version": "v2", "created": "Sun, 24 Feb 2008 09:51:09 GMT" } ]
2008-02-24T00:00:00
[ [ "Li", "Ping", "" ] ]
0802.2418
Jacob Scott
Christopher Crutchfield, Zoran Dzunic, Jeremy T. Fineman, David R. Karger, and Jacob Scott
Improved Approximations for Multiprocessor Scheduling Under Uncertainty
null
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents improved approximation algorithms for the problem of multiprocessor scheduling under uncertainty, or SUU, in which the execution of each job may fail probabilistically. This problem is motivated by the increasing use of distributed computing to handle large, computationally intensive tasks. In the SUU problem we are given n unit-length jobs and m machines, a directed acyclic graph G of precedence constraints among jobs, and unrelated failure probabilities q_{ij} for each job j when executed on machine i for a single timestep. Our goal is to find a schedule that minimizes the expected makespan, which is the expected time at which all jobs complete. Lin and Rajaraman gave the first approximations for this NP-hard problem for the special cases of independent jobs, precedence constraints forming disjoint chains, and precedence constraints forming trees. In this paper, we present asymptotically better approximation algorithms. In particular, we give an O(loglog min(m,n))-approximation for independent jobs (improving on the previously best O(log n)-approximation). We also give an O(log(n+m) loglog min(m,n))-approximation algorithm for precedence constraints that form disjoint chains (improving on the previously best O(log(n)log(m)log(n+m)/loglog(n+m))-approximation by a (log n/loglog n)^2 factor when n = poly(m)). Our algorithm for precedence constraints forming chains can also be used as a component for precedence constraints forming trees, yielding a similar improvement over the previously best algorithms for trees.
[ { "version": "v1", "created": "Mon, 18 Feb 2008 20:57:17 GMT" }, { "version": "v2", "created": "Tue, 19 Feb 2008 02:58:36 GMT" } ]
2008-02-19T00:00:00
[ [ "Crutchfield", "Christopher", "" ], [ "Dzunic", "Zoran", "" ], [ "Fineman", "Jeremy T.", "" ], [ "Karger", "David R.", "" ], [ "Scott", "Jacob", "" ] ]
0802.2528
Nitish Korula
Chandra Chekuri, Nitish Korula
Min-Cost 2-Connected Subgraphs With k Terminals
18 pages, 3 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the k-2VC problem, we are given an undirected graph G with edge costs and an integer k; the goal is to find a minimum-cost 2-vertex-connected subgraph of G containing at least k vertices. A slightly more general version is obtained if the input also specifies a subset S \subseteq V of terminals and the goal is to find a subgraph containing at least k terminals. Closely related to the k-2VC problem, and in fact a special case of it, is the k-2EC problem, in which the goal is to find a minimum-cost 2-edge-connected subgraph containing k vertices. The k-2EC problem was introduced by Lau et al., who also gave a poly-logarithmic approximation for it. No previous approximation algorithm was known for the more general k-2VC problem. We describe an O(\log n \log k) approximation for the k-2VC problem.
[ { "version": "v1", "created": "Mon, 18 Feb 2008 18:34:28 GMT" } ]
2008-02-19T00:00:00
[ [ "Chekuri", "Chandra", "" ], [ "Korula", "Nitish", "" ] ]
0802.2612
Sergey Gubin
Sergey Gubin
On Subgraph Isomorphism
Simplified, 6 pages
Polynomial size asymmetric linear model for Subgraph Isomorphism, Proceedings WCECS 2008, ISBN: 978-988-98671-0-2, pp.241-246
null
null
cs.DM cs.CC cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The article explicitly expresses Subgraph Isomorphism by a polynomial-size asymmetric linear system.
[ { "version": "v1", "created": "Tue, 19 Feb 2008 09:06:40 GMT" }, { "version": "v2", "created": "Thu, 14 Aug 2008 22:22:49 GMT" } ]
2008-11-10T00:00:00
[ [ "Gubin", "Sergey", "" ] ]
0802.2668
Shai Gutner
Shai Gutner
The complexity of planar graph choosability
null
Discrete Math. 159 (1996), 119-130
null
null
cs.DM cs.CC cs.DS
null
A graph $G$ is {\em $k$-choosable} if for every assignment of a set $S(v)$ of $k$ colors to every vertex $v$ of $G$, there is a proper coloring of $G$ that assigns to each vertex $v$ a color from $S(v)$. We consider the complexity of deciding whether a given graph is $k$-choosable for some constant $k$. In particular, it is shown that deciding whether a given planar graph is 4-choosable is NP-hard, and so is the problem of deciding whether a given planar triangle-free graph is 3-choosable. We also obtain simple constructions of a planar graph which is not 4-choosable and a planar triangle-free graph which is not 3-choosable.
[ { "version": "v1", "created": "Tue, 19 Feb 2008 15:26:19 GMT" } ]
2008-02-20T00:00:00
[ [ "Gutner", "Shai", "" ] ]
0802.2825
Pascal Weil
Thomas Thierauf, Fabian Wagner
The Isomorphism Problem for Planar 3-Connected Graphs is in Unambiguous Logspace
null
Dans Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux : France (2008)
null
null
cs.DS cs.CC
null
The isomorphism problem for planar graphs is known to be efficiently solvable. For planar 3-connected graphs, the isomorphism problem can be solved by efficient parallel algorithms, it is in the class $AC^1$. In this paper we improve the upper bound for planar 3-connected graphs to unambiguous logspace, in fact to $UL \cap coUL$. As a consequence of our method we get that the isomorphism problem for oriented graphs is in $NL$. We also show that the problems are hard for $L$.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:03:55 GMT" } ]
2008-02-21T00:00:00
[ [ "Thierauf", "Thomas", "" ], [ "Wagner", "Fabian", "" ] ]
0802.2826
Pascal Weil
Antti Valmari, Petri Lehtinen
Efficient Minimization of DFAs with Partial Transition Functions
null
Dans Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux : France (2008)
null
null
cs.IT cs.DS math.IT
null
Let PT-DFA mean a deterministic finite automaton whose transition relation is a partial function. We present an algorithm for minimizing a PT-DFA in $O(m \lg n)$ time and $O(m+n+\alpha)$ memory, where $n$ is the number of states, $m$ is the number of defined transitions, and $\alpha$ is the size of the alphabet. Time consumption does not depend on $\alpha$, because the $\alpha$ term arises from an array that is accessed at random and never initialized. It is not needed, if transitions are in a suitable order in the input. The algorithm uses two instances of an array-based data structure for maintaining a refinable partition. Its operations are all amortized constant time. One instance represents the classical blocks and the other a partition of transitions. Our measurements demonstrate the speed advantage of our algorithm on PT-DFAs over an $O(\alpha n \lg n)$ time, $O(\alpha n)$ memory algorithm.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:04:34 GMT" } ]
2008-02-21T00:00:00
[ [ "Valmari", "Antti", "" ], [ "Lehtinen", "Petri", "" ] ]
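For orientation, the classical (slower) way to minimize a DFA is Moore-style partition refinement: start from the accepting/rejecting split and repeatedly refine by transition signatures. The sketch below does exactly that for a possibly partial transition function, treating a missing transition as a distinct "no transition" outcome; it is only the textbook baseline, not the O(m lg n) refinable-partition algorithm of the paper.

```python
def moore_minimize_classes(states, alphabet, delta, accepting):
    """delta: dict (state, symbol) -> state; missing entries mean "no transition".
    Returns a dict mapping each state to the index of its equivalence class."""
    # Start from the accepting / non-accepting split.
    cls = {q: (1 if q in accepting else 0) for q in states}
    while True:
        # Signature: own class plus the class (or None) reached on each symbol.
        sig = {q: (cls[q],) + tuple(cls.get(delta.get((q, a))) for a in alphabet)
               for q in states}
        renum, new_cls = {}, {}
        for q in states:
            new_cls[q] = renum.setdefault(sig[q], len(renum))
        if len(set(new_cls.values())) == len(set(cls.values())):
            return new_cls                 # no class was split: fixed point reached
        cls = new_cls

if __name__ == "__main__":
    states = {0, 1, 2, 3}
    alphabet = ["a"]
    delta = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 1, (3, "a"): 2}
    accepting = {1, 2}
    print(moore_minimize_classes(states, alphabet, delta, accepting))  # 0 and 3 merge, 1 and 2 merge
```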
0802.2827
Pascal Weil
Johan M. M. Van Rooij, Hans L. Bodlaender
Design by Measure and Conquer, A Faster Exact Algorithm for Dominating Set
null
Dans Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux : France (2008)
null
null
cs.DS
null
The measure and conquer approach has proven to be a powerful tool to analyse exact algorithms for combinatorial problems, like Dominating Set and Independent Set. In this paper, we propose to use measure and conquer also as a tool in the design of algorithms. In an iterative process, we can obtain a series of branch and reduce algorithms. A mathematical analysis of an algorithm in the series with measure and conquer results in a quasiconvex programming problem. The solution by computer to this problem not only gives a bound on the running time, but also can give a new reduction rule, thus giving a new, possibly faster algorithm. This makes design by measure and conquer a form of computer aided algorithm design. When we apply the methodology to a Set Cover modelling of the Dominating Set problem, we obtain the currently fastest known exact algorithms for Dominating Set: an algorithm that uses $O(1.5134^n)$ time and polynomial space, and an algorithm that uses $O(1.5063^n)$ time.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:05:58 GMT" } ]
2008-02-21T00:00:00
[ [ "Van Rooij", "Johan M. M.", "" ], [ "Bodlaender", "Hans L.", "" ] ]
0802.2829
Pascal Weil
Maxime Crochemore (IGM), Lucian Ilie
Understanding maximal repetitions in strings
null
Dans Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux : France (2008)
null
null
cs.DS math.CO
null
The cornerstone of any algorithm computing all repetitions in a string of length n in O(n) time is the fact that the number of runs (or maximal repetitions) is O(n). We give a simple proof of this result. As a consequence of our approach, the stronger result concerning the linearity of the sum of exponents of all runs follows easily.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:10:15 GMT" } ]
2008-02-21T00:00:00
[ [ "Crochemore", "Maxime", "", "IGM" ], [ "Ilie", "Lucian", "" ] ]
0802.2832
Pascal Weil
Zvi Lotker, Boaz Patt-Shamir, Dror Rawitz
Rent, Lease or Buy: Randomized Algorithms for Multislope Ski Rental
null
Dans Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux : France (2008)
null
null
cs.DS
null
In the Multislope Ski Rental problem, the user needs a certain resource for some unknown period of time. To use the resource, the user must subscribe to one of several options, each of which consists of a one-time setup cost (``buying price''), and cost proportional to the duration of the usage (``rental rate''). The larger the price, the smaller the rent. The actual usage time is determined by an adversary, and the goal of an algorithm is to minimize the cost by choosing the best option at any point in time. Multislope Ski Rental is a natural generalization of the classical Ski Rental problem (where the only options are pure rent and pure buy), which is one of the fundamental problems of online computation. The Multislope Ski Rental problem is an abstraction of many problems where online decisions cannot be modeled by just two options, e.g., power management in systems which can be shut down in parts. In this paper we study randomized algorithms for Multislope Ski Rental. Our results include the best possible online randomized strategy for any additive instance, where the cost of switching from one option to another is the difference in their buying prices; and an algorithm that produces an $e$-competitive randomized strategy for any (non-additive) instance.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:13:19 GMT" } ]
2008-02-21T00:00:00
[ [ "Lotker", "Zvi", "" ], [ "Patt-Shamir", "Boaz", "" ], [ "Rawitz", "Dror", "" ] ]
0802.2834
Pascal Weil
Andreas Bj\"orklund, Thore Husfeldt, Petteri Kaski (HIIT), Mikko Koivisto (HIIT)
Trimmed Moebius Inversion and Graphs of Bounded Degree
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS math.CO
null
We study ways to expedite Yates's algorithm for computing the zeta and Moebius transforms of a function defined on the subset lattice. We develop a trimmed variant of Moebius inversion that proceeds point by point, finishing the calculation at a subset before considering its supersets. For an $n$-element universe $U$ and a family $\mathscr{F}$ of its subsets, trimmed Moebius inversion allows us to compute the number of packings, coverings, and partitions of $U$ with $k$ sets from $\mathscr{F}$ in time within a polynomial factor (in $n$) of the number of supersets of the members of $\mathscr{F}$. Relying on an intersection theorem of Chung et al. (1986) to bound the sizes of set families, we apply these ideas to well-studied combinatorial optimisation problems on graphs of maximum degree $\Delta$. In particular, we show how to compute the Domatic Number in time within a polynomial factor of $(2^{\Delta+1}-2)^{n/(\Delta+1)}$ and the Chromatic Number in time within a polynomial factor of $(2^{\Delta+1}-\Delta-1)^{n/(\Delta+1)}$. For any constant $\Delta$, these bounds are $O\bigl((2-\epsilon)^n\bigr)$ for $\epsilon>0$ independent of the number of vertices $n$.
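As background for the trimming idea, the untrimmed zeta and Moebius transforms that Yates's algorithm computes over the subset lattice can be written as the standard coordinate-by-coordinate dynamic programme below (subsets encoded as bitmasks). The function names are illustrative; the paper's trimmed variant, which visits only supersets of members of the given family, is not reproduced here.

```python
# Standard (untrimmed) transforms over the subset lattice of an n-element
# universe, with subsets encoded as bitmasks 0..2^n - 1. Yates's algorithm is
# exactly this coordinate-by-coordinate dynamic programme.

def zeta_transform(f, n):
    """g[S] = sum of f[T] over all subsets T of S."""
    g = list(f)
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                g[S] += g[S ^ (1 << i)]
    return g

def moebius_transform(g, n):
    """Inverse of the zeta transform: recovers f from g."""
    f = list(g)
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                f[S] -= f[S ^ (1 << i)]
    return f

n = 3
f = [1, 2, 0, 5, 3, 0, 7, 1]  # arbitrary values indexed by bitmask
assert moebius_transform(zeta_transform(f, n), n) == f
# The zeta transform at the full set is the sum of f over all 8 subsets:
print(zeta_transform(f, n)[(1 << n) - 1])  # 19
```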
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:15:00 GMT" } ]
2008-02-21T00:00:00
[ [ "Björklund", "Andreas", "", "HIIT" ], [ "Husfeldt", "Thore", "", "HIIT" ], [ "Kaski", "Petteri", "", "HIIT" ], [ "Koivisto", "Mikko", "", "HIIT" ] ]
0802.2836
Pascal Weil
Vincenzo Bonifaci, Peter Korteweg, Alberto Marchetti-Spaccamela, Leen Stougie (CWI)
Minimizing Flow Time in the Wireless Gathering Problem
null
ACM Transactions on Algorithms 7(3): 33:1-33:20 (2011)
10.1145/1978782.1978788
null
cs.DS cs.NI
null
We address the problem of efficient data gathering in a wireless network through multi-hop communication. We focus on the objective of minimizing the maximum flow time of a data packet. We prove that no polynomial time algorithm for this problem can have approximation ratio less than $\Omega(m^{1/3})$ when $m$ packets have to be transmitted, unless $P = NP$. We then use resource augmentation to assess the performance of a FIFO-like strategy. We prove that this strategy is 5-speed optimal, i.e., its cost remains within the optimal cost if we allow the algorithm to transmit data at a speed 5 times higher than that of the optimal solution we compare to.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:18:24 GMT" } ]
2019-07-01T00:00:00
[ [ "Bonifaci", "Vincenzo", "", "CWI" ], [ "Korteweg", "Peter", "", "CWI" ], [ "Marchetti-Spaccamela", "Alberto", "", "CWI" ], [ "Stougie", "Leen", "", "CWI" ] ]
0802.2838
Pascal Weil
Chandan Saha
Factoring Polynomials over Finite Fields using Balance Test
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS cs.DM
null
We study the problem of factoring univariate polynomials over finite fields. Under the assumption of the Extended Riemann Hypothesis (ERH), (Gao, 2001) designed a polynomial time algorithm that fails to factor only if the input polynomial satisfies a strong symmetry property, namely square balance. In this paper, we propose an extension of Gao's algorithm that fails only under an even stronger symmetry property. We also show that our property can be used to improve the time complexity of best deterministic algorithms on most input polynomials. The property also yields a new randomized polynomial time algorithm.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:18:52 GMT" } ]
2008-02-21T00:00:00
[ [ "Saha", "Chandan", "" ] ]
0802.2841
Pascal Weil
Patrick Briest, Martin Hoefer, Piotr Krysta
Stackelberg Network Pricing Games
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS cs.GT
null
We study a multi-player one-round game termed Stackelberg Network Pricing Game, in which a leader can set prices for a subset of $m$ priceable edges in a graph. The other edges have a fixed cost. Based on the leader's decision one or more followers optimize a polynomial-time solvable combinatorial minimization problem and choose a minimum cost solution satisfying their requirements based on the fixed costs and the leader's prices. The leader receives as revenue the total amount of prices paid by the followers for priceable edges in their solutions, and the problem is to find revenue maximizing prices. Our model extends several known pricing problems, including single-minded and unit-demand pricing, as well as Stackelberg pricing for certain follower problems like shortest path or minimum spanning tree. Our first main result is a tight analysis of a single-price algorithm for the single follower game, which provides a $(1+\epsilon) \log m$-approximation for any $\epsilon >0$. This can be extended to provide a $(1+\epsilon)(\log k + \log m)$-approximation for the general problem and $k$ followers. The latter result is essentially best possible, as the problem is shown to be hard to approximate within $\mathcal{O}(\log^\epsilon k + \log^\epsilon m)$. If followers have demands, the single-price algorithm provides a $(1+\epsilon)m^2$-approximation, and the problem is hard to approximate within $\mathcal{O}(m^\epsilon)$ for some $\epsilon >0$. Our second main result is a polynomial time algorithm for revenue maximization in the special case of Stackelberg bipartite vertex cover, which is based on non-trivial max-flow and LP-duality techniques. Our results can be extended to provide constant-factor approximations for any constant number of followers.
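The single-price idea behind the first main result can be illustrated on the shortest-path follower: offer one common price on every priceable edge, let the follower route optimally, and keep the candidate price of maximum revenue. The graph encoding, helper names, candidate-price set, and tie-breaking below are illustrative assumptions; the paper's contribution is the tight $(1+\epsilon)\log m$ analysis of this idea, not this sketch.

```python
import heapq

def follower_shortest_path(n, fixed_edges, priceable_edges, price, s, t):
    """Follower's cheapest s-t path when every priceable edge is offered at `price`.
    Returns (path_cost, number_of_priceable_edges_on_that_path).
    Ties are broken towards fewer priceable edges (pessimistic for the leader)."""
    adj = [[] for _ in range(n)]
    for u, v, c in fixed_edges:
        adj[u].append((v, c, 0))
        adj[v].append((u, c, 0))
    for u, v in priceable_edges:
        adj[u].append((v, price, 1))
        adj[v].append((u, price, 1))
    best = {s: (0.0, 0)}
    heap = [(0.0, 0, s)]
    done = set()
    while heap:
        cost, used, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == t:
            return cost, used
        for v, c, p in adj[u]:
            cand = (cost + c, used + p)
            if v not in best or cand < best[v]:
                best[v] = cand
                heapq.heappush(heap, (cand[0], cand[1], v))
    return float("inf"), 0

def single_price_algorithm(n, fixed_edges, priceable_edges, s, t, candidate_prices):
    """Try one uniform price per candidate and keep the revenue-maximizing one."""
    best_revenue, best_price = 0.0, None
    for price in candidate_prices:
        _, used = follower_shortest_path(n, fixed_edges, priceable_edges, price, s, t)
        if price * used > best_revenue:
            best_revenue, best_price = price * used, price
    return best_revenue, best_price

# Example: a fixed s-t edge of cost 10 competes with a priceable 3-edge path.
fixed = [(0, 3, 10.0)]
priceable = [(0, 1), (1, 2), (2, 3)]
print(single_price_algorithm(4, fixed, priceable, 0, 3, [1.0, 2.0, 3.0, 10.0]))
# -> (9.0, 3.0): at price 3 the follower still prefers the priceable path.
```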
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:19:33 GMT" } ]
2008-02-21T00:00:00
[ [ "Briest", "Patrick", "" ], [ "Hoefer", "Martin", "" ], [ "Krysta", "Piotr", "" ] ]
0802.2843
Pascal Weil
Joshua Brody, Amit Chakrabarti
Sublinear Communication Protocols for Multi-Party Pointer Jumping and a Related Lower Bound
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.CC cs.DS
null
We study the one-way number-on-the-forehead (NOF) communication complexity of the $k$-layer pointer jumping problem with $n$ vertices per layer. This classic problem, which has connections to many aspects of complexity theory, has seen a recent burst of research activity, seemingly preparing the ground for an $\Omega(n)$ lower bound, for constant $k$. Our first result is a surprising sublinear -- i.e., $o(n)$ -- upper bound for the problem that holds for $k \ge 3$, dashing hopes for such a lower bound. A closer look at the protocol achieving the upper bound shows that all but one of the players involved are collapsing, i.e., their messages depend only on the composition of the layers ahead of them. We consider protocols for the pointer jumping problem where all players are collapsing. Our second result shows that a strong $n - O(\log n)$ lower bound does hold in this case. Our third result is another upper bound showing that nontrivial protocols for (a non-Boolean version of) pointer jumping are possible even when all players are collapsing. Our lower bound result uses a novel proof technique, different from those of earlier lower bounds that had an information-theoretic flavor. We hope this is useful in further study of the problem.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:20:14 GMT" } ]
2008-02-21T00:00:00
[ [ "Brody", "Joshua", "" ], [ "Chakrabarti", "Amit", "" ] ]
0802.2845
Pascal Weil
Eric Colin De Verdi\`ere (LIENS), Alexander Schrijver (CWI)
Shortest Vertex-Disjoint Two-Face Paths in Planar Graphs
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS math.CO
null
Let $G$ be a directed planar graph of complexity $n$, each arc having a nonnegative length. Let $s$ and $t$ be two distinct faces of $G$; let $s_1,...,s_k$ be vertices incident with $s$; let $t_1,...,t_k$ be vertices incident with $t$. We give an algorithm to compute $k$ pairwise vertex-disjoint paths connecting the pairs $(s_i,t_i)$ in $G$, with minimal total length, in $O(kn\log n)$ time.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:20:48 GMT" } ]
2008-02-21T00:00:00
[ [ "De Verdière", "Eric Colin", "", "LIENS" ], [ "Schrijver", "Alexander", "", "CWI" ] ]
0802.2846
Pascal Weil
Atlas F. Cook IV, Carola Wenk
Geodesic Fr\'echet Distance Inside a Simple Polygon
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS cs.CG
null
We unveil an alluring alternative to parametric search that applies to both the non-geodesic and geodesic Fr\'echet optimization problems. This randomized approach is based on a variant of red-blue intersections and is appealing due to its elegance and practical efficiency when compared to parametric search. We present the first algorithm for the geodesic Fr\'echet distance between two polygonal curves $A$ and $B$ inside a simple bounding polygon $P$. The geodesic Fr\'echet decision problem is solved almost as fast as its non-geodesic sibling and requires $O(N^{2}\log k)$ time and $O(k+N)$ space after $O(k)$ preprocessing, where $N$ is the larger of the complexities of $A$ and $B$ and $k$ is the complexity of $P$. The geodesic Fr\'echet optimization problem is solved by a randomized approach in $O(k+N^{2}\log kN\log N)$ expected time and $O(k+N^{2})$ space. This runtime is only a logarithmic factor larger than the standard non-geodesic Fr\'echet algorithm (Alt and Godau 1995). Results are also presented for the geodesic Fr\'echet distance in a polygonal domain with obstacles and the geodesic Hausdorff distance for sets of points or sets of line segments inside a simple polygon $P$.
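For readers who want the flavour of the underlying distance notion, the following is the classical dynamic programme for the discrete, non-geodesic Fréchet distance (Eiter and Mannila, 1994), included only as a reference point; it is not the geodesic algorithm of the paper, and the function name and sample curves are illustrative.

```python
# Discrete, non-geodesic Frechet distance between two point sequences, with
# Euclidean leash lengths and no obstacles. Shown only to fix intuition; the
# paper's algorithms handle the continuous geodesic case inside a polygon.
from functools import lru_cache
from math import dist  # Python 3.8+

def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0
```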
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:21:19 GMT" } ]
2008-05-21T00:00:00
[ [ "Cook", "Atlas F.", "IV" ], [ "Wenk", "Carola", "" ] ]
0802.2847
Pascal Weil
Ulrich Meyer
On Dynamic Breadth-First Search in External-Memory
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS
null
We provide the first non-trivial result on dynamic breadth-first search (BFS) in external-memory: For general sparse undirected graphs of initially $n$ nodes and $O(n)$ edges and monotone update sequences of either $\Theta(n)$ edge insertions or $\Theta(n)$ edge deletions, we prove an amortized high-probability bound of $O(n/B^{2/3}+\mathrm{sort}(n)\cdot \log B)$ I/Os per update. In contrast, the currently best approach for static BFS on sparse undirected graphs requires $\Omega(n/B^{1/2}+\mathrm{sort}(n))$ I/Os.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:21:21 GMT" } ]
2008-02-21T00:00:00
[ [ "Meyer", "Ulrich", "" ] ]
0802.2850
Pascal Weil
Samir Datta, Raghav Kulkarni, Sambuddha Roy (IBM IRL)
Deterministically Isolating a Perfect Matching in Bipartite Planar Graphs
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS math.CO
null
We present a deterministic way of assigning small (log bit) weights to the edges of a bipartite planar graph so that the minimum weight perfect matching becomes unique. The isolation lemma as described in (Mulmuley et al. 1987) achieves the same for general graphs using a randomized weighting scheme, whereas we can do it deterministically when restricted to bipartite planar graphs. As a consequence, we reduce both decision and construction versions of the matching problem to testing whether a matrix is singular, under the promise that its determinant is 0 or 1, thus obtaining a highly parallel SPL algorithm for bipartite planar graphs. This improves the earlier known bounds of non-uniform SPL by (Allender et al. 1999) and $NC^2$ by (Miller and Naor 1995, Mahajan and Varadarajan 2000). It also rekindles the hope of obtaining a deterministic parallel algorithm for constructing a perfect matching in non-bipartite planar graphs, which has been open for a long time. Our techniques are elementary and simple.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:21:52 GMT" } ]
2008-02-21T00:00:00
[ [ "Datta", "Samir", "", "IBM IRL" ], [ "Kulkarni", "Raghav", "", "IBM IRL" ], [ "Roy", "Sambuddha", "", "IBM IRL" ] ]
0802.2851
Pascal Weil
Pinyan Lu, Changyuan Yu
An Improved Randomized Truthful Mechanism for Scheduling Unrelated Machines
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS
null
We study the scheduling problem on unrelated machines in the mechanism design setting. This problem was proposed and studied in the seminal paper (Nisan and Ronen 1999), where they gave a 1.75-approximation randomized truthful mechanism for the case of two machines. We improve this result by giving a 1.6737-approximation randomized truthful mechanism. We also generalize our result to a $0.8368m$-approximation mechanism for task scheduling with $m$ machines, which improves the previous best upper bound of $0.875m$ (Mu'alem and Schapira 2007).
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:22:30 GMT" } ]
2008-02-21T00:00:00
[ [ "Lu", "Pinyan", "" ], [ "Yu", "Changyuan", "" ] ]
0802.2852
Pascal Weil
Martin Dietzfelbinger, Jonathan E. Rowe, Ingo Wegener, Philipp Woelfel
Tight Bounds for Blind Search on the Integers
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS
null
We analyze a simple random process in which a token is moved in the interval $A=\{0,...,n\}$: Fix a probability distribution $\mu$ over $\{1,...,n\}$. Initially, the token is placed in a random position in $A$. In round $t$, a random value $d$ is chosen according to $\mu$. If the token is in position $a\geq d$, then it is moved to position $a-d$. Otherwise it stays put. Let $T$ be the number of rounds until the token reaches position 0. We show tight bounds for the expectation of $T$ for the optimal distribution $\mu$. More precisely, we show that $\min_\mu\{E_\mu(T)\}=\Theta((\log n)^2)$. For the proof, a novel potential function argument is introduced. The research is motivated by the problem of approximating the minimum of a continuous function over $[0,1]$ with a ``blind'' optimization strategy.
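The process is simple enough to simulate directly; the sketch below estimates $E(T)$ empirically for two choices of $\mu$, purely to illustrate the $\Theta((\log n)^2)$ behaviour claimed above. The distributions chosen (a point mass and a power-of-two "guess the scale" distribution) are illustrative, not the optimal $\mu$ from the paper.

```python
import random

def rounds_until_zero(n, sample_step, rng):
    """Simulate the token process: start uniformly in {0,...,n}; each round
    draw d from mu and move from a to a - d if d <= a, otherwise stay put."""
    a = rng.randint(0, n)
    t = 0
    while a != 0:
        d = sample_step(rng)
        if d <= a:
            a -= d
        t += 1
    return t

def estimate_ET(n, sample_step, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(rounds_until_zero(n, sample_step, rng) for _ in range(trials)) / trials

n = 1000
# mu concentrated on step size 1: E[T] is about n/2.
print(estimate_ET(n, lambda rng: 1))
# mu spread over powers of two up to n ("guess the scale"): E[T] grows like
# (log n)^2, the order of magnitude that the optimal mu achieves.
print(estimate_ET(n, lambda rng: 2 ** rng.randint(0, n.bit_length() - 1)))
```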
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:22:33 GMT" } ]
2008-02-21T00:00:00
[ [ "Dietzfelbinger", "Martin", "" ], [ "Rowe", "Jonathan E.", "" ], [ "Wegener", "Ingo", "" ], [ "Woelfel", "Philipp", "" ] ]
0802.2854
Pascal Weil
Thomas Erlebach, Torben Hagerup, Klaus Jansen, Moritz Minzlaff, Alexander Wolff
Trimming of Graphs, with Application to Point Labeling
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DM cs.DS math.CO
null
For $t,g>0$, a vertex-weighted graph of total weight $W$ is $(t,g)$-trimmable if it contains a vertex-induced subgraph of total weight at least $(1-1/t)W$ and with no simple path of more than $g$ edges. A family of graphs is trimmable if for each constant $t>0$, there is a constant $g=g(t)$ such that every vertex-weighted graph in the family is $(t,g)$-trimmable. We show that every family of graphs of bounded domino treewidth is trimmable. This implies that every family of graphs of bounded degree is trimmable if the graphs in the family have bounded treewidth or are planar. Based on this result, we derive a polynomial-time approximation scheme for the problem of labeling weighted points with nonoverlapping sliding labels of unit height and given lengths so as to maximize the total weight of the labeled points. This settles one of the last major open questions in the theory of map labeling.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:23:38 GMT" } ]
2008-02-21T00:00:00
[ [ "Erlebach", "Thomas", "" ], [ "Hagerup", "Torben", "" ], [ "Jansen", "Klaus", "" ], [ "Minzlaff", "Moritz", "" ], [ "Wolff", "Alexander", "" ] ]
0802.2855
Pascal Weil
Thomas Erlebach, Michael Hoffmann, Danny Krizanc, Mat\'us Mihal'\'ak, Rajeev Raman
Computing Minimum Spanning Trees with Uncertainty
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS
null
We consider the minimum spanning tree problem in a setting where information about the edge weights of the given graph is uncertain. Initially, for each edge $e$ of the graph only a set $A_e$, called an uncertainty area, that contains the actual edge weight $w_e$ is known. The algorithm can `update' $e$ to obtain the edge weight $w_e \in A_e$. The task is to output the edge set of a minimum spanning tree after a minimum number of updates. An algorithm is $k$-update competitive if it makes at most $k$ times as many updates as the optimum. We present a 2-update competitive algorithm if all areas $A_e$ are open or trivial, which is the best possible among deterministic algorithms. The condition on the areas $A_e$ is to exclude degenerate inputs for which no constant update competitive algorithm can exist. Next, we consider a setting where the vertices of the graph correspond to points in Euclidean space and the weight of an edge is equal to the distance of its endpoints. The location of each point is initially given as an uncertainty area, and an update reveals the exact location of the point. We give a general relation between the edge uncertainty and the vertex uncertainty versions of a problem and use it to derive a 4-update competitive algorithm for the minimum spanning tree problem in the vertex uncertainty model. Again, we show that this is best possible among deterministic algorithms.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:24:10 GMT" } ]
2008-02-21T00:00:00
[ [ "Erlebach", "Thomas", "" ], [ "Hoffmann", "Michael", "" ], [ "Krizanc", "Danny", "" ], [ "Mihal'ák", "Matús", "" ], [ "Raman", "Rajeev", "" ] ]
0802.2856
Pascal Weil
Javier Esparza, Stefan Kiefer, Michael Luttenberger
Convergence Thresholds of Newton's Method for Monotone Polynomial Equations
version 2 deposited February 29, after the end of the STACS conference. Two minor mistakes corrected
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS cs.IT cs.NA math.IT
null
Monotone systems of polynomial equations (MSPEs) are systems of fixed-point equations $X_1 = f_1(X_1, ..., X_n),$ $..., X_n = f_n(X_1, ..., X_n)$ where each $f_i$ is a polynomial with positive real coefficients. The question of computing the least non-negative solution of a given MSPE $\vec X = \vec f(\vec X)$ arises naturally in the analysis of stochastic models such as stochastic context-free grammars, probabilistic pushdown automata, and back-button processes. Etessami and Yannakakis have recently adapted Newton's iterative method to MSPEs. In a previous paper we have proved the existence of a threshold $k_{\vec f}$ for strongly connected MSPEs, such that after $k_{\vec f}$ iterations of Newton's method each new iteration computes at least 1 new bit of the solution. However, the proof was purely existential. In this paper we give an upper bound for $k_{\vec f}$ as a function of the minimal component of the least fixed-point $\mu\vec f$ of $\vec f(\vec X)$. Using this result we show that $k_{\vec f}$ is at most single exponential resp. linear for strongly connected MSPEs derived from probabilistic pushdown automata resp. from back-button processes. Further, we prove the existence of a threshold for arbitrary MSPEs after which each new iteration computes at least $1/w2^h$ new bits of the solution, where $w$ and $h$ are the width and height of the DAG of strongly connected components.
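A small numerical sketch of the Newton iteration on a toy MSPE follows; the two-variable system, its coefficients, and the iteration count are illustrative assumptions. Starting at 0, each step solves a linearized system, in line with the adaptation of Newton's method to MSPEs credited above to Etessami and Yannakakis.

```python
import numpy as np

# Toy MSPE with two variables (coefficients are illustrative, not from the paper):
#   X1 = 0.4*X1*X2 + 0.1
#   X2 = 0.3*X2**2 + 0.2*X1 + 0.1
def f(x):
    x1, x2 = x
    return np.array([0.4 * x1 * x2 + 0.1,
                     0.3 * x2 ** 2 + 0.2 * x1 + 0.1])

def jacobian(x):
    x1, x2 = x
    return np.array([[0.4 * x2, 0.4 * x1],
                     [0.2,      0.6 * x2]])

def newton_least_fixed_point(f, jacobian, dim, iters=20):
    """Newton's method for X = f(X), started at 0:
    x <- x + (I - f'(x))^{-1} (f(x) - x)."""
    x = np.zeros(dim)
    for _ in range(iters):
        x = x + np.linalg.solve(np.eye(dim) - jacobian(x), f(x) - x)
    return x

x = newton_least_fixed_point(f, jacobian, 2)
print(x)         # approximately the least non-negative solution
print(f(x) - x)  # residual close to zero
```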
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:24:39 GMT" }, { "version": "v2", "created": "Fri, 29 Feb 2008 07:31:48 GMT" } ]
2008-02-29T00:00:00
[ [ "Esparza", "Javier", "" ], [ "Kiefer", "Stefan", "" ], [ "Luttenberger", "Michael", "" ] ]
0802.2857
Pascal Weil
Shachar Lovett
Lower bounds for adaptive linearity tests
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.CC cs.DS
null
Linearity tests are randomized algorithms which have oracle access to the truth table of some function f, and are supposed to distinguish between linear functions and functions which are far from linear. Linearity tests were first introduced by (Blum, Luby and Rubinfeld, 1993), and were later used in the PCP theorem, among other applications. The quality of a linearity test is described by its correctness c - the probability it accepts linear functions, its soundness s - the probability it accepts functions far from linear, and its query complexity q - the number of queries it makes. Subsequent work studied linearity tests with the aim of decreasing their soundness while keeping the query complexity small (one motivation being improved PCP constructions). (Samorodnitsky and Trevisan 2000) constructed the Complete Graph Test, and proved that no Hyper Graph Test can perform better than the Complete Graph Test. Later, in (Samorodnitsky and Trevisan 2006), they proved, among other results, that no non-adaptive linearity test can perform better than the Complete Graph Test. Their proof uses the algebraic machinery of the Gowers Norm. A result by (Ben-Sasson, Harsha and Raskhodnikova 2005) allows this lower bound to be generalized to adaptive linearity tests as well. We also prove the same optimal lower bound for adaptive linearity tests, but our proof technique is arguably simpler and more direct than the one used in (Samorodnitsky and Trevisan 2006). We also study, like (Samorodnitsky and Trevisan 2006), the behavior of linearity tests on quadratic functions. However, instead of analyzing the Gowers Norm of certain functions, we provide a more direct combinatorial proof, studying the behavior of linearity tests on random quadratic functions...
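For context, the basic three-query test of Blum, Luby and Rubinfeld over $\{0,1\}^n$ (with the function given as a truth table) looks as follows; the trial count, helper names, and example functions are illustrative, and this is the plain non-adaptive test rather than the Complete Graph Test or the adaptive tests analysed in the paper.

```python
import random

def blr_test(f, n, trials=1000, seed=0):
    """Empirical acceptance rate of the basic Blum-Luby-Rubinfeld test:
    pick x, y uniformly from {0,1}^n (as bitmasks) and check
    f(x) XOR f(y) == f(x XOR y). Here f is a truth table of length 2**n,
    standing in for oracle access."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        x = rng.randrange(1 << n)
        y = rng.randrange(1 << n)
        accepted += (f[x] ^ f[y]) == f[x ^ y]
    return accepted / trials

n = 4
# A linear function: parity of the bits selected by the mask 0b1011.
linear = [bin(x & 0b1011).count("1") % 2 for x in range(1 << n)]
# A random function, which is far from linear with high probability.
rng = random.Random(1)
far = [rng.randrange(2) for _ in range(1 << n)]
print(blr_test(linear, n))  # 1.0: linear functions always pass
print(blr_test(far, n))     # below 1 unless the random table happens to be linear
```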
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:26:40 GMT" } ]
2008-02-21T00:00:00
[ [ "Lovett", "Shachar", "" ] ]
0802.2864
Pascal Weil
Iyad A. Kanj, Ljubomir Perkovic
On Geometric Spanners of Euclidean and Unit Disk Graphs
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS
null
We consider the problem of constructing bounded-degree planar geometric spanners of Euclidean and unit-disk graphs. It is well known that the Delaunay subgraph is a planar geometric spanner with stretch factor $C_{del}\approx 2.42$; however, its degree may not be bounded. Our first result is a very simple linear time algorithm for constructing a subgraph of the Delaunay graph with stretch factor $\rho =1+2\pi(k\cos{\frac{\pi}{k}})^{-1}$ and degree bounded by $k$, for any integer parameter $k\geq 14$. This result immediately implies an algorithm for constructing a planar geometric spanner of a Euclidean graph with stretch factor $\rho \cdot C_{del}$ and degree bounded by $k$, for any integer parameter $k\geq 14$. Moreover, the resulting spanner contains a Euclidean Minimum Spanning Tree (EMST) as a subgraph. Our second contribution lies in developing the structural results necessary to transfer our analysis and algorithm from Euclidean graphs to unit disk graphs, the usual model for wireless ad-hoc networks. We obtain a very simple distributed, {\em strictly-localized} algorithm that, given a unit disk graph embedded in the plane, constructs a geometric spanner with the above stretch factor and degree bound, and also containing an EMST as a subgraph. The obtained results dramatically improve the previous results in all aspects, as shown in the paper.
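A quick numerical check of the stretch bound stated above: for the smallest admissible parameter $k=14$, the formula gives $\rho\approx 1.46$, so the resulting planar spanner has stretch about $1.46\cdot 2.42\approx 3.53$ with degree at most 14. The value 2.42 for $C_{del}$ is the constant quoted in the abstract; the script below is only arithmetic.

```python
from math import pi, cos

def rho(k):
    """Stretch factor rho = 1 + 2*pi / (k * cos(pi/k)) from the abstract."""
    return 1 + 2 * pi / (k * cos(pi / k))

C_DEL = 2.42  # Delaunay stretch constant as quoted in the abstract
for k in (14, 20, 50):
    print(k, round(rho(k), 3), round(rho(k) * C_DEL, 3))
# k = 14 already gives rho ~ 1.460, i.e. a bounded-degree planar spanner
# with stretch roughly 3.53 and maximum degree 14.
```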
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:36:52 GMT" } ]
2008-02-21T00:00:00
[ [ "Kanj", "Iyad A.", "" ], [ "Perkovic", "Ljubomir", "" ] ]
0802.2867
Pascal Weil
Viet Tung Hoang, Wing-Kin Sung
Fixed Parameter Polynomial Time Algorithms for Maximum Agreement and Compatible Supertrees
null
In Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux, France (2008)
null
null
cs.DS
null
Consider a set of labels $L$ and a set of trees ${\mathcal T} = \{{\mathcal T}^{(1)}, {\mathcal T}^{(2)}, ..., {\mathcal T}^{(k)}\}$ where each tree ${\mathcal T}^{(i)}$ is distinctly leaf-labeled by some subset of $L$. One fundamental problem is to find the biggest tree (denoted as supertree) to represent ${\mathcal T}$ which minimizes the disagreements with the trees in ${\mathcal T}$ under certain criteria. This problem finds applications in phylogenetics, database, and data mining. In this paper, we focus on two particular supertree problems, namely, the maximum agreement supertree problem (MASP) and the maximum compatible supertree problem (MCSP). These two problems are known to be NP-hard for $k \geq 3$. This paper gives the first polynomial time algorithms for both MASP and MCSP when both $k$ and the maximum degree $D$ of the trees are constant.
[ { "version": "v1", "created": "Wed, 20 Feb 2008 14:38:47 GMT" } ]
2008-02-21T00:00:00
[ [ "Hoang", "Viet Tung", "" ], [ "Sung", "Wing-Kin", "" ] ]