id: string (9-16 chars)
submitter: string (4-52 chars)
authors: string (4-937 chars)
title: string (7-243 chars)
comments: string (1-472 chars)
journal-ref: string (4-244 chars)
doi: string (14-55 chars)
report-no: string (3-125 chars)
categories: string (5-97 chars)
license: string (9 distinct values)
abstract: string (33-2.95k chars)
versions: list
update_date: timestamp[s]
authors_parsed: sequence
0802.3448
Haim Kaplan
Edith Cohen and Haim Kaplan
Sketch-Based Estimation of Subpopulation-Weight
null
null
null
null
cs.DB cs.DS cs.NI cs.PF
null
Summaries of massive data sets support approximate query processing over the original data. A basic aggregate over a set of records is the weight of subpopulations specified as a predicate over records' attributes. Bottom-k sketches are a powerful summarization format of weighted items that includes priority sampling and the classic weighted sampling without replacement. They can be computed efficiently for many representations of the data including distributed databases and data streams. We derive novel unbiased estimators and efficient confidence bounds for subpopulation weight. Our estimators and bounds are tailored by distinguishing between applications (such as data streams) where the total weight of the sketched set can be computed by the summarization algorithm without a significant use of additional resources, and applications (such as sketches of network neighborhoods) where this is not the case. Our rigorous derivations are based on clever applications of the Horvitz-Thompson estimator, and are complemented by efficient computational methods. We demonstrate their benefit on a wide range of Pareto distributions.
[ { "version": "v1", "created": "Sat, 23 Feb 2008 15:25:04 GMT" } ]
2008-02-26T00:00:00
[ [ "Cohen", "Edith", "" ], [ "Kaplan", "Haim", "" ] ]
0802.3881
Jorge Sousa Pinto
Jos\'e Bacelar Almeida, Jorge Sousa Pinto
Deriving Sorting Algorithms
Technical Report
null
null
DI-PURe-06.04.01
cs.DS cs.LO
null
This paper proposes new derivations of three well-known sorting algorithms, in their functional formulation. The approach we use is based on three main ingredients: first, the algorithms are derived from a simpler algorithm, i.e. the specification is already a solution to the problem (in this sense our derivations are program transformations). Secondly, a mixture of inductive and coinductive arguments is used in a uniform, algebraic style in our reasoning. Finally, the approach uses structural invariants so as to strengthen the equational reasoning with logical arguments that cannot be captured in the algebraic framework.
[ { "version": "v1", "created": "Tue, 26 Feb 2008 19:47:57 GMT" } ]
2008-02-27T00:00:00
[ [ "Almeida", "José Bacelar", "" ], [ "Pinto", "Jorge Sousa", "" ] ]
0802.4040
Stephan Mertens
Stefan Boettcher, Stephan Mertens
Analysis of the Karmarkar-Karp Differencing Algorithm
9 pages, 8 figures; minor changes
European Physical Journal B 65, 131-140 (2008)
10.1140/epjb/e2008-00320-9
null
cs.NA cond-mat.dis-nn cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Karmarkar-Karp differencing algorithm is the best known polynomial time heuristic for the number partitioning problem, fundamental in both theoretical computer science and statistical physics. We analyze the performance of the differencing algorithm on random instances by mapping it to a nonlinear rate equation. Our analysis reveals strong finite size effects that explain why the precise asymptotics of the differencing solution is hard to establish by simulations. The asymptotic series emerging from the rate equation satisfies all known bounds on the Karmarkar-Karp algorithm and projects a scaling $n^{-c\ln n}$, where $c=1/(2\ln2)=0.7213...$. Our calculations reveal subtle relations between the algorithm and Fibonacci-like sequences, and we establish an explicit identity to that effect.
[ { "version": "v1", "created": "Wed, 27 Feb 2008 17:24:07 GMT" }, { "version": "v2", "created": "Fri, 3 Oct 2008 09:48:52 GMT" } ]
2008-10-03T00:00:00
[ [ "Boettcher", "Stefan", "" ], [ "Mertens", "Stephan", "" ] ]
0802.4244
Dimitris Papamichail
Christos Tryfonas, Dimitris Papamichail, Andrew Mehler, Steven Skiena
Call Admission Control Algorithm for pre-stored VBR video streams
12 pages, 9 figures, includes appendix
null
null
null
cs.NI cs.DS
http://creativecommons.org/licenses/by-nc-sa/3.0/
We examine the problem of accepting a new request for a pre-stored VBR video stream that has been smoothed using any of the smoothing algorithms found in the literature. The output of these algorithms is a piecewise constant-rate schedule for a Variable Bit-Rate (VBR) stream. The schedule guarantees that the decoder buffer does not overflow or underflow. The problem addressed in this paper is the determination of the minimal time displacement of each new requested VBR stream so that it can be accommodated by the network and/or the video server without overbooking the committed traffic. We prove that this call-admission control problem for multiple requested VBR streams is NP-complete and inapproximable within a constant factor, by reduction from the VERTEX COLORING problem. We also present a deterministic morphology-sensitive algorithm that calculates the minimal time displacement of a VBR stream request. The complexity of the proposed algorithm makes it suitable for real-time determination of the time displacement parameter during the call admission phase.
[ { "version": "v1", "created": "Thu, 28 Feb 2008 17:45:03 GMT" } ]
2008-02-29T00:00:00
[ [ "Tryfonas", "Christos", "" ], [ "Papamichail", "Dimitris", "" ], [ "Mehler", "Andrew", "" ], [ "Skiena", "Steven", "" ] ]
0802.4325
Mirela Damian
Mirela Damian
A Simple Yao-Yao-Based Spanner of Bounded Degree
7 pages, 5 figures
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is a standing open question to decide whether the Yao-Yao structure for unit disk graphs (UDGs) is a length spanner or not. This question is highly relevant to the topology control problem for wireless ad hoc networks. In this paper we make progress towards resolving this question by showing that the Yao-Yao structure is a length spanner for UDGs of bounded aspect ratio. We also propose a new local algorithm, called Yao-Sparse-Sink, based on the Yao-Sink method introduced by Li, Wan, Wang and Frieder, that computes a (1+e)-spanner of bounded degree for a given UDG and for given e > 0. The Yao-Sparse-Sink method enables an efficient local computation of sparse sink trees. Finally, we show that all these structures for UDGs -- Yao, Yao-Yao, Yao-Sink and Yao-Sparse-Sink -- have arbitrarily large weight.
[ { "version": "v1", "created": "Fri, 29 Feb 2008 14:39:59 GMT" }, { "version": "v2", "created": "Fri, 4 Apr 2008 14:40:40 GMT" } ]
2008-04-04T00:00:00
[ [ "Damian", "Mirela", "" ] ]
0803.0248
Emmanuelle Lebhar
Augustin Chaintreau, Pierre Fraigniaud, Emmanuelle Lebhar
Networks become navigable as nodes move and forget
21 pages, 1 figure
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a dynamical process for network evolution, aiming at explaining the emergence of the small world phenomenon, i.e., the statistical observation that any pair of individuals are linked by a short chain of acquaintances computable by a simple decentralized routing algorithm, known as greedy routing. Previously proposed dynamical processes made it possible to demonstrate experimentally (by simulations) that the small world phenomenon can emerge from local dynamics. However, the analysis of greedy routing using the probability distributions arising from these dynamics is quite complex because of mutual dependencies. In contrast, our process enables a complete formal analysis. It is based on the combination of two simple processes: a random walk process, and a harmonic forgetting process. Both processes reflect natural behaviors of the individuals, viewed as nodes in the network of inter-individual acquaintances. We prove that, in k-dimensional lattices, the combination of these two processes generates long-range links mutually independently distributed as a k-harmonic distribution. We analyze the performance of greedy routing at the stationary regime of our process, and prove that the expected number of steps for routing from any source to any target in any multidimensional lattice is a polylogarithmic function of the distance between the two nodes in the lattice. To the best of our knowledge, these results are the first formal proof that navigability in small worlds can emerge from a dynamical process for network evolution. Our dynamical process can find practical applications to the design of spatial gossip and resource location protocols.
[ { "version": "v1", "created": "Mon, 3 Mar 2008 14:44:08 GMT" } ]
2008-03-04T00:00:00
[ [ "Chaintreau", "Augustin", "" ], [ "Fraigniaud", "Pierre", "" ], [ "Lebhar", "Emmanuelle", "" ] ]
0803.0473
Edith Cohen
Edith Cohen, Nick Duffield, Haim Kaplan, Carsten Lund, and Mikkel Thorup
Stream sampling for variance-optimal estimation of subset sums
31 pages. An extended abstract appeared in the proceedings of the 20th ACM-SIAM Symposium on Discrete Algorithms (SODA 2009)
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present an efficient reservoir sampling scheme, VarOpt$_k$, that dominates all previous schemes in terms of estimation quality. VarOpt$_k$ provides {\em variance optimal unbiased estimation of subset sums}. More precisely, if we have seen $n$ items of the stream, then for {\em any} subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line scheme with $k$ samples tailored for the concrete set of items seen. In addition to optimal average variance, our scheme provides tighter worst-case bounds on the variance of {\em particular} subsets than previously possible. It is efficient, handling each new item of the stream in $O(\log k)$ time. Finally, it is particularly well suited for combination of samples from different streams in a distributed setting.
[ { "version": "v1", "created": "Tue, 4 Mar 2008 15:12:24 GMT" }, { "version": "v2", "created": "Mon, 15 Nov 2010 16:43:54 GMT" } ]
2010-11-16T00:00:00
[ [ "Cohen", "Edith", "" ], [ "Duffield", "Nick", "" ], [ "Kaplan", "Haim", "" ], [ "Lund", "Carsten", "" ], [ "Thorup", "Mikkel", "" ] ]
0803.0476
Renaud Lambiotte
Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte and Etienne Lefebvre
Fast unfolding of communities in large networks
6 pages, 5 figures, 1 table; new version with new figures in order to clarify our method, where we look more carefully at the role played by the ordering of the nodes and where we compare our method with that of Wakita and Tsurumi
J. Stat. Mech. (2008) P10008
10.1088/1742-5468/2008/10/P10008
null
physics.soc-ph cond-mat.stat-mech cs.CY cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2.6 million customers and by analyzing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.
[ { "version": "v1", "created": "Tue, 4 Mar 2008 15:29:44 GMT" }, { "version": "v2", "created": "Fri, 25 Jul 2008 09:52:42 GMT" } ]
2008-12-01T00:00:00
[ [ "Blondel", "Vincent D.", "" ], [ "Guillaume", "Jean-Loup", "" ], [ "Lambiotte", "Renaud", "" ], [ "Lefebvre", "Etienne", "" ] ]
0803.0701
Gregory Gutin
N Alon, F.V. Fomin, G. Gutin, M. Krivelevich and S. Saurabh
Spanning directed trees with many leaves
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The {\sc Directed Maximum Leaf Out-Branching} problem is to find an out-branching (i.e. a rooted oriented spanning tree) in a given digraph with the maximum number of leaves. In this paper, we obtain two combinatorial results on the number of leaves in out-branchings. We show that - every strongly connected $n$-vertex digraph $D$ with minimum in-degree at least 3 has an out-branching with at least $(n/4)^{1/3}-1$ leaves; - if a strongly connected digraph $D$ does not contain an out-branching with $k$ leaves, then the pathwidth of its underlying graph UG($D$) is $O(k\log k)$. Moreover, if the digraph is acyclic, the pathwidth is at most $4k$. The last result implies that it can be decided in time $2^{O(k\log^2 k)}\cdot n^{O(1)}$ whether a strongly connected digraph on $n$ vertices has an out-branching with at least $k$ leaves. On acyclic digraphs the running time of our algorithm is $2^{O(k\log k)}\cdot n^{O(1)}$.
[ { "version": "v1", "created": "Wed, 5 Mar 2008 16:38:34 GMT" } ]
2008-03-06T00:00:00
[ [ "Alon", "N", "" ], [ "Fomin", "F. V.", "" ], [ "Gutin", "G.", "" ], [ "Krivelevich", "M.", "" ], [ "Saurabh", "S.", "" ] ]
0803.0726
Marie-Pierre B\'eal
Marie-Pierre B\'eal, Dominique Perrin
A quadratic algorithm for road coloring
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Road Coloring Theorem states that every aperiodic directed graph with constant out-degree has a synchronized coloring. This theorem had been conjectured for many years as the Road Coloring Problem before being settled by A. Trahtman. Trahtman's proof leads to an algorithm that finds a synchronized labeling with a cubic worst-case time complexity. We show a variant of his construction with a worst-case complexity which is quadratic in time and linear in space. We also extend the Road Coloring Theorem to the periodic case.
[ { "version": "v1", "created": "Wed, 5 Mar 2008 20:35:54 GMT" }, { "version": "v2", "created": "Wed, 5 Mar 2008 21:33:23 GMT" }, { "version": "v3", "created": "Mon, 7 Apr 2008 15:12:00 GMT" }, { "version": "v4", "created": "Tue, 15 Apr 2008 16:32:12 GMT" }, { "version": "v5", "created": "Wed, 14 May 2008 14:54:09 GMT" }, { "version": "v6", "created": "Fri, 11 Jul 2008 14:21:07 GMT" }, { "version": "v7", "created": "Wed, 7 Oct 2009 16:00:57 GMT" }, { "version": "v8", "created": "Mon, 23 Apr 2012 16:17:48 GMT" }, { "version": "v9", "created": "Thu, 30 May 2013 16:16:40 GMT" } ]
2013-05-31T00:00:00
[ [ "Béal", "Marie-Pierre", "" ], [ "Perrin", "Dominique", "" ] ]
0803.0731
Ning Chen
Ning Chen and Zhiyuan Yan
Complexity Analysis of Reed-Solomon Decoding over GF(2^m) Without Using Syndromes
11 pages, submitted to EURASIP Journal on Wireless Communications and Networking
null
null
null
cs.IT cs.CC cs.DS math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the majority of the applications of Reed-Solomon (RS) codes, hard decision decoding is based on syndromes. Recently, there has been renewed interest in decoding RS codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding for RS codes, and compare it to that of syndrome-based decoding. Aiming to provide guidelines to practical applications, our complexity analysis differs in several aspects from existing asymptotic complexity analysis, which is typically based on multiplicative fast Fourier transform (FFT) techniques and is usually in big O notation. First, we focus on RS codes over characteristic-2 fields, over which some multiplicative FFT techniques are not applicable. Secondly, due to moderate block lengths of RS codes in practice, our analysis is complete since all terms in the complexities are accounted for. Finally, in addition to fast implementation using additive FFT techniques, we also consider direct implementation, which is still relevant for RS codes with moderate lengths. Comparing the complexities of both syndromeless and syndrome-based decoding algorithms based on direct and fast implementations, we show that syndromeless decoding algorithms have higher complexities than syndrome-based ones for high rate RS codes regardless of the implementation. Both errors-only and errors-and-erasures decoding are considered in this paper. We also derive tighter bounds on the complexities of fast polynomial multiplications based on Cantor's approach and the fast extended Euclidean algorithm.
[ { "version": "v1", "created": "Wed, 5 Mar 2008 18:54:35 GMT" }, { "version": "v2", "created": "Wed, 7 May 2008 21:05:41 GMT" } ]
2008-05-08T00:00:00
[ [ "Chen", "Ning", "" ], [ "Yan", "Zhiyuan", "" ] ]
0803.0792
Siddhartha Sen
Bernhard Haeupler, Siddhartha Sen, and Robert E. Tarjan
Incremental Topological Ordering and Strong Component Maintenance
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an on-line algorithm for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our algorithm takes O(m^{1/2}) amortized time per arc, where m is the total number of arcs. For sparse graphs, this bound improves the best previous bound by a logarithmic factor and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the bidirectional search method of previous algorithms does not require an ordered search, but can be more general. This allows us to avoid the use of heaps (priority queues) entirely. Instead, the deterministic version of our algorithm uses (approximate) median-finding. The randomized version of our algorithm avoids this complication, making it very simple. We extend our topological ordering algorithm to give the first detailed algorithm for maintaining the strong components of a directed graph, and a topological order of these components, as arcs are added. This extension also has an amortized time bound of O(m^{1/2}) per arc.
[ { "version": "v1", "created": "Thu, 6 Mar 2008 05:11:18 GMT" } ]
2008-03-07T00:00:00
[ [ "Haeupler", "Bernhard", "" ], [ "Sen", "Siddhartha", "" ], [ "Tarjan", "Robert E.", "" ] ]
0803.0845
Evain Laurent
Laurent Evain
Knapsack cryptosystems built on NP-hard instance
20 pages
null
null
null
cs.CR cs.CC cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct three public key knapsack cryptosystems. Standard knapsack cryptosystems hide easy instances of the knapsack problem and have been broken. The systems considered in this article address this weakness: they hide a random (possibly hard) instance of the knapsack problem. We provide both complexity results (size of the key, time needed to encipher/decipher...) and experimental results. Security results are given for the second cryptosystem (the fastest one and the one with the shortest key). Probabilistic polynomial reductions show that finding the private key is as difficult as factorizing a product of two primes. We also consider heuristic attacks. First, the density of the cryptosystem can be chosen arbitrarily close to one, discarding low density attacks. Finally, we consider explicit heuristic attacks based on the LLL algorithm and we prove that with respect to these attacks, the public key is as secure as a random key.
[ { "version": "v1", "created": "Thu, 6 Mar 2008 12:20:35 GMT" } ]
2008-03-17T00:00:00
[ [ "Evain", "Laurent", "" ] ]
0803.0929
Daniel A. Spielman
Daniel A. Spielman, Nikhil Srivastava
Graph Sparsification by Effective Resistances
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/3.0/
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph $G=(V,E,w)$ and a parameter $\epsilon>0$, we produce a weighted subgraph $H=(V,\tilde{E},\tilde{w})$ of $G$ such that $|\tilde{E}|=O(n\log n/\epsilon^2)$ and for all vectors $x\in\mathbb{R}^V$ $(1-\epsilon)\sum_{uv\in E}(x(u)-x(v))^2w_{uv}\le \sum_{uv\in\tilde{E}}(x(u)-x(v))^2\tilde{w}_{uv} \le (1+\epsilon)\sum_{uv\in E}(x(u)-x(v))^2w_{uv}. (*)$ This improves upon the sparsifiers constructed by Spielman and Teng, which had $O(n\log^c n)$ edges for some large constant $c$, and upon those of Bencz\'ur and Karger, which only satisfied (*) for $x\in\{0,1\}^V$. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in $O(\log n)$ time.
[ { "version": "v1", "created": "Thu, 6 Mar 2008 18:03:06 GMT" }, { "version": "v2", "created": "Fri, 7 Mar 2008 23:10:59 GMT" }, { "version": "v3", "created": "Fri, 14 Mar 2008 19:49:32 GMT" }, { "version": "v4", "created": "Wed, 18 Nov 2009 07:22:03 GMT" } ]
2009-11-18T00:00:00
[ [ "Spielman", "Daniel A.", "" ], [ "Srivastava", "Nikhil", "" ] ]
0803.0954
Michael Hahsler
Michael Hahsler, Christian Buchta, and Kurt Hornik
Selective association rule generation
null
Computational Statistics, 2007. Online First, Published: 25 July 2007
10.1007/s00180-007-0062-z
null
cs.DB cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mining association rules is a popular and well researched method for discovering interesting relations between variables in large databases. A practical problem is that at medium to low support values often a large number of frequent itemsets and an even larger number of association rules are found in a database. A widely used approach is to gradually increase minimum support and minimum confidence or to filter the found rules using increasingly strict constraints on additional measures of interestingness until the set of rules found is reduced to a manageable size. In this paper we describe a different approach which is based on the idea of first defining a set of ``interesting'' itemsets (e.g., by a mixture of mining and expert knowledge) and then, in a second step, selectively generating rules for only these itemsets. The main advantage of this approach over increasing thresholds or filtering rules is that the number of rules found is significantly reduced while at the same time it is not necessary to increase the support and confidence thresholds, which might lead to missing important information in the database.
[ { "version": "v1", "created": "Thu, 6 Mar 2008 19:43:35 GMT" } ]
2008-12-18T00:00:00
[ [ "Hahsler", "Michael", "" ], [ "Buchta", "Christian", "" ], [ "Hornik", "Kurt", "" ] ]
0803.0988
Samuel Daitch
Samuel I. Daitch, Daniel A. Spielman
Faster Approximate Lossy Generalized Flow via Interior Point Algorithms
v2: bug fixes and some expanded proofs
null
null
null
cs.DS cs.NA
http://creativecommons.org/licenses/by/3.0/
We present faster approximation algorithms for generalized network flow problems. A generalized flow is one in which the flow out of an edge differs from the flow into the edge by a constant factor. We limit ourselves to the lossy case, when these factors are at most 1. Our algorithm uses a standard interior-point algorithm to solve a linear program formulation of the network flow problem. The system of linear equations that arises at each step of the interior-point algorithm takes the form of a symmetric M-matrix. We present an algorithm for solving such systems in nearly linear time. The algorithm relies on the Spielman-Teng nearly linear time algorithm for solving linear systems in diagonally-dominant matrices. For a graph with m edges, our algorithm obtains an additive epsilon approximation of the maximum generalized flow and minimum cost generalized flow in time tildeO(m^(3/2) * log(1/epsilon)). In many parameter ranges, this improves over previous algorithms by a factor of approximately m^(1/2). We also obtain a similar improvement for exactly solving the standard min-cost flow problem.
[ { "version": "v1", "created": "Thu, 6 Mar 2008 21:57:53 GMT" }, { "version": "v2", "created": "Mon, 7 Apr 2008 19:02:38 GMT" } ]
2008-04-07T00:00:00
[ [ "Daitch", "Samuel I.", "" ], [ "Spielman", "Daniel A.", "" ] ]
0803.1245
George Bell
George I. Bell
The shortest game of Chinese Checkers and related problems
22 pages, 10 figures; published version
INTEGERS: Electronic Journal of Combinatorial Number Theory 9 (2009) #G01
null
null
math.CO cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 1979, David Fabian found a complete game of two-person Chinese Checkers in 30 moves (15 by each player) [Martin Gardner, Penrose Tiles to Trapdoor Ciphers, MAA, 1997]. This solution requires that the two players cooperate to generate a win as quickly as possible for one of them. We show, using computational search techniques, that no shorter game is possible. We also consider a solitaire version of Chinese Checkers where one player attempts to move her pieces across the board in as few moves as possible. In 1971, Octave Levenspiel found a solution in 27 moves [Ibid.]; we demonstrate that no shorter solution exists. To show optimality, we employ a variant of A* search, as well as bidirectional search.
[ { "version": "v1", "created": "Sat, 8 Mar 2008 14:38:31 GMT" }, { "version": "v2", "created": "Tue, 13 Jan 2009 18:41:21 GMT" } ]
2009-01-13T00:00:00
[ [ "Bell", "George I.", "" ] ]
0803.1321
Yngve Villanger
Fedor V. Fomin and Yngve Villanger
Treewidth computation and extremal combinatorics
Corrected typos
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a given graph G and integers b,f >= 0, let S be a subset of vertices of G of size b+1 such that the subgraph of G induced by S is connected and S can be separated from other vertices of G by removing f vertices. We prove that every graph on n vertices contains at most n\binom{b+f}{b} such vertex subsets. This result from extremal combinatorics appears to be very useful in the design of several enumeration and exact algorithms. In particular, we use it to provide algorithms that for a given n-vertex graph G - compute the treewidth of G in time O(1.7549^n) using exponential space and in time O(2.6151^n) using polynomial space; - decide in time O((\frac{2n+k+1}{3})^{k+1}\cdot kn^6) if the treewidth of G is at most k; - list all minimal separators of G in time O(1.6181^n) and all potential maximal cliques of G in time O(1.7549^n). This significantly improves previous algorithms for these problems.
[ { "version": "v1", "created": "Sun, 9 Mar 2008 20:54:58 GMT" }, { "version": "v2", "created": "Mon, 5 May 2008 09:34:16 GMT" } ]
2008-05-05T00:00:00
[ [ "Fomin", "Fedor V.", "" ], [ "Villanger", "Yngve", "" ] ]
0803.2174
Mirela Damian
Mirela Damian, Saurav Pandit and Sriram Pemmaraju
Local Approximation Schemes for Topology Control
11 pages, 6 figures
Proceedings of the 25th ACM Symposium on Principles of Distributed Computing, pages 208-218, July 2006
null
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a distributed algorithm on wireless ad-hoc networks that runs in a polylogarithmic number of rounds in the size of the network and constructs a linear size, lightweight, (1+\epsilon)-spanner for any given \epsilon > 0. A wireless network is modeled by a d-dimensional \alpha-quasi unit ball graph (\alpha-UBG), which is a higher dimensional generalization of the standard unit disk graph (UDG) model. The d-dimensional \alpha-UBG model goes beyond the unrealistic ``flat world'' assumption of UDGs and also takes into account transmission errors, fading signal strength, and physical obstructions. The main result in the paper is this: for any fixed \epsilon > 0, 0 < \alpha \le 1, and d \ge 2, there is a distributed algorithm running in O(\log n \log^* n) communication rounds on an n-node, d-dimensional \alpha-UBG G that computes a (1+\epsilon)-spanner G' of G with maximum degree \Delta(G') = O(1) and total weight w(G') = O(w(MST(G))). This result is motivated by the topology control problem in wireless ad-hoc networks and improves on existing topology control algorithms along several dimensions. The technical contributions of the paper include a new, sequential, greedy algorithm with relaxed edge ordering and lazy updating, and clustering techniques for filtering out unnecessary edges.
[ { "version": "v1", "created": "Fri, 14 Mar 2008 14:37:12 GMT" } ]
2008-03-17T00:00:00
[ [ "Damian", "Mirela", "" ], [ "Pandit", "Saurav", "" ], [ "Pemmaraju", "Sriram", "" ] ]
0803.2615
Olivier Laval
Olivier Laval (LIPN), Sophie Toulouse (LIPN), Anass Nagih (LITA)
Research report on the constrained shortest path problem
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article provides an overview of the performance and the theoretical complexity of approximate and exact methods for various versions of the shortest path problem. The proposed study aims to improve the resolution of a more general covering problem within a column generation scheme in which the shortest path problem is the sub-problem.
[ { "version": "v1", "created": "Tue, 18 Mar 2008 12:37:36 GMT" } ]
2008-12-18T00:00:00
[ [ "Laval", "Olivier", "", "LIPN" ], [ "Toulouse", "Sophie", "", "LIPN" ], [ "Nagih", "Anass", "", "LITA" ] ]
0803.2842
Shai Gutner
Noga Alon, Yossi Azar, Shai Gutner
Admission Control to Minimize Rejections and Online Set Cover with Repetitions
null
Proc. of 17th SPAA (2005), 238-244
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the admission control problem in general networks. Communication requests arrive over time, and the online algorithm accepts or rejects each request while maintaining the capacity limitations of the network. The admission control problem has been usually analyzed as a benefit problem, where the goal is to devise an online algorithm that accepts the maximum number of requests possible. The problem with this objective function is that even algorithms with optimal competitive ratios may reject almost all of the requests, when it would have been possible to reject only a few. This could be inappropriate for settings in which rejections are intended to be rare events. In this paper, we consider preemptive online algorithms whose goal is to minimize the number of rejected requests. Each request arrives together with the path it should be routed on. We show an $O(\log^2 (mc))$-competitive randomized algorithm for the weighted case, where $m$ is the number of edges in the graph and $c$ is the maximum edge capacity. For the unweighted case, we give an $O(\log m \log c)$-competitive randomized algorithm. This settles an open question of Blum, Kalai and Kleinberg raised in \cite{BlKaKl01}. We note that allowing preemption and handling requests with given paths are essential for avoiding trivial lower bounds.
[ { "version": "v1", "created": "Wed, 19 Mar 2008 16:53:42 GMT" } ]
2008-12-18T00:00:00
[ [ "Alon", "Noga", "" ], [ "Azar", "Yossi", "" ], [ "Gutner", "Shai", "" ] ]
0803.3531
Daniel Raible
Daniel Raible and Henning Fernau
A New Upper Bound for Max-2-Sat: A Graph-Theoretic Approach
null
null
null
null
cs.DS cs.CC cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In {\sc MaxSat}, we ask for an assignment which satisfies the maximum number of clauses for a boolean formula in CNF. We present an algorithm yielding a run time upper bound of $O^*(2^{\frac{K}{6.2158}})$ for {\sc Max-2-Sat} (each clause contains at most 2 literals), where $K$ is the number of clauses. The run time has been achieved by using heuristic priorities on the choice of the variable on which we branch. The implementation of these heuristic priorities is rather simple, though they have a significant effect on the run time. The analysis is done using a tailored non-standard measure.
[ { "version": "v1", "created": "Tue, 25 Mar 2008 11:32:22 GMT" }, { "version": "v2", "created": "Mon, 7 Apr 2008 08:36:18 GMT" } ]
2008-12-18T00:00:00
[ [ "Raible", "Daniel", "" ], [ "Fernau", "Henning", "" ] ]
0803.3632
Mikhail Nesterenko
Mikhail Nesterenko, Adnan Vora
Void Traversal for Guaranteed Delivery in Geometric Routing
null
The 2nd IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS 2005), Washington, DC, November, 2005
10.1109/MAHSS.2005.1542862
null
cs.OS cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geometric routing algorithms like GFG (GPSR) are lightweight, scalable algorithms that can be used to route in resource-constrained ad hoc wireless networks. However, such algorithms run on planar graphs only. To efficiently construct a planar graph, they require a unit-disk graph. To make the topology unit-disk, the maximum link length in the network has to be selected conservatively. In practical settings this leads to designs where the node density is rather high. Moreover, the network diameter of a planar subgraph is greater than that of the original graph, which leads to longer routes. To remedy this problem, we propose a void traversal algorithm that works on arbitrary geometric graphs. We describe how to use this algorithm for geometric routing with guaranteed delivery and compare its performance with GFG.
[ { "version": "v1", "created": "Tue, 25 Mar 2008 20:52:17 GMT" } ]
2016-11-15T00:00:00
[ [ "Nesterenko", "Mikhail", "" ], [ "Vora", "Adnan", "" ] ]
0803.3657
Yeow Meng Chee
Yeow Meng Chee and San Ling
Improved Lower Bounds for Constant GC-Content DNA Codes
4 pages
IEEE Transactions on Information Theory, vol. 54, no. 1, pp. 391-394, 2008
10.1109/TIT.2007.911167
null
cs.IT cs.DS math.CO math.IT q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The design of large libraries of oligonucleotides having constant GC-content and satisfying Hamming distance constraints between oligonucleotides and their Watson-Crick complements is important in reducing hybridization errors in DNA computing, DNA microarray technologies, and molecular bar coding. Various techniques have been studied for the construction of such oligonucleotide libraries, ranging from algorithmic constructions via stochastic local search to theoretical constructions via coding theory. We introduce a new stochastic local search method which yields improvements up to more than one third of the benchmark lower bounds of Gaborit and King (2005) for n-mer oligonucleotide libraries when n <= 14. We also found several optimal libraries by computing maximum cliques on certain graphs.
[ { "version": "v1", "created": "Wed, 26 Mar 2008 02:26:36 GMT" } ]
2008-03-27T00:00:00
[ [ "Chee", "Yeow Meng", "" ], [ "Ling", "San", "" ] ]
0803.3693
Rasmus Pagh
Martin Dietzfelbinger and Rasmus Pagh
Succinct Data Structures for Retrieval and Approximate Membership
null
null
null
null
cs.DS cs.DB cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function f: U ->{0,1}^r that has specified values on the elements of a given set S, a subset of U, |S|=n, but may have any value on elements outside S. Minimal perfect hashing makes it possible to avoid storing the set S, but this induces a space overhead of Theta(n) bits in addition to the nr bits needed for function values. In this paper we show how to eliminate this overhead. Moreover, we show that for any k, query time O(k) can be achieved using space that is within a factor 1+e^{-k} of optimal, asymptotically for large n. If we allow logarithmic evaluation time, the additive overhead can be reduced to O(log log n) bits whp. The time to construct the data structure is O(n), expected. A main technical ingredient is to utilize existing tight bounds on the probability that almost square random matrices with rows of low weight have full row rank. In addition to direct constructions, we point out a close connection between retrieval structures and hash tables where keys are stored in an array and some kind of probing scheme is used. Further, we propose a general reduction that transfers the results on retrieval into analogous results on approximate membership, a problem traditionally addressed using Bloom filters. Again, we show how to eliminate the space overhead present in previously known methods, and get arbitrarily close to the lower bound. The evaluation procedures of our data structures are extremely simple (similar to a Bloom filter). For the results stated above we assume free access to fully random hash functions. However, we show how to justify this assumption using extra space o(n) to simulate full randomness on a RAM.
[ { "version": "v1", "created": "Wed, 26 Mar 2008 10:53:49 GMT" } ]
2008-03-27T00:00:00
[ [ "Dietzfelbinger", "Martin", "" ], [ "Pagh", "Rasmus", "" ] ]
0803.3746
Leonid Litinskii
Leonid B. Litinskii
Cluster Approach to the Domains Formation
11 pages, 5 figures, PDF-file
Optical Memory & Neural Networks (Information Optics), 2007, v.16(3) pp.144-153
null
null
cs.NE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a rule, a quadratic functional depending on a great number of binary variables has a lot of local minima. One approach that allows one to find, on average, deeper local minima is the aggregation of binary variables into larger blocks (domains). To minimize the functional one then changes the states of the aggregated variables (domains). In the present publication we discuss methods of domain formation. It is shown that the best results are obtained when domains are formed by variables that are strongly connected with each other.
[ { "version": "v1", "created": "Wed, 26 Mar 2008 15:14:33 GMT" } ]
2008-03-27T00:00:00
[ [ "Litinskii", "Leonid B.", "" ] ]
0803.4260
Xin Han
Xin Han, Kazuo Iwama, Guochuan Zhang
On Two Dimensional Orthogonal Knapsack Problem
null
null
null
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the following knapsack problem: Given a list of squares with profits, we are requested to pack a sublist of them into a rectangular bin (not a unit square bin) to make the profit in the bin as large as possible. We first observe there is a Polynomial Time Approximation Scheme (PTAS) for the problem of packing weighted squares into rectangular bins with large resources, then apply the PTAS to the problem of packing squares with profits into a rectangular bin and get a $\frac{6}{5}+\epsilon$ approximation algorithm.
[ { "version": "v1", "created": "Sat, 29 Mar 2008 11:15:11 GMT" } ]
2008-12-18T00:00:00
[ [ "Han", "Xin", "" ], [ "Iwama", "Kazuo", "" ], [ "Zhang", "Guochuan", "" ] ]
0803.4355
Marko A. Rodriguez
Marko A. Rodriguez
Grammar-Based Random Walkers in Semantic Networks
First draft of manuscript originally written in November 2006
Rodriguez, M.A., "Grammar-Based Random Walkers in Semantic Networks", Knowledge-Based Systems, volume 21, issue 7, pages 727-739, ISSN: 0950-7051, Elsevier, October 2008
10.1016/j.knosys.2008.03.030
LA-UR-06-7791
cs.AI cs.DS
http://creativecommons.org/licenses/publicdomain/
Semantic networks qualify the meaning of an edge relating any two vertices. Determining which vertices are most "central" in a semantic network is difficult because one relationship type may be deemed subjectively more important than another. For this reason, research into semantic network metrics has focused primarily on context-based rankings (i.e. user prescribed contexts). Moreover, many of the current semantic network metrics rank semantic associations (i.e. directed paths between two vertices) and not the vertices themselves. This article presents a framework for calculating semantically meaningful primary eigenvector-based metrics such as eigenvector centrality and PageRank in semantic networks using a modified version of the random walker model of Markov chain analysis. Random walkers, in the context of this article, are constrained by a grammar, where the grammar is a user defined data structure that determines the meaning of the final vertex ranking. The ideas in this article are presented within the context of the Resource Description Framework (RDF) of the Semantic Web initiative.
[ { "version": "v1", "created": "Mon, 31 Mar 2008 00:13:26 GMT" }, { "version": "v2", "created": "Wed, 10 Sep 2008 23:58:07 GMT" } ]
2008-09-11T00:00:00
[ [ "Rodriguez", "Marko A.", "" ] ]
0804.0149
Fabien Mathieu
Bruno Gaume (IRIT), Fabien Mathieu (FT R&D, INRIA Rocquencourt)
From Random Graph to Small World by Wandering
null
null
null
RR-6489
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerous studies show that most known real-world complex networks share similar properties in their connectivity and degree distribution. They are called small worlds. This article gives a method to turn random graphs into small-world graphs by dint of random walks.
[ { "version": "v1", "created": "Tue, 1 Apr 2008 11:59:43 GMT" }, { "version": "v2", "created": "Wed, 2 Apr 2008 08:12:38 GMT" } ]
2008-12-18T00:00:00
[ [ "Gaume", "Bruno", "", "IRIT" ], [ "Mathieu", "Fabien", "", "FT R&D, INRIA Rocquencourt" ] ]
0804.0277
Marko A. Rodriguez
Marko A. Rodriguez
Mapping Semantic Networks to Undirected Networks
null
International Journal of Applied Mathematics and Computer Sciences, volume 5, issue 1, pages 39-42, ISSN:2070-3902, LA-UR-07-5287, 2009
null
LAUR-07-5287
cs.DS
http://creativecommons.org/licenses/publicdomain/
There exists an injective, information-preserving function that maps a semantic network (i.e. a directed labeled network) to a directed network (i.e. a directed unlabeled network). The edge label in the semantic network is represented as a topological feature of the directed network. Also, there exists an injective function that maps a directed network to an undirected network (i.e. an undirected unlabeled network). The edge directionality in the directed network is represented as a topological feature of the undirected network. Through function composition, there exists an injective function that maps a semantic network to an undirected network. Thus, aside from space constraints, the semantic network construct does not have any modeling functionality that is not possible with either a directed or undirected network representation. Two proofs of this idea will be presented. The first is a proof of the aforementioned function composition concept. The second is a simpler proof involving an undirected binary encoding of a semantic network.
[ { "version": "v1", "created": "Wed, 2 Apr 2008 01:19:55 GMT" } ]
2008-12-02T00:00:00
[ [ "Rodriguez", "Marko A.", "" ] ]
0804.0362
Lenka Zdeborova
John Ardelius, Lenka Zdeborov\'a
Exhaustive enumeration unveils clustering and freezing in random 3-SAT
4 pages, 3 figures
Phys. Rev. E 78, 040101(R) (2008)
10.1103/PhysRevE.78.040101
null
cond-mat.stat-mech cond-mat.dis-nn cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study geometrical properties of the complete set of solutions of the random 3-satisfiability problem. We show that even for moderate system sizes the number of clusters corresponds surprisingly well with the theoretic asymptotic prediction. We locate the freezing transition in the space of solutions which has been conjectured to be relevant in explaining the onset of computational hardness in random constraint satisfaction problems.
[ { "version": "v1", "created": "Wed, 2 Apr 2008 14:32:44 GMT" }, { "version": "v2", "created": "Mon, 14 Apr 2008 09:31:46 GMT" } ]
2008-10-02T00:00:00
[ [ "Ardelius", "John", "" ], [ "Zdeborová", "Lenka", "" ] ]
0804.0570
Daniel Raible
Jianer Chen, Henning Fernau, Dan Ning, Daniel Raible, Jianxin Wang
A Parameterized Perspective on $P_2$-Packings
null
null
null
null
cs.DS cs.CC cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study (vertex-disjoint) $P_2$-packings in graphs under a parameterized perspective. Starting from a maximal $P_2$-packing $\mathcal{P}$ of size $j$ we use extremal arguments for determining how many vertices of $\mathcal{P}$ appear in some $P_2$-packing of size $(j+1)$. We can basically 'reuse' $2.5j$ vertices. We also present a kernelization algorithm that gives a kernel of size bounded by $7k$. With these two results we build an algorithm which constructs a $P_2$-packing of size $k$ in time $O^*(2.482^{3k})$.
[ { "version": "v1", "created": "Thu, 3 Apr 2008 14:36:19 GMT" } ]
2008-12-18T00:00:00
[ [ "Chen", "Jianer", "" ], [ "Fernau", "Henning", "" ], [ "Ning", "Dan", "" ], [ "Raible", "Daniel", "" ], [ "Wang", "Jianxin", "" ] ]
0804.0577
Oskar Sandberg
Oskar Sandberg
Decentralized Search with Random Costs
null
null
null
null
math.PR cs.DS
http://creativecommons.org/licenses/by-nc-sa/3.0/
A decentralized search algorithm is a method of routing on a random graph that uses only limited, local, information about the realization of the graph. In some random graph models it is possible to define such algorithms which produce short paths when routing from any vertex to any other, while for others it is not. We consider random graphs with random costs assigned to the edges. In this situation, we use the methods of stochastic dynamic programming to create a decentralized search method which attempts to minimize the total cost, rather than the number of steps, of each path. We show that it succeeds in doing so among all decentralized search algorithms which monotonically approach the destination. Our algorithm depends on knowing the expected cost of routing from every vertex to any other, but we show that this may be calculated iteratively, and in practice can be easily estimated from the cost of previous routes and compressed into a small routing table. The methods applied here can also be applied directly in other situations, such as efficient searching in graphs with varying vertex degrees.
[ { "version": "v1", "created": "Thu, 3 Apr 2008 15:32:29 GMT" } ]
2008-04-04T00:00:00
[ [ "Sandberg", "Oskar", "" ] ]
0804.0722
Daniel Karapetyan
Gregory Gutin, Daniel Karapetyan
A Memetic Algorithm for the Generalized Traveling Salesman Problem
15 pages, to appear in Natural Computing, Springer, available online: http://www.springerlink.com/content/5v4568l492272865/?p=e1779dd02e4d4cbfa49d0d27b19b929f&pi=13
Natural Computing 9(1) (2010) 47-60
10.1007/s11047-009-9111-6
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The recent studies on this subject consider different variations of a memetic algorithm approach to the GTSP. The aim of this paper is to present a new memetic algorithm for GTSP with a powerful local search procedure. The experiments show that the proposed algorithm clearly outperforms all of the known heuristics with respect to both solution quality and running time. While the other memetic algorithms were designed only for the symmetric GTSP, our algorithm can solve both symmetric and asymmetric instances.
[ { "version": "v1", "created": "Fri, 4 Apr 2008 13:21:40 GMT" }, { "version": "v2", "created": "Tue, 11 Nov 2008 23:58:20 GMT" }, { "version": "v3", "created": "Fri, 13 Mar 2009 22:13:27 GMT" } ]
2010-03-30T00:00:00
[ [ "Gutin", "Gregory", "" ], [ "Karapetyan", "Daniel", "" ] ]
0804.0735
Daniel Karapetyan
Gregory Gutin and Daniel Karapetyan
Generalized Traveling Salesman Problem Reduction Algorithms
To appear in Algorithmic Operations Research
Algorithmic Operations Research 4 (2009) 144-154
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The aim of this paper is to present a problem reduction algorithm that deletes redundant vertices and edges, preserving the optimal solution. The algorithm's running time is O(N^3) in the worst case, but it is significantly faster in practice. The algorithm has reduced the problem size by 15-20% on average in our experiments and this has decreased the solution time by 10-60% for each of the considered solvers.
[ { "version": "v1", "created": "Fri, 4 Apr 2008 13:36:19 GMT" }, { "version": "v2", "created": "Tue, 14 Apr 2009 17:36:47 GMT" } ]
2010-03-30T00:00:00
[ [ "Gutin", "Gregory", "" ], [ "Karapetyan", "Daniel", "" ] ]
0804.0743
Fabien Mathieu
Laurent Viennot (INRIA Rocquencourt), Yacine Boufkhad (INRIA Rocquencourt, LIAFA), Fabien Mathieu (INRIA Rocquencourt, FT R&D), Fabien De Montgolfier (INRIA Rocquencourt, LIAFA), Diego Perino (INRIA Rocquencourt, FT R&D)
Scalable Distributed Video-on-Demand: Theoretical Bounds and Practical Algorithms
null
null
null
RR-6496
cs.NI cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze a distributed system where n nodes called boxes store a large set of videos and collaborate to simultaneously serve n videos or fewer. We explore under which conditions such a system can be scalable while serving any sequence of demands. We model this problem through a combination of two algorithms: a video allocation algorithm and a connection scheduling algorithm. The latter plays against an adversary that incrementally proposes video requests.
[ { "version": "v1", "created": "Fri, 4 Apr 2008 14:08:49 GMT" }, { "version": "v2", "created": "Tue, 8 Apr 2008 07:16:36 GMT" } ]
2008-12-18T00:00:00
[ [ "Viennot", "Laurent", "", "INRIA Rocquencourt" ], [ "Boufkhad", "Yacine", "", "INRIA\n Rocquencourt, LIAFA" ], [ "Mathieu", "Fabien", "", "INRIA Rocquencourt, FT R&D" ], [ "De Montgolfier", "Fabien", "", "INRIA Rocquencourt, LIAFA" ], [ "Perino", "Diego", "", "INRIA Rocquencourt, FT\n R&D" ] ]
0804.0936
Shripad Thite
Mark de Berg and Shripad Thite
Cache-Oblivious Selection in Sorted X+Y Matrices
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let X[0..n-1] and Y[0..m-1] be two sorted arrays, and define the mxn matrix A by A[j][i]=X[i]+Y[j]. Frederickson and Johnson gave an efficient algorithm for selecting the k-th smallest element from A. We show how to make this algorithm IO-efficient. Our cache-oblivious algorithm performs O((m+n)/B) IOs, where B is the block size of memory transfers.
[ { "version": "v1", "created": "Sun, 6 Apr 2008 22:31:04 GMT" } ]
2008-04-08T00:00:00
[ [ "de Berg", "Mark", "" ], [ "Thite", "Shripad", "" ] ]
0804.0940
Shripad Thite
Shripad Thite
Optimum Binary Search Trees on the Hierarchical Memory Model
M.S. thesis; Department of Computer Science, University of Illinois at Urbana-Champaign; CSL Technical Report UILU-ENG-00-2215 ACT-142; November 2000
null
null
UILU-ENG-00-2215 ACT-142
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Hierarchical Memory Model (HMM) of computation is similar to the standard Random Access Machine (RAM) model except that the HMM has a non-uniform memory organized in a hierarchy of levels numbered 1 through h. The cost of accessing a memory location increases with the level number, and accesses to memory locations belonging to the same level cost the same. Formally, the cost of a single access to the memory location at address a is given by m(a), where m: N -> N is the memory cost function, and the h distinct values of m model the different levels of the memory hierarchy. We study the problem of constructing and storing a binary search tree (BST) of minimum cost, over a set of keys, with probabilities for successful and unsuccessful searches, on the HMM with an arbitrary number of memory levels, and for the special case h=2. While the problem of constructing optimum binary search trees has been well studied for the standard RAM model, the additional parameter m for the HMM increases the combinatorial complexity of the problem. We present two dynamic programming algorithms to construct optimum BSTs bottom-up. These algorithms run efficiently under some natural assumptions about the memory hierarchy. We also give an efficient algorithm to construct a BST that is close to optimum, by modifying a well-known linear-time approximation algorithm for the RAM model. We conjecture that the problem of constructing an optimum BST for the HMM with an arbitrary memory cost function m is NP-complete.
[ { "version": "v1", "created": "Mon, 7 Apr 2008 00:06:08 GMT" } ]
2008-04-08T00:00:00
[ [ "Thite", "Shripad", "" ] ]
0804.1115
Oskar Sandberg
Olof Mogren, Oskar Sandberg, Vilhelm Verendel and Devdatt Dubhashi
Adaptive Dynamics of Realistic Small-World Networks
null
null
null
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuing in the footsteps of Jon Kleinberg's and others' celebrated work on decentralized search in small-world networks, we conduct an experimental analysis of a dynamic algorithm that produces small-world networks. We find that the algorithm adapts robustly to a wide variety of situations in realistic geographic networks with synthetic test data and with real world data, even when vertices are unevenly and non-homogeneously distributed. We investigate the same algorithm in the case where some vertices are more popular destinations for searches than others, for example obeying power-laws. We find that the algorithm adapts and adjusts the networks according to the distributions, leading to improved performance. The ability of the dynamic process to adapt and create small worlds in such diverse settings suggests a possible mechanism by which such networks appear in nature.
[ { "version": "v1", "created": "Mon, 7 Apr 2008 19:39:59 GMT" } ]
2008-04-08T00:00:00
[ [ "Mogren", "Olof", "" ], [ "Sandberg", "Oskar", "" ], [ "Verendel", "Vilhelm", "" ], [ "Dubhashi", "Devdatt", "" ] ]
0804.1170
Daniel \v{S}tefankovi\v{c}
Satyaki Mahalanabis, Daniel Stefankovic
Approximating L1-distances between mixture distributions using random projections
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of computing L1-distances between every pair of probability densities from a given family. We point out that the technique of Cauchy random projections (Indyk'06) in this context turns into stochastic integrals with respect to Cauchy motion. For piecewise-linear densities these integrals can be sampled from if one can sample from the stochastic integral of the function x->(1,x). We give an explicit density function for this stochastic integral and present an efficient sampling algorithm. As a consequence we obtain an efficient algorithm to approximate the L1-distances with a small relative error. For piecewise-polynomial densities we show how to approximately sample from the distributions resulting from the stochastic integrals. This also results in an efficient algorithm to approximate the L1-distances, although our inability to get exact samples worsens the dependence on the parameters.
[ { "version": "v1", "created": "Tue, 8 Apr 2008 02:11:13 GMT" } ]
2008-04-09T00:00:00
[ [ "Mahalanabis", "Satyaki", "" ], [ "Stefankovic", "Daniel", "" ] ]
0804.1409
Murat Ali Bayir Mr.
Murat Ali Bayir, Ismail Hakki Toroslu, Ahmet Cosar, Guven Fidan
Discovering More Accurate Frequent Web Usage Patterns
19 pages, 6 figures
null
null
null
cs.DB cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Web usage mining is a type of web mining, which exploits data mining techniques to discover valuable information from navigation behavior of World Wide Web users. As in classical data mining, data preparation and pattern discovery are the main issues in web usage mining. The first phase of web usage mining is the data processing phase, which includes the session reconstruction operation from server logs. Session reconstruction success directly affects the quality of the frequent patterns discovered in the next phase. In reactive web usage mining techniques, the source data is web server logs and the topology of the web pages served by the web server domain. Other kinds of information collected during the interactive browsing of web site by user, such as cookies or web logs containing similar information, are not used. The next phase of web usage mining is discovering frequent user navigation patterns. In this phase, pattern discovery methods are applied on the reconstructed sessions obtained in the first phase in order to discover frequent user patterns. In this paper, we propose a frequent web usage pattern discovery method that can be applied after session reconstruction phase. In order to compare accuracy performance of session reconstruction phase and pattern discovery phase, we have used an agent simulator, which models behavior of web users and generates web user navigation as well as the log data kept by the web server.
[ { "version": "v1", "created": "Wed, 9 Apr 2008 05:46:26 GMT" } ]
2008-12-18T00:00:00
[ [ "Bayir", "Murat Ali", "" ], [ "Toroslu", "Ismail Hakki", "" ], [ "Cosar", "Ahmet", "" ], [ "Fidan", "Guven", "" ] ]
0804.1724
Sudipto Guha
Sudipto Guha and Kamesh Munagala and Saswati Sarkar
Information Acquisition and Exploitation in Multichannel Wireless Networks
29 pages
null
null
null
cs.DS cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A wireless system with multiple channels is considered, where each channel has several transmission states. A user learns about the instantaneous state of an available channel by transmitting a control packet in it. Since probing all channels consumes significant energy and time, a user needs to determine what and how much information it needs to acquire about the instantaneous states of the available channels so that it can maximize its transmission rate. This motivates the study of the trade-off between the cost of information acquisition and its value towards improving the transmission rate. A simple model is presented for studying this information acquisition and exploitation trade-off when the channels are multi-state, with different distributions and information acquisition costs. The objective is to maximize a utility function which depends on both the cost and value of information. Solution techniques are presented for computing near-optimal policies with succinct representation in polynomial time. These policies provably achieve at least a fixed constant factor of the optimal utility on any problem instance, and in addition, have natural characterizations. The techniques are based on exploiting the structure of the optimal policy, and use of Lagrangean relaxations which simplify the space of approximately optimal solutions.
[ { "version": "v1", "created": "Thu, 10 Apr 2008 14:53:30 GMT" } ]
2008-04-11T00:00:00
[ [ "Guha", "Sudipto", "" ], [ "Munagala", "Kamesh", "" ], [ "Sarkar", "Saswati", "" ] ]
0804.1845
Ely Porat
Ely Porat
An Optimal Bloom Filter Replacement Based on Matrix Solving
A lecture on this paper will be available on Google Video
null
null
null
cs.DS cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We suggest a method for holding a dictionary data structure, which maps keys to values, in the spirit of Bloom Filters. The space requirements of the dictionary we suggest are much smaller than those of a hashtable. We allow storing n keys, each mapped to a value which is a string of k bits. Our suggested method requires nk + o(n) bits of space to store the dictionary, and O(n) time to produce the data structure, and allows answering a membership query in O(1) memory probes. The dictionary size does not depend on the size of the keys. However, reducing the space requirements of the data structure comes at a certain cost. Our dictionary has a small probability of a one-sided error. When attempting to obtain the value for a key that is stored in the dictionary we always get the correct answer. However, when testing for membership of an element that is not stored in the dictionary, we may get an incorrect answer, and when requesting the value of such an element we may get a certain random value. Our method is based on solving equations in GF(2^k) and using several hash functions. Another significant advantage of our suggested method is that we do not require using sophisticated hash functions. We only require pairwise independent hash functions. We also suggest a data structure that requires only nk bits of space, has O(n^2) preprocessing time, and has O(log n) query time. However, this data structure requires uniform hash functions. In order to replace a Bloom Filter of n elements with an error probability of 2^{-k}, we require nk + o(n) memory bits, O(1) query time, O(n) preprocessing time, and only pairwise independent hash functions. Even the most advanced previously known Bloom Filter would require nk+O(n) space and uniform hash functions, so our method is significantly less space consuming, especially when k is small.
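A minimal sketch of the "dictionary by linear algebra" idea, hedged: for each key we require that the XOR of a few hashed table entries equals its k-bit value, and we fill the table by Gaussian elimination over GF(2). This is an illustration only (bitwise GF(2) rather than the paper's GF(2^k) construction, with ad-hoc salted hashes instead of pairwise independent families); all names and constants are ours.

```python
import hashlib

def _slots(key, salt, m, r):
    """Map a key to up to r distinct table positions via salted hashes."""
    return {int.from_bytes(hashlib.blake2b(f"{salt}:{i}:{key}".encode(),
                                           digest_size=8).digest(), "big") % m
            for i in range(r)}

def build(items, r=3, seed=0):
    """items: dict mapping keys to k-bit integer values; returns (table, salt)."""
    n = len(items)
    m = max(r, int(1.3 * n) + 8)            # spare slots help the system be solvable
    for salt in range(seed, seed + 32):     # retry with a fresh salt if unlucky
        rows = [[sum(1 << p for p in _slots(k, salt, m, r)), v]
                for k, v in items.items()]  # one GF(2) equation per key
        pivots, ok = {}, True
        for row in rows:                    # Gaussian elimination on the bit masks
            while row[0]:
                col = row[0].bit_length() - 1
                if col not in pivots:
                    pivots[col] = row
                    break
                row[0] ^= pivots[col][0]
                row[1] ^= pivots[col][1]
            else:
                ok = row[1] == 0            # "0 == nonzero" would be inconsistent
            if not ok:
                break
        if not ok:
            continue
        table = [0] * m                     # free slots stay 0
        for col in sorted(pivots):          # back-substitute, lowest pivot first
            mask, val = pivots[col]
            acc = val
            for p in range(col):
                if (mask >> p) & 1:
                    acc ^= table[p]
            table[col] = acc
        return table, salt
    raise RuntimeError("unlucky hashes; increase the number of slots")

def lookup(table, salt, key, r=3):
    """XOR of the key's slots; correct for stored keys, arbitrary otherwise."""
    out = 0
    for p in _slots(key, salt, len(table), r):
        out ^= table[p]
    return out

if __name__ == "__main__":
    table, salt = build({"alice": 0b101, "bob": 0b011, "carol": 0b110})
    print(lookup(table, salt, "alice"), lookup(table, salt, "bob"))
```

As in the abstract's description, querying a key that was never stored still returns some value, which is the one-sided error the structure accepts in exchange for its small size.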
[ { "version": "v1", "created": "Fri, 11 Apr 2008 11:24:04 GMT" } ]
2008-04-14T00:00:00
[ [ "Porat", "Ely", "" ] ]
0804.1888
Latorre
Frank Verstraete, J. Ignacio Cirac, Jose I. Latorre
Quantum circuits for strongly correlated quantum systems
null
null
10.1103/PhysRevA.79.032316
null
quant-ph cond-mat.str-el cs.DS hep-th
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, we have witnessed an explosion of experimental tools by which quantum systems can be manipulated in a controlled and coherent way. One of the most important goals now is to build quantum simulators, which would open up the possibility of exciting experiments probing various theories in regimes that are not achievable under normal lab circumstances. Here we present a novel approach to gain detailed control on the quantum simulation of strongly correlated quantum many-body systems by constructing the explicit quantum circuits that diagonalize their dynamics. We show that the exact quantum circuits underlying some of the most relevant many-body Hamiltonians only need a finite amount of local gates. As a particularly simple instance, the full dynamics of a one-dimensional Quantum Ising model in a transverse field with four spins is shown to be reproduced using a quantum circuit of only six local gates. This opens up the possibility of experimentally producing strongly correlated states, their time evolution at zero time and even thermal superpositions at zero temperature. Our method also allows us to uncover the exact circuits corresponding to models that exhibit topological order and to stabilizer states.
[ { "version": "v1", "created": "Fri, 11 Apr 2008 12:52:44 GMT" } ]
2009-11-13T00:00:00
[ [ "Verstraete", "Frank", "" ], [ "Cirac", "J. Ignacio", "" ], [ "Latorre", "Jose I.", "" ] ]
0804.2032
Frederic Dorn Harald
Paul Bonsma and Frederic Dorn
Tight Bounds and Faster Algorithms for Directed Max-Leaf Problems
17 pages, 6 figures
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An out-tree $T$ of a directed graph $D$ is a rooted tree subgraph with all arcs directed outwards from the root. An out-branching is a spanning out-tree. By $l(D)$ and $l_s(D)$ we denote the maximum number of leaves over all out-trees and out-branchings of $D$, respectively. We give fixed parameter tractable algorithms for deciding whether $l_s(D)\geq k$ and whether $l(D)\geq k$ for a digraph $D$ on $n$ vertices, both with time complexity $2^{O(k\log k)} \cdot n^{O(1)}$. This improves on previous algorithms with complexity $2^{O(k^3\log k)} \cdot n^{O(1)}$ and $2^{O(k\log^2 k)} \cdot n^{O(1)}$, respectively. To obtain the complexity bound in the case of out-branchings, we prove that when all arcs of $D$ are part of at least one out-branching, $l_s(D)\geq l(D)/3$. The second bound we prove in this paper states that for strongly connected digraphs $D$ with minimum in-degree 3, $l_s(D)\geq \Theta(\sqrt{n})$, where previously $l_s(D)\geq \Theta(\sqrt[3]{n})$ was the best known bound. This bound is tight, and also holds for the larger class of digraphs with minimum in-degree 3 in which every arc is part of at least one out-branching.
[ { "version": "v1", "created": "Sat, 12 Apr 2008 20:50:59 GMT" } ]
2008-12-18T00:00:00
[ [ "Bonsma", "Paul", "" ], [ "Dorn", "Frederic", "" ] ]
0804.2097
Tim Roughgarden
Jason D. Hartline and Tim Roughgarden
Optimal Mechanism Design and Money Burning
23 pages, 1 figure
null
null
null
cs.GT cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising in computer science (such as networks): while monetary transfers (i.e., payments) are essential for most of the known positive results in mechanism design, they are undesirable or even technologically infeasible in many computer systems. Classical impossibility results imply that the reach of mechanisms without transfers is severely limited. Computer systems typically do have the ability to reduce service quality--routing systems can drop or delay traffic, scheduling protocols can delay the release of jobs, and computational payment schemes can require computational payments from users (e.g., in spam-fighting systems). Service degradation is tantamount to requiring that users burn money, and such ``payments'' can be used to influence the preferences of the agents at a cost of degrading the social surplus. We develop a framework for the design and analysis of money-burning mechanisms to maximize the residual surplus--the total value of the chosen outcome minus the payments required.
[ { "version": "v1", "created": "Mon, 14 Apr 2008 04:32:45 GMT" } ]
2008-04-15T00:00:00
[ [ "Hartline", "Jason D.", "" ], [ "Roughgarden", "Tim", "" ] ]
0804.2112
Shai Gutner
Yossi Azar, Iftah Gamzu and Shai Gutner
Truthful Unsplittable Flow for Large Capacity Networks
null
Proc. of 19th SPAA (2007), 320-329
null
null
cs.DS cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we focus our attention on the large capacities unsplittable flow problem in a game theoretic setting. In this setting, there are selfish agents, which control some of the requests' characteristics, and may be dishonest about them. It is worth noting that in game theoretic settings many standard techniques, such as randomized rounding, violate certain monotonicity properties, which are imperative for truthfulness, and therefore cannot be employed. In light of this state of affairs, we design a monotone deterministic algorithm, based on a primal-dual machinery, which attains an approximation ratio of $\frac{e}{e-1}$, up to a disparity of $\epsilon$. This implies an improvement on the current best truthful mechanism, as well as an improvement on the current best combinatorial algorithm for the problem under consideration. Surprisingly, we demonstrate that no algorithm in the family of reasonable iterative path minimizing algorithms can yield a better approximation ratio. Consequently, it follows that in order to achieve a monotone PTAS, if one exists, one would have to employ different techniques. We also consider the large capacities \textit{single-minded multi-unit combinatorial auction problem}. This problem is closely related to the unsplittable flow problem since one can formulate it as a special case of the integer linear program of the unsplittable flow problem. Accordingly, we obtain a comparable performance guarantee by refining the algorithm suggested for the unsplittable flow problem.
[ { "version": "v1", "created": "Mon, 14 Apr 2008 08:03:30 GMT" } ]
2008-12-18T00:00:00
[ [ "Azar", "Yossi", "" ], [ "Gamzu", "Iftah", "" ], [ "Gutner", "Shai", "" ] ]
0804.2288
Shipra Agrawal
Shipra Agrawal, Zizhuo Wang, Yinyu Ye
Parimutuel Betting on Permutations
null
null
null
null
cs.GT cs.CC cs.DS cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We focus on a permutation betting market under a parimutuel call auction model where traders bet on the final ranking of n candidates. We present a Proportional Betting mechanism for this market. Our mechanism allows the traders to bet on any subset of the n x n 'candidate-rank' pairs, and rewards them proportionally to the number of pairs that appear in the final outcome. We show that the market organizer's decision problem for this mechanism can be formulated as a convex program of polynomial size. More importantly, the formulation yields a set of n x n unique marginal prices that are sufficient to price the bets in this mechanism, and are computable in polynomial-time. The marginal prices reflect the traders' beliefs about the marginal distributions over outcomes. We also propose techniques to compute the joint distribution over n! permutations from these marginal distributions. We show that using a maximum entropy criterion, we can obtain a concise parametric form (with only n x n parameters) for the joint distribution which is defined over an exponentially large state space. We then present an approximation algorithm for computing the parameters of this distribution. In fact, the algorithm addresses the generic problem of finding the maximum entropy distribution over permutations that has a given mean, and may be of independent interest.
[ { "version": "v1", "created": "Tue, 15 Apr 2008 00:20:17 GMT" } ]
2008-12-18T00:00:00
[ [ "Agrawal", "Shipra", "" ], [ "Wang", "Zizhuo", "" ], [ "Ye", "Yinyu", "" ] ]
0804.2699
Dennis Huo
Ian Christopher, Dennis Huo, and Bryan Jacobs
A Critique of a Polynomial-time SAT Solver Devised by Sergey Gubin
null
null
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper refutes the validity of the polynomial-time algorithm for solving satisfiability proposed by Sergey Gubin. Gubin introduces the algorithm using 3-SAT and eventually expands it to accept a broad range of forms of the Boolean satisfiability problem. Because 3-SAT is NP-complete, the algorithm would have implied P = NP, had it been correct. Additionally, this paper refutes the correctness of his polynomial-time reduction of SAT to 2-SAT.
[ { "version": "v1", "created": "Wed, 16 Apr 2008 23:00:51 GMT" } ]
2008-04-18T00:00:00
[ [ "Christopher", "Ian", "" ], [ "Huo", "Dennis", "" ], [ "Jacobs", "Bryan", "" ] ]
0804.3028
Saket Saurabh
Michael Fellows, Fedor Fomin, Daniel Lokshtanov, Elena Losievskaja, Frances A. Rosamond and Saket Saurabh
Parameterized Low-distortion Embeddings - Graph metrics into lines and trees
19 pages, 1 Figure
null
null
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We revisit the issue of low-distortion embedding of metric spaces into the line, and more generally, into the shortest path metric of trees, from the parameterized complexity perspective. Let $M=M(G)$ be the shortest path metric of an edge weighted graph $G=(V,E)$ on $n$ vertices. We describe algorithms for the problem of finding a low distortion non-contracting embedding of $M$ into line and tree metrics. We give an $O(nd^4(2d+1)^{2d})$ time algorithm that for an unweighted graph metric $M$ and integer $d$ either constructs an embedding of $M$ into the line with distortion at most $d$, or concludes that no such embedding exists. We find the result surprising, because the considered problem bears a strong resemblance to the notoriously hard Bandwidth Minimization problem which does not admit any FPT algorithm unless an unlikely collapse of parameterized complexity classes occurs. We show that our algorithm can also be applied to construct small distortion embeddings of weighted graph metrics. The running time of our algorithm is $O(n(dW)^4(2d+1)^{2dW})$ where $W$ is the largest edge weight of the input graph. We also show that deciding whether a weighted graph metric $M(G)$ with maximum weight $W < |V(G)|$ can be embedded into the line with distortion at most $d$ is NP-Complete for every fixed rational $d \geq 2$. This rules out any possibility of an algorithm with running time $O((nW)^{h(d)})$ where $h$ is a function of $d$ alone. We generalize the result on embedding into the line by proving that for any tree $T$ with maximum degree $\Delta$, embedding of $M$ into a shortest path metric of $T$ is FPT, parameterized by $(\Delta,d)$.
[ { "version": "v1", "created": "Fri, 18 Apr 2008 14:39:41 GMT" } ]
2008-04-21T00:00:00
[ [ "Fellows", "Michael", "" ], [ "Fomin", "Fedor", "" ], [ "Lokshtanov", "Daniel", "" ], [ "Losievskaja", "Elena", "" ], [ "Rosamond", "Frances A.", "" ], [ "Saurabh", "Saket", "" ] ]
0804.3615
Jarek Duda
Jarek Duda
Combinatorial invariants for graph isomorphism problem
null
null
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The presented approach calculates, in polynomial time, a large number of invariants for each vertex, which do not change under graph isomorphism and should fully determine the graph. For example, the numbers of closed paths of length k for a given starting vertex, which can be thought of as the diagonal terms of the k-th power of the adjacency matrix. For k=2 we would get the vertex-degree invariant; higher k describes the local topology more deeply. Now if two graphs are isomorphic, they have the same set of such vectors of invariants: we can sort these vectors lexicographically and compare them. If they agree, the permutations from the sorting allow us to reconstruct the isomorphism. I present arguments that these invariants should fully determine the graph, but unfortunately I cannot prove it at this moment. This approach gives some hope that maybe P=NP: instead of checking all instances, we should do arithmetic on these large numbers.
[ { "version": "v1", "created": "Tue, 22 Apr 2008 22:16:46 GMT" }, { "version": "v2", "created": "Wed, 23 Apr 2008 21:54:08 GMT" }, { "version": "v3", "created": "Fri, 9 May 2008 07:09:06 GMT" }, { "version": "v4", "created": "Mon, 19 May 2008 14:20:36 GMT" } ]
2008-05-19T00:00:00
[ [ "Duda", "Jarek", "" ] ]
0804.3860
Hsiao-Fei Liu
Hsiao-Fei Liu and Kun-Mao Chao
An $\tilde{O}(n^{2.5})$-Time Algorithm for Online Topological Ordering
Better results have been proposed in the following paper: Haeupler, Kavitha, Mathew, Sen, Tarjan: Faster Algorithms for Incremental Topological Ordering. ICALP (1) 2008: 421-433
null
null
null
cs.GT cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an $\tilde{O}(n^{2.5})$-time algorithm for maintaining the topological order of a directed acyclic graph with $n$ vertices while inserting $m$ edges.
[ { "version": "v1", "created": "Thu, 24 Apr 2008 09:40:20 GMT" }, { "version": "v2", "created": "Sat, 23 Aug 2008 07:10:20 GMT" } ]
2008-08-23T00:00:00
[ [ "Liu", "Hsiao-Fei", "" ], [ "Chao", "Kun-Mao", "" ] ]
0804.3902
Tiziana Calamoneri
Tiziana Calamoneri, Andrea E.F. Clementi, Angelo Monti, Gianluca Rossi, Riccardo Silvestri
Minimum-energy broadcast in random-grid ad-hoc networks: approximation and distributed algorithms
13 pages, 3 figures, 1 table
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Min Energy broadcast problem consists in assigning transmission ranges to the nodes of an ad-hoc network in order to guarantee a directed spanning tree from a given source node and, at the same time, to minimize the energy consumption (i.e. the energy cost) yielded by the range assignment. Min energy broadcast is known to be NP-hard. We consider random-grid networks where nodes are chosen independently at random from the $n$ points of a $\sqrt n \times \sqrt n$ square grid in the plane. The probability of the existence of a node at a given point of the grid may depend on that point, that is, the probability distribution can be non-uniform. By using information-theoretic arguments, we prove a lower bound $(1-\epsilon) \frac n{\pi}$ on the energy cost of any feasible solution for this problem. Then, we provide an efficient solution of energy cost not larger than $1.1204 \frac n{\pi}$. Finally, we present a fully-distributed protocol that constructs a broadcast range assignment of energy cost not larger than $8n$, thus still yielding constant approximation. The energy load is well balanced and, at the same time, the work complexity (i.e. the energy due to all message transmissions of the protocol) is asymptotically optimal. The completion time of the protocol is only an $O(\log n)$ factor slower than the optimum. The approximation quality of our distributed solution is also experimentally evaluated. All bounds hold with probability at least $1-1/n^{\Theta(1)}$.
[ { "version": "v1", "created": "Thu, 24 Apr 2008 11:17:57 GMT" } ]
2008-04-25T00:00:00
[ [ "Calamoneri", "Tiziana", "" ], [ "Clementi", "Andrea E. F.", "" ], [ "Monti", "Angelo", "" ], [ "Rossi", "Gianluca", "" ], [ "Silvestri", "Riccardo", "" ] ]
0804.3947
Peter Sanders
Peter Sanders
Time Dependent Contraction Hierarchies -- Basic Algorithmic Ideas
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contraction hierarchies are a simple hierarchical routing technique that has proved extremely efficient for static road networks. We explain how to generalize them to networks with time-dependent edge weights. This is the first hierarchical speedup technique for time-dependent routing that allows bidirectional query algorithms.
[ { "version": "v1", "created": "Thu, 24 Apr 2008 15:24:08 GMT" } ]
2008-12-18T00:00:00
[ [ "Sanders", "Peter", "" ] ]
0804.4039
Ioannis Chatzigiannakis
Ioannis Chatzigiannakis, Georgios Giannoulis and Paul G. Spirakis
Energy and Time Efficient Scheduling of Tasks with Dependencies on Asymmetric Multiprocessors
null
null
null
RACTI-RU1-2008-10
cs.DC cs.DS cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we study the problem of scheduling tasks with dependencies in multiprocessor architectures where processors have different speeds. We present the preemptive algorithm "Save-Energy" that, given a schedule of tasks, post-processes it to improve the energy efficiency without any deterioration of the makespan. In terms of time efficiency, we show that preemptive scheduling in an asymmetric system can achieve the same or better optimal makespan than in a symmetric system. Motivated by real multiprocessor systems, we investigate architectures that exhibit limited asymmetry: there are two essentially different speeds. Interestingly, this special case has not been studied in the field of parallel computing and scheduling theory; only the general case was studied, where processors have $K$ essentially different speeds. We present the non-preemptive algorithm ``Remnants'' that achieves almost optimal makespan. We provide a refined analysis of a recent scheduling method. Based on this analysis, we specialize the scheduling policy and provide an algorithm with a $(3 + o(1))$ expected approximation factor. Note that this improves the previous best factor (6 for two speeds). We believe that our work will convince researchers to revisit this well studied scheduling problem for these simple, yet realistic, asymmetric multiprocessor architectures.
[ { "version": "v1", "created": "Fri, 25 Apr 2008 03:16:21 GMT" }, { "version": "v2", "created": "Fri, 6 Jun 2008 14:21:18 GMT" } ]
2008-06-09T00:00:00
[ [ "Chatzigiannakis", "Ioannis", "" ], [ "Giannoulis", "Georgios", "" ], [ "Spirakis", "Paul G.", "" ] ]
0804.4138
Jelani Nelson
Nicholas J. A. Harvey, Jelani Nelson, Krzysztof Onak
Sketching and Streaming Entropy via Approximation Theory
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We conclude a sequence of work by giving near-optimal sketching and streaming algorithms for estimating Shannon entropy in the most general streaming model, with arbitrary insertions and deletions. This improves on prior results that obtain suboptimal space bounds in the general model, and near-optimal bounds in the insertion-only model without sketching. Our high-level approach is simple: we give algorithms to estimate Renyi and Tsallis entropy, and use them to extrapolate an estimate of Shannon entropy. The accuracy of our estimates is proven using approximation theory arguments and extremal properties of Chebyshev polynomials, a technique which may be useful for other problems. Our work also yields the best-known and near-optimal additive approximations for entropy, and hence also for conditional entropy and mutual information.
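To make the extrapolation idea concrete, here is a toy sketch (ours, not the paper's streaming algorithm, and with arbitrarily chosen offsets and fit degree): evaluate the Rényi entropy at a few orders alpha near 1, where it is smooth, and extrapolate the values to alpha = 1, which recovers the Shannon entropy. The paper performs this on sketched frequency moments over a stream; the code below works on an explicitly given distribution.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha), natural logarithm."""
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def shannon_via_extrapolation(p, offsets=(0.02, 0.04, 0.08), degree=2):
    """Evaluate the Renyi entropy at a few alpha != 1 and extrapolate to alpha = 1,
    where it coincides with the Shannon entropy."""
    alphas, values = [], []
    for d in offsets:
        for a in (1.0 - d, 1.0 + d):
            alphas.append(a)
            values.append(renyi_entropy(p, a))
    coeffs = np.polyfit(alphas, values, degree)   # smooth in alpha near 1
    return np.polyval(coeffs, 1.0)

if __name__ == "__main__":
    p = np.array([0.5, 0.25, 0.125, 0.125])
    exact = -np.sum(p * np.log(p))
    print(exact, shannon_via_extrapolation(p))
```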
[ { "version": "v1", "created": "Fri, 25 Apr 2008 16:04:20 GMT" } ]
2008-12-18T00:00:00
[ [ "Harvey", "Nicholas J. A.", "" ], [ "Nelson", "Jelani", "" ], [ "Onak", "Krzysztof", "" ] ]
0804.4666
Anna Gilbert
R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, M. J. Strauss
Combining geometry and combinatorics: A unified approach to sparse signal recovery
null
null
null
null
cs.DM cs.DS cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix and then uses linear programming to decode information about the signal from its measurements. The combinatorial approach constructs the measurement matrix and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the l_p norm for p about 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
[ { "version": "v1", "created": "Tue, 29 Apr 2008 18:24:14 GMT" } ]
2008-04-30T00:00:00
[ [ "Berinde", "R.", "" ], [ "Gilbert", "A. C.", "" ], [ "Indyk", "P.", "" ], [ "Karloff", "H.", "" ], [ "Strauss", "M. J.", "" ] ]
0804.4744
V. Arvind
V. Arvind and Pushkar S. Joglekar
Lattice Problems, Gauge Functions and Parameterized Algorithms
null
null
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a k-dimensional subspace M\subseteq \R^n and a full rank integer lattice L\subseteq \R^n, the \emph{subspace avoiding problem} SAP is to find a shortest vector in L\setminus M. Treating k as a parameter, we obtain new parameterized approximation and exact algorithms for SAP based on the AKS sieving technique. More precisely, we give a randomized $(1+\epsilon)$-approximation algorithm for parameterized SAP that runs in time 2^{O(n)}.(1/\epsilon)^k, where the parameter k is the dimension of the subspace M. Thus, we obtain a 2^{O(n)} time algorithm for \epsilon=2^{-O(n/k)}. We also give a 2^{O(n+k\log k)} exact algorithm for the parameterized SAP for any \ell_p norm. Several of our algorithms work for all gauge functions as metric with some natural restrictions, in particular for all \ell_p norms. We also prove an \Omega(2^n) lower bound on the query complexity of AKS sieving based exact algorithms for SVP that access the gauge function as an oracle.
[ { "version": "v1", "created": "Wed, 30 Apr 2008 06:39:21 GMT" } ]
2008-05-01T00:00:00
[ [ "Arvind", "V.", "" ], [ "Joglekar", "Pushkar S.", "" ] ]
0804.4819
Jukka Suomela
Michael A. Bender, S\'andor P. Fekete, Alexander Kr\"oller, Vincenzo Liberatore, Joseph S. B. Mitchell, Valentin Polishchuk, Jukka Suomela
The Minimum Backlog Problem
1+16 pages, 3 figures
Theoretical Computer Science 605 (2015), 51-61
10.1016/j.tcs.2015.08.027
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the minimum backlog problem (MBP). This online problem arises, e.g., in the context of sensor networks. We focus on two main variants of MBP. The discrete MBP is a 2-person game played on a graph $G=(V,E)$. The player is initially located at a vertex of the graph. In each time step, the adversary pours a total of one unit of water into cups that are located on the vertices of the graph, arbitrarily distributing the water among the cups. The player then moves from her current vertex to an adjacent vertex and empties the cup at that vertex. The player's objective is to minimize the backlog, i.e., the maximum amount of water in any cup at any time. The geometric MBP is a continuous-time version of the MBP: the cups are points in the two-dimensional plane, the adversary pours water continuously at a constant rate, and the player moves in the plane with unit speed. Again, the player's objective is to minimize the backlog. We show that the competitive ratio of any algorithm for the MBP has a lower bound of $\Omega(D)$, where $D$ is the diameter of the graph (for the discrete MBP) or the diameter of the point set (for the geometric MBP). Therefore we focus on determining a strategy for the player that guarantees a uniform upper bound on the absolute value of the backlog. For the absolute value of the backlog there is a trivial lower bound of $\Omega(D)$, and the deamortization analysis of Dietz and Sleator gives an upper bound of $O(D\log N)$ for $N$ cups. Our main result is a tight upper bound for the geometric MBP: we show that there is a strategy for the player that guarantees a backlog of $O(D)$, independently of the number of cups.
[ { "version": "v1", "created": "Wed, 30 Apr 2008 13:13:12 GMT" }, { "version": "v2", "created": "Tue, 22 Mar 2016 20:54:15 GMT" } ]
2016-03-24T00:00:00
[ [ "Bender", "Michael A.", "" ], [ "Fekete", "Sándor P.", "" ], [ "Kröller", "Alexander", "" ], [ "Liberatore", "Vincenzo", "" ], [ "Mitchell", "Joseph S. B.", "" ], [ "Polishchuk", "Valentin", "" ], [ "Suomela", "Jukka", "" ] ]
0804.4881
Adolfo Piperno
Adolfo Piperno
Search Space Contraction in Canonical Labeling of Graphs
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The individualization-refinement paradigm for computing a canonical labeling and the automorphism group of a graph is investigated. A new algorithmic design aimed at reducing the size of the associated search space is introduced, and a new tool, named "Traces", is presented, together with experimental results and comparisons with existing software, such as McKay's "nauty". It is shown that the approach presented here leads to a huge reduction in the search space, thereby making computation feasible for several classes of graphs which are hard for all the main canonical labeling tools in the literature.
[ { "version": "v1", "created": "Wed, 30 Apr 2008 18:28:13 GMT" }, { "version": "v2", "created": "Wed, 26 Jan 2011 15:52:11 GMT" } ]
2015-03-13T00:00:00
[ [ "Piperno", "Adolfo", "" ] ]
0805.0389
Chaitanya Swamy
Chaitanya Swamy
Algorithms for Probabilistically-Constrained Models of Risk-Averse Stochastic Optimization with Black-Box Distributions
null
null
null
null
cs.DS cs.CC cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider various stochastic models that incorporate the notion of risk-averseness into the standard 2-stage recourse model, and develop novel techniques for solving the algorithmic problems arising in these models. A key notable feature of our work that distinguishes it from work in some other related models, such as the (standard) budget model and the (demand-) robust model, is that we obtain results in the black-box setting, that is, where one is given only sampling access to the underlying distribution. Our first model, which we call the risk-averse budget model, incorporates the notion of risk-averseness via a probabilistic constraint that restricts the probability (according to the underlying distribution) with which the second-stage cost may exceed a given budget B to at most a given input threshold \rho. We also consider a closely-related model that we call the risk-averse robust model, where we seek to minimize the first-stage cost and the (1-\rho)-quantile of the second-stage cost. We obtain approximation algorithms for a variety of combinatorial optimization problems including the set cover, vertex cover, multicut on trees, min cut, and facility location problems, in the risk-averse budget and robust models with black-box distributions. We obtain near-optimal solutions that preserve the budget approximately and incur a small blow-up of the probability threshold (both of which are unavoidable). To the best of our knowledge, these are the first approximation results for problems involving probabilistic constraints and black-box distributions. A major component of our results is a fully polynomial approximation scheme for solving the LP-relaxation of the risk-averse problem.
[ { "version": "v1", "created": "Sun, 4 May 2008 03:57:52 GMT" } ]
2008-05-06T00:00:00
[ [ "Swamy", "Chaitanya", "" ] ]
0805.0747
Daniel Lemire
Hazel Webb, Owen Kaser, Daniel Lemire
Pruning Attribute Values From Data Cubes with Diamond Dicing
null
null
null
TR-08-011 (UNB Saint John)
cs.DB cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data stored in a data warehouse are inherently multidimensional, but most data-pruning techniques (such as iceberg and top-k queries) are unidimensional. However, analysts need to issue multidimensional queries. For example, an analyst may need to select not just the most profitable stores or--separately--the most profitable products, but simultaneous sets of stores and products fulfilling some profitability constraints. To fill this need, we propose a new operator, the diamond dice. Because of the interaction between dimensions, the computation of diamonds is challenging. We present the first diamond-dicing experiments on large data sets. Experiments show that we can compute diamond cubes over fact tables containing 100 million facts in less than 35 minutes using a standard PC.
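A minimal sketch of the iterative pruning behind a diamond dice (our simplification, assuming a per-dimension lower bound on the sum of measures of each kept attribute value; the names and thresholds are ours): repeatedly delete attribute values whose slice total falls below the threshold until the remaining cube is stable.

```python
from collections import defaultdict

def diamond_dice(facts, thresholds):
    """facts: list of (attribute-value tuple, measure).
    thresholds: per-dimension minimum on the sum of measures of each kept slice.
    Repeatedly delete attribute values whose slice total falls below the
    threshold; what remains is the 'diamond'. Simplified sketch."""
    ndims = len(thresholds)
    keep = list(facts)
    changed = True
    while changed:
        changed = False
        for d in range(ndims):
            totals = defaultdict(float)
            for attrs, measure in keep:
                totals[attrs[d]] += measure
            bad = {v for v, t in totals.items() if t < thresholds[d]}
            if bad:
                keep = [(a, m) for a, m in keep if a[d] not in bad]
                changed = True
    return keep

if __name__ == "__main__":
    facts = [(("store1", "prodA"), 10), (("store1", "prodB"), 1),
             (("store2", "prodA"), 7), (("store3", "prodB"), 2)]
    # Removing store3 starves prodB, which is then removed as well: the
    # interaction between dimensions is what makes diamonds non-trivial.
    print(diamond_dice(facts, thresholds=(5, 5)))
```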
[ { "version": "v1", "created": "Tue, 6 May 2008 15:45:15 GMT" } ]
2008-05-07T00:00:00
[ [ "Webb", "Hazel", "" ], [ "Kaser", "Owen", "" ], [ "Lemire", "Daniel", "" ] ]
0805.0851
Sebastien Tixeuil
Samuel Bernard (LIP6), St\'ephane Devismes (LRI), Maria Gradinariu Potop-Butucaru (LIP6, INRIA Rocquencourt), S\'ebastien Tixeuil (LIP6)
Bounds for self-stabilization in unidirectional networks
null
null
null
RR-6524
cs.DS cs.CC cs.DC cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A distributed algorithm is self-stabilizing if after faults and attacks hit the system and place it in some arbitrary global state, the system recovers from this catastrophic situation without external intervention in finite time. Unidirectional networks preclude many common techniques in self-stabilization from being used, such as preserving local predicates. In this paper, we investigate the intrinsic complexity of achieving self-stabilization in unidirectional networks, and focus on the classical vertex coloring problem. When deterministic solutions are considered, we prove a lower bound of $n$ states per process (where $n$ is the network size) and a recovery time of at least $n(n-1)/2$ actions in total. We present a deterministic algorithm with matching upper bounds that performs in arbitrary graphs. When probabilistic solutions are considered, we observe that at least $\Delta + 1$ states per process and a recovery time of $\Omega(n)$ actions in total are required (where $\Delta$ denotes the maximal degree of the underlying simple undirected graph). We present a probabilistically self-stabilizing algorithm that uses $\mathtt{k}$ states per process, where $\mathtt{k}$ is a parameter of the algorithm. When $\mathtt{k}=\Delta+1$, the algorithm recovers in expected $O(\Delta n)$ actions. When $\mathtt{k}$ may grow arbitrarily, the algorithm recovers in expected O(n) actions in total. Thus, our algorithm can be made optimal with respect to space or time complexity.
[ { "version": "v1", "created": "Wed, 7 May 2008 07:39:14 GMT" }, { "version": "v2", "created": "Tue, 13 May 2008 08:06:10 GMT" } ]
2009-09-29T00:00:00
[ [ "Bernard", "Samuel", "", "LIP6" ], [ "Devismes", "Stéphane", "", "LRI" ], [ "Potop-Butucaru", "Maria Gradinariu", "", "LIP6, INRIA Rocquencourt" ], [ "Tixeuil", "Sébastien", "", "LIP6" ] ]
0805.1071
Zoya Svitkina
Zoya Svitkina and Lisa Fleischer
Submodular approximation: sampling-based algorithms and lower bounds
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce several generalizations of classical computer science problems obtained by replacing simpler objective functions with general submodular functions. The new problems include submodular load balancing, which generalizes load balancing or minimum-makespan scheduling, submodular sparsest cut and submodular balanced cut, which generalize their respective graph cut problems, as well as submodular function minimization with a cardinality lower bound. We establish upper and lower bounds for the approximability of these problems with a polynomial number of queries to a function-value oracle. The approximation guarantees for most of our algorithms are of the order of sqrt(n/ln n). We show that this is the inherent difficulty of the problems by proving matching lower bounds. We also give an improved lower bound for the problem of approximately learning a monotone submodular function. In addition, we present an algorithm for approximately learning submodular functions with special structure, whose guarantee is close to the lower bound. Although quite restrictive, the class of functions with this structure includes the ones that are used for lower bounds both by us and in previous work. This demonstrates that if there are significantly stronger lower bounds for this problem, they rely on more general submodular functions.
[ { "version": "v1", "created": "Wed, 7 May 2008 21:37:18 GMT" }, { "version": "v2", "created": "Wed, 24 Sep 2008 23:31:37 GMT" }, { "version": "v3", "created": "Mon, 31 May 2010 23:57:44 GMT" } ]
2010-06-02T00:00:00
[ [ "Svitkina", "Zoya", "" ], [ "Fleischer", "Lisa", "" ] ]
0805.1213
Florin Constantin
Florin Constantin, Jon Feldman, S. Muthukrishnan and Martin Pal
Online Ad Slotting With Cancellations
10 pages, 1 figure
null
null
null
cs.GT cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many advertisers buy advertisements (ads) on the Internet or on traditional media and seek simple, online mechanisms to reserve ad slots in advance. Media publishers represent a vast and varying inventory, and they too seek automatic, online mechanisms for pricing and allocating such reservations. In this paper, we present and study a simple model for auctioning such ad slots in advance. Bidders arrive sequentially and report which slots they are interested in. The seller must decide immediately whether or not to grant a reservation. Our model allows a seller to accept reservations, but possibly cancel the allocations later and pay the bidder a cancellation compensation (bump payment). Our main result is an online mechanism to derive prices and bump payments that is efficient to implement. This mechanism has many desirable properties. It is individually rational; winners have an incentive to be honest and bidding one's true value dominates any lower bid. Our mechanism's efficiency is within a constant fraction of the a posteriori optimally efficient solution. Its revenue is within a constant fraction of the a posteriori revenue of the Vickrey-Clarke-Groves mechanism. Our results make no assumptions about the order of arrival of bids or the value distribution of bidders and still hold if the items for sale are elements of a matroid, a more general setting than slot allocation.
[ { "version": "v1", "created": "Thu, 8 May 2008 18:12:50 GMT" } ]
2008-05-09T00:00:00
[ [ "Constantin", "Florin", "" ], [ "Feldman", "Jon", "" ], [ "Muthukrishnan", "S.", "" ], [ "Pal", "Martin", "" ] ]
0805.1257
Chadi Kari
Chadi Kari, Alexander Russell and Narasimha Shashidhar
Randomized Work-Competitive Scheduling for Cooperative Computing on $k$-partite Task Graphs
null
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental problem in distributed computing is the task of cooperatively executing a given set of $t$ tasks by $p$ processors where the communication medium is dynamic and subject to failures. The dynamics of the communication medium lead to groups of processors being disconnected and possibly reconnected during the entire course of the computation; furthermore, tasks can have dependencies among them. In this paper, we present a randomized algorithm whose competitive ratio is dependent on the dynamics of the communication medium and also on the nature of the dependencies among the tasks.
[ { "version": "v1", "created": "Fri, 9 May 2008 00:27:28 GMT" }, { "version": "v2", "created": "Sat, 24 Mar 2012 23:52:01 GMT" } ]
2012-03-27T00:00:00
[ [ "Kari", "Chadi", "" ], [ "Russell", "Alexander", "" ], [ "Shashidhar", "Narasimha", "" ] ]
0805.1348
Yakov Nekrich
Marek Karpinski, Yakov Nekrich
Searching for Frequent Colors in Rectangles
null
null
null
null
cs.DS cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a new variant of the colored orthogonal range searching problem: given a query rectangle $Q$, all colors $c$ such that at least a fraction $\tau$ of all points in $Q$ are of color $c$ must be reported. We describe several data structures for this problem that use pseudo-linear space and answer queries in poly-logarithmic time.
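For concreteness, a naive linear-scan baseline (ours, purely illustrative) that spells out the query semantics; the data structures in the paper answer the same query in poly-logarithmic time with pseudo-linear space.

```python
from collections import Counter

def frequent_colors(points, rect, tau):
    """points: list of (x, y, color); rect: (x1, x2, y1, y2) with x1<=x2, y1<=y2.
    Report every color that accounts for at least a tau fraction of the points
    inside the rectangle. Naive O(n) scan, for illustration only."""
    x1, x2, y1, y2 = rect
    inside = [c for (x, y, c) in points if x1 <= x <= x2 and y1 <= y <= y2]
    counts = Counter(inside)
    total = len(inside)
    return [c for c, k in counts.items() if total and k >= tau * total]

if __name__ == "__main__":
    pts = [(1, 1, "red"), (2, 2, "red"), (3, 1, "blue"), (9, 9, "blue")]
    print(frequent_colors(pts, rect=(0, 5, 0, 5), tau=0.5))  # -> ['red']
```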
[ { "version": "v1", "created": "Fri, 9 May 2008 13:47:55 GMT" } ]
2008-05-12T00:00:00
[ [ "Karpinski", "Marek", "" ], [ "Nekrich", "Yakov", "" ] ]
0805.1401
Mustaq Ahmed
Mustaq Ahmed, Sandip Das, Sachin Lodha, Anna Lubiw, Anil Maheshwari, Sasanka Roy
Approximation Algorithms for Shortest Descending Paths in Terrains
24 pages, 8 figures
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give two approximation algorithms (more precisely, FPTASs) that solve the SDP problem on general terrains. Both algorithms are simple, robust and easy to implement.
[ { "version": "v1", "created": "Fri, 9 May 2008 19:39:19 GMT" } ]
2008-05-12T00:00:00
[ [ "Ahmed", "Mustaq", "" ], [ "Das", "Sandip", "" ], [ "Lodha", "Sachin", "" ], [ "Lubiw", "Anna", "" ], [ "Maheshwari", "Anil", "" ], [ "Roy", "Sasanka", "" ] ]
0805.1487
Spyros Sioutas SS
Lagogiannis George, Lorentzos Nikos, Sioutas Spyros, Theodoridis Evaggelos
A Time Efficient Indexing Scheme for Complex Spatiotemporal Retrieval
6 pages, 7 figures, submitted to Sigmod Record
null
null
null
cs.DB cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper is concerned with the time efficient processing of spatiotemporal predicates, i.e. spatial predicates associated with an exact temporal constraint. A set of such predicates forms a buffer query or a Spatio-temporal Pattern (STP) Query with time. In the more general case of an STP query, the temporal dimension is introduced via the relative order of the spatial predicates (STP queries with order). Therefore, the efficient processing of a spatiotemporal predicate is crucial for the efficient implementation of more complex queries of practical interest. We propose an extension of a known approach, suitable for processing spatial predicates, which has been used for the efficient manipulation of STP queries with order. The extended method is supported by efficient indexing structures. We also provide experimental results that show the efficiency of the technique.
[ { "version": "v1", "created": "Sat, 10 May 2008 17:18:32 GMT" } ]
2008-12-18T00:00:00
[ [ "George", "Lagogiannis", "" ], [ "Nikos", "Lorentzos", "" ], [ "Spyros", "Sioutas", "" ], [ "Evaggelos", "Theodoridis", "" ] ]
0805.1598
Peiyush Jain
Peiyush Jain
A Simple In-Place Algorithm for In-Shuffle
3 pages
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents a simple, linear time, in-place algorithm for performing a 2-way in-shuffle which can be used with little modification for certain other k-way shuffles.
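A hedged sketch of the cycle-leader idea for the special case where the array length 2n satisfies 2n + 1 = 3^k (the paper's full algorithm reduces arbitrary lengths to this case using rotations, which this sketch omits): the in-shuffle permutation sends 1-based position i to 2i mod (2n+1), and the cycles of that permutation start at positions 1, 3, 9, ..., 3^(k-1). All names are ours.

```python
def in_shuffle_special(a):
    """In-place 2-way in-shuffle for the special case len(a) + 1 == 3**k:
    [a1..an, b1..bn] -> [b1, a1, b2, a2, ..., bn, an].
    The element at 1-based position i moves to position (2*i) mod (len(a)+1)."""
    m = len(a)
    mod = m + 1
    t = mod
    while t % 3 == 0:
        t //= 3
    assert m % 2 == 0 and t == 1, "length must satisfy len + 1 == 3**k"
    leader = 1
    while leader < mod:
        # follow one cycle of the permutation, carrying the displaced element
        i = (2 * leader) % mod
        carried = a[leader - 1]
        while i != leader:
            a[i - 1], carried = carried, a[i - 1]
            i = (2 * i) % mod
        a[leader - 1] = carried
        leader *= 3
    return a

if __name__ == "__main__":
    print(in_shuffle_special(["a1", "a2", "a3", "a4", "b1", "b2", "b3", "b4"]))
    # -> ['b1', 'a1', 'b2', 'a2', 'b3', 'a3', 'b4', 'a4']
```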
[ { "version": "v1", "created": "Mon, 12 May 2008 09:28:18 GMT" } ]
2008-05-13T00:00:00
[ [ "Jain", "Peiyush", "" ] ]
0805.1661
Glenn Hickey
G. Hickey, P. Carmi, A. Maheshwari, N. Zeh
NAPX: A Polynomial Time Approximation Scheme for the Noah's Ark Problem
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Noah's Ark Problem (NAP) is an NP-Hard optimization problem with relevance to ecological conservation management. It asks to maximize the phylogenetic diversity (PD) of a set of taxa given a fixed budget, where each taxon is associated with a cost of conservation and a probability of extinction. NAP has received renewed interest with the rise in availability of genetic sequence data, allowing PD to be used as a practical measure of biodiversity. However, only simplified instances of the problem, where one or more parameters are fixed as constants, have as of yet been addressed in the literature. We present NAPX, the first algorithm for the general version of NAP that returns a $1 - \epsilon$ approximation of the optimal solution. It runs in $O(\frac{n B^2 h^2 \log^2n}{\log^2(1 - \epsilon)})$ time, where $n$ is the number of species, $B$ is the total budget, and $h$ is the height of the input tree. We also provide improved bounds for its expected running time.
[ { "version": "v1", "created": "Mon, 12 May 2008 15:04:26 GMT" }, { "version": "v2", "created": "Mon, 27 Oct 2008 18:57:31 GMT" } ]
2008-10-27T00:00:00
[ [ "Hickey", "G.", "" ], [ "Carmi", "P.", "" ], [ "Maheshwari", "A.", "" ], [ "Zeh", "N.", "" ] ]
0805.2630
Kamesh Munagala
Sudipto Guha and Kamesh Munagala
Sequential Design of Experiments via Linear Programming
The results and presentation in this paper are subsumed by the article "Approximation algorithms for Bayesian multi-armed bandit problems" http://arxiv.org/abs/1306.3525
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The celebrated multi-armed bandit problem in decision theory models the basic trade-off between exploration, or learning about the state of a system, and exploitation, or utilizing the system. In this paper we study the variant of the multi-armed bandit problem where the exploration phase involves costly experiments and occurs before the exploitation phase; and where each play of an arm during the exploration phase updates a prior belief about the arm. The problem of finding an inexpensive exploration strategy to optimize a certain exploitation objective is NP-Hard even when a single play reveals all information about an arm, and all exploration steps cost the same. We provide the first polynomial time constant-factor approximation algorithm for this class of problems. We show that this framework also generalizes several problems of interest studied in the context of data acquisition in sensor networks. Our analyses also extends to switching and setup costs, and to concave utility objectives. Our solution approach is via a novel linear program rounding technique based on stochastic packing. In addition to yielding exploration policies whose performance is within a small constant factor of the adaptive optimal policy, a nice feature of this approach is that the resulting policies explore the arms sequentially without revisiting any arm. Sequentiality is a well-studied concept in decision theory, and is very desirable in domains where multiple explorations can be conducted in parallel, for instance, in the sensor network context.
[ { "version": "v1", "created": "Sat, 17 May 2008 22:48:22 GMT" }, { "version": "v2", "created": "Tue, 18 Jun 2013 15:13:17 GMT" } ]
2013-06-19T00:00:00
[ [ "Guha", "Sudipto", "" ], [ "Munagala", "Kamesh", "" ] ]
0805.2646
Ilias Diakonikolas
Ilias Diakonikolas, Mihalis Yannakakis
Small Approximate Pareto Sets for Bi-objective Shortest Paths and Other Problems
submitted full version
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the problem of computing a minimum set of solutions that approximates within a specified accuracy $\epsilon$ the Pareto curve of a multiobjective optimization problem. We show that for a broad class of bi-objective problems (containing many important widely studied problems such as shortest paths, spanning tree, and many others), we can compute in polynomial time an $\epsilon$-Pareto set that contains at most twice as many solutions as the minimum such set. Furthermore we show that the factor of 2 is tight for these problems, i.e., it is NP-hard to do better. We present upper and lower bounds for three or more objectives, as well as for the dual problem of computing a specified number $k$ of solutions which provide a good approximation to the Pareto curve.
[ { "version": "v1", "created": "Sat, 17 May 2008 06:10:19 GMT" } ]
2008-05-20T00:00:00
[ [ "Diakonikolas", "Ilias", "" ], [ "Yannakakis", "Mihalis", "" ] ]
0805.2671
Spyros Sioutas SS
Spyros Sioutas
Finger Indexed Sets: New Approaches
13 pages, 1 figure, Submitted to Journal of Universal Computer Science (J.UCS)
null
null
null
cs.DS cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the particular case where we have insertions/deletions at the tail of a given set S of $n$ one-dimensional elements, we present a simpler and more concrete algorithm than that presented in [Anderson, 2007], achieving the same (but also amortized) upper bound of $O(\sqrt{\log d/\log\log d})$ for finger searching queries, where $d$ is the number of sorted keys between the finger element and the target element we are looking for. Furthermore, in the general case, where we have insertions/deletions anywhere, we present a new randomized algorithm achieving the same expected time bounds. Even though the new solutions achieve the optimal bounds only in the amortized or expected case, the advantage of simplicity is of great importance due to the practical merits we gain.
[ { "version": "v1", "created": "Sat, 17 May 2008 14:05:12 GMT" } ]
2008-12-18T00:00:00
[ [ "Sioutas", "Spyros", "" ] ]
0805.2681
Spyros Sioutas SS
Spyros Sioutas, Dimitrios Sofotassios, Kostas Tsichlas, Dimitrios Sotiropoulos, Panayiotis Vlamos
Canonical polygon Queries on the plane: a New Approach
7 pages, 9 figures, Accepted for publication in Journal of Computers (JCP), http://www.informatik.uni-trier.de/~ley/db/journals/jcp/index.html
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The polygon retrieval problem on points is the problem of preprocessing a set of $n$ points on the plane, so that given a polygon query, the subset of points lying inside it can be reported efficiently. It is of great interest in areas such as Computer Graphics, CAD applications, Spatial Databases and GIS developing tasks. In this paper we study the problem of canonical $k$-vertex polygon queries on the plane. A canonical $k$-vertex polygon query always meets the following specific property: a point retrieval query can be transformed into a linear number (with respect to the number of vertices) of point retrievals for orthogonal objects such as rectangles and triangles (throughout this work we call a triangle orthogonal iff two of its edges are axis-parallel). We present two new algorithms for this problem. The first one requires $O(n\log^2{n})$ space and $O(k\frac{\log^3 n}{\log\log n}+A)$ query time. A simple modification of the first algorithm leads us to a second solution, which consumes $O(n^2)$ space and $O(k\frac{\log n}{\log\log n}+A)$ query time, where $A$ denotes the size of the answer and $k$ is the number of vertices. The best previous solution for the general polygon retrieval problem uses $O(n^2)$ space and answers a query in $O(k\log{n}+A)$ time, where $k$ is the number of vertices. It is also very complicated and difficult to implement in a standard imperative programming language such as C or C++.
[ { "version": "v1", "created": "Sat, 17 May 2008 16:00:09 GMT" }, { "version": "v2", "created": "Thu, 30 Jul 2009 10:23:50 GMT" } ]
2009-07-30T00:00:00
[ [ "Sioutas", "Spyros", "" ], [ "Sofotassios", "Dimitrios", "" ], [ "Tsichlas", "Kostas", "" ], [ "Sotiropoulos", "Dimitrios", "" ], [ "Vlamos", "Panayiotis", "" ] ]
0805.3742
Henrik B\"a\"arnhielm
Henrik B\"a\"arnhielm
Algorithmic problems in twisted groups of Lie type
The author's PhD thesis
null
null
null
math.GR cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This thesis contains a collection of algorithms for working with the twisted groups of Lie type known as Suzuki groups, and small and large Ree groups. The two main problems under consideration are constructive recognition and constructive membership testing. We also consider problems of generating and conjugating Sylow and maximal subgroups. The algorithms are motivated by, and form a part of, the Matrix Group Recognition Project. Obtaining both theoretically and practically efficient algorithms has been a central goal. The algorithms have been developed with, and implemented in, the computer algebra system MAGMA.
[ { "version": "v1", "created": "Sat, 24 May 2008 04:21:29 GMT" }, { "version": "v2", "created": "Sun, 8 Jun 2008 10:34:21 GMT" } ]
2008-06-08T00:00:00
[ [ "Bäärnhielm", "Henrik", "" ] ]
0805.3901
Gregory Gutin
Gregory Gutin and Eun Jung Kim
Properly Coloured Cycles and Paths: Results and Open Problems
null
null
null
null
cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider a number of results and seven conjectures on properly edge-coloured (PC) paths and cycles in edge-coloured multigraphs. We overview some known results and prove new ones. In particular, we consider a family of transformations of an edge-coloured multigraph $G$ into an ordinary graph that allow us to check the existence of PC cycles and PC $(s,t)$-paths in $G$ and, if they exist, to find shortest ones among them. We raise the problem of finding the optimal transformation and consider a possible solution to the problem.
[ { "version": "v1", "created": "Mon, 26 May 2008 09:17:21 GMT" }, { "version": "v2", "created": "Fri, 30 May 2008 14:33:10 GMT" }, { "version": "v3", "created": "Sat, 31 May 2008 07:48:28 GMT" } ]
2008-05-31T00:00:00
[ [ "Gutin", "Gregory", "" ], [ "Kim", "Eun Jung", "" ] ]
0805.4147
Meng He
Prosenjit Bose, Eric Y. Chen, Meng He, Anil Maheshwari, Pat Morin
Succinct Geometric Indexes Supporting Point Location Queries
null
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose to design data structures, called succinct geometric indexes, of negligible space (more precisely, o(n) bits) that, by taking advantage of the n points in the data set being permuted and stored elsewhere as a sequence, support geometric queries in optimal time. Our first and main result is a succinct geometric index that can answer point location queries, a fundamental problem in computational geometry, on planar triangulations in O(lg n) time. We also design three variants of this index. The first supports point location using $\lg n + 2\sqrt{\lg n} + O(\lg^{1/4} n)$ point-line comparisons. The second supports point location in o(lg n) time when the coordinates are integers bounded by U. The last variant can answer point location queries in O(H+1) expected time, where H is the entropy of the query distribution. These results match the query efficiency of previous point location structures that use O(n) words or O(n lg n) bits, while saving drastic amounts of space. We then generalize our succinct geometric index to planar subdivisions, and design indexes for other types of queries. Finally, we apply our techniques to design the first implicit data structures that support point location in $O(\lg^2 n)$ time.
[ { "version": "v1", "created": "Tue, 27 May 2008 15:15:05 GMT" } ]
2008-05-28T00:00:00
[ [ "Bose", "Prosenjit", "" ], [ "Chen", "Eric Y.", "" ], [ "He", "Meng", "" ], [ "Maheshwari", "Anil", "" ], [ "Morin", "Pat", "" ] ]
0805.4300
Shai Gutner
Noga Alon and Shai Gutner
Balanced Families of Perfect Hash Functions and Their Applications
null
Proc. of 34th ICALP (2007), 435-446
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The construction of perfect hash functions is a well-studied topic. In this paper, this concept is generalized with the following definition. We say that a family of functions from $[n]$ to $[k]$ is a $\delta$-balanced $(n,k)$-family of perfect hash functions if for every $S \subseteq [n]$, $|S|=k$, the number of functions that are 1-1 on $S$ is between $T/\delta$ and $\delta T$ for some constant $T>0$. The standard definition of a family of perfect hash functions requires that there will be at least one function that is 1-1 on $S$, for each $S$ of size $k$. In the new notion of balanced families, we require the number of 1-1 functions to be almost the same (taking $\delta$ to be close to 1) for every such $S$. Our main result is that for any constant $\delta > 1$, a $\delta$-balanced $(n,k)$-family of perfect hash functions of size $2^{O(k \log \log k)} \log n$ can be constructed in time $2^{O(k \log \log k)} n \log n$. Using the technique of color-coding we can apply our explicit constructions to devise approximation algorithms for various counting problems in graphs. In particular, we exhibit a deterministic polynomial time algorithm for approximating both the number of simple paths of length $k$ and the number of simple cycles of size $k$ for any $k \leq O(\frac{\log n}{\log \log \log n})$ in a graph with $n$ vertices. The approximation is up to any fixed desirable relative error.
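To illustrate the color-coding connection in a hedged way (a plain randomized color-coding estimator, not the derandomized balanced-family construction of the paper; all names and the trial count are ours): color the vertices with k colors, count colorful k-vertex paths by dynamic programming over color subsets, and rescale by the probability k!/k^k that a fixed path receives all-distinct colors.

```python
import math
import random

def count_colorful_paths(adj, coloring, k):
    """Count directed simple paths on k vertices whose vertices all receive
    distinct colors, via dynamic programming over color subsets."""
    n = len(adj)
    # dp[(S, v)]: number of colorful paths ending at v whose color set is exactly S
    dp = {(frozenset([coloring[v]]), v): 1 for v in range(n)}
    for _ in range(k - 1):
        ndp = {}
        for (S, v), cnt in dp.items():
            for u in adj[v]:
                c = coloring[u]
                if c not in S:
                    key = (S | {c}, u)
                    ndp[key] = ndp.get(key, 0) + cnt
        dp = ndp
    return sum(dp.values())

def estimate_simple_paths(adj, k, trials=200, seed=0):
    """Color-coding estimate of the number of undirected simple paths on k
    vertices: a fixed path is colorful with probability k!/k^k, so scaling the
    colorful count by its inverse gives an unbiased estimate."""
    rng = random.Random(seed)
    n = len(adj)
    p_colorful = math.factorial(k) / k ** k
    total = 0.0
    for _ in range(trials):
        coloring = [rng.randrange(k) for _ in range(n)]
        total += count_colorful_paths(adj, coloring, k)
    # each undirected path is counted once per direction, hence the division by 2
    return total / trials / p_colorful / 2

if __name__ == "__main__":
    # 4-cycle 0-1-2-3-0: it has exactly 4 simple paths on 3 vertices.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(estimate_simple_paths(adj, k=3, trials=2000))
```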
[ { "version": "v1", "created": "Wed, 28 May 2008 09:49:18 GMT" } ]
2008-12-18T00:00:00
[ [ "Alon", "Noga", "" ], [ "Gutner", "Shai", "" ] ]
0806.0840
Mugurel Ionut Andreica
Mugurel Ionut Andreica
A Dynamic Programming Framework for Combinatorial Optimization Problems on Graphs with Bounded Pathwidth
Some of the ideas presented in this paper were later used by the author for preparing algorithmic tasks for several contests where the author was a member of the scientific committee (e.g. ACM ICPC Southeastern regional contest 2009 and Balkan olympiad in informatics 2011). Such tasks (including the task statement and solutions) can be found in the attached zip archive; THETA 16 / AQTR, Cluj-Napoca : Romania (2008)
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present an algorithmic framework for solving a class of combinatorial optimization problems on graphs with bounded pathwidth. The problems are NP-hard in general, but solvable in linear time on this class of graphs. The problems are relevant for assessing network reliability and improving the network's performance and fault tolerance. The main technique considered in this paper is dynamic programming.
[ { "version": "v1", "created": "Wed, 4 Jun 2008 19:18:45 GMT" }, { "version": "v2", "created": "Mon, 17 Dec 2012 14:03:58 GMT" } ]
2012-12-18T00:00:00
[ [ "Andreica", "Mugurel Ionut", "" ] ]
0806.0928
Martin N\"ollenburg
Martin N\"ollenburg, Danny Holten, Markus V\"olker, Alexander Wolff
Drawing Binary Tanglegrams: An Experimental Evaluation
see http://www.siam.org/proceedings/alenex/2009/alx09_011_nollenburgm.pdf
Proceedings of the 11th Workshop on Algorithm Engineering and Experiments (ALENEX'09), pages 106-119. SIAM, April 2009
null
null
cs.DS cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A binary tanglegram is a pair <S,T> of binary trees whose leaf sets are in one-to-one correspondence; matching leaves are connected by inter-tree edges. For applications, for example in phylogenetics or software engineering, it is required that the individual trees are drawn crossing-free. A natural optimization problem, denoted the tanglegram layout problem, is thus to minimize the number of crossings between inter-tree edges. The tanglegram layout problem is NP-hard and is currently considered both in application domains and in theory. In this paper we present an experimental comparison of a recursive algorithm of Buchin et al., our variant of their algorithm, the hierarchy sort algorithm of Holten and van Wijk, and an integer quadratic program that yields optimal solutions.
[ { "version": "v1", "created": "Thu, 5 Jun 2008 11:00:33 GMT" } ]
2009-05-15T00:00:00
[ [ "Nöllenburg", "Martin", "" ], [ "Holten", "Danny", "" ], [ "Völker", "Markus", "" ], [ "Wolff", "Alexander", "" ] ]
0806.0983
Sandy Irani
Joan Boyar, Sandy Irani, Kim S. Larsen
A Comparison of Performance Measures for Online Algorithms
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides a systematic study of several proposed measures for online algorithms in the context of a specific problem, namely, the two-server problem on three collinear points. Even though the problem is simple, it encapsulates a core challenge in online algorithms, which is to balance greediness and adaptability. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis, and determine how these measures compare the Greedy Algorithm, Double Coverage, and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the best of the three algorithms. Under the other measures, Double Coverage and Lazy Double Coverage are better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Only Bijective Analysis and Relative Worst Order Analysis indicate that Lazy Double Coverage is better than Double Coverage. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.
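For reference, a minimal sketch of the Double Coverage rule for two servers on the line, one of the algorithms compared above, following the standard textbook description: if the request falls between the two servers, both move toward it at equal speed until one reaches it; otherwise only the nearer server moves. The lazy variant and the performance measures themselves are not implemented here, and the function and variable names are illustrative.

```python
def double_coverage(servers, request):
    """One step of Double Coverage for two servers on the real line.
    servers: [s1, s2] current positions; request: requested point.
    Returns (new_positions, distance_moved). Textbook rule only; the lazy
    variant and the measures studied in the paper are not reproduced."""
    a, b = sorted(servers)
    if request <= a:                       # request to the left of both servers
        return [request, b], a - request
    if request >= b:                       # request to the right of both servers
        return [a, request], request - b
    d = min(request - a, b - request)      # request in between: both move d,
    return [a + d, b - d], 2 * d           # so the nearer server reaches it

positions, cost = [0.0, 10.0], 0.0
for r in [3.0, 9.0, 1.0]:
    positions, moved = double_coverage(positions, r)
    cost += moved
print(positions, cost)
```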
[ { "version": "v1", "created": "Thu, 5 Jun 2008 14:50:08 GMT" }, { "version": "v2", "created": "Fri, 12 Oct 2012 16:03:13 GMT" } ]
2012-10-15T00:00:00
[ [ "Boyar", "Joan", "" ], [ "Irani", "Sandy", "" ], [ "Larsen", "Kim S.", "" ] ]
0806.1722
Guangwu Xu
George Davida, Bruce Litow and Guangwu Xu
Fast Arithmetics Using Chinese Remaindering
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, some issues concerning the Chinese remaindering representation are discussed. Some new converting methods, including an efficient probabilistic algorithm based on a recent result of von zur Gathen and Shparlinski \cite{Gathen-Shparlinski}, are described. An efficient refinement of the NC$^1$ division algorithm of Chiu, Davida and Litow \cite{Chiu-Davida-Litow} is given, where the number of moduli is reduced by a factor of $\log n$.
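To fix notation, the sketch below shows plain textbook conversion between an integer and its Chinese remainder (residue) representation; the paper's probabilistic converting methods and the NC$^1$ division refinement are not reproduced, and the particular moduli are an illustrative assumption.

```python
from math import prod

def to_residues(x, moduli):
    """Residue (Chinese remainder) representation of x for pairwise-coprime moduli."""
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    """Textbook reconstruction of x modulo the product of the moduli.
    Plain CRT only; the paper's faster converting methods are not shown."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M  # pow(Mi, -1, m): inverse of Mi mod m
    return x

moduli = [3, 5, 7, 11]                     # illustrative, pairwise coprime
x = 1234 % prod(moduli)
assert from_residues(to_residues(x, moduli), moduli) == x
```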
[ { "version": "v1", "created": "Tue, 10 Jun 2008 18:21:09 GMT" } ]
2008-06-11T00:00:00
[ [ "Davida", "George", "" ], [ "Litow", "Bruce", "" ], [ "Xu", "Guangwu", "" ] ]
0806.1948
Kai-Min Chung
Kai-Min Chung, Salil Vadhan
Tight Bounds for Hashing Block Sources
An extended abstract of this paper will appear in RANDOM08
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is known that if a 2-universal hash function $H$ is applied to elements of a {\em block source} $(X_1,...,X_T)$, where each item $X_i$ has enough min-entropy conditioned on the previous items, then the output distribution $(H,H(X_1),...,H(X_T))$ will be ``close'' to the uniform distribution. We provide improved bounds on how much min-entropy per item is required for this to hold, both when we ask that the output be close to uniform in statistical distance and when we only ask that it be statistically close to a distribution with small collision probability. In both cases, we reduce the dependence of the min-entropy on the number $T$ of items from $2\log T$ in previous work to $\log T$, which we show to be optimal. This leads to corresponding improvements to the recent results of Mitzenmacher and Vadhan (SODA `08) on the analysis of hashing-based algorithms and data structures when the data items come from a block source.
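A minimal sketch of the object the bounds concern: a 2-universal hash function (here the standard affine family modulo a prime) applied item by item to a sequence, producing the kind of output tuple analyzed above. The prime, the output range, and the assumption that items are integers below the prime are illustrative; the min-entropy bounds themselves are not computed here.

```python
import random

P = (1 << 61) - 1   # a Mersenne prime, comfortably larger than the toy items

def sample_hash(m, rng=random):
    """Draw h from the 2-universal family h(x) = ((a*x + b) mod P) mod m."""
    a = rng.randrange(1, P)
    b = rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

def hash_block_source(items, m, rng=random):
    """Apply one randomly chosen hash function to every item, producing the
    (H, H(X_1), ..., H(X_T))-style output whose distance from uniform the
    paper bounds. Items are assumed to be integers below P."""
    h = sample_hash(m, rng)
    return [h(x) for x in items]

print(hash_block_source([17, 42, 99, 1000003], m=16))
```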
[ { "version": "v1", "created": "Wed, 11 Jun 2008 19:54:14 GMT" } ]
2008-06-12T00:00:00
[ [ "Chung", "Kai-Min", "" ], [ "Vadhan", "Salil", "" ] ]
0806.1978
Luca Trevisan
Luca Trevisan
Max Cut and the Smallest Eigenvalue
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a new approximation algorithm for Max Cut. Our algorithm runs in $\tilde O(n^2)$ time, where $n$ is the number of vertices, and achieves an approximation ratio of $.531$. On instances in which an optimal solution cuts a $1-\epsilon$ fraction of edges, our algorithm finds a solution that cuts a $1-4\sqrt{\epsilon} + 8\epsilon-o(1)$ fraction of edges. Our main result is a variant of spectral partitioning, which can be implemented in nearly linear time. Given a graph in which the Max Cut optimum is a $1-\epsilon$ fraction of edges, our spectral partitioning algorithm finds a set $S$ of vertices and a bipartition $L,R=S-L$ of $S$ such that at least a $1-O(\sqrt \epsilon)$ fraction of the edges incident on $S$ have one endpoint in $L$ and one endpoint in $R$. (This can be seen as an analog of Cheeger's inequality for the smallest eigenvalue of the adjacency matrix of a graph.) Iterating this procedure yields the approximation results stated above. A different, more complicated, variant of spectral partitioning leads to an $\tilde O(n^3)$ time algorithm that cuts a $1/2 + e^{-\Omega(1/\epsilon)}$ fraction of edges in graphs in which the optimum is $1/2 + \epsilon$.
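The toy sketch below illustrates only the basic spectral-sign idea behind the abstract (split vertices by the sign of an eigenvector for the smallest eigenvalue of the adjacency matrix). It is not the paper's algorithm: it omits the thresholding, the recursion on the unpartitioned vertices, and the nearly-linear-time implementation, and it uses a dense eigendecomposition. All names and the toy graph are illustrative.

```python
import numpy as np

def spectral_sign_cut(A):
    """Toy cut: take an eigenvector for the smallest eigenvalue of the
    adjacency matrix A and split the vertices by the sign of their
    coordinate. Simplified illustration only; no thresholding, no recursion
    on the remaining vertices, and a dense O(n^3) eigendecomposition."""
    eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    v = eigvecs[:, 0]                      # eigenvector of the smallest one
    left = np.where(v < 0)[0]
    right = np.where(v >= 0)[0]
    cut = int(sum(A[i, j] for i in left for j in right))
    return left, right, cut

# toy usage: complete bipartite graph K_{2,3}; the optimum cuts all 6 edges
A = np.zeros((5, 5))
for i in (0, 1):
    for j in (2, 3, 4):
        A[i, j] = A[j, i] = 1
print(spectral_sign_cut(A))
```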
[ { "version": "v1", "created": "Thu, 12 Jun 2008 17:51:02 GMT" }, { "version": "v2", "created": "Sun, 15 Jun 2008 05:09:08 GMT" }, { "version": "v3", "created": "Mon, 22 Sep 2008 23:59:20 GMT" }, { "version": "v4", "created": "Wed, 24 Sep 2008 09:39:48 GMT" }, { "version": "v5", "created": "Mon, 8 Dec 2008 19:03:46 GMT" } ]
2008-12-08T00:00:00
[ [ "Trevisan", "Luca", "" ] ]
0806.2068
Fran\c{c}ois Nicolas
Francois Nicolas
A simple, polynomial-time algorithm for the matrix torsion problem
6 pages. Not intended to be submitted
null
null
null
cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Matrix Torsion Problem (MTP) is: given a square matrix M with rational entries, decide whether two distinct powers of M are equal. It has been shown by Cassaigne and the author that the MTP reduces to the Matrix Power Problem (MPP) in polynomial time: given two square matrices A and B with rational entries, the MPP is to decide whether B is a power of A. Since the MPP is decidable in polynomial time, so is the MTP. However, the algorithm for the MPP is highly non-trivial. The aim of this note is to present a simple, direct, polynomial-time algorithm for the MTP.
[ { "version": "v1", "created": "Thu, 12 Jun 2008 13:24:46 GMT" }, { "version": "v2", "created": "Tue, 8 Jul 2008 23:51:08 GMT" }, { "version": "v3", "created": "Tue, 8 Sep 2009 19:46:11 GMT" } ]
2009-09-08T00:00:00
[ [ "Nicolas", "Francois", "" ] ]
0806.2274
Marko A. Rodriguez
Marko A. Rodriguez, Joshua Shinavier
Exposing Multi-Relational Networks to Single-Relational Network Analysis Algorithms
ISSN:1751-1577
Journal of Informetrics, volume 4, number 1, pages 29-41, 2009
10.1016/j.joi.2009.06.004
LA-UR-08-03931
cs.DM cs.DS
http://creativecommons.org/licenses/publicdomain/
Many, if not most, network analysis algorithms have been designed specifically for single-relational networks; that is, networks in which all edges are of the same type. For example, edges may represent "friendship," "kinship," or "collaboration," but not all of them together. In contrast, a multi-relational network is a network with a heterogeneous set of edge labels which can represent relationships of various types in a single data structure. While multi-relational networks are more expressive in terms of the variety of relationships they can capture, there is a need for a general framework for transferring the many single-relational network analysis algorithms to the multi-relational domain. It is not sufficient to execute a single-relational network analysis algorithm on a multi-relational network by simply ignoring edge labels. This article presents an algebra for mapping multi-relational networks to single-relational networks, thereby exposing them to single-relational network analysis algorithms.
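As a minimal illustration of the kind of mapping the article formalizes, the sketch below projects a labeled edge set onto the single-relational network induced by a chosen set of edge labels, so that a label-agnostic algorithm can then be run on the result. It shows only the simplest such operation, not the full algebra developed in the article; the triple format and the label names are illustrative assumptions.

```python
def project(edges, keep_labels):
    """edges: iterable of (src, label, dst) triples of a multi-relational
    network. Returns the set of (src, dst) edges whose label is in
    keep_labels, i.e. the induced single-relational network."""
    keep = set(keep_labels)
    return {(s, d) for s, lbl, d in edges if lbl in keep}

edges = [
    ("alice", "friend", "bob"),
    ("alice", "collaborator", "carol"),
    ("bob", "friend", "carol"),
    ("carol", "kin", "dave"),
]
print(sorted(project(edges, {"friend", "collaborator"})))
```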
[ { "version": "v1", "created": "Fri, 13 Jun 2008 16:07:19 GMT" }, { "version": "v2", "created": "Wed, 9 Dec 2009 16:08:02 GMT" } ]
2009-12-09T00:00:00
[ [ "Rodriguez", "Marko A.", "" ], [ "Shinavier", "Joshua", "" ] ]
0806.2287
Shiva Kasiviswanathan
Martin Furer and Shiva Prasad Kasiviswanathan
Approximately Counting Embeddings into Random Graphs
Earlier version appeared in Random 2008. Fixed a typo in Definition 3.1
Combinator. Probab. Comp. 23 (2014) 1028-1056
10.1017/S0963548314000339
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let H be a graph, and let C_H(G) be the number of (subgraph isomorphic) copies of H contained in a graph G. We investigate the fundamental problem of estimating C_H(G). Previous results cover only a few specific instances of this general problem, for example, the case when H has degree at most one (monomer-dimer problem). In this paper, we present the first general subcase of the subgraph isomorphism counting problem which is almost always efficiently approximable. The results rely on a new graph decomposition technique. Informally, the decomposition is a labeling of the vertices such that every edge is between vertices with different labels and for every vertex all neighbors with a higher label have identical labels. The labeling implicitly generates a sequence of bipartite graphs which permits us to break the problem of counting embeddings of large subgraphs into that of counting embeddings of small subgraphs. Using this method, we present a simple randomized algorithm for the counting problem. For all decomposable graphs H and all graphs G, the algorithm is an unbiased estimator. Furthermore, for all graphs H having a decomposition where each of the bipartite graphs generated is small and almost all graphs G, the algorithm is a fully polynomial randomized approximation scheme. We show that the graph classes of H for which we obtain a fully polynomial randomized approximation scheme for almost all G includes graphs of degree at most two, bounded-degree forests, bounded-length grid graphs, subdivision of bounded-degree graphs, and major subclasses of outerplanar graphs, series-parallel graphs and planar graphs, whereas unbounded-length grid graphs are excluded.
[ { "version": "v1", "created": "Fri, 13 Jun 2008 17:06:01 GMT" }, { "version": "v2", "created": "Fri, 21 Jun 2013 18:40:00 GMT" } ]
2019-02-20T00:00:00
[ [ "Furer", "Martin", "" ], [ "Kasiviswanathan", "Shiva Prasad", "" ] ]
0806.2707
Pat Morin
Vida Dujmovic, John Howat, and Pat Morin
Biased Range Trees
null
null
null
null
cs.CG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A data structure, called a biased range tree, is presented that preprocesses a set S of n points in R^2 and a query distribution D for 2-sided orthogonal range counting queries. The expected query time for this data structure, when queries are drawn according to D, matches, to within a constant factor, that of the optimal decision tree for S and D. The memory and preprocessing requirements of the data structure are O(n log n).
[ { "version": "v1", "created": "Tue, 17 Jun 2008 15:18:40 GMT" } ]
2008-06-18T00:00:00
[ [ "Dujmovic", "Vida", "" ], [ "Howat", "John", "" ], [ "Morin", "Pat", "" ] ]
0806.3201
Gueorgi Kossinets
Gueorgi Kossinets, Jon Kleinberg, Duncan Watts
The Structure of Information Pathways in a Social Communication Network
9 pages, 10 figures, to appear in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'08), August 24-27, 2008, Las Vegas, Nevada, USA
null
null
null
physics.soc-ph cs.DS physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social networks are of interest to researchers in part because they are thought to mediate the flow of information in communities and organizations. Here we study the temporal dynamics of communication using on-line data, including e-mail communication among the faculty and staff of a large university over a two-year period. We formulate a temporal notion of "distance" in the underlying social network by measuring the minimum time required for information to spread from one node to another -- a concept that draws on the notion of vector-clocks from the study of distributed computing systems. We find that such temporal measures provide structural insights that are not apparent from analyses of the pure social network topology. In particular, we define the network backbone to be the subgraph consisting of edges on which information has the potential to flow the quickest. We find that the backbone is a sparse graph with a concentration of both highly embedded edges and long-range bridges -- a finding that sheds new light on the relationship between tie strength and connectivity in social networks.
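A minimal sketch of the earliest-arrival computation underlying the temporal notion of distance described above: scan time-stamped communication events in order and propagate the earliest time at which information released at a source could have reached each node. The event format (sender, receiver, time) and all names are illustrative assumptions; the backbone extraction itself is not reproduced.

```python
import math

def earliest_arrival(events, source, t0=0.0):
    """events: iterable of (sender, receiver, time) communication events.
    Returns, for each node reached, the earliest time at which information
    released by `source` at time t0 could have arrived, in the spirit of the
    vector-clock-style temporal distance described in the abstract."""
    arrival = {source: t0}
    for u, v, t in sorted(events, key=lambda e: e[2]):
        if t >= arrival.get(u, math.inf):          # u already "knows" by time t
            arrival[v] = min(arrival.get(v, math.inf), t)
    return arrival

events = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 3)]
print(earliest_arrival(events, "a"))
# information from "a" reaches b at t=1, c at t=2, and d at t=3
```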
[ { "version": "v1", "created": "Thu, 19 Jun 2008 14:22:25 GMT" } ]
2008-06-20T00:00:00
[ [ "Kossinets", "Gueorgi", "" ], [ "Kleinberg", "Jon", "" ], [ "Watts", "Duncan", "" ] ]
0806.3258
Daniel Karapetyan
Gregory Gutin, Daniel Karapetyan
Local Search Heuristics For The Multidimensional Assignment Problem
30 pages. A preliminary version is published in volume 5420 of Lecture Notes Comp. Sci., pages 100-115, 2009
Journal of Heuristics 17(3) (2011), 201--249
10.1007/s10732-010-9133-3
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Multidimensional Assignment Problem (MAP) (abbreviated s-AP in the case of s dimensions) is an extension of the well-known assignment problem. The most studied case of MAP is 3-AP, though the problems with larger values of s also have a large number of applications. We consider several known neighborhoods, generalize them and propose some new ones. The heuristics are evaluated both theoretically and experimentally and dominating algorithms are selected. We also demonstrate that a combination of two neighborhoods may yield a heuristic that is superior to both of its components.
[ { "version": "v1", "created": "Thu, 19 Jun 2008 18:31:51 GMT" }, { "version": "v2", "created": "Sat, 12 Jul 2008 22:18:26 GMT" }, { "version": "v3", "created": "Fri, 5 Sep 2008 19:57:41 GMT" }, { "version": "v4", "created": "Mon, 20 Oct 2008 05:41:16 GMT" }, { "version": "v5", "created": "Tue, 14 Apr 2009 14:22:13 GMT" }, { "version": "v6", "created": "Sat, 25 Jul 2009 11:25:38 GMT" } ]
2015-02-24T00:00:00
[ [ "Gutin", "Gregory", "" ], [ "Karapetyan", "Daniel", "" ] ]
0806.3301
Ryan Tibshirani
Ryan J. Tibshirani
Fast computation of the median by successive binning
14 pages, 1 Postscript figure
null
null
null
stat.CO cs.DS stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a new median algorithm and a median approximation algorithm. The former has O(n) average running time and the latter has O(n) worst-case running time. These algorithms are highly competitive with the standard algorithm when computing the median of a single data set, but are significantly faster in updating the median when more data is added.
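The sketch below is one plausible reading of the successive-binning idea: spread the values over equal-width bins, locate the bin containing the median rank, and recurse into that bin. It is illustrative only and is not the paper's exact algorithm or its update procedure; the bin count and the lower-median convention are assumptions.

```python
def median_by_binning(xs, nbins=1000):
    """Locate the lower median by successive binning. Illustrative reading
    of the abstract only, not the paper's algorithm or its update scheme."""
    xs = list(xs)
    target = (len(xs) - 1) // 2            # 0-indexed rank of the lower median
    while True:
        lo, hi = min(xs), max(xs)
        if lo == hi:
            return lo
        width = (hi - lo) / nbins

        def bin_of(x):
            return min(int((x - lo) / width), nbins - 1)

        counts = [0] * nbins
        for x in xs:
            counts[bin_of(x)] += 1
        cum, b = 0, 0
        while cum + counts[b] <= target:   # find the bin holding rank `target`
            cum += counts[b]
            b += 1
        xs = [x for x in xs if bin_of(x) == b]
        target -= cum                      # rank of the median inside that bin

print(median_by_binning([7, 1, 5, 3, 9, 2, 8]))   # prints 5
```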
[ { "version": "v1", "created": "Fri, 20 Jun 2008 00:44:53 GMT" }, { "version": "v2", "created": "Tue, 12 May 2009 04:46:56 GMT" } ]
2009-05-12T00:00:00
[ [ "Tibshirani", "Ryan J.", "" ] ]
0806.3437
Hang Dinh
Hang Dinh and Alexander Russell
Quantum and Randomized Lower Bounds for Local Search on Vertex-Transitive Graphs
null
null
null
null
quant-ph cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of \emph{local search} on a graph. Given a real-valued black-box function f on the graph's vertices, this is the problem of determining a local minimum of f--a vertex v for which f(v) is no more than f evaluated at any of v's neighbors. In 1983, Aldous gave the first strong lower bounds for the problem, showing that any randomized algorithm requires $\Omega(2^{n/2 - o(1)})$ queries to determine a local minimum on the n-dimensional hypercube. The next major step forward was not until 2004 when Aaronson, introducing a new method for query complexity bounds, both strengthened this lower bound to $\Omega(2^{n/2}/n^2)$ and gave an analogous lower bound on the quantum query complexity. While these bounds are very strong, they are known only for narrow families of graphs (hypercubes and grids). We show how to generalize Aaronson's techniques in order to give randomized (and quantum) lower bounds on the query complexity of local search for the family of vertex-transitive graphs. In particular, we show that for any vertex-transitive graph G of N vertices and diameter d, the randomized and quantum query complexities for local search on G are $\Omega(N^{1/2}/d\log N)$ and $\Omega(N^{1/4}/\sqrt{d\log N})$, respectively.
[ { "version": "v1", "created": "Fri, 20 Jun 2008 18:46:50 GMT" } ]
2008-06-23T00:00:00
[ [ "Dinh", "Hang", "" ], [ "Russell", "Alexander", "" ] ]
0806.3471
Maria Gradinariu Potop-Butucaru
Davide Canepa and Maria Gradinariu Potop-Butucaru
Stabilizing Tiny Interaction Protocols
null
null
null
null
cs.DC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a self-stabilizing implementation of a class of token-based algorithms. In the current work we only consider interactions between weak nodes: they are uniform, have no unique identifiers, are static, and their interactions are restricted to a subset of nodes called neighbours. While interacting, a pair of neighbouring nodes may create mobile agents (which materialize the token abstraction in the current work) that traverse the network and accelerate the system's stabilization. In this work we only explore the power of oblivious, stateless agents. Our work shows that the agent paradigm is an elegant distributed tool for achieving self-stabilization in Tiny Interaction Protocols (TIP). Nevertheless, in order to reach the full power of classical self-stabilizing algorithms, more complex classes of agents have to be considered (e.g. agents with memory, identifiers or communication skills). Interestingly, our work proposes for the first time a model that unifies the recent studies of mobile robots (agents) that evolve in a discrete space and the already established population protocols paradigm.
[ { "version": "v1", "created": "Fri, 20 Jun 2008 21:01:52 GMT" }, { "version": "v2", "created": "Tue, 15 Jun 2010 11:39:09 GMT" } ]
2010-06-16T00:00:00
[ [ "Canepa", "Davide", "" ], [ "Potop-Butucaru", "Maria Gradinariu", "" ] ]
0806.3668
Bodo Manthey
Markus Bl\"aser, Bodo Manthey, Oliver Putz
Approximating Multi-Criteria Max-TSP
An extended abstract of this work will appear in Proc. of the 16th Ann. European Symposium on Algorithms (ESA 2008)
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present randomized approximation algorithms for multi-criteria Max-TSP. For Max-STSP with k > 1 objective functions, we obtain an approximation ratio of $1/k - \epsilon$ for arbitrarily small $\epsilon > 0$. For Max-ATSP with k objective functions, we obtain an approximation ratio of $1/(k+1) - \epsilon$.
[ { "version": "v1", "created": "Mon, 23 Jun 2008 12:28:10 GMT" } ]
2008-12-18T00:00:00
[ [ "Bläser", "Markus", "" ], [ "Manthey", "Bodo", "" ], [ "Putz", "Oliver", "" ] ]
0806.3827
Mugurel Ionut Andreica
Mugurel Ionut Andreica
Optimal Scheduling of File Transfers with Divisible Sizes on Multiple Disjoint Paths
The algorithmic techniques presented in this paper (particularly the block partitioning framework) were used as part of the official solutions for several tasks proposed by the author in the 2012 Romanian National Olympiad in Informatics (the statements and solutions for these tasks can be found in the attached zip archive)
Proceedings of the IEEE Romania International Conference "Communications", 2008. (ISBN: 978-606-521-008-0), Bucharest : Romania (2008)
null
null
cs.DS cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper I investigate several offline and online data transfer scheduling problems and propose efficient algorithms and techniques for addressing them. In the offline case, I present a novel, heuristic, algorithm for scheduling files with divisible sizes on multiple disjoint paths, in order to maximize the total profit (the problem is equivalent to the multiple knapsack problem with divisible item sizes). I then consider a cost optimization problem for transferring a sequence of identical files, subject to time constraints imposed by the data transfer providers. For the online case I propose an algorithmic framework based on the block partitioning method, which can speed up the process of resource allocation and reservation.
[ { "version": "v1", "created": "Tue, 24 Jun 2008 07:16:26 GMT" }, { "version": "v2", "created": "Thu, 20 Dec 2012 08:42:41 GMT" } ]
2012-12-21T00:00:00
[ [ "Andreica", "Mugurel Ionut", "" ] ]
0806.4073
Frank Gurski
Frank Gurski
A comparison of two approaches for polynomial time algorithms computing basic graph parameters
25 pages, 3 figures
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we compare and illustrate the algorithmic use of graphs of bounded tree-width and graphs of bounded clique-width. For this purpose we give polynomial time algorithms for computing the four basic graph parameters independence number, clique number, chromatic number, and clique covering number, given a tree structure of a graph of bounded tree-width or of bounded clique-width. We also present linear time algorithms for computing these four basic graph parameters on trees, i.e. graphs of tree-width 1, and on co-graphs, i.e. graphs of clique-width at most 2.
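As a concrete instance of the linear-time tree case mentioned at the end of the abstract, the sketch below computes the independence number of a tree with the classic dynamic program over the tree structure. The clique-width algorithms of the paper are not reproduced, and the input representation (adjacency lists plus a chosen root) is an illustrative assumption.

```python
def tree_independence_number(adj, root=0):
    """Independence number of a tree via the classic DP: for each vertex v,
    inc[v] / exc[v] is the size of a largest independent set in v's subtree
    that includes / excludes v. Iterative to avoid recursion limits.
    Sketch of the tree-width-1 case only, not the clique-width algorithms."""
    parent = {root: None}
    order = [root]
    for v in order:                      # BFS builds a parent-before-child order
        for u in adj[v]:
            if u not in parent:
                parent[u] = v
                order.append(u)
    inc = {v: 1 for v in adj}            # the set that takes v itself
    exc = {v: 0 for v in adj}            # the set that skips v
    for v in reversed(order):            # children are processed before parents
        for u in adj[v]:
            if u != parent[v]:
                inc[v] += exc[u]
                exc[v] += max(inc[u], exc[u])
    return max(inc[root], exc[root])

# toy usage: a path on 5 vertices has independence number 3
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(tree_independence_number(path))    # prints 3
```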
[ { "version": "v1", "created": "Wed, 25 Jun 2008 11:26:47 GMT" } ]
2008-12-18T00:00:00
[ [ "Gurski", "Frank", "" ] ]
0806.4361
Yakov Nekrich
Marek Karpinski, Yakov Nekrich
Space Efficient Multi-Dimensional Range Reporting
null
null
null
null
cs.DS cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a data structure that supports three-dimensional range reporting queries in $O(\log \log U + (\log \log n)^3+k)$ time and uses $O(n\log^{1+\epsilon} n)$ space, where $U$ is the size of the universe, $k$ is the number of points in the answer, and $\epsilon$ is an arbitrary constant. This result improves over the data structure of Alstrup, Brodal, and Rauhe (FOCS 2000) that uses $O(n\log^{1+\epsilon} n)$ space and supports queries in $O(\log n+k)$ time, the data structure of Nekrich (SoCG'07) that uses $O(n\log^{3} n)$ space and supports queries in $O(\log \log U + (\log \log n)^2 + k)$ time, and the data structure of Afshani (ESA'08) that uses $O(n\log^{3} n)$ space and also supports queries in $O(\log \log U + (\log \log n)^2 + k)$ time but relies on randomization during the preprocessing stage. Our result allows us to significantly reduce the space usage of the fastest previously known static and incremental $d$-dimensional data structures, $d\geq 3$, at a cost of increasing the query time by a negligible $O(\log \log n)$ factor.
[ { "version": "v1", "created": "Thu, 26 Jun 2008 16:32:57 GMT" }, { "version": "v2", "created": "Fri, 24 Apr 2009 12:08:08 GMT" } ]
2009-04-24T00:00:00
[ [ "Karpinski", "Marek", "" ], [ "Nekrich", "Yakov", "" ] ]
0806.4372
Stavros Nikolopoulos D.
Katerina Asdre and Stavros D. Nikolopoulos
The 1-fixed-endpoint Path Cover Problem is Polynomial on Interval Graphs
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a variant of the path cover problem, namely, the $k$-fixed-endpoint path cover problem, or kPC for short, on interval graphs. Given a graph $G$ and a subset $\mathcal{T}$ of $k$ vertices of $V(G)$, a $k$-fixed-endpoint path cover of $G$ with respect to $\mathcal{T}$ is a set of vertex-disjoint paths $\mathcal{P}$ that covers the vertices of $G$ such that the $k$ vertices of $\mathcal{T}$ are all endpoints of the paths in $\mathcal{P}$. The kPC problem is to find a $k$-fixed-endpoint path cover of $G$ of minimum cardinality; note that, if $\mathcal{T}$ is empty the stated problem coincides with the classical path cover problem. In this paper, we study the 1-fixed-endpoint path cover problem on interval graphs, or 1PC for short, generalizing the 1HP problem which has been proved to be NP-complete even for small classes of graphs. Motivated by a work of Damaschke, where he left both 1HP and 2HP problems open for the class of interval graphs, we show that the 1PC problem can be solved in polynomial time on the class of interval graphs. The proposed algorithm is simple, runs in $O(n^2)$ time, requires linear space, and also enables us to solve the 1HP problem on interval graphs within the same time and space complexity.
[ { "version": "v1", "created": "Thu, 26 Jun 2008 18:13:31 GMT" } ]
2008-12-18T00:00:00
[ [ "Asdre", "Katerina", "" ], [ "Nikolopoulos", "Stavros D.", "" ] ]
0806.4652
Yong Gao
Yong Gao
A Fixed-Parameter Algorithm for Random Instances of Weighted d-CNF Satisfiability
13 pages
null
null
null
cs.DS cs.AI cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study random instances of the weighted $d$-CNF satisfiability problem (WEIGHTED $d$-SAT), a generic W[1]-complete problem. A random instance of the problem consists of a fixed parameter $k$ and a random $d$-CNF formula $\weicnf{n}{p}{k, d}$ generated as follows: for each subset of $d$ variables and with probability $p$, a clause over the $d$ variables is selected uniformly at random from among the $2^d - 1$ clauses that contain at least one negated literal. We show that random instances of WEIGHTED $d$-SAT can be solved in $O(k^2n + n^{O(1)})$ time with high probability, indicating that typical instances of WEIGHTED $d$-SAT under this instance distribution are fixed-parameter tractable. The results also hold for random instances from the model $\weicnf{n}{p}{k,d}(d')$ where clauses containing fewer than $d'$ ($1 < d' < d$) negated literals are forbidden, and for random instances of the renormalized (miniaturized) version of WEIGHTED $d$-SAT in a certain range of the random model's parameter $p(n)$. This, together with our previous results on the threshold behavior and the resolution complexity of unsatisfiable instances of $\weicnf{n}{p}{k, d}$, provides an almost complete characterization of the typical-case behavior of random instances of WEIGHTED $d$-SAT.
[ { "version": "v1", "created": "Sat, 28 Jun 2008 05:03:47 GMT" } ]
2008-12-18T00:00:00
[ [ "Gao", "Yong", "" ] ]