id              stringlengths   9–16
submitter       stringlengths   4–52
authors         stringlengths   4–937
title           stringlengths   7–243
comments        stringlengths   1–472
journal-ref     stringlengths   4–244
doi             stringlengths   14–55
report-no       stringlengths   3–125
categories      stringlengths   5–97
license         stringclasses   9 values
abstract        stringlengths   33–2.95k
versions        list
update_date     unknown
authors_parsed  sequence
cs/0406033
Manor Mendel
Manor Mendel
Randomized k-server algorithms for growth-rate bounded graphs
The paper is withdrawn
J. Algorithms, 55(2): 192-202, 2005
10.1016/j.jalgor.2004.06.002
null
cs.DS
null
The paper referred to in the title is withdrawn.
[ { "version": "v1", "created": "Thu, 17 Jun 2004 15:11:54 GMT" }, { "version": "v2", "created": "Fri, 28 Sep 2007 22:31:51 GMT" } ]
"2007-10-01T00:00:00"
[ [ "Mendel", "Manor", "" ] ]
cs/0406034
Manor Mendel
Amos Fiat, Manor Mendel
Better algorithms for unfair metrical task systems and applications
20 pages, 1 figure
SIAM Journal on Computing 32(6), pp. 1403-1422, 2003
10.1137/S0097539700376159
null
cs.DS
null
Unfair metrical task systems are a generalization of online metrical task systems. In this paper we introduce new techniques to combine algorithms for unfair metrical task systems and apply these techniques to obtain improved randomized online algorithms for metrical task systems on arbitrary metric spaces.
[ { "version": "v1", "created": "Thu, 17 Jun 2004 18:49:20 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Fiat", "Amos", "" ], [ "Mendel", "Manor", "" ] ]
cs/0406035
Sandor P. Fekete
Ali Ahmadinia, Christophe Bobda, Sandor Fekete, Juergen Teich, Jan van der Veen
Optimal Free-Space Management and Routing-Conscious Dynamic Placement for Reconfigurable Devices
18 pages, 8 figures, 1 table; previous 5-page extended abstract appears in "International Conference on Field-Programmable Logic and Applications", 2004. New version is the final journal version, to appear in IEEE Transactions on Computers
null
10.1109/TC.2007.1028
null
cs.DS cs.CG
null
We describe algorithmic results for two crucial aspects of allocating resources on computational hardware devices with partial reconfigurability. By using methods from the field of computational geometry, we derive a method that allows correct maintenance of the free and occupied space of a set of n rectangular modules in optimal time Theta(n log n); previous approaches needed O(n^2) time for correct results and O(n) time for heuristic results. We also show that an optimal feasible communication-conscious placement (one that minimizes the total weighted Manhattan distance between the new module and existing demand points) can be computed in Theta(n log n) time. Both resulting algorithms are easy to implement in practice and show convincing experimental behavior.
[ { "version": "v1", "created": "Fri, 18 Jun 2004 13:29:46 GMT" }, { "version": "v2", "created": "Fri, 22 Oct 2004 13:58:34 GMT" }, { "version": "v3", "created": "Wed, 28 Sep 2005 17:13:52 GMT" } ]
"2016-11-15T00:00:00"
[ [ "Ahmadinia", "Ali", "" ], [ "Bobda", "Christophe", "" ], [ "Fekete", "Sandor", "" ], [ "Teich", "Juergen", "" ], [ "van der Veen", "Jan", "" ] ]
cs/0406036
Manor Mendel
Manor Mendel, Steven S. Seiden
Online Companion Caching
17 pages, 1 figure. Preliminary version in ESA '02. To be published in Theoretical Computer Science A
Theoret. Comput. Sci. 324(2-3): 183-200, 2004
10.1016/j.tcs.2004.05.015
null
cs.DS
null
This paper is concerned with online caching algorithms for the (n,k)-companion cache, defined by Brehob et al. In this model the cache is composed of two components: a k-way set-associative cache and a companion fully-associative cache of size n. We show that the deterministic competitive ratio for this problem is (n+1)(k+1)-1, and the randomized competitive ratio is O(\log n \log k) and \Omega(\log n + \log k).
[ { "version": "v1", "created": "Fri, 18 Jun 2004 16:20:24 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Mendel", "Manor", "" ], [ "Seiden", "Steven S.", "" ] ]
cs/0406043
Taneli Mielik\"ainen
Taneli Mielik\"ainen, Janne Ravantti, Esko Ukkonen
The Computational Complexity of Orientation Search Problems in Cryo-Electron Microscopy
null
null
null
C-2004-3, Department of Computer Science, University of Helsinki
cs.DS cs.CG cs.CV
null
In this report we study the problem of determining three-dimensional orientations for noisy projections of randomly oriented identical particles. The problem is of central importance in the tomographic reconstruction of the density map of macromolecular complexes from electron microscope images, and it has been studied intensively for more than 30 years. We analyze the computational complexity of the orientation problem and show that while several variants of the problem are $NP$-hard, inapproximable and fixed-parameter intractable, some restrictions are polynomial-time approximable within a constant factor or even solvable in logarithmic space. The orientation search problem is formalized as a constrained line arrangement problem that is of independent interest. The negative complexity results give a partial justification for the heuristic methods used in orientation search, and the positive complexity results also have some positive implications for the problem of finding functionally analogous genes. A preliminary version, ``The Computational Complexity of Orientation Search in Cryo-Electron Microscopy'', appeared in Proc. ICCS 2004, LNCS 3036, pp. 231--238. Springer-Verlag 2004.
[ { "version": "v1", "created": "Wed, 23 Jun 2004 14:28:17 GMT" }, { "version": "v2", "created": "Mon, 28 Jun 2004 08:29:20 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Mielikäinen", "Taneli", "" ], [ "Ravantti", "Janne", "" ], [ "Ukkonen", "Esko", "" ] ]
cs/0406045
Sandor P. Fekete
Erik D. Demaine, Sandor P. Fekete, and Shmuel Gal
Online Searching with Turn Cost
15 pages, 2 figures, 1 table; to appear in Theoretical Computer Science. Minor editorial changes and typo fixes
null
null
null
cs.DS
null
We consider the problem of searching for an object on a line at an unknown distance OPT from the original position of the searcher, in the presence of a cost of d for each time the searcher changes direction. This is a generalization of the well-studied linear-search problem. We describe a strategy that is guaranteed to find the object at a cost of at most 9*OPT + 2d, which has the optimal competitive ratio 9 with respect to OPT plus the minimum corresponding additive term. Our arguments for the upper and lower bounds use an infinite linear program, which we solve by experimental solution of an infinite series of approximating finite linear programs, estimating the limits, and solving the resulting recurrences. We feel that this technique is interesting in its own right and should help solve other searching problems. In particular, we consider the star search or cow-path problem with turn cost, where the hidden object is placed on one of m rays emanating from the original position of the searcher. For this problem we give a tight bound of (1 + 2(m^m)/((m-1)^(m-1))) OPT + m ((m/(m-1))^(m-1) - 1) d. We also discuss the tradeoff between the corresponding coefficients, and briefly consider randomized strategies on the line.
[ { "version": "v1", "created": "Wed, 23 Jun 2004 16:56:53 GMT" }, { "version": "v2", "created": "Sat, 18 Sep 2004 09:26:50 GMT" }, { "version": "v3", "created": "Fri, 4 Mar 2005 14:16:52 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Demaine", "Erik D.", "" ], [ "Fekete", "Sandor P.", "" ], [ "Gal", "Shmuel", "" ] ]
cs/0406053
Ion Mandoiu
K. Konwar, I. Mandoiu, A. Russell, A. Shvartsman
Approximation Algorithms for Minimum PCR Primer Set Selection with Amplification Length and Uniqueness Constraints
null
null
null
null
cs.DS cs.DM q-bio.QM
null
A critical problem in the emerging high-throughput genotyping protocols is to minimize the number of polymerase chain reaction (PCR) primers required to amplify the single nucleotide polymorphism loci of interest. In this paper we study PCR primer set selection with amplification length and uniqueness constraints from both theoretical and practical perspectives. We give a greedy algorithm that achieves a logarithmic approximation factor for the problem of minimizing the number of primers subject to a given upper bound on the length of PCR amplification products. We also give, using randomized rounding, the first non-trivial approximation algorithm for a version of the problem that requires unique amplification of each amplification target. Empirical results on randomly generated test cases, as well as test cases extracted from the National Center for Biotechnology Information's genomic databases, show that our algorithms are highly scalable and produce better results than previous heuristics.
[ { "version": "v1", "created": "Mon, 28 Jun 2004 07:04:14 GMT" }, { "version": "v2", "created": "Tue, 27 Jul 2004 17:39:03 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Konwar", "K.", "" ], [ "Mandoiu", "I.", "" ], [ "Russell", "A.", "" ], [ "Shvartsman", "A.", "" ] ]
cs/0407003
Miguel Mosteiro
Michael A. Bender, Martin Farach-Colton, Miguel Mosteiro
Insertion Sort is O(n log n)
6 pages, Latex. In Proceedings of the Third International Conference on Fun With Algorithms, FUN 2004
null
null
null
cs.DS
null
Traditional Insertion Sort runs in O(n^2) time because each insertion takes O(n) time. When people run Insertion Sort in the physical world, they leave gaps between items to accelerate insertions. Gaps help in computers as well. This paper shows that Gapped Insertion Sort has insertion times of O(log n) with high probability, yielding a total running time of O(n log n) with high probability.
[ { "version": "v1", "created": "Thu, 1 Jul 2004 15:50:26 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Bender", "Michael A.", "" ], [ "Farach-Colton", "Martin", "" ], [ "Mosteiro", "Miguel", "" ] ]
cs/0407023
Rina Panigrahy
Rina Panigrahy
Efficient Hashing with Lookups in two Memory Accesses
Submitted to SODA05
null
null
null
cs.DS
null
The study of hashing is closely related to the analysis of balls and bins. It is well known that if, instead of using a single hash function, we randomly hash a ball into two bins and place it in the smaller of the two, then this dramatically lowers the maximum load on bins. This leads to the concept of two-way hashing, where the largest bucket contains $O(\log\log n)$ balls with high probability. A hash lookup now searches both buckets an item hashes to. Since an item may be placed in one of two buckets, we could potentially move an item after it has been initially placed in order to reduce the maximum load. We show that by performing moves during inserts, a maximum load of 2 can be maintained on-line, with high probability, while supporting hash update operations. In fact, with $n$ buckets, even if the space for two items is pre-allocated per bucket, as may be desirable in hardware implementations, more than $n$ items can be stored, giving a high memory utilization. We also analyze the trade-off between the number of moves performed during inserts and the maximum load on a bucket. By performing at most $h$ moves, we can maintain a maximum load of $O(\frac{\log \log n}{h \log(\log\log n/h)})$. So, even by performing one move, we achieve a better bound than by performing no moves at all.
[ { "version": "v1", "created": "Fri, 9 Jul 2004 22:23:40 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Panigrahy", "Rina", "" ] ]
cs/0407036
David Eppstein
David Eppstein
All Maximal Independent Sets and Dynamic Dominance for Sparse Graphs
10 pages
ACM Trans. Algorithms 5(4):A38, 2009
10.1145/1597036.1597042
null
cs.DS
null
We describe algorithms, based on Avis and Fukuda's reverse search paradigm, for listing all maximal independent sets in a sparse graph in polynomial time and delay per output. For bounded degree graphs, our algorithms take constant time per set generated; for minor-closed graph families, the time is O(n) per set, and for more general sparse graph families we achieve subquadratic time per set. We also describe new data structures for maintaining a dynamic vertex set S in a sparse or minor-closed graph family, and querying the number of vertices not dominated by S; for minor-closed graph families the time per update is constant, while it is sublinear for any sparse graph family. We can also maintain a dynamic vertex set in an arbitrary m-edge graph and test the independence of the maintained set in time O(sqrt m) per update. We use the domination data structures as part of our enumeration algorithms.
[ { "version": "v1", "created": "Thu, 15 Jul 2004 21:04:45 GMT" } ]
"2010-01-11T00:00:00"
[ [ "Eppstein", "David", "" ] ]
cs/0407058
Sandor P. Fekete
Michael A. Bender, David P. Bunde, Erik D. Demaine, Sandor P. Fekete, Vitus J. Leung, Henk Meijer and Cynthia A. Phillips
Communication-Aware Processor Allocation for Supercomputers
19 pages, 7 figures, 1 table, Latex, submitted for journal publication. Previous version is extended abstract (14 pages), appeared in Proceedings WADS, Springer LNCS 3608, pp. 169-181
null
null
null
cs.DS cs.DC
null
This paper gives processor-allocation algorithms for minimizing the average number of communication hops between the assigned processors for grid architectures, in the presence of occupied cells. The simpler problem of assigning processors on a free grid has been studied by Karp, McKellar, and Wong, who show that the solutions have nontrivial structure; they left the complexity of the problem open. The associated clustering problem is as follows: Given n points in R^d, find k points that minimize their average pairwise L1 distance. We present a natural approximation algorithm and show that it is a 7/4-approximation for 2D grids. For d-dimensional space, the approximation guarantee is 2-(1/2d), which is tight. We also give a polynomial-time approximation scheme (PTAS) for constant dimension d, and report on experimental results.
[ { "version": "v1", "created": "Sat, 24 Jul 2004 13:40:26 GMT" }, { "version": "v2", "created": "Tue, 6 Dec 2005 13:30:13 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Bender", "Michael A.", "" ], [ "Bunde", "David P.", "" ], [ "Demaine", "Erik D.", "" ], [ "Fekete", "Sandor P.", "" ], [ "Leung", "Vitus J.", "" ], [ "Meijer", "Henk", "" ], [ "Phillips", "Cynthia A.", "" ] ]
cs/0408003
Manor Mendel
Yair Bartal, Manor Mendel
Multi-Embedding of Metric Spaces
null
SIAM J. Comput. 34(1): 248-259, 2004
10.1137/S0097539703433122
null
cs.DS
null
Metric embedding has become a common technique in the design of algorithms. Its applicability is often dependent on how high the embedding's distortion is. For example, embedding finite metric space into trees may require linear distortion as a function of its size. Using probabilistic metric embeddings, the bound on the distortion reduces to logarithmic in the size. We make a step in the direction of bypassing the lower bound on the distortion in terms of the size of the metric. We define "multi-embeddings" of metric spaces in which a point is mapped onto a set of points, while keeping the target metric of polynomial size and preserving the distortion of paths. The distortion obtained with such multi-embeddings into ultrametrics is at most O(log Delta loglog Delta) where Delta is the aspect ratio of the metric. In particular, for expander graphs, we are able to obtain constant distortion embeddings into trees in contrast with the Omega(log n) lower bound for all previous notions of embeddings. We demonstrate the algorithmic application of the new embeddings for two optimization problems: group Steiner tree and metrical task systems.
[ { "version": "v1", "created": "Mon, 2 Aug 2004 16:42:43 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Bartal", "Yair", "" ], [ "Mendel", "Manor", "" ] ]
cs/0408016
H{\aa}kan Sundell
H{\aa}kan Sundell and Philippas Tsigas
Lock-Free and Practical Deques using Single-Word Compare-And-Swap
null
null
null
2004-02
cs.DC cs.DS
null
We present an efficient and practical lock-free implementation of a concurrent deque that is disjoint-parallel accessible and uses atomic primitives available in modern computer systems. Previously known lock-free deque algorithms are either based on unavailable atomic synchronization primitives, implement only a subset of the functionality, or are not designed for disjoint accesses. Our algorithm is based on a doubly linked list and requires only single-word compare-and-swap atomic primitives, even for dynamic memory sizes. We have performed an empirical study using full implementations of the most efficient lock-free deque algorithms known. For systems with low concurrency, the algorithm by Michael shows the best performance. However, as our algorithm is designed for disjoint accesses, it performs significantly better on systems with high concurrency and a non-uniform memory architecture.
[ { "version": "v1", "created": "Thu, 5 Aug 2004 14:17:01 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Sundell", "Håkan", "" ], [ "Tsigas", "Philippas", "" ] ]
cs/0408026
Wojciech Skut
Wojciech Skut
Incremental Construction of Minimal Acyclic Sequential Transducers from Unsorted Data
Proceedings of COLING 2004 (to appear), 7 pages, 5 figures
null
null
null
cs.CL cs.DS
null
This paper presents an efficient algorithm for the incremental construction of a minimal acyclic sequential transducer (ST) for a dictionary consisting of a list of input and output strings. The algorithm generalises a known method of constructing minimal finite-state automata (Daciuk et al. 2000). Unlike the algorithm published by Mihov and Maurel (2001), it does not require the input strings to be sorted. The new method is illustrated by an application to pronunciation dictionaries.
[ { "version": "v1", "created": "Tue, 10 Aug 2004 11:09:48 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Skut", "Wojciech", "" ] ]
cs/0408039
Chiranjeeb Buragohain
Nisheeth Shrivastava, Chiranjeeb Buragohain, Divyakant Agrawal, Subhash Suri
Medians and Beyond: New Aggregation Techniques for Sensor Networks
null
Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems (SenSys 2004)
null
null
cs.DC cs.DB cs.DS
null
Wireless sensor networks offer the potential to span and monitor large geographical areas inexpensively. Sensors, however, have significant power constraints (battery life), making communication very expensive. Another important issue in the context of sensor-based information systems is that individual sensor readings are inherently unreliable. In order to address these two aspects, sensor database systems like TinyDB and Cougar enable in-network data aggregation to reduce the communication cost and improve reliability. The existing data aggregation techniques, however, are limited to relatively simple types of queries such as SUM, COUNT, AVG, and MIN/MAX. In this paper we propose a data aggregation scheme that significantly extends the class of queries that can be answered using sensor networks. These queries include (approximate) quantiles, such as the median, the most frequent data values, such as the consensus value, a histogram of the data distribution, as well as range queries. In our scheme, each sensor aggregates the data it has received from other sensors into a fixed (user specified) size message. We provide strict theoretical guarantees on the approximation quality of the queries in terms of the message size. We evaluate the performance of our aggregation scheme by simulation and demonstrate its accuracy, scalability and low resource utilization for highly variable input data sets.
[ { "version": "v1", "created": "Tue, 17 Aug 2004 02:21:06 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Shrivastava", "Nisheeth", "" ], [ "Buragohain", "Chiranjeeb", "" ], [ "Agrawal", "Divyakant", "" ], [ "Suri", "Subhash", "" ] ]
cs/0408040
William Gilreath
William F. Gilreath
Hash sort: A linear time complexity multiple-dimensional sort algorithm
null
Proceedings of First Southern Symposium on Computing December 1998
null
null
cs.DS
null
Sorting and hashing are two completely different concepts in computer science, and appear mutually exclusive to one another. Hashing is a search method using the data as a key to map to the location within memory, and is used for rapid storage and retrieval. Sorting is a process of organizing data from a random permutation into an ordered arrangement, and is a common activity performed frequently in a variety of applications. Almost all conventional sorting algorithms work by comparison, and in doing so have a linearithmic greatest lower bound on the algorithmic time complexity. Any improvement in the theoretical time complexity of a sorting algorithm can thus yield much larger gains in speed for the applications that use it. Such a sort algorithm needs to use an alternative method for ordering the data than comparison, to exceed the linearithmic time complexity boundary on algorithmic performance. The hash sort is a general purpose non-comparison based sorting algorithm by hashing, which has some interesting features not found in conventional sorting algorithms. The hash sort asymptotically outperforms the fastest traditional sorting algorithm, the quick sort. The hash sort algorithm has a linear time complexity factor -- even in the worst case. The hash sort opens an area for further work and investigation into alternative means of sorting.
[ { "version": "v1", "created": "Tue, 17 Aug 2004 09:23:35 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Gilreath", "William F.", "" ] ]
cs/0409009
Dirk Beyer
Dirk Beyer (University of California, Berkeley), Andreas Noack (Brandenburg University of Technology)
CrocoPat 2.1 Introduction and Reference Manual
19 pages + cover, 2 eps figures, uses llncs.cls and cs_techrpt_cover.sty, for downloading the source code, binaries, and RML examples, see http://www.software-systemtechnik.de/CrocoPat/
null
null
UCB//CSD-04-1338
cs.PL cs.DM cs.DS cs.SE
null
CrocoPat is an efficient, powerful and easy-to-use tool for manipulating relations of arbitrary arity, including directed graphs. This manual provides an introduction to and a reference for CrocoPat and its programming language RML. It includes several application examples, in particular from the analysis of structural models of software systems.
[ { "version": "v1", "created": "Tue, 7 Sep 2004 09:44:18 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Beyer", "Dirk", "", "University of California, Berkeley" ], [ "Noack", "Andreas", "", "Brandenburg University of Technology" ] ]
cs/0409013
Ching-Chi Lin
Ching-Chi Lin, Gerard J. Chang, Gen-Huey Chen
Locally connected spanning trees on graphs
14 pages, 3 figures
null
null
null
cs.DS cs.DM
null
A locally connected spanning tree of a graph $G$ is a spanning tree $T$ of $G$ such that, for every $v \in V(G)$, the set of all neighbors of $v$ in $T$ induces a connected subgraph of $G$. The purpose of this paper is to give linear-time algorithms for finding locally connected spanning trees on strongly chordal graphs and proper circular-arc graphs, respectively.
[ { "version": "v1", "created": "Wed, 8 Sep 2004 09:08:18 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Lin", "Ching-Chi", "" ], [ "Chang", "Gerard J.", "" ], [ "Chen", "Gen-Huey", "" ] ]
cs/0409016
Vitaly Lugovsky
V. S. Lugovsky
Using a hierarchy of Domain Specific Languages in complex software systems design
8 pages, 1 figure
null
null
null
cs.PL cs.DS cs.SE
null
A new design methodology is introduced, with examples of building a hierarchy of Domain Specific Languages on top of Scheme.
[ { "version": "v1", "created": "Thu, 9 Sep 2004 01:44:05 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Lugovsky", "V. S.", "" ] ]
cs/0409017
Jianyang Zeng
Jianyang Zeng, Wen-Jing Hsu and Jiangdian Wang
Near Optimal Routing for Small-World Networks with Augmented Local Awareness
16 pages, 1 table and 3 figures. Experimental results are added
null
null
null
cs.DM cs.DC cs.DS
null
In order to investigate the routing aspects of small-world networks, Kleinberg proposes a network model based on a $d$-dimensional lattice with long-range links chosen at random according to the $d$-harmonic distribution. Kleinberg shows that the greedy routing algorithm, using only local information, performs in $O(\log^2 n)$ expected number of hops, where $n$ denotes the number of nodes in the network. Martel and Nguyen have found that the expected diameter of Kleinberg's small-world networks is $\Theta(\log n)$. Thus a question arises naturally: Can we improve the routing algorithms to match the diameter of the networks while keeping the amount of information stored on each node as small as possible? We extend Kleinberg's model and add three augmented local links for each node: two of which are connected to nodes chosen randomly and uniformly within $\log^2 n$ Manhattan distance, and the third one is connected to a node chosen randomly and uniformly within $\log n$ Manhattan distance. We show that if each node is aware of $O(\log n)$ number of neighbors via the augmented local links, there exist both non-oblivious and oblivious algorithms that can route messages between any pair of nodes in $O(\log n \log \log n)$ expected number of hops, which is a near optimal routing complexity and outperforms the other related results for routing in Kleinberg's small-world networks. Our schemes keep only $O(\log^2 n)$ bits of routing information on each node, thus they are scalable with the network size. Besides shedding new light on the studies of social networks, our results may also find applications in the design of large-scale distributed networks, such as peer-to-peer systems, in the same spirit as Symphony.
[ { "version": "v1", "created": "Thu, 9 Sep 2004 03:41:48 GMT" }, { "version": "v2", "created": "Tue, 15 Feb 2005 06:37:37 GMT" }, { "version": "v3", "created": "Wed, 30 Nov 2005 19:08:12 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Zeng", "Jianyang", "" ], [ "Hsu", "Wen-Jing", "" ], [ "Wang", "Jiangdian", "" ] ]
cs/0409057
Manor Mendel
Sariel Har-Peled, Manor Mendel
Fast Construction of Nets in Low Dimensional Metrics, and Their Applications
41 pages. Extensive clean-up of minor English errors
SIAM J. Comput. 35(5):1148-1184, 2006
10.1137/S0097539704446281
null
cs.DS cs.CG
null
We present a near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension. This data structure is then applied to obtain improved algorithms for the following problems: approximate nearest neighbor search, well-separated pair decomposition, compact representation scheme, doubling measure, and computation of the (approximate) Lipschitz constant of a function. In all cases, the running (preprocessing) time is near-linear and the space used is linear.
[ { "version": "v1", "created": "Wed, 29 Sep 2004 17:44:15 GMT" }, { "version": "v2", "created": "Fri, 6 May 2005 21:18:00 GMT" }, { "version": "v3", "created": "Mon, 22 Aug 2005 05:03:43 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Har-Peled", "Sariel", "" ], [ "Mendel", "Manor", "" ] ]
cs/0410013
Alex Vinokur
Alex Vinokur
Fibonacci connection between Huffman codes and Wythoff array
12 pages, 9 tables
null
null
null
cs.DM cs.DS math.CO math.NT
null
We prove a Fibonacci connection between the non-decreasing sequences of positive integers that produce maximum-height Huffman trees and the Wythoff array.
[ { "version": "v1", "created": "Wed, 6 Oct 2004 11:44:02 GMT" }, { "version": "v2", "created": "Sat, 8 Oct 2005 07:50:59 GMT" } ]
"2009-09-29T00:00:00"
[ [ "Vinokur", "Alex", "" ] ]
cs/0410017
James P. Crutchfield
Carl S. McTague and James P. Crutchfield
Automated Pattern Detection--An Algorithm for Constructing Optimally Synchronizing Multi-Regular Language Filters
18 pages, 12 figures, 2 appendices; http://www.santafe.edu/~cmg
null
null
Santa Fe Institute 04-09-027
cs.CV cond-mat.stat-mech cs.CL cs.DS cs.IR cs.LG nlin.AO nlin.CG nlin.PS physics.comp-ph q-bio.GN
null
In the computational-mechanics structural analysis of one-dimensional cellular automata, the following automata-theoretic analogue of the \emph{change-point problem} from time series analysis arises: \emph{Given a string $\sigma$ and a collection $\{\mathcal{D}_i\}$ of finite automata, identify the regions of $\sigma$ that belong to each $\mathcal{D}_i$ and, in particular, the boundaries separating them.} We present two methods for solving this \emph{multi-regular language filtering problem}. The first, although providing the ideal solution, requires a stack, has a worst-case compute time that grows quadratically in $\sigma$'s length, and conditions its output at any point on arbitrarily long windows of future input. The second method algorithmically constructs a transducer that approximates the first algorithm. In contrast to the stack-based algorithm, however, the transducer requires only a finite amount of memory, runs in linear time, and gives immediate output for each letter read; it is, moreover, the best possible finite-state approximation with these three features.
[ { "version": "v1", "created": "Thu, 7 Oct 2004 17:20:56 GMT" } ]
"2016-08-31T00:00:00"
[ [ "McTague", "Carl S.", "" ], [ "Crutchfield", "James P.", "" ] ]
cs/0410039
Sara Cohen
Sara Cohen and Yehoshua Sagiv
Generating All Maximal Induced Subgraphs for Hereditary, Connected-Hereditary and Rooted-Hereditary Properties
null
null
null
null
cs.DS cs.DM
null
The problem of computing all maximal induced subgraphs of a graph G that have a graph property P, also called the maximal P-subgraphs problem, is considered. This problem is studied for hereditary, connected-hereditary and rooted-hereditary graph properties. The maximal P-subgraphs problem is reduced to restricted versions of this problem by providing algorithms that solve the general problem, assuming that an algorithm for a restricted version is given. The complexity of the algorithms is analyzed in terms of total polynomial time, incremental polynomial time and the complexity class P-enumerable. The general results presented allow simple proofs that the maximal P-subgraphs problem can be solved efficiently (in terms of the input and output) for many different properties.
[ { "version": "v1", "created": "Sun, 17 Oct 2004 20:30:43 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Cohen", "Sara", "" ], [ "Sagiv", "Yehoshua", "" ] ]
cs/0410046
Christoph D\"urr
Marek Chrobak, Christoph Durr, Wojciech Jawor, Lukasz Kowalik, Maciej Kurowski
A Note on Scheduling Equal-Length Jobs to Maximize Throughput
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
We study the problem of scheduling equal-length jobs with release times and deadlines, where the objective is to maximize the number of completed jobs. Preemptions are not allowed. In Graham's notation, the problem is described as 1|r_j;p_j=p|\sum U_j. We give the following results: (1) We show that the often cited algorithm by Carlier from 1981 is not correct. (2) We give an algorithm for this problem with running time O(n^5).
[ { "version": "v1", "created": "Mon, 18 Oct 2004 22:41:30 GMT" }, { "version": "v2", "created": "Wed, 12 May 2021 10:44:58 GMT" } ]
"2021-05-13T00:00:00"
[ [ "Chrobak", "Marek", "" ], [ "Durr", "Christoph", "" ], [ "Jawor", "Wojciech", "" ], [ "Kowalik", "Lukasz", "" ], [ "Kurowski", "Maciej", "" ] ]
cs/0410048
Erik Demaine
Erik D. Demaine and John Iacono and Stefan Langerman
Worst-Case Optimal Tree Layout in External Memory
10 pages, 1 figure. To appear in Algorithmica
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consider laying out a fixed-topology tree of N nodes into external memory with block size B so as to minimize the worst-case number of block memory transfers required to traverse a path from the root to a node of depth D. We prove that the optimal number of memory transfers is $$ \cases{ \displaystyle \Theta\left( {D \over \lg (1{+}B)} \right) & when $D = O(\lg N)$, \cr \displaystyle \Theta\left( {\lg N \over \lg \left(1{+}{B \lg N \over D}\right)} \right) & when $D = \Omega(\lg N)$ and $D = O(B \lg N)$, \cr \displaystyle \Theta\left( {D \over B} \right) & when $D = \Omega(B \lg N)$. } $$
[ { "version": "v1", "created": "Tue, 19 Oct 2004 15:17:57 GMT" }, { "version": "v2", "created": "Sat, 23 Apr 2011 13:32:17 GMT" }, { "version": "v3", "created": "Mon, 2 May 2011 13:31:48 GMT" }, { "version": "v4", "created": "Wed, 27 Nov 2013 19:12:04 GMT" } ]
"2013-11-28T00:00:00"
[ [ "Demaine", "Erik D.", "" ], [ "Iacono", "John", "" ], [ "Langerman", "Stefan", "" ] ]
cs/0411027
Vlady Ravelomanana
Vlady Ravelomanana (LIPN)
Extremal Properties of Three Dimensional Sensor Networks with Applications
null
IEEE Transactions on Mobile Computing Vol 3 (2004) pages 246--257
null
null
cs.DS cs.DC cs.DM
null
In this paper, we analyze various critical transmitting/sensing ranges for connectivity and coverage in three-dimensional sensor networks. As in other large-scale complex systems, many global parameters of sensor networks undergo phase transitions: For a given property of the network, there is a critical threshold, corresponding to the minimum amount of communication effort or power expenditure by individual nodes, above (resp. below) which the property exists with high (resp. low) probability. For sensor networks, properties of interest include simple and multiple degrees of connectivity/coverage. First, we investigate the network topology according to the region of deployment, the number of deployed sensors and their transmitting/sensing ranges. More specifically, we consider the following problems: Assume that $n$ nodes, each capable of sensing events within a radius of $r$, are randomly and uniformly distributed in a 3-dimensional region $\mathcal{R}$ of volume $V$; how large must the sensing range be to ensure a given degree of coverage of the region to monitor? For a given transmission range, what is the minimum (resp. maximum) degree of the network? What is then the typical hop-diameter of the underlying network? Next, we show how these results affect algorithmic aspects of the network by designing specific distributed protocols for sensor networks.
[ { "version": "v1", "created": "Wed, 10 Nov 2004 09:10:34 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Ravelomanana", "Vlady", "", "LIPN" ] ]
cs/0411064
Daniel A. Spielman
Michael Elkin, Yuval Emek, Daniel A. Spielman and Shang-Hua Teng
Lower-Stretch Spanning Trees
null
null
null
null
cs.DS cs.DM
null
We prove that every weighted graph contains a spanning tree subgraph of average stretch O((log n log log n)^2). Moreover, we show how to construct such a tree in time O(m log^2 n).
[ { "version": "v1", "created": "Wed, 17 Nov 2004 22:07:46 GMT" }, { "version": "v2", "created": "Wed, 5 Jan 2005 21:06:44 GMT" }, { "version": "v3", "created": "Mon, 14 Feb 2005 21:17:11 GMT" }, { "version": "v4", "created": "Mon, 21 Mar 2005 21:52:20 GMT" }, { "version": "v5", "created": "Fri, 13 May 2005 17:20:22 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Elkin", "Michael", "" ], [ "Emek", "Yuval", "" ], [ "Spielman", "Daniel A.", "" ], [ "Teng", "Shang-Hua", "" ] ]
cs/0411093
Vlady Ravelomanana
Vlady Ravelomanana (LIPN), Loys Thimonier (LARIA)
Forbidden Subgraphs in Connected Graphs
null
null
null
null
cs.DS cs.DM math.CO
null
Given a set $\xi=\{H_1,H_2,...\}$ of connected non-acyclic graphs, a $\xi$-free graph is one which does not contain any member of $\xi$ as a copy. Define the excess of a graph as the difference between its number of edges and its number of vertices. Let $W_{k,\xi}$ be the exponential generating function (EGF for brief) of connected $\xi$-free graphs of excess equal to $k$ ($k \geq 1$). For each fixed $\xi$, a fundamental differential recurrence satisfied by the EGFs $W_{k,\xi}$ is derived. We give methods on how to solve this nonlinear recurrence for the first few values of $k$ by means of graph surgery. We also show that for any finite collection $\xi$ of non-acyclic graphs, the EGFs $W_{k,\xi}$ are always rational functions of the generating function, $T$, of Cayley's rooted (non-planar) labelled trees. From this, we prove that almost all connected graphs with $n$ nodes and $n+k$ edges are $\xi$-free, whenever $k=o(n^{1/3})$ and $|\xi| < \infty$, by means of Wright's inequalities and the saddle point method. Limiting distributions are derived for sparse connected $\xi$-free components that are present when a random graph on $n$ nodes has approximately $\frac{n}{2}$ edges. In particular, the probability distribution that it consists of trees, unicyclic components, ..., $(q+1)$-cyclic components, all $\xi$-free, is derived. Similar results are also obtained for multigraphs, which are graphs where self-loops and multiple edges are allowed.
[ { "version": "v1", "created": "Thu, 25 Nov 2004 09:32:25 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Ravelomanana", "Vlady", "", "LIPN" ], [ "Thimonier", "Loys", "", "LARIA" ] ]
cs/0411095
Christian Lavault
Christian Lavault (LIPN)
Embeddings into the Pancake Interconnection Network
Article paru en 2002 dans Parallel Processing Letters
Parallel Processing Letters 12, 3-4 (2002) 297-310
null
null
cs.DC cs.DM cs.DS
null
Owing to its nice properties, the pancake is one of the Cayley graphs that were proposed as alternatives to the hypercube for interconnecting processors in parallel computers. In this paper, we present embeddings of rings, grids and hypercubes into the pancake with constant dilation and congestion. We also extend the results to similar efficient embeddings into the star graph.
[ { "version": "v1", "created": "Fri, 26 Nov 2004 20:13:10 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Lavault", "Christian", "", "LIPN" ] ]
cs/0412004
Lloyd Allison
L. Allison
Finding Approximate Palindromes in Strings Quickly and Simply
4 pages, 3 figures, code of the simple algorithm will soon be placed at http://www.csse.monash.edu.au/~lloyd/tildeProgLang/Java2/Palindromes/
null
null
2004/162
cs.DS
null
Described are two algorithms to find long approximate palindromes in a string, for example a DNA sequence. A simple algorithm requires O(n)-space and almost always runs in $O(k.n)$-time where n is the length of the string and k is the number of ``errors'' allowed in the palindrome. Its worst-case time-complexity is $O(n^2)$ but this does not occur with real biological sequences. A more complex algorithm guarantees $O(k.n)$ worst-case time complexity.
[ { "version": "v1", "created": "Wed, 1 Dec 2004 17:08:55 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Allison", "L.", "" ] ]
cs/0412006
Sidi Mohamed Sedjelmaci
Sidi Mohamed Sedjelmaci (LIPN)
The Accelerated Euclidean Algorithm
null
Proceedings of the EACA, (2004) 283-287
null
null
cs.DS
null
We present a new GCD algorithm of two integers or polynomials. The algorithm is iterative and its time complexity is still $O(n \log^2 n \log \log n)$ for $n$-bit inputs.
[ { "version": "v1", "created": "Thu, 2 Dec 2004 15:01:39 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Sedjelmaci", "Sidi Mohamed", "", "LIPN" ] ]
cs/0412008
Manor Mendel
Robert Krauthgamer, James R. Lee, Manor Mendel, Assaf Naor
Measured descent: A new embedding method for finite metrics
17 pages. No figures. Appeared in FOCS '04. To appear in Geometric & Functional Analysis. This version fixes a subtle error in Section 2.2
Geom. Funct. Anal. 15(4):839-858, 2005
10.1007/s00039-005-0527-6
null
cs.DS math.MG
null
We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Frechet embeddings for finite metrics, due to [Bourgain, 1985] and [Rao, 1999]. We prove that any n-point metric space (X,d) embeds in Hilbert space with distortion O(sqrt{alpha_X log n}), where alpha_X is a geometric estimate on the decomposability of X. As an immediate corollary, we obtain an O(sqrt{(log lambda_X) \log n}) distortion embedding, where \lambda_X is the doubling constant of X. Since \lambda_X\le n, this result recovers Bourgain's theorem, but when the metric X is, in a sense, ``low-dimensional,'' improved bounds are achieved. Our embeddings are volume-respecting for subsets of arbitrary size. One consequence is the existence of (k, O(log n)) volume-respecting embeddings for all 1 \leq k \leq n, which is the best possible, and answers positively a question posed by U. Feige. Our techniques are also used to answer positively a question of Y. Rabinovich, showing that any weighted n-point planar graph embeds in l_\infty^{O(log n)} with O(1) distortion. The O(log n) bound on the dimension is optimal, and improves upon the previously known bound of O((log n)^2).
[ { "version": "v1", "created": "Thu, 2 Dec 2004 17:06:41 GMT" }, { "version": "v2", "created": "Thu, 18 Aug 2005 06:56:42 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Krauthgamer", "Robert", "" ], [ "Lee", "James R.", "" ], [ "Mendel", "Manor", "" ], [ "Naor", "Assaf", "" ] ]
cs/0412029
Vladimir Migunov
Vladimir V. Migunov, Rustem R. Kafiyatullov, Ilsur T. Safin
The modular technology of development of the CAD expansions: profiles of outside networks of water supply and water drain
8 pages, 2 figures, in Russian
null
null
null
cs.CE cs.DS
null
The modular technology of development of problem-oriented CAD expansions is applied to the task of designing profiles of outside networks of water supply and water drainage, with realization in the program system TechnoCAD GlassX. The structural unity of these profiles is revealed, and a system model of the drawings of network profiles is developed, including a structured parametric representation (properties of objects and their interdependence, general settings and default settings) and operations on it, which efficiently automate the design process.
[ { "version": "v1", "created": "Wed, 8 Dec 2004 08:42:53 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Migunov", "Vladimir V.", "" ], [ "Kafiyatullov", "Rustem R.", "" ], [ "Safin", "Ilsur T.", "" ] ]
cs/0412030
Vladimir Migunov
Vladimir V. Migunov, Rustem R. Kafiyatullov, Ilsur T. Safin
The modular technology of development of the CAD expansions: protection of the buildings from the lightning
8 pages, 2 figures, in Russian
null
null
null
cs.CE cs.DS
null
The modular technology of development of problem-oriented CAD expansions is applied to the task of designing lightning protection of buildings, with realization in the program system TechnoCAD GlassX. A system model of the drawings of lightning protection is developed, including a structured parametric representation (properties of objects and their interdependence, general settings and default settings) and operations on it, which efficiently automate the design process.
[ { "version": "v1", "created": "Wed, 8 Dec 2004 08:49:08 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Migunov", "Vladimir V.", "" ], [ "Kafiyatullov", "Rustem R.", "" ], [ "Safin", "Ilsur T.", "" ] ]
cs/0412032
Vladimir Migunov
Vladimir V. Migunov
The methods of support of the requirements of the Russian standards at development of a CAD of industrial objects
8 pages, 4 figures, in Russian
null
null
null
cs.CE cs.DS
null
The methods of support of the requirements of the Russian standards in a CAD of industrial objects are explained, as implemented in the CAD system TechnoCAD GlassX with its own graphics core and its own data storage structures. It is shown that binding the storage structures and program code of a CAD to the requirements of the standards makes it possible not only to fulfil these requirements in project documentation, but also to increase the compactness of drawing storage both on disk and in RAM.
[ { "version": "v1", "created": "Wed, 8 Dec 2004 08:57:49 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Migunov", "Vladimir V.", "" ] ]
cs/0412047
Marko Rodriguez
Marko Rodriguez and Daniel Steinbock
A Social Network for Societal-Scale Decision-Making Systems
Dynamically Distributed Democracy algorithm presented in the arena of a societal-scale decision support system
North American Association for Computational Social and Organizational Science Conference Proceedings 2004
null
null
cs.CY cs.DS cs.HC
null
In societal-scale decision-making systems the collective is faced with the problem of ensuring that the derived group decision is in accord with the collective's intention. In modern systems, political institutions have instantiated representative forms of decision-making to ensure that every individual in the society has a participatory voice in the decision-making behavior of the whole--even if only indirectly through representation. An agent-based simulation demonstrates that in modern representative systems, as the ratio of representatives increases, there exists an exponential decrease in the ability for the group to behave in accord with the desires of the whole. To remedy this issue, this paper provides a novel representative power structure for decision-making that utilizes a social network and power distribution algorithm to maintain the collective's perspective over varying degrees of participation and/or ratios of representation. This work shows promise for the future development of policy-making systems that are supported by the computer and network infrastructure of our society.
[ { "version": "v1", "created": "Sat, 11 Dec 2004 00:32:51 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Rodriguez", "Marko", "" ], [ "Steinbock", "Daniel", "" ] ]
cs/0412089
Evgeny Yanenko O.
Evgeny Yanenko
Evolving Categories: Consistent Framework for Representation of Data and Algorithms
10 pages, 20 pictures
null
null
null
cs.DS
null
A concept of "evolving categories" is suggested to build a simple, scalable, mathematically consistent framework for representing both data and algorithms in a uniform way. A state machine for executing algorithms acquires clear, rich and powerful semantics, based on category theory, while still allowing easy implementation. Moreover, it gives an original insight into the nature and semantics of algorithms.
[ { "version": "v1", "created": "Fri, 17 Dec 2004 22:58:13 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Yanenko", "Evgeny", "" ] ]
cs/0412094
Christoph Durr
Philippe Baptiste, Marek Chrobak, Christoph Durr, Francis Sourd
Preemptive Multi-Machine Scheduling of Equal-Length Jobs to Minimize the Average Flow Time
null
null
null
This paper is now part of the report cs.DS/0605078.
cs.DS
null
We study the problem of preemptive scheduling of n equal-length jobs with given release times on m identical parallel machines. The objective is to minimize the average flow time. Recently, Brucker and Kravchenko proved that the optimal schedule can be computed in polynomial time by solving a linear program with O(n^3) variables and constraints, followed by some substantial post-processing (where n is the number of jobs). In this note we describe a simple linear program with only O(mn) variables and constraints. Our linear program produces directly the optimal schedule and does not require any post-processing.
[ { "version": "v1", "created": "Mon, 20 Dec 2004 16:15:59 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Baptiste", "Philippe", "" ], [ "Chrobak", "Marek", "" ], [ "Durr", "Christoph", "" ], [ "Sourd", "Francis", "" ] ]
cs/0412100
Stephan Tobies
Peter H. Deussen and Stephan Tobies
Formal Test Purposes and The Validity of Test Cases
This paper appeared in the proceedings of the 22nd IFIP WG 6.1 International Conference on Formal Techniques for Networked and Distributed Systems (FORTE 2002), number 2529 Lecture Notes in Computer Science
null
null
null
cs.DS
null
We give a formalization of the notion of test purpose based on (suitably restricted) Message Sequence Charts. We define the validity of test cases with respect to such a formal test purpose and provide a simple decision procedure for validity.
[ { "version": "v1", "created": "Wed, 22 Dec 2004 08:53:49 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Deussen", "Peter H.", "" ], [ "Tobies", "Stephan", "" ] ]
cs/0412107
Carlos Cabrillo
L. A. Garcia-Cortes and C. Cabrillo
A Monte Carlo algorithm for efficient large matrix inversion
13 pages, no figure. Title corrected
null
null
null
cs.DS cs.NA hep-lat
null
This paper introduces a new Monte Carlo algorithm to invert large matrices. It is based on simultaneous coupled draws from two random vectors whose covariance is the required inverse. It can be considered a generalization of a previously reported algorithm for hermitian matrix inversion based on only one draw. The use of two draws allows the inversion of non-hermitian matrices. Both the conditions for convergence and the rate of convergence are similar to those of the Gauss-Seidel algorithm. Results on two examples are presented: a real non-symmetric matrix related to quantitative genetics and a complex non-hermitian matrix relevant for physicists. Compared with other Monte Carlo algorithms, it shows a large reduction in processing time, running eight times faster in the examples studied.
[ { "version": "v1", "created": "Thu, 23 Dec 2004 17:01:14 GMT" }, { "version": "v2", "created": "Mon, 10 Jan 2005 16:41:32 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Garcia-Cortes", "L. A.", "" ], [ "Cabrillo", "C.", "" ] ]
cs/0501020
Gianluca Lax
Francesco Buccafurri, Gianluca Lax, Domenico Sacca', Luigi Pontieri and Domenico Rosaci
Enhancing Histograms by Tree-Like Bucket Indices
26 pages, 9 figures
null
null
null
cs.DS
null
Histograms are used to summarize the contents of relations into a number of buckets for the estimation of query result sizes. Several techniques (e.g., MaxDiff and V-Optimal) have been proposed in the past for determining bucket boundaries which provide accurate estimations. However, while search strategies for optimal bucket boundaries are rather sophisticated, not much attention has been paid to estimating queries inside buckets, and all of the above techniques adopt naive methods for such an estimation. This paper focuses on the problem of improving the estimation inside a bucket once its boundaries have been fixed. The proposed technique is based on the addition, to each bucket, of 32 bits of additional information (organized into a 4-level tree index), storing approximate cumulative frequencies at 7 internal intervals of the bucket. Both theoretical analysis and experimental results show that, among a number of alternative ways to organize the additional information, the 4-level tree index provides the best frequency estimation inside a bucket. The index is later added to two well-known histograms, MaxDiff and V-Optimal, obtaining the non-obvious result that, despite the spatial cost of the 4LT index (which reduces the number of allowed buckets once the storage space has been fixed), the original methods are strongly improved in terms of accuracy.
[ { "version": "v1", "created": "Tue, 11 Jan 2005 10:15:31 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Buccafurri", "Francesco", "" ], [ "Lax", "Gianluca", "" ], [ "Sacca'", "Domenico", "" ], [ "Pontieri", "Luigi", "" ], [ "Rosaci", "Domenico", "" ] ]
cs/0501042
Martin Bernauer
Martin Bernauer
Maintaining Consistency of Data on the Web
null
null
null
null
cs.DB cs.DS
null
Increasingly more data is becoming available on the Web, estimates speaking of 1 billion documents in 2002. Most of the documents are Web pages whose data is considered to be in XML format, expecting it to eventually replace HTML. A common problem in designing and maintaining a Web site is that data on a Web page often replicates or derives from other data, the so-called base data, that is usually not contained in the deriving or replicating page. Consequently, replicas and derivations become inconsistent upon modifying base data in a Web page or a relational database. For example, after assigning a thesis to a student and modifying the Web page that describes it in detail, the thesis is still incorrectly contained in the list of offered theses, missing from the list of ongoing theses, and missing from the advisor's teaching record. The thesis presents a solution by proposing a combined approach that provides for maintaining consistency of data in Web pages that (i) replicate data in relational databases, or (ii) replicate or derive from data in Web pages. Upon modifying base data, the modification is immediately pushed to affected Web pages. There, maintenance is performed incrementally by only modifying the affected part of the page instead of re-generating the whole page from scratch.
[ { "version": "v1", "created": "Thu, 20 Jan 2005 14:11:03 GMT" }, { "version": "v2", "created": "Tue, 25 Jan 2005 16:00:49 GMT" }, { "version": "v3", "created": "Wed, 9 Feb 2005 19:59:46 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Bernauer", "Martin", "" ] ]
cs/0501045
Kenneth Clarkson
Kenneth L. Clarkson and Kasturi Varadarajan
Improved Approximation Algorithms for Geometric Set Cover
null
null
null
null
cs.CG cs.DS
null
Given a collection S of subsets of some set U, and M a subset of U, the set cover problem is to find the smallest subcollection C of S such that M is a subset of the union of the sets in C. While the general problem is NP-hard to solve, even approximately, here we consider some geometric special cases, where usually U = R^d. Extending prior results, we show that approximation algorithms with provable performance exist, under a certain general condition: that for a random subset R of S and function f(), there is a decomposition of the portion of U not covered by R into an expected f(|R|) regions, each region of a particular simple form. We show that under this condition, a cover of size O(f(|C|)) can be found. Our proof involves the generalization of shallow cuttings to more general geometric situations. We obtain constant-factor approximation algorithms for covering by unit cubes in R^3, for guarding a one-dimensional terrain, and for covering by similar-sized fat triangles in R^2. We also obtain improved approximation guarantees for fat triangles, of arbitrary size, and for a class of fat objects.
[ { "version": "v1", "created": "Thu, 20 Jan 2005 21:31:22 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Clarkson", "Kenneth L.", "" ], [ "Varadarajan", "Kasturi", "" ] ]
cs/0501073
Tom Schrijvers
Tom Schrijvers and Thom Fruehwirth
Optimal Union-Find in Constraint Handling Rules
12 pages, 3 figures, to appear in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.PL cs.CC cs.DS cs.PF
null
Constraint Handling Rules (CHR) is a committed-choice rule-based language that was originally intended for writing constraint solvers. In this paper we show that it is also possible to write the classic union-find algorithm and variants in CHR. The programs neither compromise in declarativeness nor efficiency. We study the time complexity of our programs: they match the almost-linear complexity of the best known imperative implementations. This fact is illustrated with experimental results.
[ { "version": "v1", "created": "Tue, 25 Jan 2005 13:28:38 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Schrijvers", "Tom", "" ], [ "Fruehwirth", "Thom", "" ] ]
cs/0502014
Philippe Robert
Philippe Robert (RAP UR-R)
On the asymptotic behavior of some Algorithms
November 2004
Random Structures and Algorithms 27 (2005) 235--250
10.1002/rsa.20075
null
cs.DS math.CA math.PR
null
A simple approach is presented to study the asymptotic behavior of some algorithms with an underlying tree structure. It is shown that some asymptotic oscillating behaviors can be precisely analyzed without resorting to complex analysis techniques as it is usually done in this context. A new explicit representation of periodic functions involved is obtained at the same time.
[ { "version": "v1", "created": "Thu, 3 Feb 2005 08:25:09 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Robert", "Philippe", "", "RAP UR-R" ] ]
cs/0502032
Mihai P?tra\c{s}cu
Christian Worm Mortensen, Rasmus Pagh and Mihai Patrascu
On Dynamic Range Reporting in One Dimension
18 pages. Full version of a paper that will appear in STOC'05
null
null
null
cs.DS
null
We consider the problem of maintaining a dynamic set of integers and answering queries of the form: report a point (equivalently, all points) in a given interval. Range searching is a natural and fundamental variant of integer search, and can be solved using predecessor search. However, for a RAM with w-bit words, we show how to perform updates in O(lg w) time and answer queries in O(lglg w) time. The update time is identical to the van Emde Boas structure, but the query time is exponentially faster. Existing lower bounds show that achieving our query time for predecessor search requires doubly-exponentially slower updates. We present some arguments supporting the conjecture that our solution is optimal. Our solution is based on a new and interesting recursion idea which is "more extreme" than the van Emde Boas recursion. Whereas van Emde Boas uses a simple recursion (repeated halving) on each path in a trie, we use a nontrivial, van Emde Boas-like recursion on every such path. Despite this, our algorithm is quite clean when seen from the right angle. To achieve linear space for our data structure, we solve a problem which is of independent interest. We develop the first scheme for dynamic perfect hashing requiring sublinear space. This gives a dynamic Bloomier filter (an approximate storage scheme for sparse vectors) which uses low space. We strengthen previous lower bounds to show that these results are optimal.
[ { "version": "v1", "created": "Sat, 5 Feb 2005 23:22:37 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Mortensen", "Christian Worm", "" ], [ "Pagh", "Rasmus", "" ], [ "Patrascu", "Mihai", "" ] ]
cs/0502041
Mihai Patrascu
Mihai Patrascu and Erik D. Demaine
Logarithmic Lower Bounds in the Cell-Probe Model
Second version contains significant changes to the presentation. 32 pages, 1 figure. Journal version of two conference publications: "Tight Bounds for the Partial-Sums Problem" Proc. 15th ACM-SIAM Symposium on Discrete Algorithms (SODA'04), pp 20-29. "Lower Bounds for Dynamic Connectivity" Proc. 36th ACM Symposium on Theory of Computing (STOC'04), pp 546-553
null
null
null
cs.DS cs.CC
null
We develop a new technique for proving cell-probe lower bounds on dynamic data structures. This technique enables us to prove an amortized randomized Omega(lg n) lower bound per operation for several data structural problems on n elements, including partial sums, dynamic connectivity among disjoint paths (or a forest or a graph), and several other dynamic graph problems (by simple reductions). Such a lower bound breaks a long-standing barrier of Omega(lg n / lglg n) for any dynamic language membership problem. It also establishes the optimality of several existing data structures, such as Sleator and Tarjan's dynamic trees. We also prove the first Omega(log_B n) lower bound in the external-memory model without assumptions on the data structure (such as the comparison model). Our lower bounds also give a query-update trade-off curve matched, e.g., by several data structures for dynamic connectivity in graphs. We also prove matching upper and lower bounds for partial sums when parameterized by the word size and the maximum additive change in an update.
[ { "version": "v1", "created": "Tue, 8 Feb 2005 03:03:55 GMT" }, { "version": "v2", "created": "Sat, 28 May 2005 21:49:32 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Patrascu", "Mihai", "" ], [ "Demaine", "Erik D.", "" ] ]
cs/0502054
Ion Mandoiu
Ion I. Mandoiu, Claudia Prajescu, Dragos Trinca
Improved Tag Set Design and Multiplexing Algorithms for Universal Arrays
null
null
null
null
cs.DS
null
In this paper we address two optimization problems arising in the design of genomic assays based on universal tag arrays. First, we address the universal array tag set design problem. For this problem, we extend previous formulations to incorporate antitag-to-antitag hybridization constraints in addition to constraints on antitag-to-tag hybridization specificity, establish a constructive upper bound on the maximum number of tags satisfying the extended constraints, and propose a simple greedy tag selection algorithm. Second, we give methods for improving the multiplexing rate in large-scale genomic assays by combining primer selection with tag assignment. Experimental results on simulated data show that this integrated optimization leads to reductions of up to 50% in the number of required arrays.
[ { "version": "v1", "created": "Thu, 10 Feb 2005 20:20:53 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Mandoiu", "Ion I.", "" ], [ "Prajescu", "Claudia", "" ], [ "Trinca", "Dragos", "" ] ]
cs/0502065
Ion Mandoiu
Bhaskar DasGupta, Kishori M. Konwar, Ion I. Mandoiu, Alex A. Shvartsman
Highly Scalable Algorithms for Robust String Barcoding
null
null
null
null
cs.DS
null
String barcoding is a recently introduced technique for genomic-based identification of microorganisms. In this paper we describe the engineering of highly scalable algorithms for robust string barcoding. Our methods enable distinguisher selection based on whole genomic sequences of hundreds of microorganisms of up to bacterial size on a well-equipped workstation, and can be easily parallelized to further extend the applicability range to thousands of bacterial size genomes. Experimental results on both randomly generated and NCBI genomic data show that whole-genome based selection results in a number of distinguishers nearly matching the information theoretic lower bounds for the problem.
[ { "version": "v1", "created": "Mon, 14 Feb 2005 22:19:52 GMT" } ]
"2016-08-31T00:00:00"
[ [ "DasGupta", "Bhaskar", "" ], [ "Konwar", "Kishori M.", "" ], [ "Mandoiu", "Ion I.", "" ], [ "Shvartsman", "Alex A.", "" ] ]
cs/0502070
Erik Demaine
Erik D. Demaine and MohammadTaghi Hajiaghayi
Bidimensionality, Map Graphs, and Grid Minors
12 pages
null
null
null
cs.DM cs.DS
null
In this paper we extend the theory of bidimensionality to two families of graphs that do not exclude fixed minors: map graphs and power graphs. In both cases we prove a polynomial relation between the treewidth of a graph in the family and the size of the largest grid minor. These bounds improve the running times of a broad class of fixed-parameter algorithms. Our novel technique of using approximate max-min relations between treewidth and size of grid minors is powerful, and we show how it can also be used, e.g., to prove a linear relation between the treewidth of a bounded-genus graph and the treewidth of its dual.
[ { "version": "v1", "created": "Wed, 16 Feb 2005 19:01:50 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Demaine", "Erik D.", "" ], [ "Hajiaghayi", "MohammadTaghi", "" ] ]
cs/0502073
Maxime Crochemore
Maxime Crochemore (IGM), Jacques D\'esarm\'enien (IGM), Dominique Perrin (IGM)
A note on the Burrows-Wheeler transformation
2004
null
null
CDP04tcs
cs.DS
null
We relate the Burrows-Wheeler transformation with a result in combinatorics on words known as the Gessel-Reutenauer transformation.
[ { "version": "v1", "created": "Thu, 17 Feb 2005 07:06:28 GMT" } ]
"2016-08-16T00:00:00"
[ [ "Crochemore", "Maxime", "", "IGM" ], [ "Désarménien", "Jacques", "", "IGM" ], [ "Perrin", "Dominique", "", "IGM" ] ]
cs/0502075
Sudipto Guha
Sudipto Guha
How far will you walk to find your shortcut: Space Efficient Synopsis Construction Algorithms
null
null
null
null
cs.DS cs.DB
null
In this paper we consider the wavelet synopsis construction problem without the restriction that we only choose a subset of coefficients of the original data. We provide the first near optimal algorithm. We arrive at the above algorithm by considering space efficient algorithms for the restricted version of the problem. In this context we improve previous algorithms by almost a linear factor and reduce the required space to almost linear. Our techniques also extend to histogram construction, and improve the space-running time tradeoffs for V-Opt and range query histograms. We believe the idea applies to a broad range of dynamic programs and demonstrate it by showing improvements in a knapsack-like setting seen in construction of Extended Wavelets.
[ { "version": "v1", "created": "Fri, 18 Feb 2005 00:35:58 GMT" } ]
"2009-09-29T00:00:00"
[ [ "Guha", "Sudipto", "" ] ]
cs/0503023
Josiah Carlson
Josiah Carlson and David Eppstein
The Weighted Maximum-Mean Subtree and Other Bicriterion Subtree Problems
10 pages
null
null
null
cs.CG cs.DS
null
We consider problems in which we are given a rooted tree as input, and must find a subtree with the same root, optimizing some objective function of the nodes in the subtree. When this function is the sum of constant node weights, the problem is trivially solved in linear time. When the objective is the sum of weights that are linear functions of a parameter, we show how to list all optima for all possible parameter values in O(n log n) time; this parametric optimization problem can be used to solve many bicriterion optimization problems, in which each node has two values xi and yi associated with it, and the objective function is a bivariate function f(SUM(xi),SUM(yi)) of the sums of these two values. A special case, when f is the ratio of the two sums, is the Weighted Maximum-Mean Subtree Problem, or equivalently the Fractional Prize-Collecting Steiner Tree Problem on Trees; for this special case, we provide a linear time algorithm for this problem when all weights are positive, improving a previous O(n log n) solution, and prove that the problem is NP-complete when negative weights are allowed.
[ { "version": "v1", "created": "Wed, 9 Mar 2005 18:16:14 GMT" }, { "version": "v2", "created": "Wed, 4 May 2005 21:45:54 GMT" }, { "version": "v3", "created": "Tue, 16 Aug 2005 06:06:57 GMT" }, { "version": "v4", "created": "Tue, 6 Dec 2005 01:56:17 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Carlson", "Josiah", "" ], [ "Eppstein", "David", "" ] ]
cs/0503057
Ion Mandoiu
Ion I. Mandoiu and Dragos Trinca
Exact and Approximation Algorithms for DNA Tag Set Design
null
null
null
null
cs.DS
null
In this paper we propose new solution methods for designing tag sets for use in universal DNA arrays. First, we give integer linear programming formulations for two previous formalizations of the tag set design problem, and show that these formulations can be solved to optimality for instance sizes of practical interest by using general purpose optimization packages. Second, we note the benefits of periodic tags, and establish an interesting connection between the tag design problem and the problem of packing the maximum number of vertex-disjoint directed cycles in a given graph. We show that combining a simple greedy cycle packing algorithm with a previously proposed alphabetic tree search strategy yields an increase of over 40% in the number of tags compared to previous methods.
[ { "version": "v1", "created": "Wed, 23 Mar 2005 02:36:14 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Mandoiu", "Ion I.", "" ], [ "Trinca", "Dragos", "" ] ]
cs/0503065
Rachid Echahed
Dominique Duval (LMC - IMAG), Rachid Echahed (Leibniz - IMAG), Frederic Prost (Leibniz - IMAG)
Data-Structure Rewriting
null
null
null
null
cs.PL cs.DS
null
We tackle the problem of data-structure rewriting including pointer redirections. We propose two basic rewrite steps: (i) Local Redirection and Replacement steps the aim of which is redirecting specific pointers determined by means of a pattern, as well as adding new information to an existing data ; and (ii) Global Redirection steps which are aimed to redirect all pointers targeting a node towards another one. We define these two rewriting steps following the double pushout approach. We define first the category of graphs we consider and then define rewrite rules as pairs of graph homomorphisms of the form "L <- K ->R". Unfortunately, inverse pushouts (complement pushouts) are not unique in our setting and pushouts do not always exist. Therefore, we define rewriting steps so that a rewrite rule can always be performed once a matching is found.
[ { "version": "v1", "created": "Thu, 24 Mar 2005 09:55:42 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Duval", "Dominique", "", "LMC - IMAG" ], [ "Echahed", "Rachid", "", "Leibniz - IMAG" ], [ "Prost", "Frederic", "", "Leibniz - IMAG" ] ]
cs/0504023
Ioannis Giotis
Ioannis Giotis and Venkatesan Guruswami
Correlation Clustering with a Fixed Number of Clusters
16 pages
null
null
null
cs.DS
null
We continue the investigation of problems concerning correlation clustering or clustering with qualitative information, which is a clustering formulation that has been studied recently. The basic setup here is that we are given as input a complete graph on n nodes (which correspond to nodes to be clustered) whose edges are labeled + (for similar pairs of items) and - (for dissimilar pairs of items). Thus we have only as input qualitative information on similarity and no quantitative distance measure between items. The quality of a clustering is measured in terms of its number of agreements, which is simply the number of edges it correctly classifies, that is the sum of number of - edges whose endpoints it places in different clusters plus the number of + edges both of whose endpoints it places within the same cluster. In this paper, we study the problem of finding clusterings that maximize the number of agreements, and the complementary minimization version where we seek clusterings that minimize the number of disagreements. We focus on the situation when the number of clusters is stipulated to be a small constant k. Our main result is that for every k, there is a polynomial time approximation scheme for both maximizing agreements and minimizing disagreements. (The problems are NP-hard for every k >= 2.) The main technical work is for the minimization version, as the PTAS for maximizing agreements follows along the lines of the property tester for Max k-CUT. In contrast, when the number of clusters is not specified, the problem of minimizing disagreements was shown to be APX-hard, even though the maximization version admits a PTAS.
[ { "version": "v1", "created": "Wed, 6 Apr 2005 22:36:03 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Giotis", "Ioannis", "" ], [ "Guruswami", "Venkatesan", "" ] ]
cs/0504026
Yongxi Cheng
Yongxi Cheng, Xiaoming Sun, Yiqun Lisa Yin
Searching Monotone Multi-dimensional Arrays
13 pages, 2 figures; same results, presentation improved, add two figures
null
null
null
cs.DS cs.DM
null
In this paper we investigate the problem of searching monotone multi-dimensional arrays. We generalize Linial and Saks' search algorithm \cite{LS1} for monotone 3-dimensional arrays to $d$-dimensions with $d\geq 4$. Our new search algorithm is asymptotically optimal for $d=4$.
[ { "version": "v1", "created": "Thu, 7 Apr 2005 15:58:18 GMT" }, { "version": "v2", "created": "Mon, 21 Aug 2006 07:52:29 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Cheng", "Yongxi", "" ], [ "Sun", "Xiaoming", "" ], [ "Yin", "Yiqun Lisa", "" ] ]
cs/0504029
Devavrat Shah
Damon Mosk-Aoyama and Devavrat Shah
Fast Distributed Algorithms for Computing Separable Functions
15 pages
null
null
null
cs.NI cs.DC cs.DS
null
The problem of computing functions of values at the nodes in a network in a totally distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and ad-hoc networks. The task of computing separable functions, which can be written as linear combinations of functions of individual variables, is studied in this context. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions. The running time of the algorithm is shown to depend on the running time of a minimum computation algorithm used as a subroutine. Using a randomized gossip mechanism for minimum computation as the subroutine yields a complete totally distributed algorithm for computing separable functions. For a class of graphs with small spectral gap, such as grid graphs, the time used by the algorithm to compute averages is of a smaller order than the time required by a known iterative averaging scheme.
[ { "version": "v1", "created": "Fri, 8 Apr 2005 06:49:29 GMT" }, { "version": "v2", "created": "Sat, 9 Apr 2005 02:53:53 GMT" }, { "version": "v3", "created": "Sat, 4 Feb 2006 21:47:36 GMT" }, { "version": "v4", "created": "Sun, 22 Apr 2007 23:19:43 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Mosk-Aoyama", "Damon", "" ], [ "Shah", "Devavrat", "" ] ]
cs/0504103
Neal E. Young
Marek Chrobak and Claire Kenyon and John Noga and Neal E. Young
Incremental Medians via Online Bidding
conference version appeared in LATIN 2006 as "Oblivious Medians via Online Bidding"
Algorithmica 50(4):455-478(2008)
10.1007/s00453-007-9005-x
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the k-median problem we are given sets of facilities and customers, and distances between them. For a given set F of facilities, the cost of serving a customer u is the minimum distance between u and a facility in F. The goal is to find a set F of k facilities that minimizes the sum, over all customers, of their service costs. Following Mettu and Plaxton, we study the incremental medians problem, where k is not known in advance, and the algorithm produces a nested sequence of facility sets where the kth set has size k. The algorithm is c-cost-competitive if the cost of each set is at most c times the cost of the optimum set of size k. We give improved incremental algorithms for the metric version: an 8-cost-competitive deterministic algorithm, a 2e ~ 5.44-cost-competitive randomized algorithm, a (24+epsilon)-cost-competitive, poly-time deterministic algorithm, and a (6e+epsilon ~ 16.31)-cost-competitive, poly-time randomized algorithm. The algorithm is s-size-competitive if the cost of the kth set is at most the minimum cost of any set of size k, and has size at most s k. The optimal size-competitive ratios for this problem are 4 (deterministic) and e (randomized). We present the first poly-time O(log m)-size-approximation algorithm for the offline problem and the first poly-time O(log m)-size-competitive algorithm for the incremental problem. Our proofs reduce incremental medians to the following online bidding problem: faced with an unknown threshold T, an algorithm submits "bids" until it submits a bid that is at least the threshold. It pays the sum of all its bids. We prove that folklore algorithms for online bidding are optimally competitive.
[ { "version": "v1", "created": "Wed, 27 Apr 2005 00:07:32 GMT" }, { "version": "v2", "created": "Tue, 24 Jan 2006 22:53:09 GMT" }, { "version": "v3", "created": "Thu, 28 May 2020 12:58:50 GMT" } ]
"2020-05-29T00:00:00"
[ [ "Chrobak", "Marek", "" ], [ "Kenyon", "Claire", "" ], [ "Noga", "John", "" ], [ "Young", "Neal E.", "" ] ]
cs/0504104
Neal E. Young
Marek Chrobak and Claire Kenyon and Neal E. Young
The reverse greedy algorithm for the metric k-median problem
to appear in IPL. preliminary version in COCOON '05
Information Processing Letters 97:68-72(2006)
10.1016/j.ipl.2005.09.009
null
cs.DS
null
The Reverse Greedy algorithm (RGreedy) for the k-median problem works as follows. It starts by placing facilities on all nodes. At each step, it removes a facility to minimize the resulting total distance from the customers to the remaining facilities. It stops when k facilities remain. We prove that, if the distance function is metric, then the approximation ratio of RGreedy is between Omega(log n / log log n) and O(log n).
[ { "version": "v1", "created": "Wed, 27 Apr 2005 19:36:08 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2005 17:43:50 GMT" } ]
"2015-06-02T00:00:00"
[ [ "Chrobak", "Marek", "" ], [ "Kenyon", "Claire", "" ], [ "Young", "Neal E.", "" ] ]
cs/0504110
Lawrence Ioannou
Lawrence M. Ioannou
Computing finite-dimensional bipartite quantum separability
Replaced original archive submission with PhD thesis, which subsumes and mildly corrects it
null
null
null
cs.DS quant-ph
null
Ever since entanglement was identified as a computational and cryptographic resource, effort has been made to find an efficient way to tell whether a given density matrix represents an unentangled, or separable, state. Essentially, this is the quantum separability problem. Chapters 1 to 3 motivate a new interior-point algorithm which, given the expected values of a subset of an orthogonal basis of observables of an otherwise unknown quantum state, searches for an entanglement witness in the span of the subset of observables. When all the expected values are known, the algorithm solves the separability problem. In Chapter 4, I give the motivation for the algorithm and show how it can be used in a particular physical scenario to detect entanglement (or decide separability) of an unknown quantum state using as few quantum resources as possible. I then explain the intuitive idea behind the algorithm and relate it to the standard algorithms of its kind. I end the chapter with a comparison of the complexities of the algorithms surveyed in Chapter 3. Finally, in Chapter 5, I present the details of the algorithm and discuss its performance relative to standard methods.
[ { "version": "v1", "created": "Fri, 29 Apr 2005 16:42:54 GMT" }, { "version": "v2", "created": "Sat, 30 Apr 2005 12:11:17 GMT" }, { "version": "v3", "created": "Wed, 15 Feb 2006 15:47:45 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Ioannou", "Lawrence M.", "" ] ]
cs/0505005
Sandor P. Fekete
Jan van der Veen and Sandor P. Fekete and Ali Ahmadinia and Christophe Bobda and Frank Hannig and Juergen Teich
Defragmenting the Module Layout of a Partially Reconfigurable Device
10 pages, 11 figures, 1 table, Latex, to appear in "Engineering of Reconfigurable Systems and Algorithms" as a "Distinguished Paper"
null
null
null
cs.AR cs.DS
null
Modern generations of field-programmable gate arrays (FPGAs) allow for partial reconfiguration. In an online context, where the sequence of modules to be loaded on the FPGA is unknown beforehand, repeated insertion and deletion of modules leads to progressive fragmentation of the available space, making defragmentation an important issue. We address this problem by proposing an online and an offline component for the defragmentation of the available space. We consider defragmenting the module layout on a reconfigurable device. This corresponds to solving a two-dimensional strip packing problem. Problems of this type are NP-hard in the strong sense, and previous algorithmic results are rather limited. Based on a graph-theoretic characterization of feasible packings, we develop a method that can solve two-dimensional defragmentation instances of practical size to optimality. Our approach is validated for a set of benchmark instances.
[ { "version": "v1", "created": "Mon, 2 May 2005 01:10:04 GMT" } ]
"2007-05-23T00:00:00"
[ [ "van der Veen", "Jan", "" ], [ "Fekete", "Sandor P.", "" ], [ "Ahmadinia", "Ali", "" ], [ "Bobda", "Christophe", "" ], [ "Hannig", "Frank", "" ], [ "Teich", "Juergen", "" ] ]
cs/0505007
Dragos Trinca
Dragos Trinca
Adaptive Codes: A New Class of Non-standard Variable-length Codes
10 pages
null
null
null
cs.DS
null
We introduce a new class of non-standard variable-length codes, called adaptive codes. This class of codes associates a variable-length codeword to the symbol being encoded depending on the previous symbols in the input data string. An efficient algorithm for constructing adaptive codes of order one is presented. Then, we introduce a natural generalization of adaptive codes, called GA codes.
[ { "version": "v1", "created": "Mon, 2 May 2005 09:40:02 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Trinca", "Dragos", "" ] ]
cs/0505009
Arindam Mitra
Arindam Mitra
Human being is a living random number generator
PDF, Revised
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional wisdom holds that mathematical operations are needed to generate numbers from numbers. We point out that true random numbers can be generated by numbers through an algorithmic process without any mathematical operation. This implies that the human brain itself is a living true random number generator, and that it can meet the enormous human demand for true random numbers.
[ { "version": "v1", "created": "Tue, 3 May 2005 15:42:24 GMT" }, { "version": "v10", "created": "Tue, 21 Aug 2007 15:28:48 GMT" }, { "version": "v11", "created": "Thu, 23 Aug 2007 15:10:06 GMT" }, { "version": "v12", "created": "Wed, 14 Nov 2007 16:04:01 GMT" }, { "version": "v13", "created": "Fri, 13 Jun 2008 15:41:34 GMT" }, { "version": "v14", "created": "Mon, 14 Jul 2008 13:44:30 GMT" }, { "version": "v15", "created": "Thu, 24 Jul 2008 14:43:57 GMT" }, { "version": "v16", "created": "Sat, 27 Dec 2008 15:56:26 GMT" }, { "version": "v17", "created": "Tue, 16 Jun 2009 10:57:47 GMT" }, { "version": "v2", "created": "Thu, 5 May 2005 13:09:26 GMT" }, { "version": "v3", "created": "Fri, 8 Jul 2005 06:20:22 GMT" }, { "version": "v4", "created": "Thu, 26 Oct 2006 14:34:45 GMT" }, { "version": "v5", "created": "Tue, 9 Jan 2007 15:54:27 GMT" }, { "version": "v6", "created": "Wed, 31 Jan 2007 12:44:08 GMT" }, { "version": "v7", "created": "Wed, 7 Feb 2007 15:48:03 GMT" }, { "version": "v8", "created": "Thu, 8 Mar 2007 14:49:33 GMT" }, { "version": "v9", "created": "Wed, 18 Jul 2007 15:24:15 GMT" } ]
"2009-06-16T00:00:00"
[ [ "Mitra", "Arindam", "" ] ]
cs/0505015
Tomasz Suslo
Tomasz Suslo
Complex Mean and Variance of Linear Regression Model for High-Noised Systems by Kriging
3 pages
null
null
null
cs.NA cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of the paper is to derive the complex-valued least-squares estimator for bias-noise mean and variance.
[ { "version": "v1", "created": "Sat, 7 May 2005 12:11:56 GMT" }, { "version": "v2", "created": "Thu, 30 Nov 2006 13:06:35 GMT" }, { "version": "v3", "created": "Sat, 8 Nov 2008 12:02:44 GMT" }, { "version": "v4", "created": "Wed, 29 Jul 2009 10:31:49 GMT" } ]
"2009-07-29T00:00:00"
[ [ "Suslo", "Tomasz", "" ] ]
cs/0505027
Vincent Lefevre
Vincent Lef\`evre (INRIA Lorraine - LORIA)
The Generic Multiple-Precision Floating-Point Addition With Exact Rounding (as in the MPFR Library)
Conference website at http://cca-net.de/rnc6/
null
null
null
cs.DS
null
We study the multiple-precision addition of two positive floating-point numbers in base 2, with exact rounding, as specified in the MPFR library, i.e. where each number has its own precision. We show how the best possible complexity (up to a constant factor that depends on the implementation) can be obtained.
[ { "version": "v1", "created": "Wed, 11 May 2005 14:22:54 GMT" } ]
"2016-08-16T00:00:00"
[ [ "Lefèvre", "Vincent", "", "INRIA Lorraine - LORIA" ] ]
cs/0505028
Irmtraud Meyer
Istvan Miklos, Irmtraud M. Meyer
A linear memory algorithm for Baum-Welch training
14 pages, 1 figure version 2: fixed some errors, final version of paper
BMC Bioinformatics (2005) 6:231
null
null
cs.LG cs.DS q-bio.QM
null
Background: Baum-Welch training is an expectation-maximisation algorithm for training the emission and transition probabilities of hidden Markov models in a fully automated way. Methods and results: We introduce a linear space algorithm for Baum-Welch training. For a hidden Markov model with M states, T free transition and E free emission parameters, and an input sequence of length L, our new algorithm requires O(M) memory and O(L M T_max (T + E)) time for one Baum-Welch iteration, where T_max is the maximum number of states that any state is connected to. The most memory efficient algorithm until now was the checkpointing algorithm with O(log(L) M) memory and O(log(L) L M T_max) time requirement. Our novel algorithm thus renders the memory requirement completely independent of the length of the training sequences. More generally, for an n-hidden Markov model and n input sequences of length L, the memory requirement of O(log(L) L^(n-1) M) is reduced to O(L^(n-1) M) memory while the running time is changed from O(log(L) L^n M T_max + L^n (T + E)) to O(L^n M T_max (T + E)). Conclusions: For the large class of hidden Markov models used for example in gene prediction, whose number of states does not scale with the length of the input sequence, our novel algorithm can thus be both faster and more memory-efficient than any of the existing algorithms.
[ { "version": "v1", "created": "Wed, 11 May 2005 16:45:58 GMT" }, { "version": "v2", "created": "Mon, 30 May 2005 19:46:56 GMT" }, { "version": "v3", "created": "Tue, 16 Aug 2005 12:43:07 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Miklos", "Istvan", "" ], [ "Meyer", "Irmtraud M.", "" ] ]
cs/0505031
Rudini Sampaio
Rudini M. Sampaio, Horacio H. Yanasse
Study and Implementation of Routing Algorithms over Graphs in a Geographic Information System
INFOCOMP Journal of Computer Science
INFOCOMP Journal of Computer Science, 3(1), 2004
null
null
cs.MS cs.DS
null
This article presents a graphical software implementation of various operations research algorithms, such as minimum path, minimum spanning tree, the Chinese postman problem, and the travelling salesman problem.
[ { "version": "v1", "created": "Wed, 11 May 2005 18:50:32 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Sampaio", "Rudini M.", "" ], [ "Yanasse", "Horacio H.", "" ] ]
cs/0505048
David Eppstein
David Eppstein, Michael T. Goodrich, and Daniel S. Hirschberg
Improved Combinatorial Group Testing Algorithms for Real-World Problem Sizes
18 pages; an abbreviated version of this paper is to appear at the 9th Worksh. Algorithms and Data Structures
SIAM J. Computing 36(5):1360-1375, 2007
10.1137/050631847
null
cs.DS
null
We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes. For example, our two-stage algorithm matches the information theoretic lower bound for the number of tests in a combinatorial group testing regimen.
[ { "version": "v1", "created": "Wed, 18 May 2005 20:25:16 GMT" } ]
"2011-11-09T00:00:00"
[ [ "Eppstein", "David", "" ], [ "Goodrich", "Michael T.", "" ], [ "Hirschberg", "Daniel S.", "" ] ]
cs/0505061
Dragos Trinca
Dragos Trinca
EAH: A New Encoder based on Adaptive Variable-length Codes
16 pages
null
null
null
cs.DS
null
Adaptive variable-length codes associate a variable-length codeword to the symbol being encoded depending on the previous symbols in the input string. This class of codes has been recently presented in [Dragos Trinca, arXiv:cs.DS/0505007] as a new class of non-standard variable-length codes. New algorithms for data compression, based on adaptive variable-length codes of order one and Huffman's algorithm, have been recently presented in [Dragos Trinca, ITCC 2004]. In this paper, we extend the work done so far by the following contributions: first, we propose an improved generalization of these algorithms, called EAHn. Second, we compute the entropy bounds for EAHn, using the well-known bounds for Huffman's algorithm. Third, we discuss implementation details and give reports of experimental results obtained on some well-known corpora. Finally, we describe a parallel version of EAHn using the PRAM model of computation.
[ { "version": "v1", "created": "Tue, 24 May 2005 06:53:33 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Trinca", "Dragos", "" ] ]
cs/0505066
Udayan Khurana
Udayan Khuarana
Decision Sort and its Parallel Implementation
5 pages, 3 tables, 1 figure, National Conference on Bioinformatics Computing'05
null
null
null
cs.DS
null
In this paper, a sorting technique is presented that takes as input a data set whose primary key domain is known to the sorting algorithm, and runs with a time efficiency of O(n+k), where k is the size of the primary key domain. It is shown that the algorithm has applicability over a wide range of data sets. Later, a parallel formulation of the same is proposed and its effectiveness is argued. Though this algorithm is applicable over a wide range of general data sets, it finds special application (much superior to others) in sorting information that arrives in parts and in cases where the input data is huge in size.
[ { "version": "v1", "created": "Tue, 24 May 2005 15:41:27 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Khuarana", "Udayan", "" ] ]
cs/0505071
Taneli Mielik\"ainen
Taneli Mielik\"ainen
Summarization Techniques for Pattern Collections in Data Mining
PhD Thesis, Department of Computer Science, University of Helsinki
null
null
A-2005-1, Department of Computer Science, University of Helsinki
cs.DB cs.AI cs.DS
null
Discovering patterns from data is an important task in data mining. There exist techniques to find large collections of many kinds of patterns from data very efficiently. A collection of patterns can be regarded as a summary of the data. A major difficulty with patterns is that pattern collections summarizing the data well are often very large. In this dissertation we describe methods for summarizing pattern collections in order to make them also more understandable. More specifically, we focus on the following themes: 1) Quality value simplifications. 2) Pattern orderings. 3) Pattern chains and antichains. 4) Change profiles. 5) Inverse pattern discovery.
[ { "version": "v1", "created": "Thu, 26 May 2005 04:41:15 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Mielikäinen", "Taneli", "" ] ]
cs/0505075
Yongxi Cheng
Yongxi Cheng, Xi Chen, Yiqun Lisa Yin
On Searching a Table Consistent with Division Poset
16 pages, no figure; same results, representation improved, add references
null
null
null
cs.DM cs.DS
null
Suppose $P_n=\{1,2,...,n\}$ is a partially ordered set with the partial order defined by divisibility, that is, for any two distinct elements $i,j\in P_n$ satisfying $i$ divides $j$, $i<_{P_n} j$. A table $A_n=\{a_i|i=1,2,...,n\}$ of distinct real numbers is said to be \emph{consistent} with $P_n$, provided for any two distinct elements $i,j\in \{1,2,...,n\}$ satisfying $i$ divides $j$, $a_i< a_j$. Given a real number $x$, we want to determine whether $x\in A_n$, by comparing $x$ with as few entries of $A_n$ as possible. In this paper we investigate the complexity $\tau(n)$, measured in the number of comparisons, of the above search problem. We present a $\frac{55n}{72}+O(\ln^2 n)$ search algorithm for $A_n$ and prove a lower bound $({3/4}+{17/2160})n+O(1)$ on $\tau(n)$ by using an adversary argument.
[ { "version": "v1", "created": "Thu, 26 May 2005 17:45:48 GMT" }, { "version": "v2", "created": "Thu, 6 Apr 2006 04:46:28 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Cheng", "Yongxi", "" ], [ "Chen", "Xi", "" ], [ "Yin", "Yiqun Lisa", "" ] ]
cs/0505077
Sagi Snir
Shlomo Moran and Sagi Snir
Efficient Approximation of Convex Recolorings
null
null
null
null
cs.DS
null
A coloring of a tree is convex if the vertices that pertain to any color induce a connected subtree; a partial coloring (which assigns colors to some of the vertices) is convex if it can be completed to a convex (total) coloring. Convex colorings of trees arise in areas such as phylogenetics, linguistics, etc. E.g., a perfect phylogenetic tree is one in which the states of each character induce a convex coloring of the tree. Research on perfect phylogeny is usually focused on finding a tree so that few predetermined partial colorings of its vertices are convex. When a coloring of a tree is not convex, it is desirable to know "how far" it is from a convex one. In [19], a natural measure for this distance, called the recoloring distance, was defined: the minimal number of color changes at the vertices needed to make the coloring convex. This can be viewed as minimizing the number of "exceptional vertices" w.r.t. a closest convex coloring. The problem was proved to be NP-hard even for colored strings. In this paper we continue the work of [19], and present a 2-approximation algorithm for convex recoloring of strings whose running time is O(cn), where c is the number of colors and n is the size of the input, and an O(cn^2)-time 3-approximation algorithm for convex recoloring of trees.
[ { "version": "v1", "created": "Fri, 27 May 2005 23:16:48 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Moran", "Shlomo", "" ], [ "Snir", "Sagi", "" ] ]
cs/0506027
Travis Gagie
Travis Gagie
Sorting a Low-Entropy Sequence
null
null
null
null
cs.DS
null
We give the first sorting algorithm with bounds in terms of higher-order entropies: let $S$ be a sequence of length $m$ containing $n$ distinct elements and let $H_\ell(S)$ be the $\ell$th-order empirical entropy of $S$, with $n^{\ell + 1} \log n \in O(m)$; our algorithm sorts $S$ using $(H_\ell(S) + O(1)) m$ comparisons.
[ { "version": "v1", "created": "Wed, 8 Jun 2005 22:15:18 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Gagie", "Travis", "" ] ]
cs/0506104
Miroslaw Truszczynski
Z. Lonc, M. Truszczynski
Computing minimal models, stable models and answer sets
55 pages, 1 figure. To appear in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.LO cs.DS
null
We propose and study algorithms to compute minimal models, stable models and answer sets of t-CNF theories, and normal and disjunctive t-programs. We are especially interested in algorithms with non-trivial worst-case performance bounds. The bulk of the paper is concerned with the classes of 2- and 3-CNF theories, and normal and disjunctive 2- and 3-programs, for which we obtain significantly stronger results than those implied by our general considerations. We show that one can find all minimal models of 2-CNF theories and all answer sets of disjunctive 2-programs in time O(m 1.4422..^n). Our main results concern computing stable models of normal 3-programs, minimal models of 3-CNF theories and answer sets of disjunctive 3-programs. We design algorithms that run in time O(m 1.6701..^n), in the case of the first problem, and in time O(mn^2 2.2782..^n), in the case of the latter two. All these bounds improve by exponential factors the best algorithms known previously. We also obtain closely related upper bounds on the number of minimal models, stable models and answer sets a t-CNF theory, a normal t-program or a disjunctive t-program may have. To appear in Theory and Practice of Logic Programming (TPLP).
[ { "version": "v1", "created": "Thu, 30 Jun 2005 01:51:41 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Lonc", "Z.", "" ], [ "Truszczynski", "M.", "" ] ]
cs/0507014
Moshe Schwartz
Moshe Schwartz
Isomorphism of graphs-a polynomial test
null
null
null
null
cs.DS
null
An explicit algorithm is presented for testing whether two non-directed graphs are isomorphic or not. It is shown that for a graph of n vertices, the number of independent operations needed for the test is polynomial in n. A proof that the algorithm actually performs the test is presented.
[ { "version": "v1", "created": "Wed, 6 Jul 2005 11:35:42 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Schwartz", "Moshe", "" ] ]
cs/0507047
Dmitri Krioukov
Xenofontas Dimitropoulos, Dmitri Krioukov, Bradley Huffaker, kc claffy, George Riley
Inferring AS Relationships: Dead End or Lively Beginning?
null
WEA 2005; LNCS 3503, p. 113, 2005
10.1007/11427186_12
null
cs.NI cs.DS
null
Recent techniques for inferring business relationships between ASs have yielded maps that have extremely few invalid BGP paths in the terminology of Gao. However, some relationships inferred by these newer algorithms are incorrect, leading to the deduction of unrealistic AS hierarchies. We investigate this problem and discover what causes it. Having obtained such insight, we generalize the problem of AS relationship inference as a multiobjective optimization problem with node-degree-based corrections to the original objective function of minimizing the number of invalid paths. We solve the generalized version of the problem using the semidefinite programming relaxation of the MAX2SAT problem. Keeping the number of invalid paths small, we obtain a more veracious solution than that yielded by recent heuristics.
[ { "version": "v1", "created": "Tue, 19 Jul 2005 09:32:16 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Dimitropoulos", "Xenofontas", "" ], [ "Krioukov", "Dmitri", "" ], [ "Huffaker", "Bradley", "" ], [ "claffy", "kc", "" ], [ "Riley", "George", "" ] ]
cs/0507050
David Eppstein
Lars Arge, David Eppstein, Michael T. Goodrich
Skip-Webs: Efficient Distributed Data Structures for Multi-Dimensional Data Sets
8 pages, 4 figures. Appearing at 24th ACM SIGACT-SIGOPS Symp. Principles of Distributed Computing (PODC 2005), Las Vegas
null
null
null
cs.DC cs.CG cs.DS
null
We present a framework for designing efficient distributed data structures for multi-dimensional data. Our structures, which we call skip-webs, extend and improve previous randomized distributed data structures, including skipnets and skip graphs. Our framework applies to a general class of data querying scenarios, which include linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet. We show how to perform a query over such a set of n items spread among n hosts using O(log n / log log n) messages for one-dimensional data, or O(log n) messages for fixed-dimensional data, while using only O(log n) space per host. We also show how to make such structures dynamic so as to allow for insertions and deletions in O(log n) messages for quadtrees, octrees, and digital tries, and O(log n / log log n) messages for one-dimensional data. Finally, we show how to apply a blocking strategy to skip-webs to further improve message complexity for one-dimensional data when hosts can store more data.
[ { "version": "v1", "created": "Tue, 19 Jul 2005 20:30:33 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Arge", "Lars", "" ], [ "Eppstein", "David", "" ], [ "Goodrich", "Michael T.", "" ] ]
cs/0507051
David Eppstein
David Eppstein, Michael T. Goodrich, Jeremy Yu Meng
Confluent Layered Drawings
11 pages, 6 figures. A preliminary version of this paper appeared in Proc. 12th Int. Symp. Graph Drawing, New York, 2004, Lecture Notes in Comp. Sci. 3383, 2004, pp. 184-194
Algorithmica 47(4):439-452, 2007
10.1007/s00453-006-0159-8
null
cs.CG cs.DS
null
We combine the idea of confluent drawings with Sugiyama style drawings, in order to reduce the edge crossings in the resultant drawings. Furthermore, it is easier to understand the structures of graphs from the mixed style drawings. The basic idea is to cover a layered graph by complete bipartite subgraphs (bicliques), then replace bicliques with tree-like structures. The biclique cover problem is reduced to a special edge coloring problem and solved by heuristic coloring algorithms. Our method can be extended to obtain multi-depth confluent layered drawings.
[ { "version": "v1", "created": "Tue, 19 Jul 2005 22:25:53 GMT" } ]
"2007-06-14T00:00:00"
[ [ "Eppstein", "David", "" ], [ "Goodrich", "Michael T.", "" ], [ "Meng", "Jeremy Yu", "" ] ]
cs/0507053
David Eppstein
David Eppstein
Nonrepetitive Paths and Cycles in Graphs with Application to Sudoku
17 pages, 11 figures
null
null
null
cs.DS cs.AI
null
We provide a simple linear time transformation from a directed or undirected graph with labeled edges to an unlabeled digraph, such that paths in the input graph in which no two consecutive edges have the same label correspond to paths in the transformed graph and vice versa. Using this transformation, we provide efficient algorithms for finding paths and cycles with no two consecutive equal labels. We also consider related problems where the paths and cycles are required to be simple; we find efficient algorithms for the undirected case of these problems but show the directed case to be NP-complete. We apply our path and cycle finding algorithms in a program for generating and solving Sudoku puzzles, and show experimentally that they lead to effective puzzle-solving rules that may also be of interest to human Sudoku puzzle solvers.
[ { "version": "v1", "created": "Wed, 20 Jul 2005 15:58:30 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Eppstein", "David", "" ] ]
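The transformation described in the abstract above can be emulated directly by a breadth-first search over (vertex, incoming-label) states, which mirrors the idea of mapping the labeled graph to an unlabeled digraph. This is a hedged sketch for walks (the efficient algorithms are for possibly non-simple paths; the simple directed case is NP-complete per the abstract), with all names our own:

```python
from collections import deque

def nonrepetitive_path(edges, source, target):
    """Find a walk from source to target in an undirected edge-labeled
    graph such that no two consecutive edges share a label.
    edges: list of (u, v, label) triples."""
    adj = {}
    for u, v, lab in edges:
        adj.setdefault(u, []).append((v, lab))
        adj.setdefault(v, []).append((u, lab))
    # BFS over (vertex, label of the edge we arrived on) states.
    start = (source, None)
    parent = {start: None}
    queue = deque([start])
    while queue:
        vert, last = queue.popleft()
        if vert == target:
            path, state = [], (vert, last)
            while state is not None:
                path.append(state[0])
                state = parent[state]
            return path[::-1]
        for nxt, lab in adj.get(vert, []):
            if lab != last and (nxt, lab) not in parent:
                parent[(nxt, lab)] = (vert, last)
                queue.append((nxt, lab))
    return None
```

The state space has at most one state per (vertex, incident label) pair, so the search is near-linear in the number of edges.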
cs/0508006
Sandor P. Fekete
Sandor P. Fekete, Michael Kaufmann, Alexander Kroeller, and Katharina Lehmann
A New Approach for Boundary Recognition in Geometric Sensor Networks
4 pages, 5 figures, Latex, to appear in Canadian Conference on Computational Geometry (CCCG 2005)
null
null
null
cs.DS cs.DC
null
We describe a new approach for dealing with the following central problem in the self-organization of a geometric sensor network: Given a polygonal region R, and a large, dense set of sensor nodes that are scattered uniformly at random in R. There is no central control unit, and nodes can only communicate locally by wireless radio to all other nodes that are within communication radius r, without knowing their coordinates or distances to other nodes. The objective is to develop a simple distributed protocol that allows nodes to identify themselves as being located near the boundary of R and form connected pieces of the boundary. We give a comparison of several centrality measures commonly used in the analysis of social networks and show that restricted stress centrality is particularly suited for geometric networks; we provide mathematical as well as experimental evidence for the quality of this measure.
[ { "version": "v1", "created": "Mon, 1 Aug 2005 19:44:53 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Fekete", "Sandor P.", "" ], [ "Kaufmann", "Michael", "" ], [ "Kroeller", "Alexander", "" ], [ "Lehmann", "Katharina", "" ] ]
cs/0508045
Ion Mandoiu
Christoph Albrecht, Andrew B. Kahng, Ion I. Mandoiu, and Alexander Zelikovsky
Multicommodity Flow Algorithms for Buffered Global Routing
null
null
null
null
cs.DS
null
In this paper we describe a new algorithm for buffered global routing according to a prescribed buffer site map. Specifically, we describe a provably good multi-commodity flow based algorithm that finds a global routing minimizing buffer and wire congestion subject to given constraints on routing area (wirelength and number of buffers) and sink delays. Our algorithm allows computing the tradeoff curve between routing area and wire/buffer congestion under any combination of delay and capacity constraints, and simultaneously performs buffer/wire sizing, as well as layer and pin assignment. Experimental results show that near-optimal results are obtained with a practical runtime.
[ { "version": "v1", "created": "Sat, 6 Aug 2005 12:44:09 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Albrecht", "Christoph", "" ], [ "Kahng", "Andrew B.", "" ], [ "Mandoiu", "Ion I.", "" ], [ "Zelikovsky", "Alexander", "" ] ]
cs/0508083
Michael Baer
Michael B. Baer
A General Framework for Codes Involving Redundancy Minimization
7 pages, 5 figures, submitted to IEEE Trans. Inform. Theory
IEEE Transactions on Information Theory (2006)
10.1109/TIT.2005.860469
null
cs.IT cs.DS math.IT
null
A framework with two scalar parameters is introduced for various problems of finding a prefix code minimizing a coding penalty function. The framework encompasses problems previously proposed by Huffman, Campbell, Nath, and Drmota and Szpankowski, shedding light on the relationships among these problems. In particular, Nath's range of problems can be seen as bridging the minimum average redundancy problem of Huffman with the minimum maximum pointwise redundancy problem of Drmota and Szpankowski. Using this framework, two linear-time Huffman-like algorithms are devised for the minimum maximum pointwise redundancy problem, the only one in the framework not previously solved with a Huffman-like algorithm. Both algorithms provide solutions common to this problem and a subrange of Nath's problems, the second algorithm being distinguished by its ability to find the minimum variance solution among all solutions common to the minimum maximum pointwise redundancy and Nath problems. Simple redundancy bounds are also presented.
[ { "version": "v1", "created": "Thu, 18 Aug 2005 20:22:45 GMT" }, { "version": "v2", "created": "Tue, 1 Nov 2005 06:39:52 GMT" } ]
"2007-07-16T00:00:00"
[ [ "Baer", "Michael B.", "" ] ]
cs/0508084
Michael Baer
Michael B. Baer
Source Coding for Quasiarithmetic Penalties
22 pages, 3 figures, submitted to IEEE Trans. Inform. Theory, revised per suggestions of readers
IEEE Transactions on Information Theory (2006)
10.1109/TIT.2006.881728
null
cs.IT cs.DS math.IT
null
Huffman coding finds a prefix code that minimizes mean codeword length for a given probability distribution over a finite number of items. Campbell generalized the Huffman problem to a family of problems in which the goal is to minimize not mean codeword length but rather a generalized mean known as a quasiarithmetic or quasilinear mean. Such generalized means have a number of diverse applications, including applications in queueing. Several quasiarithmetic-mean problems have novel simple redundancy bounds in terms of a generalized entropy. A related property involves the existence of optimal codes: For ``well-behaved'' cost functions, optimal codes always exist for (possibly infinite-alphabet) sources having finite generalized entropy. Solving finite instances of such problems is done by generalizing an algorithm for finding length-limited binary codes to a new algorithm for finding optimal binary codes for any quasiarithmetic mean with a convex cost function. This algorithm can be performed using quadratic time and linear space, and can be extended to other penalty functions, some of which are solvable with similar space and time complexity, and others of which are solvable with slightly greater complexity. This reduces the computational complexity of a problem involving minimum delay in a queue, allows combinations of previously considered problems to be optimized, and greatly expands the space of problems solvable in quadratic time and linear space. The algorithm can be extended for purposes such as breaking ties among possibly different optimal codes, as with bottom-merge Huffman coding.
[ { "version": "v1", "created": "Thu, 18 Aug 2005 20:29:04 GMT" }, { "version": "v2", "created": "Tue, 1 Nov 2005 06:46:48 GMT" }, { "version": "v3", "created": "Tue, 14 Feb 2006 00:36:34 GMT" }, { "version": "v4", "created": "Sat, 11 Mar 2006 22:14:33 GMT" }, { "version": "v5", "created": "Mon, 22 May 2006 20:28:12 GMT" } ]
"2007-07-16T00:00:00"
[ [ "Baer", "Michael B.", "" ] ]
cs/0508086
Dragos Trinca
Dragos Trinca
High-performance BWT-based Encoders
12 pages
null
null
null
cs.DS
null
In 1994, Burrows and Wheeler developed a data compression algorithm which performs significantly better than Lempel-Ziv based algorithms. Since then, a lot of work has been done in order to improve their algorithm, which is based on a reversible transformation of the input string, called BWT (the Burrows-Wheeler transformation). In this paper, we propose a compression scheme based on BWT, MTF (move-to-front coding), and a version of the algorithms presented in [Dragos Trinca, ITCC-2004].
[ { "version": "v1", "created": "Sun, 21 Aug 2005 05:47:00 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Trinca", "Dragos", "" ] ]
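As a point of reference for the scheme above, the two classical building blocks (the BWT and move-to-front coding) can be sketched naively; this is not the paper's improved encoder, and the sentinel handling is our assumption:

```python
def bwt(s):
    """Naive Burrows-Wheeler transform using a sentinel character
    (O(n^2 log n); suffix-array constructions achieve linear time)."""
    s = s + "\0"  # sentinel, assumed absent from the input
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def mtf(s):
    """Move-to-front coding over the symbols that occur in s."""
    alphabet = sorted(set(s))
    out = []
    for ch in s:
        i = alphabet.index(ch)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))
    return out
```

The BWT groups equal contexts together, so the MTF output is dominated by small integers, which a final entropy coder then compresses well.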
cs/0508087
Dragos Trinca
Dragos Trinca
Modelling the Eulerian Path Problem using a String Matching Framework
10 pages
null
null
null
cs.DS
null
The well-known Eulerian path problem can be solved in polynomial time (more precisely, there exists a linear-time algorithm for this problem). In this paper, we model the problem using a string matching framework and then initiate an algorithmic study of a variant of this problem, called the (2,1)-STRING-MATCH problem (which is actually a generalization of the Eulerian path problem). We then present a polynomial-time algorithm for the (2,1)-STRING-MATCH problem, which is the most important result of this paper. Specifically, we obtain a lower bound of Omega(n) and an upper bound of O(n^{2}).
[ { "version": "v1", "created": "Sun, 21 Aug 2005 06:08:40 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Trinca", "Dragos", "" ] ]
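The linear-time solvability of the Eulerian path problem mentioned in the abstract above is classically realized by Hierholzer's algorithm; a sketch of that standard method (not the paper's string-matching formulation):

```python
from collections import defaultdict

def eulerian_path(edges):
    """Hierholzer's linear-time algorithm for an Eulerian path in an
    undirected multigraph given as a list of (u, v) pairs. Returns a
    vertex sequence using every edge exactly once, or None."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    odd = [x for x in adj if len(adj[x]) % 2 == 1]
    if len(odd) not in (0, 2):
        return None
    start = odd[0] if odd else next(iter(adj))
    used = [False] * len(edges)
    stack, path = [start], []
    while stack:
        v = stack[-1]
        # discard adjacency entries whose edge was consumed from the other side
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)
        else:
            path.append(stack.pop())
    if not all(used):
        return None  # edges lie in more than one component
    return path[::-1]
```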
cs/0508089
Dragos Trinca
Dragos Trinca
Modelling the EAH Data Compression Algorithm using Graph Theory
10 pages
null
null
null
cs.DS
null
Adaptive codes associate variable-length codewords to symbols being encoded depending on the previous symbols in the input data string. This class of codes has been introduced in [Dragos Trinca, cs.DS/0505007] as a new class of non-standard variable-length codes. New algorithms for data compression, based on adaptive codes of order one, have been presented in [Dragos Trinca, ITCC-2004], where we have experimentally shown that for a large class of input data strings, these algorithms substantially outperform the Lempel-Ziv universal data compression algorithm. EAH has been introduced in [Dragos Trinca, cs.DS/0505061], as an improved generalization of these algorithms. In this paper, we present a translation of the EAH algorithm into graph theory.
[ { "version": "v1", "created": "Sun, 21 Aug 2005 19:32:39 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Trinca", "Dragos", "" ] ]
cs/0508090
Dragos Trinca
Dragos Trinca
Translating the EAH Data Compression Algorithm into Automata Theory
9 pages
null
null
null
cs.DS
null
Adaptive codes have been introduced in [Dragos Trinca, cs.DS/0505007] as a new class of non-standard variable-length codes. These codes associate variable-length codewords to symbols being encoded depending on the previous symbols in the input data string. A new data compression algorithm, called EAH, has been introduced in [Dragos Trinca, cs.DS/0505061], where we have experimentally shown that for a large class of input data strings, this algorithm substantially outperforms the well-known Lempel-Ziv universal data compression algorithm. In this paper, we translate the EAH encoder into automata theory.
[ { "version": "v1", "created": "Sun, 21 Aug 2005 19:56:31 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Trinca", "Dragos", "" ] ]
cs/0508097
Devavrat Shah
Sujay Sanghavi and Devavrat Shah
Tightness of LP via Max-product Belief Propagation
null
null
null
null
cs.DS cs.DM
null
We investigate the question of tightness of linear programming (LP) relaxation for finding a maximum weight independent set (MWIS) in sparse random weighted graphs. We show that an edge-based LP relaxation is asymptotically tight for Erdos-Renyi graph $G(n,c/n)$ for $c \leq 2e$ and random regular graph $G(n,r)$ for $r\leq 4$ when node weights are i.i.d. with exponential distribution of mean 1. We establish these results, through a precise relation between the tightness of LP relaxation and convergence of the max-product belief propagation algorithm. We believe that this novel method of understanding structural properties of combinatorial problems through properties of iterative procedure such as the max-product should be of interest in its own right.
[ { "version": "v1", "created": "Tue, 23 Aug 2005 01:08:06 GMT" }, { "version": "v2", "created": "Sat, 12 Apr 2008 01:19:15 GMT" } ]
"2008-04-14T00:00:00"
[ [ "Sanghavi", "Sujay", "" ], [ "Shah", "Devavrat", "" ] ]
cs/0508122
Andrew McGregor
Sudipto Guha, Andrew McGregor and Suresh Venkatasubramanian
Streaming and Sublinear Approximation of Entropy and Information Distances
18 pages
null
null
null
cs.DS cs.IT math.IT
null
In many problems in data mining and machine learning, data items that need to be clustered or classified are not points in a high-dimensional space, but are distributions (points on a high dimensional simplex). For distributions, natural measures of distance are not the $\ell_p$ norms and variants, but information-theoretic measures like the Kullback-Leibler distance, the Hellinger distance, and others. Efficient estimation of these distances is a key component in algorithms for manipulating distributions. Thus, sublinear resource constraints, either in time (property testing) or space (streaming) are crucial. We start by resolving two open questions regarding property testing of distributions. Firstly, we show a tight bound for estimating bounded, symmetric f-divergences between distributions in a general property testing (sublinear time) framework (the so-called combined oracle model). This yields optimal algorithms for estimating such well known distances as the Jensen-Shannon divergence and the Hellinger distance. Secondly, we close a $(\log n)/H$ gap between upper and lower bounds for estimating entropy $H$ in this model. In a stream setting (sublinear space), we give the first algorithm for estimating the entropy of a distribution. Our algorithm runs in polylogarithmic space and yields an asymptotic constant factor approximation scheme. We also provide other results along the space/time/approximation tradeoff curve.
[ { "version": "v1", "created": "Sat, 27 Aug 2005 23:10:52 GMT" }, { "version": "v2", "created": "Mon, 29 Aug 2005 22:42:42 GMT" } ]
"2007-07-13T00:00:00"
[ [ "Guha", "Sudipto", "" ], [ "McGregor", "Andrew", "" ], [ "Venkatasubramanian", "Suresh", "" ] ]
cs/0508125
Sheng Bao
Sheng Bao, De-Shun Zheng
A Sorting Algorithm Based on Calculation
null
null
null
null
cs.DS
null
This article introduces an adaptive sorting algorithm that relocates elements accurately by substituting their values into a function, which we name the guessing function. We focus on building this function, which is essentially the mapping relationship between record values and their corresponding sorted locations. The time complexity of this algorithm is O(n) when records are distributed uniformly. Additionally, a similar approach can be used in searching algorithms.
[ { "version": "v1", "created": "Mon, 29 Aug 2005 14:22:57 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Bao", "Sheng", "" ], [ "Zheng", "De-Shun", "" ] ]
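The abstract does not give the guessing function explicitly; one natural instantiation, assumed here, is linear interpolation between the minimum and maximum values, which yields an interpolation/bucket sort with expected O(n) time on uniformly distributed data:

```python
def guess_sort(records):
    """Sketch of the idea: a linear 'guessing function' maps each value
    to an estimated sorted position (a bucket); buckets are then emptied
    in order. The linear map below is our assumption, not necessarily
    the authors' exact function."""
    if len(records) < 2:
        return list(records)
    lo, hi = min(records), max(records)
    if lo == hi:
        return list(records)
    n = len(records)
    buckets = [[] for _ in range(n)]
    for x in records:
        # guessing function: estimate the rank by linear interpolation
        i = int((x - lo) * (n - 1) / (hi - lo))
        buckets[i].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # buckets stay small in the uniform case
    return out
```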
cs/0509015
Amr Elmasry
Ahmed Belal and Amr Elmasry
Optimal Prefix Codes with Fewer Distinct Codeword Lengths are Faster to Construct
23 pages, a preliminary version appeared in STACS 2006
null
null
null
cs.DS cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new method for constructing minimum-redundancy binary prefix codes is described. Our method does not explicitly build a Huffman tree; instead it uses a property of optimal prefix codes to compute the codeword lengths corresponding to the input weights. Let $n$ be the number of weights and $k$ be the number of distinct codeword lengths as produced by the algorithm for the optimum codes. The running time of our algorithm is $O(k \cdot n)$. Following our previous work in \cite{be}, no algorithm can possibly construct optimal prefix codes in $o(k \cdot n)$ time. When the given weights are presorted our algorithm performs $O(9^k \cdot \log^{2k}{n})$ comparisons.
[ { "version": "v1", "created": "Tue, 6 Sep 2005 10:58:22 GMT" }, { "version": "v2", "created": "Thu, 11 Feb 2010 12:05:25 GMT" }, { "version": "v3", "created": "Tue, 21 Dec 2010 14:22:41 GMT" }, { "version": "v4", "created": "Thu, 29 Sep 2016 15:28:49 GMT" } ]
"2016-09-30T00:00:00"
[ [ "Belal", "Ahmed", "" ], [ "Elmasry", "Amr", "" ] ]
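For contrast with the O(k·n) algorithm described in the abstract above, the standard O(n log n) Huffman construction of optimal codeword lengths looks as follows (the paper's method, which avoids building the tree explicitly, is not reproduced here):

```python
import heapq

def huffman_code_lengths(weights):
    """Standard Huffman construction, for comparison only. Returns the
    sorted multiset of optimal binary codeword lengths."""
    n = len(weights)
    if n == 1:
        return [1]
    # heap items: (subtree weight, tie-breaking id, leaf indices below)
    heap = [(w, i, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    depth = [0] * n
    uid = n
    while len(heap) > 1:
        w1, _, l1 = heapq.heappop(heap)
        w2, _, l2 = heapq.heappop(heap)
        for leaf in l1 + l2:
            depth[leaf] += 1  # every merge pushes these leaves one level down
        heapq.heappush(heap, (w1 + w2, uid, l1 + l2))
        uid += 1
    return sorted(depth)
```

The returned lengths always satisfy the Kraft equality sum(2^-l) = 1, which is a quick sanity check on any implementation.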
cs/0509026
Mikkel Thorup
Nick Duffield, Carsten Lund, Mikkel Thorup
Sampling to estimate arbitrary subset sums
null
null
null
null
cs.DS
null
Starting with a set of weighted items, we want to create a generic sample of a certain size that we can later use to estimate the total weight of arbitrary subsets. For this purpose, we propose priority sampling, which, tested on Internet data, performed better than previous methods by orders of magnitude. Priority sampling is simple to define and implement: we consider a stream of items i=0,...,n-1 with weights w_i. For each item i, we generate a random number r_i in (0,1) and create a priority q_i=w_i/r_i. The sample S consists of the k highest priority items. Let t be the (k+1)th highest priority. Each sampled item i in S gets a weight estimate W_i=max{w_i,t}, while non-sampled items get weight estimate W_i=0. Magically, it turns out that the weight estimates are unbiased, that is, E[W_i]=w_i, and by linearity of expectation, we get unbiased estimators over any subset sum simply by adding the sampled weight estimates from the subset. Also, we can estimate the variance of the estimates, and surprisingly, there is no covariance between different weight estimates W_i and W_j. We conjecture an extremely strong near-optimality; namely that for any weight sequence, there exists no specialized scheme for sampling k items with unbiased estimators that gets smaller total variance than priority sampling with k+1 items. Very recently Mario Szegedy has settled this conjecture.
[ { "version": "v1", "created": "Fri, 9 Sep 2005 21:47:52 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Duffield", "Nick", "" ], [ "Lund", "Carsten", "" ], [ "Thorup", "Mikkel", "" ] ]
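Priority sampling as defined in the abstract above takes only a few lines; the guard against r_i = 0 and the tie-breaking by index are our assumptions:

```python
import random

def priority_sample(weights, k, rng=random.random):
    """Priority sampling: item i gets priority q_i = w_i / r_i with
    r_i uniform in (0,1); keep the k highest priorities. Each sampled
    item i gets weight estimate max(w_i, t), where t is the (k+1)-st
    highest priority; the estimates are unbiased for any subset sum.
    Returns {item index: weight estimate} for the sampled items."""
    pri = [(w / max(rng(), 1e-12), i, w) for i, w in enumerate(weights)]
    pri.sort(reverse=True)
    t = pri[k][0] if len(pri) > k else 0.0
    return {i: max(w, t) for _, i, w in pri[:k]}
```

To estimate any subset sum, add the estimates of the sampled items that fall in the subset; by linearity of expectation this is unbiased.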
cs/0509031
David S. Johnson
Janos Csirik, David S. Johnson, and Claire Kenyon
On the Worst-case Performance of the Sum-of-Squares Algorithm for Bin Packing
null
null
null
null
cs.DS
null
The Sum of Squares algorithm for bin packing was defined in [2] and studied in great detail in [1], where it was proved that its worst case performance ratio is at most 3. In this note, we improve the asymptotic worst case bound to 2.7777...
[ { "version": "v1", "created": "Mon, 12 Sep 2005 14:49:48 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Csirik", "Janos", "" ], [ "Johnson", "David S.", "" ], [ "Kenyon", "Claire", "" ] ]
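The Sum-of-Squares heuristic analyzed above places each item so as to minimize sum_g N(g)^2, where N(g) counts the partially filled bins with remaining gap g; a direct, unoptimized sketch:

```python
from collections import Counter

def sum_of_squares_pack(items, B):
    """Online Sum-of-Squares bin packing: place each integer-sized item
    into a legal open bin, or a new bin, minimizing sum_g N(g)^2 over
    gaps 0 < g < B. Returns the number of bins used."""
    gaps = Counter()  # gap size -> number of open, partially filled bins
    bins = 0
    for s in items:
        best, best_ss = None, None
        # candidate moves: any open bin with enough room, or a new bin (gap B)
        candidates = [g for g in gaps if gaps[g] > 0 and g >= s] + [B]
        for g in candidates:
            trial = Counter(gaps)
            if g < B:
                trial[g] -= 1
            if g - s > 0:
                trial[g - s] += 1
            ss = sum(c * c for c in trial.values())
            if best_ss is None or ss < best_ss:
                best, best_ss = g, ss
        if best == B:
            bins += 1  # open a new bin
        else:
            gaps[best] -= 1
        if best - s > 0:
            gaps[best - s] += 1
    return bins
```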
cs/0509038
Vilhelm Dahll\"of
Vilhelm Dahllof
Algorithms for Max Hamming Exact Satisfiability
null
null
null
null
cs.DS
null
We study Max Hamming XSAT, i.e., the problem of finding two XSAT models at maximum Hamming distance. By using a recent XSAT solver as an auxiliary function, an O(1.911^n) time algorithm can be constructed, where n is the number of variables. This upper time bound can be further improved to O(1.8348^n) by introducing a new kind of branching, more directly suited for finding models at maximum Hamming distance. The techniques presented here are likely to be of practical use as well as of theoretical value, proving that there are non-trivial algorithms for maximum Hamming distance problems.
[ { "version": "v1", "created": "Wed, 14 Sep 2005 09:04:20 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Dahllof", "Vilhelm", "" ] ]
cs/0509061
Lane A. Hemaspaandra
Christopher M. Homan and Lane A. Hemaspaandra
Guarantees for the Success Frequency of an Algorithm for Finding Dodgson-Election Winners
null
null
null
URCS-TR-2005-881
cs.DS cs.MA
null
In the year 1876 the mathematician Charles Dodgson, who wrote fiction under the now more famous name of Lewis Carroll, devised a beautiful voting system that has long fascinated political scientists. However, determining the winner of a Dodgson election is known to be complete for the \Theta_2^p level of the polynomial hierarchy. This implies that unless P=NP no polynomial-time solution to this problem exists, and unless the polynomial hierarchy collapses to NP the problem is not even in NP. Nonetheless, we prove that when the number of voters is much greater than the number of candidates--although the number of voters may still be polynomial in the number of candidates--a simple greedy algorithm very frequently finds the Dodgson winners in such a way that it ``knows'' that it has found them, and furthermore the algorithm never incorrectly declares a nonwinner to be a winner.
[ { "version": "v1", "created": "Mon, 19 Sep 2005 21:59:24 GMT" }, { "version": "v2", "created": "Sun, 11 Jun 2006 18:36:12 GMT" }, { "version": "v3", "created": "Sat, 23 Jun 2007 13:25:26 GMT" } ]
"2007-06-25T00:00:00"
[ [ "Homan", "Christopher M.", "" ], [ "Hemaspaandra", "Lane A.", "" ] ]
cs/0509069
Philip Bille
Philip Bille and Martin Farach-Colton
Fast and Compact Regular Expression Matching
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.
[ { "version": "v1", "created": "Thu, 22 Sep 2005 13:30:20 GMT" }, { "version": "v2", "created": "Thu, 15 Dec 2005 10:07:46 GMT" }, { "version": "v3", "created": "Mon, 22 Sep 2008 08:27:28 GMT" } ]
"2008-09-22T00:00:00"
[ [ "Bille", "Philip", "" ], [ "Farach-Colton", "Martin", "" ] ]
cs/0510017
Svante Janson
Svante Janson and Wojciech Szpankowski
Partial fillup and search time in LC tries
13 pages
null
null
null
cs.DS math.PR
null
Andersson and Nilsson introduced in 1993 a level-compressed trie (in short: LC trie) in which a full subtree of a node is compressed to a single node of degree being the size of the subtree. Recent experimental results indicated a 'dramatic improvement' when full subtrees are replaced by partially filled subtrees. In this paper, we provide a theoretical justification of these experimental results showing, among other things, a rather moderate improvement of the search time over the original LC tries. For such an analysis, we assume that n strings are generated independently by a binary memoryless source with p denoting the probability of emitting a 1. We first prove that the so-called alpha-fillup level (i.e., the largest level in a trie with alpha fraction of nodes present at this level) is concentrated on two values with high probability. We give these values explicitly up to O(1), and observe that the value of alpha (strictly between 0 and 1) does not affect the leading term. This result directly yields the typical depth (search time) in the alpha-LC tries with p not equal to 1/2, which turns out to be C loglog n for an explicitly given constant C (depending on p but not on alpha). This should be compared with the recently found typical depth in the original LC tries, which is C' loglog n for a larger constant C'. The search time in alpha-LC tries is thus smaller but of the same order as in the original LC tries.
[ { "version": "v1", "created": "Thu, 6 Oct 2005 10:04:16 GMT" } ]
"2007-05-23T00:00:00"
[ [ "Janson", "Svante", "" ], [ "Szpankowski", "Wojciech", "" ] ]