id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cs/0510019 | Panigrahy Rina | Rina Panigrahy | Entropy based Nearest Neighbor Search in High Dimensions | null | null | null | null | cs.DS | null | In this paper we study the problem of finding the approximate nearest
neighbor of a query point in the high dimensional space, focusing on the
Euclidean space. The earlier approaches use locality-preserving hash functions
(that tend to map nearby points to the same value) to construct several hash
tables to ensure that the query point hashes to the same bucket as its nearest
neighbor in at least one table. Our approach is different -- we use one (or a
few) hash table and hash several randomly chosen points in the neighborhood of
the query point, showing that at least one of them will hash to the bucket
containing its nearest neighbor. We show that the number of randomly chosen
points required in the neighborhood of the query point $q$ depends on the
entropy of the hash value $h(p)$ of a random point $p$ at the same distance
from $q$ as its nearest neighbor, given $q$ and the locality preserving hash
function $h$ chosen randomly from the hash family. Precisely, we show that if
the entropy $I(h(p)|q,h) = M$ and $g$ is a bound on the probability that two
far-off points will hash to the same bucket, then we can find the approximate
nearest neighbor in $O(n^\rho)$ time and near linear $\tilde O(n)$ space where
$\rho = M/\log(1/g)$. Alternatively we can build a data structure of size
$\tilde O(n^{1/(1-\rho)})$ to answer queries in $\tilde O(d)$ time. By applying
this analysis to known locality preserving hash functions and adjusting the
parameters, we show that the $c$-approximate nearest neighbor can be computed in time
$\tilde O(n^\rho)$ and near linear space where $\rho \approx 2.06/c$ as $c$
becomes large.
| [
{
"version": "v1",
"created": "Fri, 7 Oct 2005 00:55:06 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Nov 2005 16:55:50 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Panigrahy",
"Rina",
""
]
] |
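The one-table, many-perturbed-queries idea in the abstract above can be illustrated with a toy Python sketch. This is an illustration of the idea only, not the paper's construction or analysis: it assumes random-hyperplane hashing, and the perturbation scale `radius` (a guess at the nearest-neighbor distance) is supplied by the caller.

```python
import numpy as np

def build_table(points, hyperplanes):
    """One locality-preserving hash table: bucket key = sign pattern of projections."""
    keys = points @ hyperplanes.T > 0
    table = {}
    for idx, key in enumerate(map(tuple, keys)):
        table.setdefault(key, []).append(idx)
    return table

def query(q, points, hyperplanes, table, radius, trials, rng):
    """Hash `trials` random points near q and scan the buckets they land in."""
    candidates = set()
    for _ in range(trials):
        p = q + rng.normal(scale=radius / np.sqrt(len(q)), size=len(q))
        candidates.update(table.get(tuple(p @ hyperplanes.T > 0), ()))
    if not candidates:
        return None
    cand = list(candidates)
    return cand[np.argmin(np.linalg.norm(points[cand] - q, axis=1))]

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 16))
H = rng.normal(size=(12, 16))          # 12 random hyperplanes, one table
tbl = build_table(pts, H)
print(query(pts[0] + 0.05, pts, H, tbl, radius=0.1, trials=20, rng=rng))
```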
cs/0510086 | Krishnaram Kenthapadi | K. Kenthapadi and R. Panigrahy | Balanced Allocation on Graphs | null | null | null | null | cs.DS | null | In this paper, we study the two-choice balls-and-bins process when balls are
not allowed to choose any two random bins, but only bins that are connected by
an edge in an underlying graph. We show that for $n$ balls and $n$ bins, if the
graph is almost regular with degree $n^\epsilon$, where $\epsilon$ is not too
small, the previous bounds on the maximum load continue to hold. Precisely, the
maximum load is $\log \log n + O(1/\epsilon) + O(1)$. For general
$\Delta$-regular graphs, we show that the maximum load is $\log\log n +
O(\frac{\log n}{\log (\Delta/\log^4 n)}) + O(1)$ and also provide an almost
matching lower bound of $\log \log n + \frac{\log n}{\log (\Delta \log n)}$.
V{\"o}cking [Voc99] showed that the maximum bin size with $d$ choice load
balancing can be further improved to $O(\log\log n /d)$ by breaking ties to the
left. This requires $d$ random bin choices. We show that such bounds can be
achieved by making only two random accesses and querying $d/2$ contiguous bins
in each access. By grouping a sequence of $n$ bins into $2n/d$ groups, each of
$d/2$ consecutive bins, if each ball chooses two groups at random and is
inserted into the least-loaded bin of the lesser-loaded group, then the
maximum load is $O(\log\log n/d)$ with high probability.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2005 21:59:21 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Kenthapadi",
"K.",
""
],
[
"Panigrahy",
"R.",
""
]
] |
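The grouping scheme in the final sentences of the abstract above is easy to simulate. A minimal sketch, assuming the `n` bins split evenly into groups of `d/2` and that a group's load is the sum of its bin loads; the demo parameters are arbitrary:

```python
import random

def grouped_two_choice(n, d, seed=0):
    """Throw n balls into n bins arranged in 2n/d groups of d/2 consecutive bins.
    Each ball picks two random groups, goes to the lesser-loaded group, and
    there into the least-loaded bin. Returns the maximum bin load."""
    random.seed(seed)
    num_groups = 2 * n // d
    bins = [0] * (num_groups * (d // 2))
    group_load = [0] * num_groups
    for _ in range(n):
        g1, g2 = random.randrange(num_groups), random.randrange(num_groups)
        g = g1 if group_load[g1] <= group_load[g2] else g2
        lo, hi = g * (d // 2), (g + 1) * (d // 2)
        b = min(range(lo, hi), key=bins.__getitem__)
        bins[b] += 1
        group_load[g] += 1
    return max(bins)

print(grouped_two_choice(n=1 << 16, d=8))
```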
cs/0511003 | Michael Baer | Michael B. Baer | Optimal Prefix Codes for Infinite Alphabets with Nonlinear Costs | 14 pages, 6 figures, accepted to IEEE Trans. Inform. Theory | null | 10.1109/TIT.2007.915696 | null | cs.IT cs.DS math.IT | null | Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set
of nonnegative integers. Although the countable number of inputs prevents usage
of the Huffman algorithm, there are nontrivial $P$ for which known methods find
a source code that is optimal in the sense of minimizing expected codeword
length. For some applications, however, a source code should instead minimize
one of a family of nonlinear objective functions, $\beta$-exponential means,
those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of
the $i$th codeword and $a$ is a positive constant. Applications of such
minimizations include a novel problem of maximizing the chance of message
receipt in single-shot communications ($a<1$) and a previously known problem of
minimizing the chance of buffer overflow in a queueing system ($a>1$). This
paper introduces methods for finding codes optimal for such exponential means.
One method applies to geometric distributions, while another applies to
distributions with lighter tails. The latter algorithm is applied to Poisson
distributions and both are extended to alphabetic codes, as well as to
minimizing maximum pointwise redundancy. The aforementioned application of
minimizing the chance of buffer overflow is also considered.
| [
{
"version": "v1",
"created": "Tue, 1 Nov 2005 07:00:11 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Dec 2006 02:20:18 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Nov 2007 04:21:19 GMT"
}
] | 2016-11-17T00:00:00 | [
[
"Baer",
"Michael B.",
""
]
] |
cs/0511020 | David P{\l}aneta S | David S. P{\l}aneta | Pbit and other list sorting algorithms | 25 pages, 4 tables | Cornell University Computing and Information Science Technical
Reports, 2006 | null | TR2006-2013 | cs.DS | null | Pbit, besides its simplicity, is definitely the fastest list sorting
algorithm. It considerably surpasses all previously known methods. Among many
advantages, it is stable, linear, and can be made to run in place. I will compare
Pbit with the algorithm described by Donald E. Knuth in the third volume of ''The
Art of Computer Programming'' and other (QuickerSort, MergeSort) list sorting
algorithms.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2005 01:52:02 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Jan 2006 23:48:40 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Płaneta",
"David S.",
""
]
] |
cs/0511030 | Gregory Gutin | G. Gutin, A. Rafiey, S. Szeider, A. Yeo | The Linear Arrangement Problem Parameterized Above Guaranteed Value | null | null | null | null | cs.DS cs.CC | null | A linear arrangement (LA) is an assignment of distinct integers to the
vertices of a graph. The cost of an LA is the sum of lengths of the edges of
the graph, where the length of an edge is defined as the absolute value of the
difference of the integers assigned to its ends. For many applications one hopes
to find an LA with small cost. However, it is a classical NP-complete problem
to decide whether a given graph $G$ admits an LA of cost bounded by a given
integer. Since every edge of $G$ contributes at least one to the cost of any
LA, the problem becomes trivially fixed-parameter tractable (FPT) if
parameterized by the upper bound of the cost. Fernau asked whether the problem
remains FPT if parameterized by the upper bound of the cost minus the number of
edges of the given graph; thus whether the problem is FPT ``parameterized above
guaranteed value.'' We answer this question positively by deriving an algorithm
which decides in time $O(m+n+5.88^k)$ whether a given graph with $m$ edges and
$n$ vertices admits an LA of cost at most $m+k$ (the algorithm computes such an
LA if it exists). Our algorithm is based on a procedure which generates a
problem kernel of linear size in linear time for a connected graph $G$. We also
prove that more general parameterized LA problems stated by Serna and Thilikos
are not FPT, unless P=NP.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2005 17:47:55 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2005 18:26:42 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Mar 2006 10:00:27 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gutin",
"G.",
""
],
[
"Rafiey",
"A.",
""
],
[
"Szeider",
"S.",
""
],
[
"Yeo",
"A.",
""
]
] |
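The cost function and the above-guarantee parameter `k` from this abstract are straightforward to compute; a small sketch with an assumed toy graph (the FPT algorithm itself is not reproduced here):

```python
def la_cost(arrangement, edges):
    """Cost of a linear arrangement: sum over edges of |pi(u) - pi(v)|,
    where pi maps each vertex to its assigned integer."""
    return sum(abs(arrangement[u] - arrangement[v]) for u, v in edges)

# Toy example (assumed graph): a 4-cycle under the identity arrangement.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
pi = {v: v for v in range(4)}
cost = la_cost(pi, edges)
m = len(edges)
print(cost, "excess above the m lower bound:", cost - m)   # the parameter k
```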
cs/0511044 | Mimmo Parente | J. Gruska, S. La Torre, M. Napoli, M. Parente | Various Solutions to the Firing Squad Synchronization Problems | null | null | null | null | cs.DS cs.CC | null | We present different classes of solutions to the Firing Squad Synchronization
Problem on networks of different shapes. The nodes are finite state processors
that work in unison at discrete time steps. The networks considered are the
line, the ring, and the square. For all of these models we have considered
one-way and two-way communication modes, and have also constrained the quantity
of information that adjacent processors can exchange at each step. Given a
particular time $f(n)$, expressed as a function of the number of nodes $n$ of
the network, we present synchronization algorithms running in time $n^2$, $n \log n$, $n\sqrt n$,
$2^n$. The solutions are presented as {\em signals} that are used as building
blocks to compose new solutions for all times expressed by polynomials with
nonnegative coefficients.
| [
{
"version": "v1",
"created": "Sat, 12 Nov 2005 06:44:20 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gruska",
"J.",
""
],
[
"La Torre",
"S.",
""
],
[
"Napoli",
"M.",
""
],
[
"Parente",
"M.",
""
]
] |
cs/0511071 | Francesco Capasso | Francesco Capasso | A polynomial-time heuristic for Circuit-SAT | 20 pages, 8 figures | null | null | null | cs.CC cs.DS | null | This paper presents a heuristic that, in polynomial time and space in the
size of the input, determines whether a circuit describes a tautology or a
contradiction. If the circuit is neither a tautology nor a contradiction, the
heuristic finds an assignment to the circuit inputs such that the circuit
is satisfied.
| [
{
"version": "v1",
"created": "Fri, 18 Nov 2005 20:23:46 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2005 21:56:11 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Nov 2005 22:19:18 GMT"
},
{
"version": "v4",
"created": "Mon, 28 Nov 2005 19:52:12 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Capasso",
"Francesco",
""
]
] |
cs/0511082 | Gianluca Della Vedova | Paola Bonizzoni, Gianluca Della Vedova, Riccardo Dondi | Approximating Clustering of Fingerprint Vectors with Missing Values | 13 pages, 4 figures | null | 10.1007/s00453-008-9265-0 | null | cs.DS | null | The problem of clustering fingerprint vectors is an interesting problem in
Computational Biology that was proposed in (Figueroa et al. 2004). In this
paper we make some progress in closing the gaps between the known lower and
upper bounds on the approximability of some variants of the biological problem.
Namely, we are able to prove that the problem is APX-hard even when each
fingerprint contains only two unknown positions. Moreover, we have studied some
variants of the original problem, and we give two 2-approximation algorithms
for the IECMV and OECMV problems when the number of unknown entries for each
vector is at most a constant.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2005 10:32:47 GMT"
}
] | 2011-08-02T00:00:00 | [
[
"Bonizzoni",
"Paola",
""
],
[
"Della Vedova",
"Gianluca",
""
],
[
"Dondi",
"Riccardo",
""
]
] |
cs/0511084 | Assaf Naor | Manor Mendel and Assaf Naor | Ramsey partitions and proximity data structures | 21 pages. Two explanatory figures were added, a few typos were fixed | J. European Math. Soc. 9(2): 253-275, 2007 | 10.4171/JEMS/79 | null | cs.DS cs.CG math.FA math.MG | null | This paper addresses two problems lying at the intersection of geometric
analysis and theoretical computer science: The non-linear isomorphic Dvoretzky
theorem and the design of good approximate distance oracles for large
distortion. We introduce the notion of Ramsey partitions of a finite metric
space, and show that the existence of good Ramsey partitions implies a solution
to the metric Ramsey problem for large distortion (a.k.a. the non-linear
version of the isomorphic Dvoretzky theorem, as introduced by Bourgain, Figiel,
and Milman). We then proceed to construct optimal Ramsey partitions, and use
them to show that for every e\in (0,1), any n-point metric space has a subset
of size n^{1-e} which embeds into Hilbert space with distortion O(1/e). This
result is best possible and improves part of the metric Ramsey theorem of
Bartal, Linial, Mendel and Naor, in addition to considerably simplifying its
proof. We use our new Ramsey partitions to design the best known approximate
distance oracles when the distortion is large, closing a gap left open by
Thorup and Zwick. Namely, we show that for any $n$-point metric space X, and
k>1, there exists an O(k)-approximate distance oracle whose storage requirement
is O(n^{1+1/k}), and whose query time is a universal constant. We also discuss
applications of Ramsey partitions to various other geometric data structure
problems, such as the design of efficient data structures for approximate
ranking.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2005 20:06:15 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2005 06:35:16 GMT"
},
{
"version": "v3",
"created": "Wed, 10 May 2006 19:00:50 GMT"
}
] | 2012-11-15T00:00:00 | [
[
"Mendel",
"Manor",
""
],
[
"Naor",
"Assaf",
""
]
] |
cs/0511108 | Abdelhadi Benabdallah | A. Benabdallah and G. Radons | Parameter Estimation of Hidden Diffusion Processes: Particle Filter vs.
Modified Baum-Welch Algorithm | 15 pages, 3 figures, 2 tables | null | null | null | cs.DS cs.LG | null | We propose a new method for the estimation of parameters of hidden diffusion
processes. Based on parametrization of the transition matrix, the Baum-Welch
algorithm is improved. The algorithm is compared to the particle filter in
application to noisy periodic systems. It is shown that the modified
Baum-Welch algorithm is capable of estimating the system parameters with better
accuracy than particle filters.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2005 20:23:19 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Benabdallah",
"A.",
""
],
[
"Radons",
"G.",
""
]
] |
cs/0512016 | Mikl\'os Cs\H{u}r\"os | Mikl\'os Cs\H{u}r\"os | A linear-time algorithm for finding the longest segment which scores
above a given threshold | null | null | null | null | cs.DS cs.CE | null | This paper describes a linear-time algorithm that finds the longest stretch
in a sequence of real numbers (``scores'') in which the sum exceeds an input
parameter. The algorithm also solves the problem of finding the longest
interval in which the average of the scores is above a fixed threshold. The
problem originates from molecular sequence analysis: for instance, the
algorithm can be employed to identify long GC-rich regions in DNA sequences.
The algorithm can also be used to trim low-quality ends of shotgun sequences in
a preprocessing step of whole-genome assembly.
| [
{
"version": "v1",
"created": "Sun, 4 Dec 2005 04:28:00 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Mar 2006 02:40:49 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Csűrös",
"Miklós",
""
]
] |
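The average-above-threshold variant mentioned in the abstract above admits a short linear-time solution; the sketch below uses one standard reduction (subtract the threshold, take prefix sums, then find the widest pair i < j with P[i] <= P[j] via a decreasing stack), which may differ in details from the paper's algorithm:

```python
def longest_segment_above(scores, threshold):
    """Length of the longest interval whose average is >= threshold, in O(n).
    Subtract the threshold, take prefix sums P, then find the widest i < j
    with P[i] <= P[j] (a "maximum width ramp") via a decreasing stack."""
    P = [0.0]
    for s in scores:
        P.append(P[-1] + s - threshold)
    stack = []                      # indices with strictly decreasing P values
    for i, v in enumerate(P):
        if not stack or v < P[stack[-1]]:
            stack.append(i)
    best = 0
    for j in range(len(P) - 1, -1, -1):
        while stack and P[stack[-1]] <= P[j]:
            best = max(best, j - stack.pop())
    return best

print(longest_segment_above([0.2, 0.9, 0.8, 0.1, 0.7], threshold=0.5))  # 5
```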
cs/0512021 | Ted Herman | Brahim Hamid (1), Ted Herman (2), Morten Mjelde (3) ((1) LaBRI
University of Bordeaux-1 France, (2) University of Iowa, (3) University in
Bergen Norway) | The Poster Session of SSS 2005 | 3 pages, related to Springer LNCS 3764, Proceedings of Symposium on
Self-Stabilizing Systems | null | null | TR-05-13 | cs.DC cs.DS | null | This technical report documents the poster session of SSS 2005, the Symposium
on Self-Stabilizing Systems published by Springer as LNCS volume 3764. The
poster session included five presentations. Two of these presentations are
summarized in brief abstracts contained in this technical report.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2005 22:51:11 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Hamid",
"Brahim",
""
],
[
"Herman",
"Ted",
""
],
[
"Mjelde",
"Morten",
""
]
] |
cs/0512046 | Georgios Mertzios | George B. Mertzios | A polynomial algorithm for the k-cluster problem on interval graphs | 12 pages, 5 figures | null | null | null | cs.DS | null | This paper deals with the problem of finding, for a given graph and a given
natural number k, a subgraph of k nodes with a maximum number of edges. This
problem is known as the k-cluster problem and it is NP-hard on general graphs
as well as on chordal graphs. In this paper, it is shown that the k-cluster
problem is solvable in polynomial time on interval graphs. In particular, we
present two polynomial time algorithms for the class of proper interval graphs
and the class of general interval graphs, respectively. Both algorithms are
based on a matrix representation for interval graphs. In contrast to
representations used in most of the previous work, this matrix representation
does not make use of the maximal cliques in the investigated graph.
| [
{
"version": "v1",
"created": "Sun, 11 Dec 2005 23:13:44 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2005 00:02:45 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Jan 2008 15:05:23 GMT"
}
] | 2011-11-09T00:00:00 | [
[
"Mertzios",
"George B.",
""
]
] |
cs/0512052 | Ion Mandoiu | Ion I. Mandoiu and Claudia Prajescu | High-Throughput SNP Genotyping by SBE/SBH | 19 pages | null | null | null | cs.DS q-bio.GN | null | Despite much progress over the past decade, current Single Nucleotide
Polymorphism (SNP) genotyping technologies still offer an insufficient degree
of multiplexing when required to handle user-selected sets of SNPs. In this
paper we propose a new genotyping assay architecture combining multiplexed
solution-phase single-base extension (SBE) reactions with sequencing by
hybridization (SBH) using universal DNA arrays such as all $k$-mer arrays. In
addition to PCR amplification of genomic DNA, SNP genotyping using SBE/SBH
assays involves the following steps: (1) Synthesizing primers complementing the
genomic sequence immediately preceding SNPs of interest; (2) Hybridizing these
primers with the genomic DNA; (3) Extending each primer by a single base using
polymerase enzyme and dideoxynucleotides labeled with 4 different fluorescent
dyes; and finally (4) Hybridizing extended primers to a universal DNA array and
determining the identity of the bases that extend each primer by hybridization
pattern analysis. Our contributions include a study of multiplexing algorithms
for SBE/SBH genotyping assays and preliminary experimental results showing the
achievable tradeoffs between the number of array probes and primer length on
one hand and the number of SNPs that can be assayed simultaneously on the
other. Simulation results on datasets both randomly generated and extracted
from the NCBI dbSNP database suggest that the SBE/SBH architecture provides a
flexible and cost-effective alternative to genotyping assays currently used in
the industry, enabling genotyping of up to hundreds of thousands of
user-specified SNPs per assay.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2005 18:01:51 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Mandoiu",
"Ion I.",
""
],
[
"Prajescu",
"Claudia",
""
]
] |
cs/0512054 | Vyacheslav Gorshkov Mr | Gennady P.Berman (Los Alamos National Laboratory, T-13), Vyacheslav
N.Gorshkov (Los Alamos National Laboratory, Center for Nonlinear Studies),
Xidi Wang (Citigroup, Sao Paulo, Brasil) | Irreducible Frequent Patterns in Transactional Databases | 30 pages, 18 figures | null | null | null | cs.DS cs.DB | null | Irreducible frequent patterns (IFPs) are introduced for transactional
databases. An IFP is a frequent pattern (FP) (x1,x2,...,xn) whose
probability, P(x1,x2,...,xn), cannot be represented as a product of the
probabilities of two (or more) other FPs of smaller length. We have
developed an algorithm for searching IFPs in transactional databases. We argue
that IFPs represent useful tools for characterizing the transactional databases
and may have important applications to bio-systems including the immune systems
and for improving vaccination strategies. The effectiveness of the IFPs
approach has been illustrated in application to a classification problem.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2005 22:53:17 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Berman",
"Gennady P.",
"",
"Los Alamos National Laboratory, T-13"
],
[
"Gorshkov",
"Vyacheslav N.",
"",
"Los Alamos National Laboratory, Center for Nonlinear Studies"
],
[
"Wang",
"Xidi",
"",
"Citigroup, Sao Paulo, Brasil"
]
] |
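The irreducibility condition above can be tested by brute force on small patterns. A hedged sketch that checks only two-factor splits, with an assumed toy database and numerical tolerance:

```python
from itertools import combinations

def prob(db, items):
    """Empirical probability that a transaction contains all the items."""
    s = set(items)
    return sum(s <= t for t in db) / len(db)

def is_irreducible(db, pattern, tol=1e-9):
    """True if P(pattern) is not a product P(A) * P(B) over any bipartition
    (A, B) of the pattern into two nonempty parts (two-factor splits only)."""
    p_full = prob(db, pattern)
    for r in range(1, len(pattern)):
        for A in combinations(pattern, r):
            B = tuple(x for x in pattern if x not in A)
            if abs(p_full - prob(db, A) * prob(db, B)) <= tol:
                return False
    return True

db = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"b", "c"}]
print(is_irreducible(db, ("a", "b")))   # P(a,b)=0.5 != P(a)*P(b)=0.5625
```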
cs/0512060 | Chiranjeeb Buragohain | Chiranjeeb Buragohain, Divyakant Agrawal, Subhash Suri | Distributed Navigation Algorithms for Sensor Networks | To Appear in INFOCOM 2006 | null | null | null | cs.NI cs.DC cs.DS | null | We propose efficient distributed algorithms to aid navigation of a user
through a geographic area covered by sensors. The sensors sense the level of
danger at their locations and we use this information to find a safe path for
the user through the sensor field. Traditional distributed navigation
algorithms rely upon flooding the whole network with packets to find an optimal
safe path. To reduce the communication expense, we introduce the concept of a
skeleton graph which is a sparse subset of the true sensor network
communication graph. Using skeleton graphs we show that it is possible to find
approximate safe paths with much lower communication cost. We give tight
theoretical guarantees on the quality of our approximation and, by simulation,
show the effectiveness of our algorithms in realistic sensor network
situations.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2005 22:36:53 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Buragohain",
"Chiranjeeb",
""
],
[
"Agrawal",
"Divyakant",
""
],
[
"Suri",
"Subhash",
""
]
] |
cs/0512061 | Philip Bille | Philip Bille and Inge Li Goertz | Matching Subsequences in Trees | Minor correction of typos, etc | null | null | null | cs.DS | null | Given two rooted, labeled trees $P$ and $T$ the tree path subsequence problem
is to determine which paths in $P$ are subsequences of which paths in $T$. Here
a path begins at the root and ends at a leaf. In this paper we propose this
problem as a useful query primitive for XML data, and provide new algorithms
improving the previously best known time and space bounds.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2005 10:28:04 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2006 13:53:07 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Dec 2007 08:40:11 GMT"
}
] | 2011-11-09T00:00:00 | [
[
"Bille",
"Philip",
""
],
[
"Goertz",
"Inge Li",
""
]
] |
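A quadratic baseline for the tree path subsequence problem, useful to fix the definitions (the paper's contribution is algorithms with better time and space bounds than this): enumerate root-to-leaf label paths and test subsequence containment. The child-list input format is an assumption of the sketch.

```python
def root_to_leaf_paths(children, labels, root):
    """All root-to-leaf label sequences of a tree given as child lists."""
    path, out = [], []
    def dfs(v):
        path.append(labels[v])
        if not children.get(v):
            out.append(tuple(path))
        for c in children.get(v, ()):
            dfs(c)
        path.pop()
    dfs(root)
    return out

def is_subsequence(p, t):
    it = iter(t)
    return all(c in it for c in p)   # consumes `it` left to right

# Toy trees (assumed format: child lists + labels).
P = root_to_leaf_paths({0: [1, 2]}, {0: "a", 1: "b", 2: "c"}, 0)
T = root_to_leaf_paths({0: [1], 1: [2]}, {0: "a", 1: "x", 2: "b"}, 0)
for p in P:
    for t in T:
        print(p, "subsequence of", t, ":", is_subsequence(p, t))
```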
cs/0512080 | Grigorii Pivovarov | G. B. Pivovarov and S. E. Trunov | EqRank: Theme Evolution in Citation Graphs | 8 pages, 7 figs, 2 tables | null | null | null | cs.DS cs.DL | null | Time evolution of the classification scheme generated by the EqRank algorithm
is studied with hep-th citation graph as an example. Intuitive expectations
about evolution of an adequate classification scheme for a growing set of
objects are formulated. Evolution compliant with these expectations is called
natural. It is demonstrated that EqRank yields a naturally evolving
classification scheme. We conclude that EqRank can be used as a means to detect
new scientific themes, and to track their development.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2005 14:01:45 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Pivovarov",
"G. B.",
""
],
[
"Trunov",
"S. E.",
""
]
] |
cs/0512081 | Mihai Patrascu | Erik D. Demaine, Friedhelm Meyer auf der Heide, Rasmus Pagh and Mihai
Patrascu | De Dictionariis Dynamicis Pauco Spatio Utentibus | 14 pages. Full version of a paper accepted to LATIN'06 | null | null | null | cs.DS | null | We develop dynamic dictionaries on the word RAM that use asymptotically
optimal space, up to constant factors, subject to insertions and deletions, and
subject to supporting perfect-hashing queries and/or membership queries, each
operation in constant time with high probability. When supporting only
membership queries, we attain the optimal space bound of Theta(n lg(u/n)) bits,
where n and u are the sizes of the dictionary and the universe, respectively.
Previous dictionaries either did not achieve this space bound or had time
bounds that were only expected and amortized. When supporting perfect-hashing
queries, the optimal space bound depends on the range {1,2,...,n+t} of
hashcodes allowed as output. We prove that the optimal space bound is Theta(n
lglg(u/n) + n lg(n/(t+1))) bits when supporting only perfect-hashing queries,
and it is Theta(n lg(u/n) + n lg(n/(t+1))) bits when also supporting membership
queries. All upper bounds are new, as is the Omega(n lg(n/(t+1))) lower bound.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2005 23:01:41 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Demaine",
"Erik D.",
""
],
[
"der Heide",
"Friedhelm Meyer auf",
""
],
[
"Pagh",
"Rasmus",
""
],
[
"Patrascu",
"Mihai",
""
]
] |
cs/0512090 | Renaud Lambiotte | R. Lambiotte and M. Ausloos | Collaborative tagging as a tripartite network | null | Lecture Notes in Computer Science, 3993 (2006) 1114 - 1117 | 10.1007/11758532_152 | null | cs.DS cs.DL | null | We describe online collaborative communities by tripartite networks, the
nodes being persons, items and tags. We introduce projection methods in order
to uncover the structures of the networks, i.e. communities of users, genre
families, etc.
  To do so, we focus on the correlations between the nodes, depending on their
profiles, and use percolation techniques that consist in removing the least
correlated links and observing the formation of disconnected islands. The
structuring of the network is visualised by using a tree representation. The
notion of diversity in the system is also discussed.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2005 13:38:57 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Dec 2005 09:14:19 GMT"
}
] | 2016-08-31T00:00:00 | [
[
"Lambiotte",
"R.",
""
],
[
"Ausloos",
"M.",
""
]
] |
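One of the projections described above can be sketched in a few lines: project the tripartite data onto a person-person graph weighted by shared tags, then percolate by removing links below a threshold and collecting the resulting islands. The triples and the shared-tag weight are illustrative assumptions:

```python
from collections import defaultdict
from itertools import combinations

def user_projection(triples):
    """Project (person, item, tag) triples onto a weighted person-person graph;
    the weight is the number of tags two persons share."""
    tags = defaultdict(set)
    for user, _item, tag in triples:
        tags[user].add(tag)
    return {(a, b): len(tags[a] & tags[b])
            for a, b in combinations(sorted(tags), 2)
            if tags[a] & tags[b]}

def islands(weights, users, threshold):
    """Percolation step: keep links of weight >= threshold, return components."""
    adj = defaultdict(set)
    for (a, b), w in weights.items():
        if w >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for u in users:
        if u in seen:
            continue
        stack, comp = [u], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x])
        seen |= comp
        comps.append(comp)
    return comps

triples = [("u1", "i1", "rock"), ("u2", "i1", "rock"), ("u2", "i2", "jazz"),
           ("u3", "i3", "jazz"), ("u4", "i4", "folk")]
print(islands(user_projection(triples), ["u1", "u2", "u3", "u4"], threshold=1))
```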
cs/0512091 | Erik Demaine | Boris Aronov, Prosenjit Bose, Erik D. Demaine, Joachim Gudmundsson,
John Iacono, Stefan Langerman, Michiel Smid | Data Structures for Halfplane Proximity Queries and Incremental Voronoi
Diagrams | 17 pages, 6 figures. Various small improvements. To appear in
Algorithmica | null | null | null | cs.CG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider preprocessing a set $S$ of $n$ points in convex position in the
plane into a data structure supporting queries of the following form: given a
point $q$ and a directed line $\ell$ in the plane, report the point of $S$ that
is farthest from (or, alternatively, nearest to) the point $q$ among all points
to the left of line $\ell$. We present two data structures for this problem.
The first data structure uses $O(n^{1+\varepsilon})$ space and preprocessing
time, and answers queries in $O(2^{1/\varepsilon} \log n)$ time, for any $0 <
\varepsilon < 1$. The second data structure uses $O(n \log^3 n)$ space and
polynomial preprocessing time, and answers queries in $O(\log n)$ time. These
are the first solutions to the problem with $O(\log n)$ query time and $o(n^2)$
space.
The second data structure uses a new representation of nearest- and
farthest-point Voronoi diagrams of points in convex position. This
representation supports the insertion of new points in clockwise order using
only $O(\log n)$ amortized pointer changes, in addition to $O(\log n)$-time
point-location queries, even though every such update may make $\Theta(n)$
combinatorial changes to the Voronoi diagram. This data structure is the first
demonstration that deterministically and incrementally constructed Voronoi
diagrams can be maintained in $o(n)$ amortized pointer changes per operation
while keeping $O(\log n)$-time point-location queries.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2005 04:28:12 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jul 2015 13:10:07 GMT"
},
{
"version": "v3",
"created": "Fri, 13 Oct 2017 16:47:53 GMT"
}
] | 2017-10-16T00:00:00 | [
[
"Aronov",
"Boris",
""
],
[
"Bose",
"Prosenjit",
""
],
[
"Demaine",
"Erik D.",
""
],
[
"Gudmundsson",
"Joachim",
""
],
[
"Iacono",
"John",
""
],
[
"Langerman",
"Stefan",
""
],
[
"Smid",
"Michiel",
""
]
] |
cs/0601011 | Avner Magen | Hamed Hatami and Avner Magen and Vangelis Markakis | Integrality gaps of semidefinite programs for Vertex Cover and relations
to $\ell_1$ embeddability of Negative Type metrics | A more complete version. Changed order of results. A complete proof
of (current) Theorem 5 | null | null | null | cs.DS cs.DM math.MG | null | We study various SDP formulations for {\sc Vertex Cover} by adding different
constraints to the standard formulation. We show that {\sc Vertex Cover} cannot
be approximated better than $2-o(1)$ even when we add the so-called pentagonal
inequality constraints to the standard SDP formulation, en route answering an
open question of Karakostas~\cite{Karakostas}. We further show the surprising
fact that by strengthening the SDP with the (intractable) requirement that the
metric interpretation of the solution is an $\ell_1$ metric, we get an exact
relaxation (integrality gap is 1), and on the other hand if the solution is
arbitrarily close to being $\ell_1$ embeddable, the integrality gap may be as
big as $2-o(1)$. Finally, inspired by the above findings, we use ideas from the
integrality gap construction of Charikar \cite{Char02} to provide a family of
simple examples for negative type metrics that cannot be embedded into $\ell_1$
with distortion better than $8/7-\epsilon$. To this end we prove a new
isoperimetric inequality for the hypercube.
| [
{
"version": "v1",
"created": "Thu, 5 Jan 2006 23:10:58 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Apr 2006 14:01:50 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2006 20:17:01 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Hatami",
"Hamed",
""
],
[
"Magen",
"Avner",
""
],
[
"Markakis",
"Vangelis",
""
]
] |
cs/0601026 | Nicholas Harvey | Nicholas J. A. Harvey | Algebraic Structures and Algorithms for Matching and Matroid Problems
(Preliminary Version) | null | null | null | null | cs.DS cs.DM | null | Basic path-matchings, introduced by Cunningham and Geelen (FOCS 1996), are a
common generalization of matroid intersection and non-bipartite matching. The
main results of this paper are a new algebraic characterization of basic
path-matching problems and an algorithm for constructing basic path-matchings
in O(n^w) time, where n is the number of vertices and w is the exponent for
matrix multiplication. Our algorithms are randomized, and our approach assumes
that the given matroids are linear and can be represented over the same field.
Our main results have interesting consequences for several special cases of
path-matching problems. For matroid intersection, we obtain an algorithm with
running time O(nr^(w-1))=O(nr^1.38), where the matroids have n elements and
rank r. This improves the long-standing bound of O(nr^1.62) due to Gabow and Xu
(FOCS 1989). Also, we obtain a simple, purely algebraic algorithm for
non-bipartite matching with running time O(n^w). This resolves the central open
problem of Mucha and Sankowski (FOCS 2004).
| [
{
"version": "v1",
"created": "Mon, 9 Jan 2006 13:54:41 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Harvey",
"Nicholas J. A.",
""
]
] |
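The algebraic view of matching that this abstract builds on can be seen in miniature via the Tutte matrix: substituting random values for its indeterminates gives a matrix whose rank equals twice the maximum matching size with high probability (Lovász). A numpy sketch of that classical fact, not of the paper's algorithm:

```python
import numpy as np

def matching_size(n, edges, seed=0):
    """Maximum matching size via the Tutte matrix: fill a skew-symmetric matrix
    with random values; rank(T) = 2 * (matching size) with high probability.
    Floating-point rank is used only for illustration; exact versions work
    over a large finite field."""
    rng = np.random.default_rng(seed)
    T = np.zeros((n, n))
    for u, v in edges:
        x = rng.uniform(1.0, 2.0)
        T[u, v], T[v, u] = x, -x
    return np.linalg.matrix_rank(T) // 2

# Triangle plus a pendant vertex: the maximum matching has 2 edges.
print(matching_size(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))
```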
cs/0601081 | Andrej (Andy) Brodnik | Andrej Brodnik, Johan Karlsson, J. Ian Munro, Andreas Nilsson | An O(1) Solution to the Prefix Sum Problem on a Specialized Memory
Architecture | 12 pages | null | null | null | cs.DS cs.CC cs.IR | null | In this paper we study the Prefix Sum problem introduced by Fredman.
We show that it is possible to perform both update and retrieval in O(1) time
simultaneously under a memory model in which individual bits may be shared by
several words.
We also show that two variants (generalizations) of the problem can be solved
optimally in $\Theta(\lg N)$ time under the comparison-based model of
computation.
| [
{
"version": "v1",
"created": "Wed, 18 Jan 2006 21:20:10 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Brodnik",
"Andrej",
""
],
[
"Karlsson",
"Johan",
""
],
[
"Munro",
"J. Ian",
""
],
[
"Nilsson",
"Andreas",
""
]
] |
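For contrast with the O(1) result above, the Theta(lg N) comparison-model bound corresponds to classic structures such as a Fenwick (binary indexed) tree; a background sketch, not the paper's construction:

```python
class Fenwick:
    """Binary indexed tree: update(i, delta) and retrieve(i) = a[0] + ... + a[i],
    both in O(log N) -- the classic baseline the O(1) specialized-memory
    result should be contrasted with."""
    def __init__(self, n):
        self.t = [0] * (n + 1)

    def update(self, i, delta):
        i += 1
        while i < len(self.t):
            self.t[i] += delta
            i += i & -i

    def retrieve(self, i):
        i += 1
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

f = Fenwick(8)
f.update(3, 5)
f.update(5, 2)
print(f.retrieve(4), f.retrieve(7))   # 5, then 7
```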
cs/0601084 | Robert Schweller | Ming-Yang Kao, Manan Sanghi, Robert Schweller | Randomized Fast Design of Short DNA Words | null | Proceedings of the 32nd International Colloquium on Automata,
Languages and Programming (ICALP 2005), Lisboa, Portugal, July 11-15, 2005,
pp. 1275-1286 | null | null | cs.DS | null | We consider the problem of efficiently designing sets (codes) of equal-length
DNA strings (words) that satisfy certain combinatorial constraints. This
problem has numerous motivations including DNA computing and DNA self-assembly.
Previous work has extended results from coding theory to obtain bounds on code
size for new biologically motivated constraints and has applied heuristic local
search and genetic algorithm techniques for code design. This paper proposes a
natural optimization formulation of the DNA code design problem in which the
goal is to design n strings that satisfy a given set of constraints while
minimizing the length of the strings. For multiple sets of constraints, we
provide high-probability algorithms that run in time polynomial in n and any
given constraint parameters, and output strings of length within a constant
factor of the optimal. To the best of our knowledge, this work is the first to
consider this type of optimization problem in the context of DNA code design.
| [
{
"version": "v1",
"created": "Thu, 19 Jan 2006 00:22:56 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Kao",
"Ming-Yang",
""
],
[
"Sanghi",
"Manan",
""
],
[
"Schweller",
"Robert",
""
]
] |
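A naive randomized sketch in the spirit of the problem above: sample random words and keep those satisfying a pairwise Hamming-distance constraint. Only one of the paper's constraint families is enforced, no optimality of `length` is claimed, and the retry bound is an assumption:

```python
import random

def design_words(n, length, min_dist, max_tries=100000, seed=0):
    """Randomized DNA word design sketch: keep sampling random words, accepting
    one when it has Hamming distance >= min_dist to every accepted word."""
    random.seed(seed)
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    words = []
    for _ in range(max_tries):
        w = "".join(random.choice("ACGT") for _ in range(length))
        if all(hamming(w, u) >= min_dist for u in words):
            words.append(w)
            if len(words) == n:
                return words
    raise RuntimeError("length too short for these constraints (or unlucky run)")

print(design_words(n=8, length=10, min_dist=5))
```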
cs/0601108 | Alain Lifchitz | Alain Lifchitz, Frederic Maire and Dominique Revuz | Fast Lexically Constrained Viterbi Algorithm (FLCVA): Simultaneous
Optimization of Speed and Memory | 5 pages, 2 figures, 4 tables | null | null | null | cs.CV cs.AI cs.DS | null | Lexical constraints on the input of speech and on-line handwriting systems
improve the performance of such systems. A significant gain in speed can be
achieved by integrating in a digraph structure the different Hidden Markov
Models (HMMs) corresponding to the words of the relevant lexicon. This
integration avoids redundant computations by sharing intermediate results
between HMMs corresponding to different words of the lexicon. In this paper,
we introduce a token passing method to perform simultaneously the computation
of the a posteriori probabilities of all the words of the lexicon. The coding
scheme that we introduce for the tokens is optimal in the information theory
sense. The tokens use the minimum possible number of bits. Overall, we optimize
simultaneously the execution speed and the memory requirement of the
recognition systems.
| [
{
"version": "v1",
"created": "Wed, 25 Jan 2006 17:50:13 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2006 13:05:36 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Feb 2006 23:00:28 GMT"
},
{
"version": "v4",
"created": "Sun, 19 Mar 2006 16:40:45 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Lifchitz",
"Alain",
""
],
[
"Maire",
"Frederic",
""
],
[
"Revuz",
"Dominique",
""
]
] |
cs/0601116 | Laurent Noe | Gregory Kucherov (LIFL), Laurent No\'e (LIFL), Mihkail Roytberg (LIFL) | A unifying framework for seed sensitivity and its application to subset
seeds | null | Journal of Bioinformatics and Computational Biology 4 (2006) 2, pp
553--569 | 10.1142/S0219720006001977 | null | cs.DS q-bio.QM | null | We propose a general approach to compute the seed sensitivity, that can be
applied to different definitions of seeds. It treats separately three
components of the seed sensitivity problem -- a set of target alignments, an
associated probability distribution, and a seed model -- that are specified by
distinct finite automata. The approach is then applied to a new concept of
subset seeds for which we propose an efficient automaton construction.
Experimental results confirm that sensitive subset seeds can be efficiently
designed using our approach, and can then be used in similarity search
producing better results than ordinary spaced seeds.
| [
{
"version": "v1",
"created": "Fri, 27 Jan 2006 18:53:01 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2006 07:05:58 GMT"
}
] | 2010-01-19T00:00:00 | [
[
"Kucherov",
"Gregory",
"",
"LIFL"
],
[
"Noé",
"Laurent",
"",
"LIFL"
],
[
"Roytberg",
"Mihkail",
"",
"LIFL"
]
] |
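The sensitivity computation that this framework generalizes can be shown for the classic special case of a spaced seed over an i.i.d. Bernoulli match/mismatch alignment; a small dynamic-programming sketch (the paper's automaton framework handles much more general seed models and distributions):

```python
def seed_sensitivity(seed, L, p):
    """Hit probability of a spaced seed (e.g. '110101') on an i.i.d. Bernoulli(p)
    0/1 alignment of length L. State: the last span-1 alignment bits; a window
    matching the seed's '1' positions absorbs into 'hit'. O(L * 2^span) time."""
    span = len(seed)
    care = [i for i, c in enumerate(seed) if c == "1"]
    states = {(): 1.0}
    hit = 0.0
    for _ in range(L):
        new = {}
        for s, pr in states.items():
            for bit, pb in ((1, p), (0, 1.0 - p)):
                w = (s + (bit,))[-span:]
                q = pr * pb
                if len(w) == span and all(w[i] == 1 for i in care):
                    hit += q            # the seed hits the window ending here
                else:
                    key = w[-(span - 1):] if span > 1 else ()
                    new[key] = new.get(key, 0.0) + q
        states = new
    return hit

print(seed_sensitivity("110101", L=64, p=0.7))
```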
cs/0601117 | Prashant Singh | Dhananjay D. Kulkarni, Shekhar Verma, Prashant | Finding Cliques of a Graph using Prime Numbers | 7 pages, 1 figure | null | null | null | cs.DS | null | This paper proposes a new algorithm for finding maximal cliques in simple
undirected graphs using the theory of prime numbers. A novel approach using
prime numbers is used to find cliques; the paper ends with a discussion of the
algorithm.
| [
{
"version": "v1",
"created": "Fri, 27 Jan 2006 20:11:14 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Jan 2007 22:48:59 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Kulkarni",
"Dhananjay D.",
""
],
[
"Verma",
"Shekhar",
""
],
[
"Prashant",
"",
""
]
] |
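One way to read the prime-number idea (a hedged reconstruction, not necessarily the paper's exact algorithm): label vertices with distinct primes, so that squarefree products turn set containment into divisibility, and a set S is a clique iff the product over S divides every member's closed-neighborhood product:

```python
def first_primes(n):
    ps, c = [], 2
    while len(ps) < n:
        if all(c % q for q in ps):
            ps.append(c)
        c += 1
    return ps

def clique_tester(n, edges):
    """Label vertex v with a distinct prime; nbr[v] = product over v's closed
    neighborhood. S is a clique iff prod(label[u] for u in S) divides nbr[v]
    for every v in S (products are squarefree, so divisibility = containment)."""
    label = first_primes(n)
    nbr = list(label)
    for u, v in edges:
        nbr[u] *= label[v]
        nbr[v] *= label[u]
    def is_clique(S):
        m = 1
        for v in S:
            m *= label[v]
        return all(nbr[v] % m == 0 for v in S)
    return is_clique

is_clique = clique_tester(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
print(is_clique({0, 1, 2}), is_clique({0, 1, 3}))   # True, False
```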
cs/0601127 | Manor Mendel | Amos Fiat, Manor Mendel | Truly Online Paging with Locality of Reference | 37 pages. Preliminary version appeared in FOCS '97 | 38th Annual Symposium on Foundations of Computer Science (FOCS
'97), 1997, pp. 326 | 10.1109/SFCS.1997.646121 | null | cs.DS | null | The competitive analysis fails to model locality of reference in the online
paging problem. To deal with it, Borodin et al. introduced the access graph
model, which attempts to capture the locality of reference. However, the access
graph model has a number of troubling aspects. The access graph has to be known
in advance to the paging algorithm and the memory required to represent the
access graph itself may be very large.
In this paper we present truly online strongly competitive paging algorithms
in the access graph model that do not have any prior information on the access
sequence. We present both deterministic and randomized algorithms. The
algorithms need only O(k log n) bits of memory, where k is the number of page
slots available and n is the size of the virtual address space. I.e.,
asymptotically no more memory than needed to store the virtual address
translation table.
We also observe that our algorithms adapt themselves to temporal changes in
the locality of reference. We model temporal changes in the locality of
reference by extending the access graph model to the so called extended access
graph model, in which many vertices of the graph can correspond to the same
virtual page. We define a measure for the rate of change in the locality of
reference in G denoted by Delta(G). We then show our algorithms remain strongly
competitive as long as Delta(G) >= (1+ epsilon)k, and no truly online algorithm
can be strongly competitive on a class of extended access graphs that includes
all graphs G with Delta(G) >= k- o(k).
| [
{
"version": "v1",
"created": "Mon, 30 Jan 2006 20:58:23 GMT"
}
] | 2009-03-23T00:00:00 | [
[
"Fiat",
"Amos",
""
],
[
"Mendel",
"Manor",
""
]
] |
cs/0602002 | Marko Antonio Rodriguez | Marko A. Rodriguez and Johan Bollen | Simulating Network Influence Algorithms Using Particle-Swarms: PageRank
and PageRank-Priors | 17 pages, currently in peer-review | null | null | null | cs.DS | null | A particle-swarm is a set of indivisible processing elements that traverse a
network in order to perform a distributed function. This paper will describe a
particular implementation of a particle-swarm that can simulate the behavior of
the popular PageRank algorithm in both its {\it global-rank} and {\it
relative-rank} incarnations. PageRank is compared against the particle-swarm
method on artificially generated scale-free networks of 1,000 nodes constructed
using a common gamma value, $\gamma = 2.5$. The running time of the
particle-swarm algorithm is $O(|P|+|P|t)$ where $|P|$ is the size of the
particle population and $t$ is the number of particle propagation iterations.
The particle-swarm method is shown to be useful due to its ease of extension
and running time.
| [
{
"version": "v1",
"created": "Tue, 31 Jan 2006 23:24:42 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Rodriguez",
"Marko A.",
""
],
[
"Bollen",
"Johan",
""
]
] |
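A minimal particle-swarm sketch in the spirit of the global-rank case above: particles perform damped random walks, and normalized visit counts approximate PageRank. Parameter values are illustrative assumptions; biasing the teleport step toward a root set would give the relative-rank (PageRank-Priors) variant.

```python
import random
from collections import Counter

def particle_pagerank(adj, particles=200, steps=100, damping=0.85, seed=0):
    """Monte Carlo PageRank via a particle swarm: each particle follows a
    random out-link with probability `damping` and teleports otherwise;
    normalized visit counts approximate the global rank."""
    random.seed(seed)
    nodes = list(adj)
    visits = Counter()
    for _ in range(particles):
        v = random.choice(nodes)
        for _ in range(steps):
            visits[v] += 1
            if adj[v] and random.random() < damping:
                v = random.choice(adj[v])
            else:
                v = random.choice(nodes)     # teleport (also handles sinks)
    total = sum(visits.values())
    return {u: visits[u] / total for u in nodes}

adj = {"a": ["b"], "b": ["c"], "c": ["a", "b"], "d": ["a"]}
print(particle_pagerank(adj))
```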
cs/0602016 | Christoph D\"urr | Christoph Durr and Mathilde Hurand | Finding total unimodularity in optimization problems solved by linear
programs | null | null | null | null | cs.DS cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A popular approach in combinatorial optimization is to model problems as
integer linear programs. Ideally, the relaxed linear program would have only
integer solutions, which happens for instance when the constraint matrix is
totally unimodular. Still, sometimes it is possible to build an integer
solution with the same cost from the fractional solution. Examples are two
scheduling problems and the single disk prefetching/caching problem. We show
that problems such as the three previously mentioned can be separated into two
subproblems: (1) finding an optimal feasible set of slots, and (2) assigning
the jobs or pages to the slots. It is straightforward to show that the latter
can be solved greedily. We are able to solve the former with a totally
unimodular linear program, from which we obtain simple combinatorial algorithms
with improved worst case running time.
| [
{
"version": "v1",
"created": "Mon, 6 Feb 2006 09:09:03 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2006 13:58:41 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Apr 2009 06:25:26 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Durr",
"Christoph",
""
],
[
"Hurand",
"Mathilde",
""
]
] |
cs/0602029 | Kevin Wortman | John Augustine and David Eppstein and Kevin A. Wortman | Approximate Weighted Farthest Neighbors and Minimum Dilation Stars | 12 pages, 2 figures | null | null | null | cs.CG cs.DS | null | We provide an efficient reduction from the problem of querying approximate
multiplicatively weighted farthest neighbors in a metric space to the
unweighted problem. Combining our techniques with core-sets for approximate
unweighted farthest neighbors, we show how to find (1+epsilon)-approximate
farthest neighbors in time O(log n) per query in D-dimensional Euclidean space
for any constants D and epsilon. As an application, we find an O(n log n)
expected time algorithm for choosing the center of a star topology network
connecting a given set of points, so as to approximately minimize the maximum
dilation between any pair of points.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2006 21:09:11 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Augustine",
"John",
""
],
[
"Eppstein",
"David",
""
],
[
"Wortman",
"Kevin A.",
""
]
] |
cs/0602041 | Radu Mihaescu | Radu Mihaescu, Dan Levy, Lior Pachter | Why neighbor-joining works | Revision 2 | null | null | null | cs.DS cs.DM | null | We show that the neighbor-joining algorithm is a robust quartet method for
constructing trees from distances. This leads to a new performance guarantee
that contains Atteson's optimal radius bound as a special case and explains
many cases where neighbor-joining is successful even when Atteson's criterion
is not satisfied. We also provide a proof for Atteson's conjecture on the
optimal edge radius of the neighbor-joining algorithm. The strong performance
guarantees we provide also hold for the quadratic time fast neighbor-joining
algorithm, thus providing a theoretical basis for inferring very large
phylogenies with neighbor-joining.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2006 20:22:59 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Dec 2006 21:28:24 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Jun 2007 10:36:20 GMT"
}
] | 2007-06-17T00:00:00 | [
[
"Mihaescu",
"Radu",
""
],
[
"Levy",
"Dan",
""
],
[
"Pachter",
"Lior",
""
]
] |
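For reference, plain neighbor-joining is compact enough to sketch directly; this is the textbook O(n^3) procedure (Saitou-Nei), not the fast variant the abstract refers to:

```python
import numpy as np

def neighbor_joining(D, names):
    """Plain neighbor-joining: returns (child, parent, branch length) edges of
    the inferred unrooted tree from a symmetric distance matrix D."""
    D = np.array(D, dtype=float)
    nodes = list(names)
    edges, fresh = [], 0
    while len(nodes) > 2:
        n = len(D)
        r = D.sum(axis=1)
        Q = (n - 2) * D - r[:, None] - r[None, :]
        np.fill_diagonal(Q, np.inf)
        i, j = map(int, np.unravel_index(np.argmin(Q), Q.shape))
        li = 0.5 * D[i, j] + (r[i] - r[j]) / (2 * (n - 2))
        lj = D[i, j] - li
        new = f"u{fresh}"
        fresh += 1
        edges += [(nodes[i], new, li), (nodes[j], new, lj)]
        d_new = 0.5 * (D[i] + D[j] - D[i, j])      # distances to the new node
        keep = [k for k in range(n) if k not in (i, j)]
        D = np.block([[D[np.ix_(keep, keep)], d_new[keep, None]],
                      [d_new[keep][None, :], np.zeros((1, 1))]])
        nodes = [nodes[k] for k in keep] + [new]
    edges.append((nodes[0], nodes[1], D[0, 1]))
    return edges

# Additive toy matrix: recovers branch lengths a:2, b:3, c:4, d:4, internal 3.
D = [[0, 5, 9, 9], [5, 0, 10, 10], [9, 10, 0, 8], [9, 10, 8, 0]]
for e in neighbor_joining(D, "abcd"):
    print(e)
```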
cs/0602052 | Grigoriev Evgeniy | Evgeniy Grigoriev | The OverRelational Manifesto | 34 pages | null | null | null | cs.DB cs.DS | null | The OverRelational Manifesto (below ORM) proposes a possible approach to
creation of data storage systems of the next generation. ORM starts from the
requirement that information in a relational database is represented by a set
of relation values. Accordingly, it is assumed that the information about any
entity of an enterprise must also be represented as a set of relation values
(the ORM main requirement). A system of types is introduced, which allows one
to fulfill the main requirement. The data are represented in the form of
complex objects, and the state of any object is described as a set of relation
values. We emphasize that the types describing the objects are encapsulated,
inherited, and polymorphic. Then, it is shown that the data represented as a
set of such objects may also be represented as a set of relational values
defined on the set of scalar domains (dual data representation). In the general
case, any class is associated with a set of relation variables (R-variables)
each one containing some data about all objects of this class existing in the
system. One of the key points is the fact that the usage of complex (from the
user's viewpoint) refined names of R-variables and their attributes makes it
possible to preserve the semantics of complex data structures represented in
the form of a set of relation values. The most important part of the data
storage system created on the approach proposed is an object-oriented
translator operating over a relational DBMS. The expressiveness of such a
system is comparable with that of OO programming languages.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2006 12:19:08 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2006 10:28:24 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Mar 2006 09:57:07 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Grigoriev",
"Evgeniy",
""
]
] |
cs/0602053 | Thomas Hayes | Varsha Dani and Thomas P. Hayes | How to Beat the Adaptive Multi-Armed Bandit | null | null | null | null | cs.DS cs.LG | null | The multi-armed bandit is a concise model for the problem of iterated
decision-making under uncertainty. In each round, a gambler must pull one of
$K$ arms of a slot machine, without any foreknowledge of their payouts, except
that they are uniformly bounded. A standard objective is to minimize the
gambler's regret, defined as the gambler's total payout minus the largest
payout which would have been achieved by any fixed arm, in hindsight. Note that
the gambler is only told the payout for the arm actually chosen, not for the
unchosen arms.
Almost all previous work on this problem assumed the payouts to be
non-adaptive, in the sense that the distribution of the payout of arm $j$ in
round $i$ is completely independent of the choices made by the gambler on
rounds $1, \dots, i-1$. In the more general model of adaptive payouts, the
payouts in round $i$ may depend arbitrarily on the history of past choices made
by the algorithm.
We present a new algorithm for this problem, and prove nearly optimal
guarantees for the regret against both non-adaptive and adaptive adversaries.
After $T$ rounds, our algorithm has regret $O(\sqrt{T})$ with high probability
(the tail probability decays exponentially). This dependence on $T$ is best
possible, and matches that of the full-information version of the problem, in
which the gambler is told the payouts for all $K$ arms after each round.
Previously, even for non-adaptive payouts, the best high-probability bounds
known were $O(T^{2/3})$, due to Auer, Cesa-Bianchi, Freund and Schapire. The
expected regret of their algorithm is $O(T^{1/2})$ for non-adaptive payouts, but
as we show, $\Omega(T^{2/3})$ for adaptive payouts.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2006 23:57:01 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Dani",
"Varsha",
""
],
[
"Hayes",
"Thomas P.",
""
]
] |
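For context, a sketch of the Exp3 baseline of Auer, Cesa-Bianchi, Freund and Schapire mentioned above, whose high-probability guarantees the paper improves; this is not the paper's new algorithm, and the toy payout model is an assumption:

```python
import math
import random

def exp3(num_arms, payout, T, gamma=0.1, seed=0):
    """Exp3: exponential weights over arms with importance-weighted reward
    estimates; payout(t, arm) must return a value in [0, 1]. For very long
    runs the weights should be renormalized periodically to avoid overflow."""
    random.seed(seed)
    w = [1.0] * num_arms
    total = 0.0
    for t in range(T):
        s = sum(w)
        probs = [(1 - gamma) * wi / s + gamma / num_arms for wi in w]
        arm = random.choices(range(num_arms), weights=probs)[0]
        x = payout(t, arm)
        total += x
        w[arm] *= math.exp(gamma * x / (probs[arm] * num_arms))
    return total

# Toy bandit (assumed payouts): arm 2 is best on average.
means = [0.3, 0.5, 0.7]
payout = lambda t, a: 1.0 if random.random() < means[a] else 0.0
print(exp3(3, payout, T=2000))
```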
cs/0602057 | Christopher Homan | Melanie J. Agnew and Christopher M. Homan | Plane Decompositions as Tools for Approximation | null | null | null | null | cs.DS | null | Tree decompositions were developed by Robertson and Seymour. Since then
algorithms have been developed to solve intractable problems efficiently for
graphs of bounded treewidth. In this paper we extend tree decompositions to
allow cycles to exist in the decomposition graph; we call these new
decompositions plane decompositions because we require that the decomposition
graph be planar. First, we give some background material about tree
decompositions and an overview of algorithms both for decompositions and for
approximations of planar graphs. Then, we give our plane decomposition
definition and an algorithm that uses this decomposition to approximate the
size of the maximum independent set of the underlying graph in polynomial time.
| [
{
"version": "v1",
"created": "Wed, 15 Feb 2006 19:09:39 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Agnew",
"Melanie J.",
""
],
[
"Homan",
"Christopher M.",
""
]
] |
cs/0602067 | Michael Baer | Michael B. Baer | Renyi to Renyi -- Source Coding under Siege | 5 pages, 1 figure, accepted to ISIT 2006 | null | null | null | cs.IT cs.DS math.IT | null | A novel lossless source coding paradigm applies to problems of unreliable
lossless channels with low bit rates, in which a vital message needs to be
transmitted prior to termination of communications. This paradigm can be
applied to Alfred Renyi's secondhand account of an ancient siege in which a spy
was sent to scout the enemy but was captured. After escaping, the spy returned
to his base in no condition to speak and unable to write. His commander asked
him questions that he could answer by nodding or shaking his head, and the
fortress was defended with this information. Renyi told this story with
reference to prefix coding, but maximizing probability of survival in the siege
scenario is distinct from yet related to the traditional source coding
objective of minimizing expected codeword length. Rather than finding a code
minimizing expected codeword length $\sum_{i=1}^n p(i) l(i)$, the siege problem
involves maximizing $\sum_{i=1}^n p(i) \theta^{l(i)}$ for a known $\theta \in
(0,1)$. When there are no restrictions on codewords, this problem can be solved
using a known generalization of Huffman coding. The optimal solution has coding
bounds which are functions of Renyi entropy; in addition to known bounds, new
bounds are derived here. The alphabetically constrained version of this problem
has applications in search trees and diagnostic testing. A novel dynamic
programming algorithm -- based upon the oldest known algorithm for the
traditional alphabetic problem -- optimizes this problem in $O(n^3)$ time and
$O(n^2)$ space, whereas two novel approximation algorithms can find a
suboptimal solution faster: one in linear time, the other in $O(n \log n)$.
Coding bounds for the alphabetic version of this problem are also presented.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2006 23:40:26 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2006 20:32:04 GMT"
}
] | 2007-07-16T00:00:00 | [
[
"Baer",
"Michael B.",
""
]
] |
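The "known generalization of Huffman coding" for the unrestricted case can be sketched with its merge rule, combining the two smallest weights w1, w2 into theta*(w1+w2); this is offered as a plausible reading for theta in (0,1), not as a statement of the paper's alphabetic or approximation algorithms:

```python
import heapq
from itertools import count

def exponential_huffman(p, theta):
    """Codeword lengths for the objective sum_i p_i * theta**l_i, theta in (0,1),
    via the Huffman-like rule: repeatedly merge the two smallest weights
    w1, w2 into theta * (w1 + w2). Returns one length per symbol."""
    tie = count()                     # tie-breaker so heap tuples never compare lists
    heap = [(w, next(tie), [i]) for i, w in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1           # everything under the merge goes one level deeper
        heapq.heappush(heap, (theta * (w1 + w2), next(tie), s1 + s2))
    return lengths

print(exponential_huffman([0.5, 0.25, 0.15, 0.1], theta=0.8))   # e.g. [1, 2, 3, 3]
```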
cs/0602069 | Vicky Choi | Vicky Choi | Faster Algorithms for Constructing a Concept (Galois) Lattice | 15 pages, 3 figures | null | null | null | cs.DM cs.DS | null | In this paper, we present a fast algorithm for constructing a concept
(Galois) lattice of a binary relation, including computing all concepts and
their lattice order. We also present two efficient variants of the algorithm,
one for computing all concepts only, and one for constructing a frequent closed
itemset lattice. The running time of our algorithms depends on the lattice
structure and is faster than all other existing algorithms for these problems.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2006 19:47:56 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2006 19:03:46 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Choi",
"Vicky",
""
]
] |
cs/0602073 | Tobias Friedrich | Deepak Ajwani, Tobias Friedrich and Ulrich Meyer | An O(n^{2.75}) algorithm for online topological ordering | 20 pages, long version of SWAT'06 paper | null | null | null | cs.DS | null | We present a simple algorithm which maintains the topological order of a
directed acyclic graph with n nodes under an online edge insertion sequence in
O(n^{2.75}) time, independent of the number of edges m inserted. For dense
DAGs, this is an improvement over the previous best result of O(min(m^{3/2}
log(n), m^{3/2} + n^2 log(n))) by Katriel and Bodlaender. We also provide an
empirical comparison of our algorithm with other algorithms for online
topological sorting. Our implementation outperforms them on certain hard
instances while it is still competitive on random edge insertion sequences
leading to complete DAGs.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2006 10:32:15 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2006 16:31:50 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Ajwani",
"Deepak",
""
],
[
"Friedrich",
"Tobias",
""
],
[
"Meyer",
"Ulrich",
""
]
] |
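For contrast with the bound above, a compact sketch of a simple local-reordering baseline (in the style of Pearce and Kelly), the kind of algorithm such experiments compare against; efficiency is not tuned here:

```python
class OnlineTopo:
    """Maintain a topological order under edge insertions by locally reordering
    the affected region (a Pearce-Kelly style baseline, not the paper's method)."""
    def __init__(self, n):
        self.out = [[] for _ in range(n)]
        self.inn = [[] for _ in range(n)]
        self.ord = list(range(n))               # ord[v] = position of v

    def insert(self, u, v):
        self.out[u].append(v)
        self.inn[v].append(u)
        if self.ord[u] < self.ord[v]:
            return                              # order still valid
        lb, ub = self.ord[v], self.ord[u]
        fwd = self._dfs(v, self.out, lb, ub)    # region nodes reachable from v
        if u in fwd:
            raise ValueError("edge would create a cycle")
        bwd = self._dfs(u, self.inn, lb, ub)    # region nodes reaching u
        # Reassign the union's positions: the 'bwd' block first, then 'fwd'.
        nodes = (sorted(bwd, key=self.ord.__getitem__) +
                 sorted(fwd, key=self.ord.__getitem__))
        for node, pos in zip(nodes, sorted(self.ord[x] for x in nodes)):
            self.ord[node] = pos

    def _dfs(self, start, adj, lb, ub):
        seen, stack = set(), [start]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            stack.extend(y for y in adj[x] if lb <= self.ord[y] <= ub)
        return seen

t = OnlineTopo(3)
for e in [(1, 0), (2, 1)]:          # both insertions violate the initial order
    t.insert(*e)
print(t.ord)                        # [2, 1, 0]: node 2 first, node 0 last
```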
cs/0602079 | Dumitru Mihai Ionescu Dr. | Dumitru Mihai Ionescu, Haidong Zhu | SISO APP Searches in Lattices with Tanner Graphs | 15 pages, 6 figures, 2 tables, uses IEEEtran.cls | IEEE Trans. Inf. Theory, pp. 2672-2688, vol. 58, May 2012 | 10.1109/TIT.2011.2178130 | null | cs.IT cs.DS math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An efficient, low-complexity, soft-output detector for general lattices is
presented, based on their Tanner graph (TG) representations. Closest-point
searches in lattices can be performed as non-binary belief propagation on
associated TGs; soft-information output is naturally generated in the process;
the algorithm requires no backtrack (cf. classic sphere decoding), and extracts
extrinsic information. A lattice's coding gain enables equivalence relations
between lattice points, which can be thereby partitioned in cosets. Total and
extrinsic a posteriori probabilities at the detector's output further enable
the use of soft detection information in iterative schemes. The algorithm is
illustrated via two scenarios that transmit a 32-point, uncoded
super-orthogonal (SO) constellation for multiple-input multiple-output (MIMO)
channels, carved from an 8-dimensional non-orthogonal lattice (a direct sum of
two 4-dimensional checkerboard lattices): it achieves maximum likelihood
performance in quasistatic fading; and, performs close to interference-free
transmission, and identically to list sphere decoding, in independent fading
with coordinate interleaving and iterative equalization and detection. The latter
scenario outperforms the former despite the absence of forward error correction
coding---because the inherent lattice coding gain allows for the refining of
extrinsic information. The lattice constellation is the same as the one
employed in the SO space-time trellis codes first introduced for 2-by-2 MIMO by
Ionescu et al., then independently by Jafarkhani and Seshadri. Complexity is
log-linear in lattice dimensionality, vs. cubic in sphere decoders.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2006 03:28:46 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Sep 2011 21:45:19 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Sep 2011 17:43:48 GMT"
}
] | 2012-05-29T00:00:00 | [
[
"Ionescu",
"Dumitru Mihai",
""
],
[
"Zhu",
"Haidong",
""
]
] |
cs/0602085 | Michael Baer | Michael B. Baer | Twenty (or so) Questions: $D$-ary Length-Bounded Prefix Coding | 12 pages, 4 figures, extended version of cs/0701012 (accepted to ISIT
2007), formerly "Twenty (or so) Questions: $D$-ary Bounded-Length Huffman
Coding" | null | null | null | cs.IT cs.DS math.IT | null | Efficient optimal prefix coding has long been accomplished via the Huffman
algorithm. However, there is still room for improvement and exploration
regarding variants of the Huffman problem. Length-limited Huffman coding,
useful for many practical applications, is one such variant, for which codes
are restricted to the set of codes in which none of the $n$ codewords is longer
than a given length, $l_{\max}$. Binary length-limited coding can be done in
$O(n l_{\max})$ time and O(n) space via the widely used Package-Merge algorithm
and with even smaller asymptotic complexity using a lesser-known algorithm. In
this paper these algorithms are generalized without increasing complexity in
order to introduce a minimum codeword length constraint $l_{\min}$, to allow
for objective functions other than the minimization of expected codeword
length, and to be applicable to both binary and nonbinary codes; nonbinary
codes were previously addressed using a slower dynamic programming approach.
These extensions have various applications -- including fast decompression and
a modified version of the game ``Twenty Questions'' -- and can be used to solve
the problem of finding an optimal code with limited fringe, that is, finding
the best code among codes with a maximum difference between the longest and
shortest codewords. The previously proposed method for solving this problem was
nonpolynomial time, whereas solving this using the novel linear-space algorithm
requires only $O(n (l_{\max}- l_{\min})^2)$ time, or even less if $l_{\max}-
l_{\min}$ is not $O(\log n)$.
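As a concrete companion to the abstract above, the following is a minimal sketch of the classical binary Package-Merge algorithm it builds on (the generalizations to $l_{\min}$, other objectives, and nonbinary codes are not shown); the function name and example weights are illustrative.

```python
# Sketch of the classical binary Package-Merge algorithm (names are
# illustrative). Computes optimal codeword lengths subject to a length
# limit L; requires 2**L >= n for a prefix code to exist.
def package_merge(weights, L):
    n = len(weights)
    assert 2 ** L >= n, "no prefix code with n codewords fits in depth L"
    base = sorted((w, (i,)) for i, w in enumerate(weights))
    prev = []
    for _ in range(L):
        # package: pair up the previous level's cheapest items
        packages = [
            (prev[j][0] + prev[j + 1][0], prev[j][1] + prev[j + 1][1])
            for j in range(0, len(prev) - 1, 2)
        ]
        # merge: combine the packages with a fresh copy of the items
        prev = sorted(base + packages)
    lengths = [0] * n
    for _, symbols in prev[: 2 * n - 2]:   # buy the 2n-2 cheapest items
        for i in symbols:
            lengths[i] += 1                # each purchase deepens the symbol
    return lengths

# Example: 5 symbols, maximum codeword length 3.
print(package_merge([10, 6, 2, 1, 1], 3))  # [1, 3, 3, 3, 3]
```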
| [
{
"version": "v1",
"created": "Sat, 25 Feb 2006 19:09:11 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2006 01:20:41 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Apr 2006 05:39:00 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Jun 2007 19:47:12 GMT"
}
] | 2007-07-13T00:00:00 | [
[
"Baer",
"Michael B.",
""
]
] |
cs/0603012 | Nils Hebbinghaus | Benjamin Doerr, Nils Hebbinghaus, S\"oren Werth | Improved Bounds and Schemes for the Declustering Problem | 19 pages, 1 figure | null | null | null | cs.DM cs.DS | null | The declustering problem is to allocate given data on parallel working
storage devices in such a manner that typical requests find their data evenly
distributed on the devices. Using deep results from discrepancy theory, we
improve previous work of several authors concerning range queries to
higher-dimensional data. We give a declustering scheme with an additive error
of $O_d(\log^{d-1} M)$ independent of the data size, where $d$ is the
dimension, $M$ the number of storage devices and $d-1$ does not exceed the
smallest prime power in the canonical decomposition of $M$ into prime powers.
In particular, our schemes work for arbitrary $M$ in dimensions two and three.
For general $d$, they work for all $M\geq d-1$ that are powers of two.
Concerning lower bounds, we show that a recent proof of a
$\Omega_d(\log^{\frac{d-1}{2}} M)$ bound contains an error. We close the gap in
the proof and thus establish the bound.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2006 15:51:09 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Doerr",
"Benjamin",
""
],
[
"Hebbinghaus",
"Nils",
""
],
[
"Werth",
"Sören",
""
]
] |
cs/0603026 | Valentin Polishchuk | Esther M. Arkin (1), Michael A. Bender (2), Joseph S. B. Mitchell (1),
Valentin Polishchuk (1) ((1) Department of Applied Mathematics and
Statistics, Stony Brook University, (2) Department of Computer Science, Stony
Brook University) | The Snowblower Problem | 19 pages, 10 figures, 1 table. Submitted to WAFR 2006 | null | null | null | cs.DS cs.CC cs.RO | null | We introduce the snowblower problem (SBP), a new optimization problem that is
closely related to milling problems and to some material-handling problems. The
objective in the SBP is to compute a short tour for the snowblower to follow to
remove all the snow from a domain (driveway, sidewalk, etc.). When a snowblower
passes over each region along the tour, it displaces snow into a nearby region.
The constraint is that if the snow is piled too high, then the snowblower
cannot clear the pile.
We give an algorithmic study of the SBP. We show that in general, the problem
is NP-complete, and we present polynomial-time approximation algorithms for
removing snow under various assumptions about the operation of the snowblower.
Most commercially-available snowblowers allow the user to control the direction
in which the snow is thrown. We differentiate between the cases in which the
snow can be thrown in any direction, in any direction except backwards, and
only to the right. For all cases, we give constant-factor approximation
algorithms; the constants increase as the throw direction becomes more
restricted.
Our results are also applicable to robotic vacuuming (or lawnmowing) with
bounded capacity dust bin and to some versions of material-handling problems,
in which the goal is to rearrange cartons on the floor of a warehouse.
| [
{
"version": "v1",
"created": "Tue, 7 Mar 2006 20:35:48 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Arkin",
"Esther M.",
""
],
[
"Bender",
"Michael A.",
""
],
[
"Mitchell",
"Joseph S. B.",
""
],
[
"Polishchuk",
"Valentin",
""
]
] |
cs/0603043 | Mihai Patrascu | Mihai Patrascu and Mikkel Thorup | Time-Space Trade-Offs for Predecessor Search | 29 pages. Full version (preliminary) of a paper appearing in STOC'06 | null | null | null | cs.CC cs.DS | null | We develop a new technique for proving cell-probe lower bounds for static
data structures. Previous lower bounds used a reduction to communication games,
which was known not to be tight by counting arguments. We give the first lower
bound for an explicit problem which breaks this communication complexity
barrier. In addition, our bounds give the first separation between polynomial
and near linear space. Such a separation is inherently impossible by
communication complexity.
Using our lower bound technique and new upper bound constructions, we obtain
tight bounds for searching predecessors among a static set of integers. Given a
set Y of n integers of l bits each, the goal is to efficiently find
predecessor(x) = max{y in Y | y <= x}, by representing Y on a RAM using space
S.
In external memory, it follows that the optimal strategy is to use either
standard B-trees, or a RAM algorithm ignoring the larger block size. In the
important case of l = c*lg n, for c>1 (i.e. polynomial universes), and near
linear space (such as S = n*poly(lg n)), the optimal search time is Theta(lg
l). Thus, our lower bound implies the surprising conclusion that van Emde Boas'
classic data structure from [FOCS'75] is optimal in this case. Note that for
space n^{1+eps}, a running time of O(lg l / lglg l) was given by Beame and Fich
[STOC'99].
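For contrast with the Theta(lg l) bound discussed above, here is a baseline predecessor structure (not the paper's construction, which recurses on the word length l): a static sorted array answering queries in O(lg n) comparisons; all names are illustrative.

```python
import bisect

# Baseline for contrast, not the paper's data structure: a static sorted
# array answers predecessor(x) = max{y in Y | y <= x} in O(lg n) probes,
# whereas van Emde Boas-style recursion on the word length l achieves
# O(lg l) for polynomial universes.
class SortedArrayPredecessor:
    def __init__(self, ys):
        self.ys = sorted(ys)

    def predecessor(self, x):
        k = bisect.bisect_right(self.ys, x)   # number of elements <= x
        return self.ys[k - 1] if k else None  # None: no element <= x

p = SortedArrayPredecessor([3, 9, 14, 27])
print(p.predecessor(10), p.predecessor(2))   # 9 None
```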
| [
{
"version": "v1",
"created": "Fri, 10 Mar 2006 14:50:20 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Patrascu",
"Mihai",
""
],
[
"Thorup",
"Mikkel",
""
]
] |
cs/0603048 | Vincent Limouzy | Binh Minh Bui Xuan (LIRMM), Michel Habib (LIAFA), Vincent Limouzy
(LIAFA), Fabien De Montgolfier (LIAFA) | Homogeneity vs. Adjacency: generalising some graph decomposition
algorithms | Submitted to WG 2006 | Graph-Theoretic Concepts in Computer Science Springer (Ed.)
(22/06/2006) 278-288 | 10.1007/11917496_25 | null | cs.DS | null | In this paper, a new general decomposition theory inspired by modular graph
decomposition is presented. Our main result shows that, within this general
theory, most of the nice algorithmic tools developed for modular decomposition
are still efficient. This theory not only unifies the usual modular
decomposition generalisations such as modular decomposition of directed graphs
or decomposition of 2-structures, but also star cutsets and bimodular
decomposition. Our general framework provides a decomposition algorithm which
improves the best known algorithms for bimodular decomposition.
| [
{
"version": "v1",
"created": "Mon, 13 Mar 2006 09:48:49 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Xuan",
"Binh Minh Bui",
"",
"LIRMM"
],
[
"Habib",
"Michel",
"",
"LIAFA"
],
[
"Limouzy",
"Vincent",
"",
"LIAFA"
],
[
"De Montgolfier",
"Fabien",
"",
"LIAFA"
]
] |
cs/0603050 | Irene Guessarian | Patrick Cegielski (LACL), Irene Guessarian (LIAFA), Yuri Matiyasevich
(PDMI) | Multiple serial episode matching | 12 | CSIT05 (2005) 26-38 | 10.1016/j.ipl.2006.02.008 | null | cs.DS | null | In a previous paper we generalized the Knuth-Morris-Pratt (KMP) pattern
matching algorithm and defined a non-conventional kind of RAM, the MP--RAMs
(RAMs equipped with extra operations), and designed an $O(n)$ on-line algorithm
for solving the serial episode matching problem on MP--RAMs when there is only
one single episode. We here give two extensions of this algorithm to the case
when we search for several patterns simultaneously and compare them. More
precisely, given $q+1$ strings (a text $t$ of length $n$ and $q$ patterns
$m_1,\ldots,m_q$) and a natural number $w$, the {\em multiple serial episode
matching problem} consists in finding the number of size-$w$ windows of text
$t$ which contain the patterns $m_1,\ldots,m_q$ as subsequences, i.e., for each
$m_i$, if $m_i=p_1,\ldots,p_k$, the letters $p_1,\ldots,p_k$ occur in the
window, in the same order as in $m_i$, but not necessarily consecutively (they
may be interleaved with other letters). The main contribution is an algorithm
solving this problem on-line in time $O(nq)$.
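For intuition, a sketch of one standard dynamic program for a single pattern (illustrative, not the paper's MP-RAM algorithm): it runs in O(nk) time for a pattern of length k, so over q patterns it is O(nqk); the bit-level MP-RAM operations are what remove the factor k.

```python
# Illustrative O(n*k) dynamic program per pattern: last[p] is the largest
# start s such that m[0:p] is a subsequence of t[s:i+1] after scanning
# t[0..i]; a window counts iff a full match starts inside it.
def count_windows(t, m, w):
    n, k = len(t), len(m)
    last = [-1] * (k + 1)
    last[0] = 0                      # empty pattern: any start works
    count = 0
    for i, c in enumerate(t):
        for p in range(k, 0, -1):    # descending: use pre-update values
            if m[p - 1] == c and last[p - 1] > -1:
                last[p] = max(last[p], last[p - 1])
        last[0] = i + 1              # empty pattern fits the empty suffix
        if i >= w - 1 and last[k] >= i - w + 1:
            count += 1               # window [i-w+1, i] contains m
    return count

print(count_windows("abcab", "ab", 3))  # windows "abc", "bca", "cab" -> 2
```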
| [
{
"version": "v1",
"created": "Mon, 13 Mar 2006 11:03:34 GMT"
}
] | 2021-10-26T00:00:00 | [
[
"Cegielski",
"Patrick",
"",
"LACL"
],
[
"Guessarian",
"Irene",
"",
"LIAFA"
],
[
"Matiyasevich",
"Yuri",
"",
"PDMI"
]
] |
cs/0603053 | Irene Guessarian | A. Ait-Bouziad (LIAFA), Irene Guessarian (LIAFA), L. Vieille (NCM) | Automatic generation of simplified weakest preconditions for integrity
constraint verification | null | null | null | null | cs.DS cs.DB | null | Given a constraint $c$ assumed to hold on a database $B$ and an update $u$ to
be performed on $B$, we address the following question: will $c$ still hold
after $u$ is performed? When $B$ is a relational database, we define a
confluent terminating rewriting system which, starting from $c$ and $u$,
automatically derives a simplified weakest precondition $wp(c,u)$ such that,
whenever $B$ satisfies $wp(c,u)$, then the updated database $u(B)$ will satisfy
$c$, and moreover $wp(c,u)$ is simplified in the sense that its computation
depends only upon the instances of $c$ that may be modified by the update. We
then extend the definition of a simplified $wp(c,u)$ to the case of deductive
databases; we prove it using fixpoint induction.
| [
{
"version": "v1",
"created": "Tue, 14 Mar 2006 14:30:10 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"-Bouziad",
"A. Ai T",
"",
"LIAFA"
],
[
"Guessarian",
"Irene",
"",
"LIAFA"
],
[
"Vieille",
"L.",
"",
"NCM"
]
] |
cs/0603077 | Bryan Ford | Bryan Ford | Packrat Parsing: Simple, Powerful, Lazy, Linear Time | 12 pages, 5 figures | International Conference on Functional Programming (ICFP '02),
October 2002, Pittsburgh, PA | null | null | cs.DS cs.CC cs.PL | null | Packrat parsing is a novel technique for implementing parsers in a lazy
functional programming language. A packrat parser provides the power and
flexibility of top-down parsing with backtracking and unlimited lookahead, but
nevertheless guarantees linear parse time. Any language defined by an LL(k) or
LR(k) grammar can be recognized by a packrat parser, in addition to many
languages that conventional linear-time algorithms do not support. This
additional power simplifies the handling of common syntactic idioms such as the
widespread but troublesome longest-match rule, enables the use of sophisticated
disambiguation strategies such as syntactic and semantic predicates, provides
better grammar composition properties, and allows lexical analysis to be
integrated seamlessly into parsing. Yet despite its power, packrat parsing
shares the same simplicity and elegance as recursive descent parsing; in fact
converting a backtracking recursive descent parser into a linear-time packrat
parser often involves only a fairly straightforward structural change. This
paper describes packrat parsing informally with emphasis on its use in
practical applications, and explores its advantages and disadvantages with
respect to the more conventional alternatives.
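A toy illustration of the idea, assuming nothing beyond the abstract: memoizing every (rule, position) pair makes backtracking recursive descent run in linear time. The grammar and all names are invented for the example.

```python
from functools import lru_cache

# Minimal packrat parser for:  Expr <- Atom ('+' Atom)*
#                              Atom <- [0-9]+ / '(' Expr ')'
# Each rule maps a position to (value, next_position) or None on failure;
# lru_cache supplies the packrat memoization.
def parse(text):
    @lru_cache(maxsize=None)
    def digits(i):
        j = i
        while j < len(text) and text[j].isdigit():
            j += 1
        return (int(text[i:j]), j) if j > i else None

    @lru_cache(maxsize=None)
    def atom(i):
        if i < len(text) and text[i] == '(':
            r = expr(i + 1)
            if r and r[1] < len(text) and text[r[1]] == ')':
                return (r[0], r[1] + 1)
            return None
        return digits(i)

    @lru_cache(maxsize=None)
    def expr(i):
        r = atom(i)
        if not r:
            return None
        value, i = r
        while i < len(text) and text[i] == '+':
            r = atom(i + 1)
            if not r:
                return None
            value, i = value + r[0], r[1]
        return (value, i)

    r = expr(0)
    return r[0] if r and r[1] == len(text) else None

print(parse("2+(3+4)+5"))   # 14
```

Real packrat parsers memoize every nonterminal and support the full PEG operators; the point here is only the memoization-on-position mechanism.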
| [
{
"version": "v1",
"created": "Sat, 18 Mar 2006 17:49:45 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Ford",
"Bryan",
""
]
] |
cs/0603084 | Eran Ofek | Uriel Feige and Eran Ofek | Random 3CNF formulas elude the Lovasz theta function | 14 pages | null | null | null | cs.CC cs.DS cs.LO | null | Let $\phi$ be a 3CNF formula with n variables and m clauses. A simple
nonconstructive argument shows that when m is sufficiently large compared to n,
most 3CNF formulas are not satisfiable. It is an open question whether there is
an efficient refutation algorithm that for most such formulas proves that they
are not satisfiable. A possible approach to refute a formula $\phi$ is: first,
translate it into a graph $G_{\phi}$ using a generic reduction from 3-SAT to
max-IS, then bound the maximum independent set of $G_{\phi}$ using the Lovasz
$\vartheta$ function. If the $\vartheta$ function returns a value $< m$, this
is a certificate for the unsatisfiability of $\phi$. We show that for random
formulas with $m < n^{3/2 -o(1)}$ clauses, the above approach fails, i.e. the
$\vartheta$ function is likely to return a value of m.
| [
{
"version": "v1",
"created": "Wed, 22 Mar 2006 10:30:36 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Feige",
"Uriel",
""
],
[
"Ofek",
"Eran",
""
]
] |
cs/0603089 | Lawrence Ioannou | Lawrence M. Ioannou and Benjamin C. Travaglione and Donny Cheung | Convex Separation from Optimization via Heuristics | null | null | null | null | cs.DS math.OC | null | Let $K$ be a full-dimensional convex subset of $\mathbb{R}^n$. We describe a
new polynomial-time Turing reduction from the weak separation problem for $K$
to the weak optimization problem for $K$ that is based on a geometric
heuristic. We compare our reduction, which relies on analytic centers, with the
standard, more general reduction.
| [
{
"version": "v1",
"created": "Wed, 22 Mar 2006 19:46:58 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Ioannou",
"Lawrence M.",
""
],
[
"Travaglione",
"Benjamin C.",
""
],
[
"Cheung",
"Donny",
""
]
] |
cs/0603122 | Irene Guessarian | Eug\'enie Foustoucos (MPLA), Irene Guessarian (LIAFA) | Complexity of Monadic inf-datalog. Application to temporal logic | null | Proc. 4th Panhellenic Logic Symposium (2003) 95-99 | null | null | cs.DS | null | In [11] we defined Inf-Datalog and characterized the fragments of Monadic
inf-Datalog that have the same expressive power as Modal Logic (resp. $CTL$,
alternation-free Modal $\mu$-calculus and Modal $\mu$-calculus). We study here
the time and space complexity of evaluation of Monadic inf-Datalog programs on
finite models. We deduce a new unified proof that model checking has 1. linear
data and program complexities (both in time and space) for $CTL$ and
alternation-free Modal $\mu$-calculus, and 2. linear-space (data and program)
complexities, linear-time program complexity and polynomial-time data
complexity for $L\mu_k$ (Modal $\mu$-calculus with fixed alternation-depth at
most $k$).
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2006 15:25:11 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Foustoucos",
"Eugénie",
"",
"MPLA"
],
[
"Guessarian",
"Irene",
"",
"LIAFA"
]
] |
cs/0604008 | Sandor P. Fekete | Esther M. Arkin and Herve Broennimann and Jeff Erickson and Sandor P.
Fekete and Christian Knauer and Jonathan Lenchner and Joseph S. B. Mitchell
and Kim Whittlesey | Minimum-Cost Coverage of Point Sets by Disks | 10 pages, 4 figures, Latex, to appear in ACM Symposium on
Computational Geometry 2006 | null | null | null | cs.DS cs.CG | null | We consider a class of geometric facility location problems in which the goal
is to determine a set X of disks given by their centers (t_j) and radii (r_j)
that cover a given set of demand points Y in the plane at the smallest possible
cost. We consider cost functions of the form sum_j f(r_j), where f(r)=r^alpha
is the cost of transmission to radius r. Special cases arise for alpha=1 (sum
of radii) and alpha=2 (total area); power consumption models in wireless
network design often use an exponent alpha>2. Different scenarios arise
according to possible restrictions on the transmission centers t_j, which may
be constrained to belong to a given discrete set or to lie on a line, etc. We
obtain several new results, including (a) exact and approximation algorithms
for selecting transmission points t_j on a given line in order to cover demand
points Y in the plane; (b) approximation algorithms (and an algebraic
intractability result) for selecting an optimal line on which to place
transmission points to cover Y; (c) a proof of NP-hardness for a discrete set
of transmission points in the plane and any fixed alpha>1; and (d) a
polynomial-time approximation scheme for the problem of computing a minimum
cost covering tour (MCCT), in which the total cost is a linear combination of
the transmission cost for the set of disks and the length of a tour/path that
connects the centers of the disks.
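To make the cost model concrete, here is a tiny exact solver for the discrete-center variant, which the paper proves NP-hard for fixed alpha > 1, so brute force over assignments is only for toy instances; all names and the instance are illustrative.

```python
from itertools import product

# Tiny exact solver for the discrete-center variant: every demand point
# is assigned to some candidate center; a center's radius is its farthest
# assigned point and its cost is radius**alpha. Any cover induces such an
# assignment, so minimizing over assignments is exact (toy sizes only).
def min_cover_cost(points, centers, alpha):
    def dist(p, c):
        return ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5

    best = float("inf")
    for assign in product(range(len(centers)), repeat=len(points)):
        radius = [0.0] * len(centers)
        for p, j in zip(points, assign):
            radius[j] = max(radius[j], dist(p, centers[j]))
        best = min(best, sum(r ** alpha for r in radius))
    return best

pts = [(0, 1), (2, 1), (9, 0)]
ctrs = [(1, 0), (9, 0)]
print(round(min_cover_cost(pts, ctrs, 2), 3))  # 2.0: r^2=2 at (1,0), 0 at (9,0)
```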
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2006 17:24:09 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Arkin",
"Esther M.",
""
],
[
"Broennimann",
"Herve",
""
],
[
"Erickson",
"Jeff",
""
],
[
"Fekete",
"Sandor P.",
""
],
[
"Knauer",
"Christian",
""
],
[
"Lenchner",
"Jonathan",
""
],
[
"Mitchell",
"Joseph S. B.",
""
],
[
"Whittlesey",
"Kim",
""
]
] |
cs/0604016 | Michael Baer | Michael B. Baer | On Conditional Branches in Optimal Search Trees | 8 pages, 5 figures (with 10 illustrations total), 1 table;
reformatted with some additional notes | null | null | null | cs.PF cs.DS cs.IR | null | Algorithms for efficiently finding optimal alphabetic decision trees -- such
as the Hu-Tucker algorithm -- are well established and commonly used. However,
such algorithms generally assume that the cost per decision is uniform and thus
independent of the outcome of the decision. The few algorithms without this
assumption instead use one cost if the decision outcome is ``less than'' and
another cost otherwise. In practice, neither assumption is accurate for
software optimized for today's microprocessors. Such software generally has one
cost for the more likely decision outcome and a greater cost -- often far
greater -- for the less likely decision outcome. This problem and
generalizations thereof are thus applicable to hard coding static decision tree
instances in software, e.g., for optimizing program bottlenecks or for
compiling switch statements. An O(n^3)-time O(n^2)-space dynamic programming
algorithm can solve this optimal binary decision tree problem, and this
approach has many generalizations that optimize for the behavior of processors
with predictive branch capabilities, both static and dynamic. Solutions to this
formulation are often faster in practice than ``optimal'' decision trees as
formulated in the literature. Different search paradigms can sometimes yield
even better performance.
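A hedged sketch of an interval dynamic program in the spirit of the O(n^3)-time algorithm mentioned (the paper's cost model, with static and dynamic prediction, is richer): each comparison charges c_fast on its statically predicted branch and c_slow on the other, and every node predicts its heavier side.

```python
from functools import lru_cache

# O(n^3)-time interval DP sketch: leaves 0..n-1 with access weights w;
# a node over leaves i..j-1 picks a split k and pays the cheaper of the
# two ways to assign {c_fast, c_slow} to its left/right branch weights.
def optimal_tree_cost(w, c_fast, c_slow):
    n = len(w)
    prefix = [0.0]
    for x in w:
        prefix.append(prefix[-1] + x)

    def W(i, j):                      # total weight of leaves i..j-1
        return prefix[j] - prefix[i]

    @lru_cache(maxsize=None)
    def cost(i, j):                   # optimal subtree over leaves i..j-1
        if j - i <= 1:
            return 0.0
        best = float("inf")
        for k in range(i + 1, j):     # left = i..k-1, right = k..j-1
            wl, wr = W(i, k), W(k, j)
            branch = min(c_fast * wl + c_slow * wr,
                         c_slow * wl + c_fast * wr)
            best = min(best, cost(i, k) + cost(k, j) + branch)
        return best

    return cost(0, n)

print(round(optimal_tree_cost([0.5, 0.1, 0.4], c_fast=1.0, c_slow=10.0), 3))  # 6.1
```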
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2006 00:54:44 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2006 05:07:31 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Baer",
"Michael B.",
""
]
] |
cs/0604020 | Bodo Manthey | Bodo Manthey | Approximation Algorithms for Restricted Cycle Covers Based on Cycle
Decompositions | This paper has been joint with "On Approximating Restricted Cycle
Covers" (cs.CC/0504038). Please refer to that paper. The paper "Approximation
Algorithms for Restricted Cycle Covers Based on Cycle Decompositions" is now
obsolete | null | null | null | cs.DS cs.CC cs.DM | null | A cycle cover of a graph is a set of cycles such that every vertex is part of
exactly one cycle. An L-cycle cover is a cycle cover in which the length of
every cycle is in the set L. The weight of a cycle cover of an edge-weighted
graph is the sum of the weights of its edges.
We come close to settling the complexity and approximability of computing
L-cycle covers. On the one hand, we show that for almost all L, computing
L-cycle covers of maximum weight in directed and undirected graphs is APX-hard
and NP-hard. Most of our hardness results hold even if the edge weights are
restricted to zero and one.
On the other hand, we show that the problem of computing L-cycle covers of
maximum weight can be approximated within a factor of 2 for undirected graphs
and within a factor of 8/3 in the case of directed graphs. This holds for
arbitrary sets L.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2006 13:53:25 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2006 13:01:24 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Jun 2006 07:32:39 GMT"
},
{
"version": "v4",
"created": "Fri, 15 Dec 2006 14:16:53 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Manthey",
"Bodo",
""
]
] |
cs/0604037 | Oren Weimann | Erik D. Demaine, Shay Mozes, Benjamin Rossman, Oren Weimann | An O(n^3)-Time Algorithm for Tree Edit Distance | 10 pages, 5 figures, 5 .tex files where TED.tex is the main one | ACM Transactions on Algorithms 6(1): (2009) | 10.1145/1644015.1644017 | null | cs.DS | null | The {\em edit distance} between two ordered trees with vertex labels is the
minimum cost of transforming one tree into the other by a sequence of
elementary operations consisting of deleting and relabeling existing nodes, as
well as inserting new nodes. In this paper, we present a worst-case
$O(n^3)$-time algorithm for this problem, improving the previous best
$O(n^3\log n)$-time algorithm~\cite{Klein}. Our result requires a novel
adaptive strategy for deciding how a dynamic program divides into subproblems
(which is interesting in its own right), together with a deeper understanding
of the previous algorithms for the problem. We also prove the optimality of our
algorithm among the family of \emph{decomposition strategy} algorithms--which
also includes the previous fastest algorithms--by tightening the known lower
bound of $\Omega(n^2\log^2 n)$~\cite{Touzet} to $\Omega(n^3)$, matching our
algorithm's running time. Furthermore, we obtain matching upper and lower
bounds of $\Theta(n m^2 (1 + \log \frac{n}{m}))$ when the two trees have
different sizes $m$ and~$n$, where $m < n$.
| [
{
"version": "v1",
"created": "Mon, 10 Apr 2006 00:39:11 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2006 21:06:17 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Apr 2006 06:19:35 GMT"
}
] | 2010-12-01T00:00:00 | [
[
"Demaine",
"Erik D.",
""
],
[
"Mozes",
"Shay",
""
],
[
"Rossman",
"Benjamin",
""
],
[
"Weimann",
"Oren",
""
]
] |
cs/0604045 | Sandor P. Fekete | Sandor P. Fekete and Joerg Schepers and Jan C. van der Veen | An exact algorithm for higher-dimensional orthogonal packing | 31 pages, 6 figures, 9 tables, to appear in Operations Research; full
and updated journal version of sketches that appeared as parts of an extended
abstract in ESA'97 | null | null | null | cs.DS | null | Higher-dimensional orthogonal packing problems have a wide range of practical
applications, including packing, cutting, and scheduling. Combining the use of
our data structure for characterizing feasible packings with our new classes of
lower bounds, and other heuristics, we develop a two-level tree search
algorithm for solving higher-dimensional packing problems to optimality.
Computational results are reported, including optimal solutions for all
two-dimensional test problems from the recent literature.
This is the third in a series of articles describing new approaches to
higher-dimensional packing; see cs.DS/0310032 and cs.DS/0402044.
| [
{
"version": "v1",
"created": "Tue, 11 Apr 2006 13:55:03 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Fekete",
"Sandor P.",
""
],
[
"Schepers",
"Joerg",
""
],
[
"van der Veen",
"Jan C.",
""
]
] |
cs/0604051 | Michael Brinkmeier | Michael Brinkmeier | Structural Alignments of pseudo-knotted RNA-molecules in polynomial time | 16 pages | null | null | null | cs.DS cs.CC cs.DM | null | An RNA molecule is structured on several layers. The primary and most obvious
structure is its sequence of bases, i.e. a word over the alphabet {A,C,G,U}.
The higher structure is a set of one-to-one base-pairings resulting in a
two-dimensional folding of the one-dimensional sequence. One speaks of a
secondary structure if these pairings do not cross and of a tertiary structure
otherwise.
Since the folding of the molecule is important for its function, the search
for related RNA molecules should not only be restricted to the primary
structure. It seems sensible to incorporate the higher structures in the
search. Based on this assumption and certain edit-operations a distance between
two arbitrary structures can be defined. It is known that the general
calculation of this measure is NP-complete \cite{zhang02similarity}. But for
some special cases polynomial algorithms are known. Using a new formal
description of secondary and tertiary structures, we extend the class of
structures for which the distance can be calculated in polynomial time. In
addition the presented algorithm may be used to approximate the edit-distance
between two arbitrary structures with a constant ratio.
| [
{
"version": "v1",
"created": "Wed, 12 Apr 2006 06:46:04 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Brinkmeier",
"Michael",
""
]
] |
cs/0604055 | Roman Vershynin | Roman Vershynin | Beyond Hirsch Conjecture: walks on random polytopes and smoothed
complexity of the simplex method | 39 pages, 3 figures. Final version. Parts of the argument are
reorganized to make the paper more transparent. Figures added. Small mistakes
and typos corrected | SIAM Journal on Computing 39 (2009), 646--678. Conference version
in: FOCS'06, 133--142 | null | null | cs.DS math.FA | null | The smoothed analysis of algorithms is concerned with the expected running
time of an algorithm under slight random perturbations of arbitrary inputs.
Spielman and Teng proved that the shadow-vertex simplex method has polynomial
smoothed complexity. On a slight random perturbation of an arbitrary linear
program, the simplex method finds the solution after a walk on polytope(s) with
expected length polynomial in the number of constraints n, the number of
variables d and the inverse standard deviation of the perturbation 1/sigma.
We show that the length of walk in the simplex method is actually
polylogarithmic in the number of constraints n. Spielman-Teng's bound on the
walk was O(n^{86} d^{55} sigma^{-30}), up to logarithmic factors. We improve
this to O(log^7 n (d^9 + d^3 \s^{-4})). This shows that the tight Hirsch
conjecture n-d on the length of walk on polytopes is not a limitation for the
smoothed Linear Programming. Random perturbations create short paths between
vertices.
We propose a randomized phase-I for solving arbitrary linear programs, which
is of independent interest. Instead of finding a vertex of a feasible set, we
add a vertex at random to the feasible set. This does not affect the solution
of the linear program with constant probability. This overcomes one of the
major difficulties of smoothed analysis of the simplex method -- one can now
statistically decouple the walk from the smoothed linear program. This yields a
much better reduction of the smoothed complexity to a geometric quantity -- the
size of planar sections of random polytopes. We also improve upon the known
estimates for that size, showing that it is polylogarithmic in the number of
vertices.
| [
{
"version": "v1",
"created": "Wed, 12 Apr 2006 22:36:59 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2006 13:15:07 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Apr 2008 23:13:47 GMT"
}
] | 2016-12-23T00:00:00 | [
[
"Vershynin",
"Roman",
""
]
] |
cs/0604058 | Yury Lifshits | Yury Lifshits | Solving Classical String Problems on Compressed Texts | 10 pages, 6 figures, submitted | null | null | null | cs.DS cs.CC | null | Here we study the complexity of string problems as a function of the size of
a program that generates input. We consider straight-line programs (SLP), since
all algorithms on SLP-generated strings could be applied to processing
LZ-compressed texts.
The main result is a new algorithm for pattern matching when both a text T
and a pattern P are presented by SLPs (so-called fully compressed pattern
matching problem). We show how to find a first occurrence, count all
occurrences, check whether any given position is an occurrence or not in time
O(n^2m). Here m,n are the sizes of straight-line programs generating
correspondingly P and T.
Then we present polynomial algorithms for computing fingerprint table and
compressed representation of all covers (for the first time) and for finding
periods of a given compressed string (our algorithm is faster than previously
known). On the other hand, we show that computing the Hamming distance between
two SLP-generated strings is NP- and coNP-hard.
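For readers unfamiliar with straight-line programs, here is a minimal sketch of the two primitives such algorithms lean on, derived-string lengths and random access without decompression (illustrative helpers, not the paper's matching algorithm).

```python
# Minimal SLP utilities: a rule is either a terminal character or a pair
# of symbols. Lengths are computed bottom-up once; after that, any
# position of the (possibly exponentially long) derived string can be
# read in O(depth) time without expanding it.
def slp_lengths(rules, order):
    length = {}
    for s in order:                     # 'order' lists symbols bottom-up
        r = rules[s]
        length[s] = 1 if isinstance(r, str) else length[r[0]] + length[r[1]]
    return length

def char_at(rules, length, sym, i):
    r = rules[sym]
    while not isinstance(r, str):
        a, b = r
        if i < length[a]:
            sym = a                     # position falls in the left part
        else:
            i -= length[a]              # shift into the right part
            sym = b
        r = rules[sym]
    return r

# X3 derives "abab": X1 -> a, X2 -> b, X12 -> X1 X2, X3 -> X12 X12.
rules = {"X1": "a", "X2": "b", "X12": ("X1", "X2"), "X3": ("X12", "X12")}
L = slp_lengths(rules, ["X1", "X2", "X12", "X3"])
print(L["X3"], "".join(char_at(rules, L, "X3", i) for i in range(4)))  # 4 abab
```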
| [
{
"version": "v1",
"created": "Thu, 13 Apr 2006 08:12:39 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Lifshits",
"Yury",
""
]
] |
cs/0604065 | Vincent Limouzy | Binh-Minh Bui-Xuan (LIRMM), Michel Habib (LIAFA), Vincent Limouzy
(LIAFA), Fabien De Montgolfier (LIAFA) | Unifying two Graph Decompositions with Modular Decomposition | Submitted to ISAAC 2007 | In Lecture Notes in Computer Science - International Symposium
on Algorithms and Computation (ISAAC), Sendai, Japan (2007) | 10.1007/978-3-540-77120-3 | null | cs.DS | null | We introduce umodules, a generalisation of the notion of graph module.
The theory we develop captures, among others, undirected graphs, tournaments,
digraphs, and $2$-structures. We show that, under some axioms, a unique
decomposition tree exists for umodules. Polynomial-time algorithms are provided
for: non-trivial umodule testing, maximal umodule computation, and decomposition
tree computation when the tree exists. Our results unify many known
decompositions, such as the modular and bi-join decompositions of graphs, and
yield a new decomposition of tournaments.
| [
{
"version": "v1",
"created": "Sun, 16 Apr 2006 19:41:38 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2006 12:10:31 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Jun 2007 17:57:45 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Bui-Xuan",
"Binh-Minh",
"",
"LIRMM"
],
[
"Habib",
"Michel",
"",
"LIAFA"
],
[
"Limouzy",
"Vincent",
"",
"LIAFA"
],
[
"De Montgolfier",
"Fabien",
"",
"LIAFA"
]
] |
cs/0604068 | Christian Klein | Benjamin Doerr and Tobias Friedrich and Christian Klein and Ralf
Osbild | Unbiased Matrix Rounding | 10th Scandinavian Workshop on Algorithm Theory (SWAT), 2006, to
appear | null | null | null | cs.DS cs.DM | null | We show several ways to round a real matrix to an integer one such that the
rounding errors in all rows and columns as well as the whole matrix are less
than one. This is a classical problem with applications in many fields, in
particular, statistics.
We improve earlier solutions of different authors in two ways. For rounding
matrices of size $m \times n$, we reduce the runtime from $O((m n)^2)$ to
$O(mn \log(mn))$. Second, our roundings also have a rounding error of less
than one in all initial
intervals of rows and columns. Consequently, arbitrary intervals have an error
of at most two. This is particularly useful in the statistics application of
controlled rounding.
The same result can be obtained via (dependent) randomized rounding. This has
the additional advantage that the rounding is unbiased, that is, for all
entries $y_{ij}$ of our rounding, we have $E(y_{ij}) = x_{ij}$, where $x_{ij}$
is the corresponding entry of the input matrix.
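A minimal sketch of the unbiasedness property alone, using independent entrywise rounding (the dependent rounding described above additionally keeps all row, column, and initial-interval errors below one, which this does not).

```python
import math, random

# Independent entrywise randomized rounding: round each entry down, then
# up with probability equal to its fractional part, so E[y_ij] = x_ij.
def round_unbiased(matrix):
    return [
        [math.floor(x) + (random.random() < x - math.floor(x)) for x in row]
        for row in matrix
    ]

random.seed(0)
x = [[0.3, 0.7], [0.5, 0.5]]
samples = [round_unbiased(x) for _ in range(100000)]
mean01 = sum(s[0][1] for s in samples) / len(samples)
print(round(mean01, 2))   # ~0.7, matching the entry x[0][1]
```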
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2006 15:08:37 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2006 18:25:42 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Doerr",
"Benjamin",
""
],
[
"Friedrich",
"Tobias",
""
],
[
"Klein",
"Christian",
""
],
[
"Osbild",
"Ralf",
""
]
] |
cs/0604095 | Gregory Gutin | Gregory Gutin, Stefan Szeider, Anders Yeo | Fixed-Parameter Complexity of Minimum Profile Problems | null | null | null | null | cs.DS cs.DM | null | Let $G=(V,E)$ be a graph. An ordering of $G$ is a bijection $\alpha: V\to
\{1,2,..., |V|\}.$ For a vertex $v$ in $G$, its closed neighborhood is
$N[v]=\{u\in V: uv\in E\}\cup \{v\}.$ The profile of an ordering $\alpha$ of
$G$ is $\prf_{\alpha}(G)=\sum_{v\in V}(\alpha(v)-\min\{\alpha(u): u\in
N[v]\}).$ The profile $\prf(G)$ of $G$ is the minimum of $\prf_{\alpha}(G)$
over all orderings $\alpha$ of $G$. It is well-known that $\prf(G)$ is the
minimum number of edges in an interval graph $H$ that contains $G$ as a
subgraph. Since $|V|-1$ is a tight lower bound for the profile of connected
graphs $G=(V,E)$, the parametrization above the guaranteed value $|V|-1$ is of
particular interest. We show that deciding whether the profile of a connected
graph $G=(V,E)$ is at most $|V|-1+k$ is fixed-parameter tractable with respect
to the parameter $k$. We achieve this result by reduction to a problem kernel
of linear size.
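A small helper showing the definition in code, evaluating the profile of one given ordering (illustrative only; the contribution above is the fixed-parameter algorithm, not this building block).

```python
# Profile of one ordering: sum over v of alpha(v) minus the smallest
# position in v's closed neighborhood. graph maps each vertex to its
# set of neighbors; alpha maps each vertex to a position in 1..|V|.
def profile_of_ordering(graph, alpha):
    return sum(
        alpha[v] - min(alpha[u] for u in graph[v] | {v})
        for v in graph
    )

# Path a-b-c ordered a,b,c: each closed neighborhood reaches one step
# back, so the profile meets the lower bound |V|-1 = 2.
g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(profile_of_ordering(g, {"a": 1, "b": 2, "c": 3}))  # 2
```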
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2006 17:30:16 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gutin",
"Gregory",
""
],
[
"Szeider",
"Stefan",
""
],
[
"Yeo",
"Anders",
""
]
] |
cs/0604097 | Boulos Harb | Sudipto Guha and Boulos Harb | Approximation algorithms for wavelet transform coding of data streams | Added a universal representation that provides a provable
approximation guarantee under all p-norms simultaneously | null | null | null | cs.DS | null | This paper addresses the problem of finding a B-term wavelet representation
of a given discrete function $f \in \mathbb{R}^n$ whose distance from $f$ is
minimized. The problem is well understood when we seek to minimize the
Euclidean distance between f and its representation. The first known algorithms
for finding provably approximate representations minimizing general $\ell_p$
distances (including $\ell_\infty$) under a wide variety of compactly supported
wavelet bases are presented in this paper. For the Haar basis, a polynomial
time approximation scheme is demonstrated. These algorithms are applicable in
the one-pass sublinear-space data stream model of computation. They generalize
naturally to multiple dimensions and weighted norms. A universal representation
that provides a provable approximation guarantee under all p-norms
simultaneously; and the first approximation algorithms for bit-budget versions
of the problem, known as adaptive quantization, are also presented. Further, it
is shown that the algorithms presented here can be used to select a basis from
a tree-structured dictionary of bases and find a B-term representation of the
given function that provably approximates its best dictionary-basis
representation.
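For orientation, a baseline B-term construction in the orthonormal Haar basis, keeping the B largest coefficients; this is provably optimal only for the Euclidean error, whereas the algorithms above handle general p-norms, streams, and bit budgets. The input length is assumed to be a power of two.

```python
import math

def haar(v):                          # orthonormal Haar transform
    coeffs, s = [], list(v)
    while len(s) > 1:
        a = [(s[i] + s[i + 1]) / math.sqrt(2) for i in range(0, len(s), 2)]
        d = [(s[i] - s[i + 1]) / math.sqrt(2) for i in range(0, len(s), 2)]
        coeffs = d + coeffs           # coarser details end up in front
        s = a
    return s + coeffs                 # [average, coarse ... fine details]

def ihaar(c):                         # inverse transform
    s, pos = c[:1], 1
    while pos < len(c):
        d = c[pos:pos + len(s)]
        s = [x for a_, d_ in zip(s, d)
               for x in ((a_ + d_) / math.sqrt(2), (a_ - d_) / math.sqrt(2))]
        pos += len(d)
    return s

def b_term(v, B):                     # keep the B largest coefficients
    c = haar(v)
    keep = set(sorted(range(len(c)), key=lambda i: abs(c[i]),
                      reverse=True)[:B])
    return ihaar([c[i] if i in keep else 0.0 for i in range(len(c))])

print([round(x, 2) for x in b_term([4, 2, 5, 5], 2)])  # [3.0, 3.0, 5.0, 5.0]
```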
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2006 01:27:37 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2006 23:39:35 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2006 18:14:50 GMT"
},
{
"version": "v4",
"created": "Sun, 22 Jul 2007 17:33:29 GMT"
}
] | 2007-07-23T00:00:00 | [
[
"Guha",
"Sudipto",
""
],
[
"Harb",
"Boulos",
""
]
] |
cs/0604108 | Francesc Rossell\'o | Francesc Rossello, Gabriel Valiente | An Algebraic View of the Relation between Largest Common Subtrees and
Smallest Common Supertrees | 32 pages | null | null | null | cs.DS cs.DM math.CT | null | The relationship between two important problems in tree pattern matching, the
largest common subtree and the smallest common supertree problems, is
established by means of simple constructions, which allow one to obtain a
largest common subtree of two trees from a smallest common supertree of them,
and vice versa. These constructions are the same for isomorphic, homeomorphic,
topological, and minor embeddings, they take only time linear in the size of
the trees, and they turn out to have a clear algebraic meaning.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2006 10:32:43 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Rossello",
"Francesc",
""
],
[
"Valiente",
"Gabriel",
""
]
] |
cs/0605002 | Chao Pang Yang | Chao-Yang Pang, Zheng-Wei Zhou, and Guang-Can Guo | A Hybrid Quantum Encoding Algorithm of Vector Quantization for Image
Compression | Modify on June 21. 10pages, 3 figures | null | 10.1088/1009-1963/15/12/044 | null | cs.MM cs.DS | null | Many classical encoding algorithms of Vector Quantization (VQ) of image
compression that can obtain global optimal solution have computational
complexity O(N). A pure quantum VQ encoding algorithm with probability of
success near 100% has been proposed, that performs operations 45sqrt(N) times
approximately. In this paper, a hybrid quantum VQ encoding algorithm between
classical method and quantum algorithm is presented. The number of its
operations is less than sqrt(N) for most images, and it is more efficient than
the pure quantum algorithm.
Key Words: Vector Quantization, Grover's Algorithm, Image Compression,
Quantum Algorithm
| [
{
"version": "v1",
"created": "Sun, 30 Apr 2006 13:35:54 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2006 18:12:20 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jun 2006 03:22:38 GMT"
}
] | 2009-11-11T00:00:00 | [
[
"Pang",
"Chao-Yang",
""
],
[
"Zhou",
"Zheng-Wei",
""
],
[
"Guo",
"Guang-Can",
""
]
] |
cs/0605013 | L. Sunil Chandran | L. Sunil Chandran and Mathew C Francis and Naveen Sivadasan | Geometric representation of graphs in low dimension | preliminary version appeared in Cocoon 2006 | null | null | null | cs.DM cs.DS | null | We give an efficient randomized algorithm to construct a box representation
of any graph G on n vertices in $1.5 (\Delta + 2) \ln n$ dimensions, where
$\Delta$ is the maximum degree of G. We also show that $\boxi(G) \le (\Delta +
2) \ln n$ for any graph G. Our bound is tight up to a factor of $\ln n$. We
also show that our randomized algorithm can be derandomized to get a polynomial
time deterministic algorithm. Though our general upper bound is in terms of
maximum degree $\Delta$, we show that for almost all graphs on n vertices, its
boxicity is upper bound by $c\cdot(d_{av} + 1) \ln n$ where d_{av} is the
average degree and c is a small constant. Also, we show that for any graph G,
$\boxi(G) \le \sqrt{8 n d_{av} \ln n}$, which is tight up to a factor of $b
\sqrt{\ln n}$ for a constant b.
| [
{
"version": "v1",
"created": "Thu, 4 May 2006 16:53:29 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jul 2007 17:22:41 GMT"
}
] | 2007-07-31T00:00:00 | [
[
"Chandran",
"L. Sunil",
""
],
[
"Francis",
"Mathew C",
""
],
[
"Sivadasan",
"Naveen",
""
]
] |
cs/0605050 | V. Arvind | V. Arvind and Piyush P Kurur | A Polynomial Time Nilpotence Test for Galois Groups and Related Results | 12 pages | null | null | null | cs.CC cs.DS | null | We give a deterministic polynomial-time algorithm to check whether the Galois
group $\Gal{f}$ of an input polynomial $f(X) \in \Q[X]$ is nilpotent: the
running time is polynomial in $\size{f}$. Also, we generalize the Landau-Miller
solvability test to an algorithm that tests if $\Gal{f}$ is in $\Gamma_d$: this
algorithm runs in time polynomial in $\size{f}$ and $n^d$ and, moreover, if
$\Gal{f}\in\Gamma_d$, it computes all the prime factors of $\#\Gal{f}$.
| [
{
"version": "v1",
"created": "Thu, 11 May 2006 08:20:44 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Arvind",
"V.",
""
],
[
"Kurur",
"Piyush P",
""
]
] |
cs/0605078 | Christoph D\"urr | Philippe Baptiste, Peter Brucker, Marek Chrobak, Christoph Durr,
Svetlana A. Kravchenko, Francis Sourd | The Complexity of Mean Flow Time Scheduling Problems with Release Times | Subsumes and replaces cs.DS/0412094 and "Complexity of mean flow time
scheduling problems with release dates" by P.B, S.K | null | null | null | cs.DS | null | We study the problem of preemptively scheduling n jobs with given release times
on m identical parallel machines. The objective is to minimize the average flow
time. We show that when all jobs have equal processing times then the problem
can be solved in polynomial time using linear programming. Our algorithm can
also be applied to the open-shop problem with release times and unit processing
times. For the general case (when processing times are arbitrary), we show that
the problem is unary NP-hard.
| [
{
"version": "v1",
"created": "Wed, 17 May 2006 22:07:17 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Baptiste",
"Philippe",
""
],
[
"Brucker",
"Peter",
""
],
[
"Chrobak",
"Marek",
""
],
[
"Durr",
"Christoph",
""
],
[
"Kravchenko",
"Svetlana A.",
""
],
[
"Sourd",
"Francis",
""
]
] |
cs/0605099 | Michael Baer | Michael B. Baer | Alphabetic Coding with Exponential Costs | 7 pages, submitted to Elsevier | null | null | null | cs.IT cs.DS math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An alphabetic binary tree formulation applies to problems in which an outcome
needs to be determined via alphabetically ordered search prior to the
termination of some window of opportunity. Rather than finding a decision tree
minimizing $\sum_{i=1}^n w(i) l(i)$, this variant involves minimizing $\log_a
\sum_{i=1}^n w(i) a^{l(i)}$ for a given $a \in (0,1)$. This note introduces a
dynamic programming algorithm that finds the optimal solution in polynomial
time and space, and shows that methods traditionally used to improve the speed
of optimizations in related problems, such as the Hu-Tucker procedure, fail for
this problem. This note thus also introduces two approximation algorithms which
can find a suboptimal solution in linear time (for one) or $O(n \log n)$
time (for the other), with associated coding redundancy bounds.
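A minimal O(n^3) interval dynamic program for this objective, sketched under the stated range a in (0,1), where minimizing log_a of the sum means maximizing the sum itself; the note's own algorithm and its linear- and O(n log n)-time approximations are the actual contribution, and all names here are illustrative.

```python
import math
from functools import lru_cache

# For a in (0,1), log_a is decreasing, so minimizing
# log_a sum_i w(i) a^{l(i)} means maximizing S = sum_i w(i) a^{l(i)};
# descending one tree level multiplies every leaf's contribution by a.
def optimal_cost(w, a):
    n = len(w)

    @lru_cache(maxsize=None)
    def S(i, j):                     # best (maximal) sum for leaves i..j-1
        if j - i == 1:
            return w[i]              # a single leaf sits at depth 0
        return a * max(S(i, k) + S(k, j) for k in range(i + 1, j))

    return math.log(S(0, n)) / math.log(a)

print(round(optimal_cost([4, 1, 1, 4], 0.6), 3))  # -2.843 (depths 1,3,3,2)
```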
| [
{
"version": "v1",
"created": "Tue, 23 May 2006 19:55:47 GMT"
},
{
"version": "v2",
"created": "Sat, 27 May 2006 17:07:32 GMT"
},
{
"version": "v3",
"created": "Sat, 28 Mar 2009 01:06:51 GMT"
}
] | 2009-03-28T00:00:00 | [
[
"Baer",
"Michael B.",
""
]
] |
cs/0605102 | Shaili Jain | Adam L. Buchsbaum, Alon Efrat, Shaili Jain, Suresh Venkatasubramanian
and Ke Yi | Restricted Strip Covering and the Sensor Cover Problem | 14 pages, 6 figures | null | null | null | cs.DS cs.CG | null | Given a set of objects with durations (jobs) that cover a base region, can we
schedule the jobs to maximize the duration the original region remains covered?
We call this problem the sensor cover problem. This problem arises in the
context of covering a region with sensors. For example, suppose you wish to
monitor activity along a fence by sensors placed at various fixed locations.
Each sensor has a range and limited battery life. The problem is to schedule
when to turn on the sensors so that the fence is fully monitored for as long as
possible. This one dimensional problem involves intervals on the real line.
Associating a duration to each yields a set of rectangles in space and time,
each specified by a pair of fixed horizontal endpoints and a height. The
objective is to assign a position to each rectangle to maximize the height at
which the spanning interval is fully covered. We call this one dimensional
problem restricted strip covering. If we replace the covering constraint by a
packing constraint, the problem is identical to dynamic storage allocation, a
scheduling problem that is a restricted case of the strip packing problem. We
show that the restricted strip covering problem is NP-hard and present an O(log
log n)-approximation algorithm. We present better approximations or exact
algorithms for some special cases. For the uniform-duration case of restricted
strip covering we give a polynomial-time, exact algorithm but prove that the
uniform-duration case for higher-dimensional regions is NP-hard. Finally, we
consider regions that are arbitrary sets, and we present an O(log
n)-approximation algorithm.
| [
{
"version": "v1",
"created": "Wed, 24 May 2006 03:27:07 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Buchsbaum",
"Adam L.",
""
],
[
"Efrat",
"Alon",
""
],
[
"Jain",
"Shaili",
""
],
[
"Venkatasubramanian",
"Suresh",
""
],
[
"Yi",
"Ke",
""
]
] |
cs/0605112 | Marko Antonio Rodriguez | Marko A. Rodriguez, Johan Bollen | An Algorithm to Determine Peer-Reviewers | Rodriguez, M.A., Bollen, J., "An Algorithm to Determine
Peer-Reviewers", Conference on Information and Knowledge Management, in
press, ACM, LA-UR-06-2261, October 2008; ISBN:978-1-59593-991-3 | Conference on Information and Knowledge Management (CIKM), ACM,
pages 319-328, (October 2008) | 10.1145/1458082.1458127 | LA-UR-06-2261 | cs.DL cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The peer-review process is the most widely accepted certification mechanism
for officially accepting the written results of researchers within the
scientific community. An essential component of peer-review is the
identification of competent referees to review a submitted manuscript. This
article presents an algorithm to automatically determine the most appropriate
reviewers for a manuscript by way of a co-authorship network data structure and
a relative-rank particle-swarm algorithm. This approach is novel in that it is
not limited to a pre-selected set of referees, is computationally efficient,
requires no human-intervention, and, in some instances, can automatically
identify conflict of interest situations. A useful application of this
algorithm would be to open commentary peer-review systems because it provides a
weighting for each referee with respect to their expertise in the domain of a
manuscript. The algorithm is validated using referee bid data from the 2005
Joint Conference on Digital Libraries.
| [
{
"version": "v1",
"created": "Wed, 24 May 2006 17:06:32 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Jul 2008 22:24:28 GMT"
}
] | 2008-12-02T00:00:00 | [
[
"Rodriguez",
"Marko A.",
""
],
[
"Bollen",
"Johan",
""
]
] |
cs/0605126 | David Bunde | David P. Bunde | Power-aware scheduling for makespan and flow | 13 pages, 3 figures. To appear in 18th ACM Symposium on Parallelism
in Algorithms and Architectures (SPAA), 2006 | null | null | null | cs.DS | null | We consider offline scheduling algorithms that incorporate speed scaling to
address the bicriteria problem of minimizing energy consumption and a
scheduling metric. For makespan, we give linear-time algorithms to compute all
non-dominated solutions for the general uniprocessor problem and for the
multiprocessor problem when every job requires the same amount of work. We also
show that the multiprocessor problem becomes NP-hard when jobs can require
different amounts of work.
For total flow, we show that the optimal flow corresponding to a particular
energy budget cannot be exactly computed on a machine supporting arithmetic and
the extraction of roots. This hardness result holds even when scheduling
equal-work jobs on a uniprocessor. We do, however, extend previous work by
Pruhs et al. to give an arbitrarily-good approximation for scheduling
equal-work jobs on a multiprocessor.
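A hedged sketch of a single uniprocessor Pareto point under the common power model s^alpha (an assumption; the algorithms above compute all non-dominated solutions and handle multiprocessors): by convexity, a given workload without release constraints is best run at one uniform speed, giving a closed form.

```python
# Energy at constant speed s for total work W: (W/s) * s**alpha
#   = W * s**(alpha-1), so a budget E pins down the speed directly.
def makespan_for_energy(works, E, alpha=3.0):
    W = sum(works)
    s = (E / W) ** (1.0 / (alpha - 1))
    return W / s

# Doubling the budget shrinks the makespan by a factor 2**(1/(alpha-1)).
print(round(makespan_for_energy([2, 3, 5], E=40.0), 3))  # 5.0
```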
| [
{
"version": "v1",
"created": "Fri, 26 May 2006 21:57:35 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Bunde",
"David P.",
""
]
] |
cs/0606001 | David Steurer | David Steurer | Tight Bounds for the Min-Max Boundary Decomposition Cost of Weighted
Graphs | 41 pages, full version of a paper that will appear in SPAA`06 | null | null | null | cs.DS cs.DM | null | Many load balancing problems that arise in scientific computing applications
ask to partition a graph with weights on the vertices and costs on the edges
into a given number of almost equally-weighted parts such that the maximum
boundary cost over all parts is small.
Here, this partitioning problem is considered for bounded-degree graphs
G=(V,E) with edge costs c: E->R+ that have a p-separator theorem for some p>1,
i.e., any (arbitrarily weighted) subgraph of G can be separated into two parts
of roughly the same weight by removing a vertex set S such that the edges
incident to S in the subgraph have total cost at most proportional to (SUM_e
c^p_e)^(1/p), where the sum is over all edges e in the subgraph.
We show for all positive integers k and weights w that the vertices of G can
be partitioned into k parts such that the weight of each part differs from the
average weight by less than MAX{w_v; v in V}, and the boundary edges of each
part have cost at most proportional to (SUM_e c_e^p/k)^(1/p) + MAX_e c_e. The
partition can be computed in time nearly proportional to the time for computing
a separator S of G.
Our upper bound on the boundary costs is shown to be tight up to a constant
factor for infinitely many instances with a broad range of parameters. Previous
results achieved this bound only if one has c=1, w=1, and one allows parts with
weight exceeding the average by a constant fraction.
| [
{
"version": "v1",
"created": "Thu, 1 Jun 2006 01:47:38 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Steurer",
"David",
""
]
] |
cs/0606038 | Shripad Thite | Shripad Thite | Tight Bounds on the Complexity of Recognizing Odd-Ranked Elements | 3 pages | null | null | null | cs.CC cs.DS | null | Let S = <s_1, s_2, s_3, ..., s_n> be a given vector of n real numbers. The
rank of a real z with respect to S is defined as the number of elements s_i in
S such that s_i is less than or equal to z. We consider the following decision
problem: determine whether the odd-numbered elements s_1, s_3, s_5, ... are
precisely the elements of S whose rank with respect to S is odd. We prove a
bound of Theta(n log n) on the number of operations required to solve this
problem in the algebraic computation tree model.
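The matching upper bound is elementary; here is a sketch via sorting, assuming distinct reals so that the rank of an element is just its 1-based position in sorted order (the contribution above is the Omega(n log n) lower bound).

```python
# O(n log n) check: the elements at odd 1-based positions must be
# exactly the elements of odd rank.
def odd_positions_have_odd_ranks(s):
    rank = {v: r + 1 for r, v in enumerate(sorted(s))}   # distinct values
    return all(
        (rank[v] % 2 == 1) == (pos % 2 == 1)             # 1-based position
        for pos, v in enumerate(s, start=1)
    )

print(odd_positions_have_odd_ranks([3, 4, 1, 2]))  # True
print(odd_positions_have_odd_ranks([4, 3, 1, 2]))  # False
```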
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2006 21:28:09 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Thite",
"Shripad",
""
]
] |
cs/0606040 | Bodo Manthey | Bodo Manthey, L. Shankar Ram | Approximation Algorithms for Multi-Criteria Traveling Salesman Problems | To appear in Algorithmica. A preliminary version has been presented
at the 4th Workshop on Approximation and Online Algorithms (WAOA 2006) | null | null | null | cs.DS cs.CC | null | In multi-criteria optimization problems, several objective functions have to
be optimized. Since the different objective functions are usually in conflict
with each other, one cannot consider only one particular solution as the
optimal solution. Instead, the aim is to compute a so-called Pareto curve of
solutions. Since Pareto curves cannot be computed efficiently in general, we
have to be content with approximations to them.
We design a deterministic polynomial-time algorithm for multi-criteria
g-metric STSP that computes (min{1 +g, 2g^2/(2g^2 -2g +1)} + eps)-approximate
Pareto curves for all 1/2<=g<=1. In particular, we obtain a
(2+eps)-approximation for multi-criteria metric STSP. We also present two
randomized approximation algorithms for multi-criteria g-metric STSP that
achieve approximation ratios of (2g^3 +2g^2)/(3g^2 -2g +1) + eps and (1 +g)/(1
+3g -4g^2) + eps, respectively.
Moreover, we present randomized approximation algorithms for multi-criteria
g-metric ATSP (ratio 1/2 + g^3/(1 -3g^2) + eps) for g < 1/sqrt(3)), STSP with
weights 1 and 2 (ratio 4/3) and ATSP with weights 1 and 2 (ratio 3/2). To do
this, we design randomized approximation schemes for multi-criteria cycle cover
and graph factor problems.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2006 11:41:53 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2006 15:16:04 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Aug 2007 13:12:04 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Manthey",
"Bodo",
""
],
[
"Ram",
"L. Shankar",
""
]
] |
cs/0606042 | Laurent Hascoet | Laurent Hascoet (INRIA Sophia Antipolis), Mauricio Araya-Polo (INRIA
Sophia Antipolis) | Enabling user-driven Checkpointing strategies in Reverse-mode Automatic
Differentiation | null | null | null | null | cs.DS | null | This paper presents a new functionality of the Automatic Differentiation (AD)
tool Tapenade. Tapenade generates adjoint codes which are widely used for
optimization or inverse problems. Unfortunately, for large applications the
adjoint code demands a great deal of memory, because it needs to store a large
set of intermediates values. To cope with that problem, Tapenade implements a
sub-optimal version of a technique called checkpointing, which is a trade-off
between storage and recomputation. Our long-term goal is to provide an optimal
checkpointing strategy for every code, not yet achieved by any AD tool. Towards
that goal, we first introduce modifications in Tapenade in order to give the
user the choice to select the checkpointing strategy most suitable for their
code. Second, we conduct experiments in real-size scientific codes in order to
gather hints that help us to deduce an optimal checkpointing strategy. Some of
the experimental results show memory savings of up to 35% and execution-time
savings of up to 90%.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2006 16:01:46 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Hascoet",
"Laurent",
"",
"INRIA Sophia Antipolis"
],
[
"Araya-Polo",
"Mauricio",
"",
"INRIA\n Sophia Antipolis"
]
] |
cs/0606048 | Rudi Cilibrasi | Rudi Cilibrasi and Paul M.B. Vitanyi | A New Quartet Tree Heuristic for Hierarchical Clustering | 22 pages, 14 figures | null | null | null | cs.DS cs.CV cs.DM math.ST physics.data-an q-bio.QM stat.TH | null | We consider the problem of constructing an optimal-weight tree from the
3*(n choose 4) weighted quartet topologies on n objects, where optimality means
that the summed weight of the embedded quartet topologies is optimal (so it can
be the case that the optimal tree embeds all quartets as non-optimal
topologies). We present a heuristic for reconstructing the optimal-weight tree,
and a canonical manner to derive the quartet-topology weights from a given
distance matrix. The method repeatedly transforms a bifurcating tree, with all
objects involved as leaves, achieving a monotonic approximation to the exact
single globally optimal tree. This contrasts with other heuristic search methods
from biological phylogeny, like DNAML or quartet puzzling, which, repeatedly,
incrementally construct a solution from a random order of objects, and
subsequently add agreement values.
| [
{
"version": "v1",
"created": "Sun, 11 Jun 2006 16:05:51 GMT"
}
] | 2011-11-09T00:00:00 | [
[
"Cilibrasi",
"Rudi",
""
],
[
"Vitanyi",
"Paul M. B.",
""
]
] |
cs/0606067 | Raphael Clifford | Michael A. Bender, Raphael Clifford and Kostas Tsichlas | Scheduling Algorithms for Procrastinators | 12 pages, 3 figures | null | 10.1007/s10951-007-0038-4 | null | cs.DS | null | This paper presents scheduling algorithms for procrastinators, where the
speed that a procrastinator executes a job increases as the due date
approaches. We give optimal off-line scheduling policies for linearly
increasing speed functions. We then explain the computational/numerical issues
involved in implementing this policy. We next explore the online setting,
showing that there exist adversaries that force any online scheduling policy to
miss due dates. This impossibility result motivates the problem of minimizing
the maximum interval stretch of any job; the interval stretch of a job is the
job's flow time divided by the job's due date minus release time. We show that
several common scheduling strategies, including the "hit-the-highest-nail"
strategy beloved by procrastinators, have arbitrarily large maximum interval
stretch. Then we give the "thrashing" scheduling policy and show that it is a
\Theta(1) approximation algorithm for the maximum interval stretch.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2006 16:55:44 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Aug 2007 10:27:35 GMT"
}
] | 2011-01-05T00:00:00 | [
[
"Bender",
"Michael A.",
""
],
[
"Clifford",
"Raphael",
""
],
[
"Tsichlas",
"Kostas",
""
]
] |
cs/0606103 | Chengpu Wang | Chengpu Wang | Precision Arithmetic: A New Floating-Point Arithmetic | 54 Pages, 32 Figures | null | null | null | cs.DM cs.DS cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new deterministic floating-point arithmetic called precision arithmetic is
developed to track precision for arithmetic calculations. It uses a novel
rounding scheme to avoid excessive rounding error propagation of conventional
floating-point arithmetic. Unlike interval arithmetic, its uncertainty tracking
is based on statistics and the central limit theorem, with a much tighter
bounding range. Its stable rounding error distribution is approximated by a
truncated normal distribution. Generic standards and systematic methods for
validating uncertainty-bearing arithmetics are discussed. The precision
arithmetic is found to be better than interval arithmetic in both
uncertainty-tracking and uncertainty-bounding for normal usages.
The precision arithmetic is available publicly at
http://precisionarithm.sourceforge.net.
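An illustrative toy in the same statistical spirit (an invented sketch, not the rounding scheme described above): a value type that carries a variance and propagates it through + and * under an independence assumption, in contrast to interval arithmetic's worst-case bounds.

```python
import math

# Toy value-with-uncertainty type; propagation assumes independent
# operands and uses first-order approximations.
class UFloat:
    def __init__(self, value, sigma=0.0):
        self.value, self.var = value, sigma * sigma

    def __add__(self, other):
        # variances of independent terms add
        return UFloat(self.value + other.value,
                      math.sqrt(self.var + other.var))

    def __mul__(self, other):
        # first-order propagation: var(xy) ~ y^2 var_x + x^2 var_y
        var = other.value ** 2 * self.var + self.value ** 2 * other.var
        return UFloat(self.value * other.value, math.sqrt(var))

    def __repr__(self):
        return f"{self.value:.4g} +/- {math.sqrt(self.var):.2g}"

x = UFloat(2.0, 0.01)
y = UFloat(3.0, 0.02)
print(x + y, x * y)   # 5 +/- 0.022  6 +/- 0.05
```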
| [
{
"version": "v1",
"created": "Sun, 25 Jun 2006 18:56:28 GMT"
},
{
"version": "v10",
"created": "Wed, 13 Oct 2010 02:39:44 GMT"
},
{
"version": "v11",
"created": "Sat, 30 Oct 2010 03:49:06 GMT"
},
{
"version": "v12",
"created": "Thu, 11 Nov 2010 20:25:56 GMT"
},
{
"version": "v13",
"created": "Mon, 6 Dec 2010 21:56:48 GMT"
},
{
"version": "v14",
"created": "Wed, 5 Jan 2011 04:43:49 GMT"
},
{
"version": "v15",
"created": "Mon, 21 Feb 2011 02:07:43 GMT"
},
{
"version": "v16",
"created": "Mon, 14 Mar 2011 20:40:47 GMT"
},
{
"version": "v17",
"created": "Tue, 26 Jul 2011 02:22:36 GMT"
},
{
"version": "v18",
"created": "Sat, 10 Dec 2011 19:13:43 GMT"
},
{
"version": "v19",
"created": "Wed, 12 Sep 2012 01:57:05 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2006 01:33:34 GMT"
},
{
"version": "v20",
"created": "Tue, 18 Mar 2014 04:17:22 GMT"
},
{
"version": "v21",
"created": "Thu, 3 Apr 2014 11:40:24 GMT"
},
{
"version": "v22",
"created": "Sat, 12 Apr 2014 02:02:06 GMT"
},
{
"version": "v3",
"created": "Wed, 19 May 2010 03:16:25 GMT"
},
{
"version": "v4",
"created": "Thu, 20 May 2010 21:42:30 GMT"
},
{
"version": "v5",
"created": "Fri, 9 Jul 2010 18:50:07 GMT"
},
{
"version": "v6",
"created": "Wed, 21 Jul 2010 02:26:26 GMT"
},
{
"version": "v7",
"created": "Sun, 25 Jul 2010 22:54:13 GMT"
},
{
"version": "v8",
"created": "Mon, 27 Sep 2010 15:35:14 GMT"
},
{
"version": "v9",
"created": "Wed, 6 Oct 2010 01:21:06 GMT"
}
] | 2014-04-15T00:00:00 | [
[
"Wang",
"Chengpu",
""
]
] |
cs/0606109 | Manor Mendel | Manor Mendel, Assaf Naor | Maximum gradient embeddings and monotone clustering | 25 pages, 2 figures. Final version, minor revision of the previous
one. To appear in "Combinatorica" | Combinatorica 30(5) (2010), 581--615 | 10.1007/s00493-010-2302-z | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Let (X,d_X) be an n-point metric space. We show that there exists a
distribution D over non-contractive embeddings into trees f:X-->T such that for
every x in X, the expectation with respect to D of the maximum over y in X of
the ratio d_T(f(x),f(y)) / d_X(x,y) is at most C (log n)^2, where C is a
universal constant. Conversely we show that the above quadratic dependence on
log n cannot be improved in general. Such embeddings, which we call maximum
gradient embeddings, yield a framework for the design of approximation
algorithms for a wide range of clustering problems with monotone costs,
including fault-tolerant versions of k-median and facility location.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2006 19:32:29 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Dec 2008 20:05:52 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Apr 2009 03:13:43 GMT"
},
{
"version": "v4",
"created": "Sun, 29 Aug 2010 19:27:14 GMT"
}
] | 2012-11-15T00:00:00 | [
[
"Mendel",
"Manor",
""
],
[
"Naor",
"Assaf",
""
]
] |
cs/0606110 | Richard Weber | Jochen Mundinger, Richard R. Weber and Gideon Weiss | Optimal Scheduling of Peer-to-Peer File Dissemination | 27 pages, 3 figures. (v2) added a note about possible strengthening
of Theorem 5 at end of proof; updated some references | null | null | null | cs.NI cs.DS math.OC | null | Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are
increasingly used for disseminating potentially large files from a server to
many end users via the Internet. The key idea is to divide the file into many
equally-sized parts and then let users download each part (or, for network
coding based systems such as Avalanche, linear combinations of the parts)
either from the server or from another user who has already downloaded it.
However, performance evaluation of such systems has typically been limited to
comparing one system against another by means of simulation and measurement.
In contrast, we provide an analytic performance
analysis that is based on a new uplink-sharing version of the well-known
broadcasting problem. Assuming equal upload capacities, we show that the
minimal time to disseminate the file is the same as for the simultaneous
send/receive version of the broadcasting problem. For general upload
capacities, we provide a mixed integer linear program (MILP) solution and a
complementary fluid limit solution. We thus provide a lower bound which can be
used as a performance benchmark for any P2P file dissemination system. We also
investigate the performance of a decentralized strategy, providing evidence
that the performance of necessarily decentralized P2P file dissemination
systems should be close to this bound and therefore that it is useful in
practice.
| [
{
"version": "v1",
"created": "Tue, 27 Jun 2006 08:11:57 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2006 07:17:28 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Mundinger",
"Jochen",
""
],
[
"Weber",
"Richard R.",
""
],
[
"Weiss",
"Gideon",
""
]
] |
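A rough feel for the uplink-sharing model above can be had from a toy round-based simulation. The model below (one part uploaded per node per round, unconstrained downloads, random useful-part selection) is my simplification, not the paper's MILP or fluid-limit analysis, and the lower-bound comment is only an approximate reference point.

```python
# Toy simulation: a server plus n peers, each with unit upload capacity,
# disseminate a file of m parts; every round each node holding parts
# uploads one randomly chosen useful part to one randomly chosen peer.
import random

def disseminate(n=20, m=8, seed=0):
    rng = random.Random(seed)
    parts = [set() for _ in range(n)]  # parts currently held by each peer
    rounds = 0
    while any(len(p) < m for p in parts):
        rounds += 1
        uploads = []  # (receiver, part); applied after the round ends
        senders = [set(range(m))] + parts  # the server holds all m parts
        for held in senders:
            if not held:
                continue
            needy = [i for i in range(n) if held - parts[i]]
            if not needy:
                continue
            r = rng.choice(needy)
            part = rng.choice(sorted(held - parts[r]))
            uploads.append((r, part))
        for r, part in uploads:
            parts[r].add(part)
    return rounds

print(disseminate())  # compare with roughly m + log2(n) broadcast rounds
```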
cs/0606116 | Philip Bille | Philip Bille | New Algorithms for Regular Expression Matching | null | null | null | null | cs.DS | null | In this paper we revisit the classical regular expression matching problem,
namely, given a regular expression $R$ and a string $Q$, decide if $Q$ matches
one of the strings specified by $R$. Let $m$ and $n$ be the length of $R$ and
$Q$, respectively. On a standard unit-cost RAM with word length $w \geq \log
n$, we show that the problem can be solved in $O(m)$ space with the following
running times: \begin{equation*} \begin{cases}
O(n\frac{m \log w}{w} + m \log w) & \text{if $m > w$} \\
O(n\log m + m\log m) & \text{if $\sqrt{w} < m \leq w$} \\
O(\min(n+ m^2, n\log m + m\log m)) & \text{if $m \leq \sqrt{w}$.} \end{cases}
\end{equation*} This improves the best known time bound among algorithms using
$O(m)$ space. Whenever $w \geq \log^2 n$ it improves all known time bounds
regardless of how much space is used.
| [
{
"version": "v1",
"created": "Wed, 28 Jun 2006 10:51:39 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Bille",
"Philip",
""
]
] |
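For contrast with the word-level tabulation bounds above, here is the kind of compact baseline matcher those bounds improve on. This sketch uses Brzozowski derivatives (my choice of classical technique, not the paper's algorithm) over a tiny regex AST; it is correct but makes no attempt at word-parallel efficiency.

```python
# Baseline regex matcher via Brzozowski derivatives. Regexes are tuples:
# ('eps',) empty string, ('empty',) matches nothing, ('chr', c),
# ('cat', r, s), ('alt', r, s), ('star', r).
def nullable(r):
    tag = r[0]
    if tag in ('eps', 'star'): return True
    if tag in ('chr', 'empty'): return False
    if tag == 'cat': return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])  # 'alt'

EMPTY = ('empty',)

def deriv(r, c):
    # derivative of r with respect to character c: the language of
    # suffixes w such that c·w is in the language of r
    tag = r[0]
    if tag in ('eps', 'empty'): return EMPTY
    if tag == 'chr': return ('eps',) if r[1] == c else EMPTY
    if tag == 'alt': return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'star': return ('cat', deriv(r[1], c), r)
    d = ('cat', deriv(r[1], c), r[2])  # 'cat'
    return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

AB_STAR = ('star', ('cat', ('chr', 'a'), ('chr', 'b')))
print(matches(AB_STAR, 'abab'), matches(AB_STAR, 'aba'))  # True False
```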
cs/0606124 | Sean Falconer | Sean M. Falconer and Dmitri Maslov | Weighted hierarchical alignment of directed acyclic graph | null | null | null | null | cs.DS | null | In some applications of matching, the structural or hierarchical properties
of the two graphs being aligned must be maintained. The hierarchical properties
are induced by the direction of the edges in the two directed graphs. These
structural relationships defined by the hierarchy in the graphs act as a
constraint on the alignment. In this paper, we formalize the above problem as
the weighted alignment between two directed acyclic graphs. We prove that this
problem is NP-complete, show several upper bounds for approximating the
solution, and finally introduce polynomial time algorithms for sub-classes of
directed acyclic graphs.
| [
{
"version": "v1",
"created": "Thu, 29 Jun 2006 18:07:49 GMT"
},
{
"version": "v2",
"created": "Fri, 11 May 2007 18:44:43 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Falconer",
"Sean M.",
""
],
[
"Maslov",
"Dmitri",
""
]
] |
cs/0607025 | Oskar Sandberg | Oskar Sandberg and Ian Clarke | The evolution of navigable small-world networks | null | null | null | null | cs.DS cs.DC | null | Small-world networks, which combine randomized and structured elements, are
seen as prevalent in nature. Several random graph models have been proposed
for small-world networks; one of the most fruitful, introduced by Jon
Kleinberg, shows in which types of graphs it is possible to route, or
navigate, between vertices with very little knowledge of the graph itself.
Kleinberg's model is static, with random edges added to a fixed grid. In this
paper we introduce, analyze and test a randomized algorithm which successively
rewires a graph with every application. The resulting process gives a model for
the evolution of small-world networks with properties similar to those studied
by Kleinberg.
| [
{
"version": "v1",
"created": "Fri, 7 Jul 2006 13:21:09 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Sandberg",
"Oskar",
""
],
[
"Clarke",
"Ian",
""
]
] |
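A small experiment in the spirit of Kleinberg's static model helps fix ideas before the rewiring process above. The setup (torus, one long-range link per node drawn with probability proportional to d^(-2), the exponent that makes greedy routing fast) is the standard one, assumed here for illustration; it is not the paper's evolving model.

```python
# Greedy routing with one d^(-2)-distributed long-range link per node
# on an n x n torus; expected hop count is O(log^2 n) in this regime.
import random

def lattice_dist(a, b, n):
    dx = min(abs(a[0] - b[0]), n - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), n - abs(a[1] - b[1]))
    return dx + dy

def build_long_links(n, rng):
    nodes = [(x, y) for x in range(n) for y in range(n)]
    links = {}
    for u in nodes:
        others = [v for v in nodes if v != u]
        weights = [lattice_dist(u, v, n) ** -2 for v in others]
        links[u] = rng.choices(others, weights)[0]
    return links

def greedy_route(src, dst, links, n):
    hops, cur = 0, src
    while cur != dst:
        x, y = cur
        nbrs = [((x + 1) % n, y), ((x - 1) % n, y),
                (x, (y + 1) % n), (x, (y - 1) % n), links[cur]]
        cur = min(nbrs, key=lambda v: lattice_dist(v, dst, n))  # greedy step
        hops += 1
    return hops

rng = random.Random(1)
n = 30
links = build_long_links(n, rng)
trials = [greedy_route((rng.randrange(n), rng.randrange(n)),
                       (rng.randrange(n), rng.randrange(n)), links, n)
          for _ in range(50)]
print(sum(trials) / len(trials))  # average hops over random pairs
```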
cs/0607026 | James Aspnes | James Aspnes and Yang Richard Yang and Yitong Yin | Path-independent load balancing with unreliable machines | Full version of paper submitted to SODA 2007 | null | null | null | cs.DS cs.NI | null | We consider algorithms for load balancing on unreliable machines. The
objective is to optimize the two criteria of minimizing the makespan and
minimizing job reassignments in response to machine failures. We assume that
the set of jobs is known in advance but that the pattern of machine failures is
unpredictable. Motivated by the requirements of BGP routing, we consider
path-independent algorithms, with the property that the job assignment is
completely determined by the subset of available machines and not the previous
history of the assignments. We first examine how to measure the performance
of path-independent load-balancing algorithms, adopting the makespan and a
normalized measure of reassignment cost. We then describe
two classes of algorithms for optimizing these measures against an oblivious
adversary for identical machines. The first, based on independent random
assignments, gives expected reassignment costs within a factor of 2 of optimal
and gives a makespan within a factor of O(log m/log log m) of optimal with high
probability, for unknown job sizes. The second, in which jobs are first grouped
into bins and at most one bin is assigned to each machine, gives
constant-factor ratios on both reassignment cost and makespan, for known job
sizes. Several open problems are discussed.
| [
{
"version": "v1",
"created": "Fri, 7 Jul 2006 14:01:15 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Aspnes",
"James",
""
],
[
"Yang",
"Yang Richard",
""
],
[
"Yin",
"Yitong",
""
]
] |
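The path-independence property above is easy to demonstrate concretely. The mechanism below, highest-random-weight (rendezvous) hashing, is my choice for illustration rather than necessarily the paper's construction: each job's machine is a pure function of the set of machines currently alive, so the assignment never depends on the failure history.

```python
# Path-independent assignment via rendezvous hashing: a job goes to the
# alive machine with the highest hash score for that job.
import hashlib

def score(job: str, machine: str) -> int:
    digest = hashlib.sha256(f"{job}|{machine}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def assign(job: str, alive: set) -> str:
    return max(alive, key=lambda m: score(job, m))

machines = {"m1", "m2", "m3", "m4"}
before = {j: assign(j, machines) for j in ("job-a", "job-b", "job-c")}
# Fail the machine holding job-a: only jobs on that machine move, and
# reviving it restores exactly the original assignment.
after = {j: assign(j, machines - {before["job-a"]}) for j in before}
print(before, after, sep="\n")
```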
cs/0607045 | Xin Han | Xin Han, Deshi Ye, Yong Zhou | Improved online hypercube packing | 13 pages, one figure, accepted in WAOA'06 | null | null | null | cs.DS | null | In this paper, we study the online multidimensional bin packing problem when
all items are hypercubes. Based on techniques from Seiden's one-dimensional
bin packing algorithm Super Harmonic, we give a framework for the online
hypercube packing problem and obtain new upper bounds on asymptotic
competitive ratios. For square packing, we get an upper bound of 2.1439,
improving on the previous bound of 2.24437. For cube packing, we give a new
upper bound of 2.6852, improving on the bound of 2.9421 by Epstein and van
Stee.
| [
{
"version": "v1",
"created": "Tue, 11 Jul 2006 09:43:45 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2006 03:26:46 GMT"
}
] | 2016-08-31T00:00:00 | [
[
"Han",
"Xin",
""
],
[
"Ye",
"Deshi",
""
],
[
"Zhou",
"Yong",
""
]
] |
cs/0607046 | Xin Han | Xin Han, Kazuo Iwama, Deshi Ye, Guochuan Zhang | Strip Packing vs. Bin Packing | 12 pages, 3 figures | null | null | null | cs.DS | null | In this paper we establish a general algorithmic framework between bin
packing and strip packing, with which we achieve the same asymptotic bounds by
applying bin packing algorithms to strip packing. More precisely we obtain the
following results: (1) Any offline bin packing algorithm can be applied to
strip packing maintaining the same asymptotic worst-case ratio. Thus using FFD
(MFFD) as a subroutine, we get a practical (simple and fast) algorithm for
strip packing with an upper bound 11/9 (71/60). A simple AFPTAS for strip
packing immediately follows. (2) A class of Harmonic-based algorithms for bin
packing can be applied to online strip packing maintaining the same asymptotic
competitive ratio. This implies that online strip packing admits an upper
bound of 1.58889 on the asymptotic competitive ratio, which is very close to
the lower bound of 1.5401, significantly improves the previous best bound of
1.6910, and affirmatively answers an open question posed by Csirik et al.
| [
{
"version": "v1",
"created": "Tue, 11 Jul 2006 09:58:34 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2006 00:33:29 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Han",
"Xin",
""
],
[
"Iwama",
"Kazuo",
""
],
[
"Ye",
"Deshi",
""
],
[
"Zhang",
"Guochuan",
""
]
] |
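Level-based algorithms are the classical way to reuse one-dimensional packing ideas for strips, in the same spirit as the framework above though not the paper's exact reduction. The sketch below is First-Fit Decreasing Height (FFDH), the strip-packing cousin of the FFD subroutine the abstract mentions; the example rectangles are arbitrary.

```python
# First-Fit Decreasing Height: sort rectangles by height, place each one
# on the first level with enough remaining width, else open a new level.
def ffdh(rects, strip_width=1.0):
    # rects: (width, height) pairs with width <= strip_width
    levels = []          # per level: [remaining width]
    total_height = 0.0
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):
        for lvl in levels:
            if lvl[0] >= w:
                lvl[0] -= w
                break
        else:
            # heights are sorted, so the first rectangle on a level is
            # its tallest, and h is exactly the level's height
            levels.append([strip_width - w])
            total_height += h
    return total_height

print(ffdh([(0.5, 0.6), (0.5, 0.5), (0.6, 0.4), (0.4, 0.4)]))  # 1.0
```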
cs/0607061 | Igor Mackarov Dr. | Igor Mackarov (Maharishi University of Management) | On Some Peculiarities of Dynamic Switch between Component
Implementations in an Autonomic Computing System | 16 pages, 3 figures | null | null | null | cs.DS cs.DC cs.NA | null | Behavior of the delta algorithm of autonomic switch between two component
implementations is considered on several examples of a client-server systems
involving, in particular, periodic change of intensities of requests for the
component. It is shown that in the cases of some specific combinations of
elementary requests costs, the number of clients in the system, the number of
requests per unit of time, and the cost of switch between the implementations,
the algorithm may reveal behavior that is rather far from the desired. A
sufficient criterion of a success of the algorithm is proposed based on the
analysis of the accumulated implementations costs difference as a function of
time. Suggestions are pointed out of practical evaluation of the algorithm
functioning regarding the observations made in this paper.
| [
{
"version": "v1",
"created": "Wed, 12 Jul 2006 11:09:52 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Mackarov",
"Igor",
"",
"Maharishi University of Management"
]
] |
cs/0607078 | Ying Hung Gan | Ying Hung Gan, Cong Ling and Wai Ho Mow | Complex Lattice Reduction Algorithm for Low-Complexity MIMO Detection | Submitted to IEEE Transactions on Wireless Communication in March
2006. Part of this work was presented at the 2005 Global Telecommunications
Conference, United States, November 2005 | null | null | null | cs.DS cs.IT math.IT | null | Recently, lattice-reduction-aided detectors have been proposed for
multiple-input multiple-output (MIMO) systems to achieve the full diversity
of the maximum-likelihood receiver with complexity similar to that of linear
receivers. However, these lattice-reduction-aided detectors are based on
the traditional LLL reduction algorithm that was originally introduced for
reducing real lattice bases, in spite of the fact that the channel matrices are
inherently complex-valued. In this paper, we introduce the complex LLL
algorithm for direct application to reduce the basis of a complex lattice which
is naturally defined by a complex-valued channel matrix. We prove that complex
LLL reduction-aided detection can also achieve full diversity. Our analysis
reveals that the new complex LLL algorithm can achieve a reduction in
complexity of nearly 50% over the traditional LLL algorithm, and this is
confirmed by simulation. Notably, the complex LLL algorithm has nearly the
same bit-error-rate performance as the traditional LLL algorithm.
| [
{
"version": "v1",
"created": "Mon, 17 Jul 2006 09:01:33 GMT"
}
] | 2007-07-13T00:00:00 | [
[
"Gan",
"Ying Hung",
""
],
[
"Ling",
"Cong",
""
],
[
"Mow",
"Wai Ho",
""
]
] |
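The core idea of reducing a complex basis directly, rounding to Gaussian integers instead of splitting into real and imaginary parts, can be shown in the rank-2 case. The sketch below is a complex analogue of Lagrange/Gauss reduction and is my toy simplification: the paper's complex LLL handles general dimensions with size reduction and Lovasz-style swaps.

```python
# Toy rank-2 lattice reduction over the Gaussian integers Z[i].
import numpy as np

def round_gaussian(z: complex) -> complex:
    # nearest Gaussian integer: round real and imaginary parts separately
    return complex(round(z.real), round(z.imag))

def lagrange_reduce(b1, b2):
    b1 = np.asarray(b1, dtype=complex)
    b2 = np.asarray(b2, dtype=complex)
    while True:
        if np.vdot(b1, b1).real > np.vdot(b2, b2).real:
            b1, b2 = b2, b1                      # keep b1 the shorter vector
        mu = np.vdot(b1, b2) / np.vdot(b1, b1)   # projection coefficient
        q = round_gaussian(mu)
        if q == 0:
            return b1, b2                        # size-reduced: done
        b2 = b2 - q * b1                         # subtract rounded projection

b1, b2 = [3 + 4j, 1], [5 + 6j, 1 + 1j]
print(lagrange_reduce(b1, b2))
```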
cs/0607098 | Martin Strauss | A. R. Calderbank, Anna C. Gilbert, and Martin J. Strauss | List decoding of noisy Reed-Muller-like codes | null | null | null | null | cs.DS cs.IT math.IT | null | First- and second-order Reed-Muller (RM(1) and RM(2), respectively) codes are
two fundamental error-correcting codes which arise in communication as well as
in probabilistically-checkable proofs and learning. In this paper, we take the
first steps toward extending the quick randomized decoding tools of RM(1) into
the realm of quadratic binary and, equivalently, Z_4 codes. Our main
algorithmic result is an extension of the RM(1) techniques from Goldreich-Levin
and Kushilevitz-Mansour algorithms to the Hankel code, a code between RM(1) and
RM(2). That is, given signal s of length N, we find a list that is a superset
of all Hankel codewords phi with dot product to s at least (1/sqrt(k)) times
the norm of s, in time polynomial in k and log(N). We also give a new and
simple formulation of a known Kerdock code as a subcode of the Hankel code. As
a corollary, we can list-decode Kerdock, too. Also, we get a quick algorithm
for finding a sparse Kerdock approximation. That is, for k small compared with
1/sqrt{N} and for epsilon > 0, we find, in time polynomial in (k
log(N)/epsilon), a k-Kerdock-term approximation s~ to s with Euclidean error at
most the factor (1+epsilon+O(k^2/sqrt{N})) times that of the best such
approximation.
| [
{
"version": "v1",
"created": "Thu, 20 Jul 2006 21:02:29 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2006 20:25:58 GMT"
}
] | 2007-07-16T00:00:00 | [
[
"Calderbank",
"A. R.",
""
],
[
"Gilbert",
"Anna C.",
""
],
[
"Strauss",
"Martin J.",
""
]
] |
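For small N one can see the RM(1) list-decoding task directly: find every first-order character whose correlation with the signal clears a threshold. The brute-force sketch below uses the fast Walsh-Hadamard transform and is only the exhaustive analogue of the Goldreich-Levin/Kushilevitz-Mansour idea the paper extends; the paper's algorithms run in time polynomial in log(N), unlike this O(N log N) scan.

```python
# Exhaustive "list decoding" of first-order characters via the FWHT.
import numpy as np

def fwht(a):
    # in-place butterfly: coeffs[j] = sum_x a[x] * (-1)^popcount(j & x)
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def heavy_characters(s, threshold):
    # indices j whose character correlates with s above threshold
    coeffs = fwht(s)
    bound = threshold * np.linalg.norm(s) * np.sqrt(len(s))  # ||chi|| = sqrt(N)
    return [(j, c) for j, c in enumerate(coeffs) if abs(c) >= bound]

# signal = noisy copy of the character chi_5 on N = 16 points
rng = np.random.default_rng(0)
N, j0 = 16, 5
chi = np.array([(-1) ** bin(j0 & x).count("1") for x in range(N)], float)
s = chi + 0.3 * rng.standard_normal(N)
print(heavy_characters(s, threshold=0.5))  # the list includes index 5
```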
cs/0607100 | Xin Han | Xin Han, Kazuo Iwama, Guochuan Zhang | New Upper Bounds on The Approximability of 3D Strip Packing | Submitted to SODA 2007 | null | null | null | cs.DS | null | In this paper, we study the 3D strip packing problem in which we are given a
list of 3-dimensional boxes and required to pack all of them into a
3-dimensional strip with length 1 and width 1 and unlimited height to minimize
the height used. Our results are below: i) we give an approximation algorithm
with asymptotic worst-case ratio 1.69103, which improves the previous best
bound of $2+\epsilon$ by Jansen and Solis-Oba of SODA 2006; ii) we also present
an asymptotic PTAS for the case in which all items have {\em square} bases.
| [
{
"version": "v1",
"created": "Sat, 22 Jul 2006 02:06:26 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Han",
"Xin",
""
],
[
"Iwama",
"Kazuo",
""
],
[
"Zhang",
"Guochuan",
""
]
] |
cs/0607105 | Daniel A. Spielman | Daniel A. Spielman and Shang-Hua Teng | Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric,
Diagonally Dominant Linear Systems | This revised version contains a new section in which we prove that it
suffices to carry out the computations with limited precision | null | null | null | cs.NA cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a randomized algorithm that, on input a symmetric, weakly
diagonally dominant n-by-n matrix A with m nonzero entries and an n-vector b,
produces a y such that $\|y - A^{+} b\|_{A} \leq \epsilon \|A^{+} b\|_{A}$
in expected time $O(m \log^{c} n \log(1/\epsilon))$, for some constant
c. By applying this algorithm inside the inverse power method, we compute
approximate Fiedler vectors in a similar amount of time. The algorithm applies
subgraph preconditioners in a recursive fashion. These preconditioners improve
upon the subgraph preconditioners first introduced by Vaidya (1990).
For any symmetric, weakly diagonally-dominant matrix A with non-positive
off-diagonal entries and $k \geq 1$, we construct in time $O (m \log^{c} n)$ a
preconditioner B of A with at most $2 (n - 1) + O ((m/k) \log^{39} n)$ nonzero
off-diagonal entries such that the finite generalized condition number
$\kappa_{f} (A,B)$ is at most k, for some other constant c.
In the special case when the nonzero structure of the matrix is planar the
corresponding linear system solver runs in expected time
$O(n \log^{2} n + n \log n \, \log \log n \, \log(1/\epsilon))$.
We hope that our introduction of algorithms of low asymptotic complexity will
lead to the development of algorithms that are also fast in practice.
| [
{
"version": "v1",
"created": "Mon, 24 Jul 2006 04:02:24 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Nov 2006 21:06:59 GMT"
},
{
"version": "v3",
"created": "Mon, 14 May 2007 15:15:46 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Sep 2009 17:48:44 GMT"
},
{
"version": "v5",
"created": "Thu, 13 Sep 2012 12:51:05 GMT"
}
] | 2012-09-14T00:00:00 | [
[
"Spielman",
"Daniel A.",
""
],
[
"Teng",
"Shang-Hua",
""
]
] |
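What a preconditioner buys can be seen with the standard preconditioned conjugate gradient iteration. The sketch below uses only a Jacobi (diagonal) preconditioner on an SDD test matrix; it illustrates the interface $B$ plugs into, and is far weaker than the recursive subgraph preconditioners above.

```python
# Preconditioned conjugate gradient for a symmetric positive-definite
# system A x = b, with M_inv approximating the inverse of A.
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# SDD test matrix: a path-graph Laplacian plus a small diagonal shift
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.01 * np.eye(n)
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))   # Jacobi preconditioner
x = pcg(A, b, M_inv)
print(np.linalg.norm(A @ x - b))    # residual near zero
```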
cs/0607115 | Marcin Kaminski | Marcin Kaminski and Vadim Lozin | Polynomial-time algorithm for vertex k-colorability of P_5-free graphs | null | null | null | null | cs.DM cs.DS | null | We give the first polynomial-time algorithm for coloring vertices of P_5-free
graphs with k colors. This settles an open problem and generalizes several
previously known results.
| [
{
"version": "v1",
"created": "Wed, 26 Jul 2006 09:20:20 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Kaminski",
"Marcin",
""
],
[
"Lozin",
"Vadim",
""
]
] |
cs/0608008 | Ilya Safro | Ilya Safro | The minimum linear arrangement problem on proper interval graphs | null | null | null | null | cs.DM cs.DS | null | We present a linear time algorithm for the minimum linear arrangement problem
on proper interval graphs. The obtained ordering is a 4-approximation for
general interval graphs.
| [
{
"version": "v1",
"created": "Wed, 2 Aug 2006 07:46:54 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Safro",
"Ilya",
""
]
] |
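For concreteness, the objective above is just the sum over edges of the distance between the endpoints' positions. The helper below evaluates that cost (the toy graph and orderings are assumptions for illustration, not the paper's algorithm or proof).

```python
# Evaluate the cost of a linear arrangement: sum over edges (u, v) of
# |pos(u) - pos(v)| under the given vertex order.
def arrangement_cost(edges, order):
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

# A small proper interval graph: intervals a, b, c, d sorted by left
# endpoint, with overlaps (a,b), (b,c), (c,d), (b,d).
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "d")]
print(arrangement_cost(edges, ["a", "b", "c", "d"]))  # 1+1+1+2 = 5
print(arrangement_cost(edges, ["b", "a", "c", "d"]))  # 1+2+1+3 = 7
```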
cs/0608013 | Julien Robert | Julien Robert, Nicolas Schabanel | Pull-Based Data Broadcast with Dependencies: Be Fair to Users, not to
Items | null | null | null | null | cs.DS cs.CC | null | Broadcasting is known to be an efficient means of disseminating data in
wireless communication environments (such as satellite or mobile phone
networks). It has recently been observed that the average service time of
broadcast systems can be considerably improved by taking into consideration
existing correlations between requests. We study a pull-based data broadcast
system where users request possibly overlapping sets of items; a request is
served when all its requested items are downloaded. We aim at minimizing the
average user perceived latency, i.e. the average flow time of the requests. We
first show that any algorithm that ignores the dependencies can yield
arbitrarily bad performance with respect to the optimum, even when given
arbitrarily large extra resources. We then design a $(4+\epsilon)$-speed
$O(1+1/\epsilon^2)$-competitive algorithm for this setting that consists in 1)
splitting evenly the bandwidth among each requested set and in 2) broadcasting
arbitrarily the items still missing in each set into the bandwidth the set has
received. Our algorithm presents several interesting features: it is simple to
implement, non-clairvoyant, fair to users so that no user may starve for a long
period of time, and guarantees good performances in presence of correlations
between user requests (without any change in the broadcast protocol). We also
present a $(4+\epsilon)$-speed $O(1+1/\epsilon^3)$-competitive algorithm which
broadcasts at most one item at any given time and preempts each item broadcast
at most once on average. As a side result of our analysis, we design a
competitive algorithm for a particular setting of non-clairvoyant job
scheduling with dependencies, which might be of independent interest.
| [
{
"version": "v1",
"created": "Wed, 2 Aug 2006 15:00:02 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Robert",
"Julien",
""
],
[
"Schabanel",
"Nicolas",
""
]
] |
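A discretized toy of the even-split idea above: with one item broadcast per round, "splitting the bandwidth evenly among requested sets" becomes picking an incomplete request set uniformly at random and sending one of its missing items, and every request containing that item benefits. This coarse model (all requests released at time 0, unit items) is my simplification, not the paper's analysis.

```python
# Toy broadcast simulator: requests are sets of items; a request's flow
# time is the round in which its last missing item was broadcast.
import random

def simulate(requests, seed=0):
    rng = random.Random(seed)
    done = [set() for _ in requests]  # items already received per request
    flow = [0] * len(requests)
    t = 0
    while any(done[i] != requests[i] for i in range(len(requests))):
        t += 1
        active = [i for i in range(len(requests)) if done[i] != requests[i]]
        chosen = rng.choice(active)   # even bandwidth split, discretized
        item = rng.choice(sorted(requests[chosen] - done[chosen]))
        for i in active:              # broadcast: every listener benefits
            if item in requests[i]:
                done[i].add(item)
        for i in active:
            if done[i] == requests[i]:
                flow[i] = t
    return flow

print(simulate([{"a", "b"}, {"b", "c"}, {"a", "c", "d"}]))
```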
cs/0608037 | Shaohua Li | Shaohua Li | Cascade hash tables: a series of multilevel double hashing schemes with
O(1) worst case lookup time | this manuscript is poorly written and contains little technical
novelty | null | null | null | cs.DS cs.AI | null | In this paper, the author proposes a series of multilevel double hashing
schemes called cascade hash tables. They use several levels of hash tables,
each employing the common double hashing scheme, with higher-level tables
acting as fail-safes for lower-level ones. This strategy effectively reduces
collisions during insertion, yielding constant worst-case lookup time at a
relatively high load factor (70%-85%) in random experiments. Different
parameter settings of cascade hash tables are tested.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2006 15:22:30 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Sep 2006 10:04:14 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Jun 2015 14:25:38 GMT"
}
] | 2015-06-26T00:00:00 | [
[
"Li",
"Shaohua",
""
]
] |
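A minimal two-level version conveys the fail-safe idea above. The table size, probe budget, and the dict standing in for the next cascade level are all assumptions for illustration, not the paper's parameters.

```python
# Two-level double hashing: inserts that exceed a fixed probe budget in
# the primary table fall through to a fail-safe level, so lookups probe
# at most (limit) primary slots plus one overflow lookup.
class TwoLevelHash:
    def __init__(self, size=101, probe_limit=4):
        self.size, self.limit = size, probe_limit
        self.slots = [None] * size
        self.overflow = {}  # stands in for the next cascade level

    def _probe(self, key, i):
        h1 = hash(key) % self.size
        h2 = 1 + (hash((key, "step")) % (self.size - 1))  # double hashing
        return (h1 + i * h2) % self.size  # size is prime, so full cycle

    def insert(self, key, value):
        for i in range(self.limit):
            j = self._probe(key, i)
            if self.slots[j] is None or self.slots[j][0] == key:
                self.slots[j] = (key, value)
                return
        self.overflow[key] = value  # fail-safe level absorbs the rest

    def lookup(self, key):
        for i in range(self.limit):
            j = self._probe(key, i)
            if self.slots[j] is not None and self.slots[j][0] == key:
                return self.slots[j][1]
        return self.overflow.get(key)

t = TwoLevelHash()
for k in range(200):
    t.insert(f"k{k}", k)
print(t.lookup("k7"), t.lookup("k199"))  # 7 199
```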
cs/0608050 | Matthieu Latapy | Pascal Pons and Matthieu Latapy | Post-Processing Hierarchical Community Structures: Quality Improvements
and Multi-scale View | null | Theoretical Computer Science, volume 412, issues 8-10, 4 March
2011, pages 892-900 | 10.1016/j.tcs.2010.11.041 | null | cs.DS cond-mat.dis-nn physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Dense sub-graphs of sparse graphs (communities), which appear in most
real-world complex networks, play an important role in many contexts. Most
existing community detection algorithms produce a hierarchical community
structure and seek a partition into communities that optimizes a given quality
function. We propose new methods to improve the results of any of these
algorithms. First we show how to optimize a general class of additive quality
functions (containing the modularity, the performance, and a new similarity
based quality function we propose) over a larger set of partitions than the
classical methods. Moreover, we define new multi-scale quality functions which
make it possible to detect the different scales at which meaningful community
structures appear, while classical approaches find only one partition.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2006 09:23:06 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jan 2021 15:13:55 GMT"
}
] | 2021-01-13T00:00:00 | [
[
"Pons",
"Pascal",
""
],
[
"Latapy",
"Matthieu",
""
]
] |
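The canonical additive quality function above is Newman-Girvan modularity; a compact evaluator is shown below. This helper is standard rather than taken from the paper, whose contribution is optimizing such functions over larger partition sets and defining multi-scale variants.

```python
# Modularity of a partition: for each community, the fraction of edges
# inside it minus the expected fraction under a random degree-preserving
# rewiring.
def modularity(adj, communities):
    m = sum(len(vs) for vs in adj.values()) / 2  # number of edges
    comm_of = {v: c for c, vs in enumerate(communities) for v in vs}
    q = 0.0
    for c, vs in enumerate(communities):
        internal = sum(1 for u in vs for v in adj[u] if comm_of[v] == c) / 2
        degree_sum = sum(len(adj[u]) for u in vs)
        q += internal / m - (degree_sum / (2 * m)) ** 2
    return q

# two triangles joined by a single edge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))  # ~0.357
```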
cs/0608054 | Luis Rademacher | Luis Rademacher, Santosh Vempala | Dispersion of Mass and the Complexity of Randomized Geometric Algorithms | Full version of L. Rademacher, S. Vempala: Dispersion of Mass and the
Complexity of Randomized Geometric Algorithms. Proc. 47th IEEE Annual Symp.
on Found. of Comp. Sci. (2006). A version of it to appear in Advances in
Mathematics | null | null | null | cs.CC cs.CG cs.DS math.FA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How much can randomness help computation? Motivated by this general question
and by volume computation, one of the few instances where randomness provably
helps, we analyze a notion of dispersion and connect it to asymptotic convex
geometry. We obtain a nearly quadratic lower bound on the complexity of
randomized volume algorithms for convex bodies in R^n (the current best
algorithm has complexity roughly n^4, conjectured to be n^3). Our main tools,
dispersion of random determinants and dispersion of the length of a random
point from a convex body, are of independent interest and applicable more
generally; in particular, the latter is closely related to the variance
hypothesis from convex geometry. This geometric dispersion also leads to lower
bounds for matrix problems and property testing.
| [
{
"version": "v1",
"created": "Sat, 12 Aug 2006 23:31:07 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Jun 2008 19:14:43 GMT"
}
] | 2008-06-17T00:00:00 | [
[
"Rademacher",
"Luis",
""
],
[
"Vempala",
"Santosh",
""
]
] |
cs/0608066 | Mariano Zelke | Mariano Zelke | k-Connectivity in the Semi-Streaming Model | 13 pages, submitted to Theoretical Computer Science | null | null | null | cs.DM cs.DS | null | We present the first semi-streaming algorithms to determine k-connectivity of
an undirected graph with k being any constant. The semi-streaming model for
graph algorithms was introduced by Muthukrishnan in 2003 and turns out to be
useful when dealing with massive graphs streamed in from an external storage
device.
Our two semi-streaming algorithms each compute a sparse subgraph of an input
graph G and can use this subgraph in a postprocessing step to decide
k-connectivity of G. To this end the first algorithm reads the input stream
only once and uses time O(k^2*n) to process each input edge. The second
algorithm reads the input k+1 times and needs time O(k+alpha(n)) per input
edge. Using its constructed subgraph the second algorithm can also generate all
l-separators of the input graph for all l<k.
| [
{
"version": "v1",
"created": "Wed, 16 Aug 2006 10:37:07 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Zelke",
"Mariano",
""
]
] |
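The flavor of a one-pass sparse certificate can be sketched with k edge-disjoint spanning forests, a standard certificate for k-edge-connectivity. Note the simplification: the paper targets k-connectivity with per-edge time O(k^2 n) and O(k + alpha(n)) in its two algorithms, so treat this as an illustrative cousin, not their construction.

```python
# One pass over an edge stream, keeping each edge in the first of k
# union-find forests where it does not close a cycle; at most k*(n-1)
# edges survive, and they certify k-edge-connectivity of the input.
class DSU:
    def __init__(self, n): self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]  # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def sparse_certificate(n, edge_stream, k):
    forests = [DSU(n) for _ in range(k)]
    kept = []
    for u, v in edge_stream:      # single pass over the stream
        for f in forests:         # O(k * alpha(n)) work per edge
            if f.union(u, v):
                kept.append((u, v))
                break
    return kept

edges = [(0, 1), (1, 2), (2, 0), (0, 1), (1, 2), (2, 0)]
print(sparse_certificate(3, iter(edges), k=2))  # keeps 4 of the 6 edges
```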
cs/0608079 | Anna Gilbert | A. C. Gilbert, M. J. Strauss, J. A. Tropp, and R. Vershynin | Algorithmic linear dimension reduction in the l_1 norm for sparse
vectors | null | null | null | null | cs.DS | null | This paper develops a new method for recovering m-sparse signals that is
simultaneously uniform and quick. We present a reconstruction algorithm whose
run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal.
The reconstruction error is within a logarithmic factor (in m) of the optimal
m-term approximation error in l_1. In particular, the algorithm recovers
m-sparse signals perfectly and noisy signals are recovered with polylogarithmic
distortion. Our algorithm makes O(m log^2 (d)) measurements, which is within a
logarithmic factor of optimal. We also present a small-space implementation of
the algorithm. These sketching techniques and the corresponding reconstruction
algorithms provide an algorithmic dimension reduction in the l_1 norm. In
particular, vectors of support m in dimension d can be linearly embedded into
O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a
vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)).
Furthermore, this reconstruction is stable and robust under small
perturbations.
| [
{
"version": "v1",
"created": "Sat, 19 Aug 2006 01:55:14 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gilbert",
"A. C.",
""
],
[
"Strauss",
"M. J.",
""
],
[
"Tropp",
"J. A.",
""
],
[
"Vershynin",
"R.",
""
]
] |
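The simplest instance of the sublinear recovery idea above is the 1-sparse case, where bit tests already identify the support. The toy below is far simpler than the paper's m-sparse algorithm (and assumes a nonnegative, exactly 1-sparse signal), but it shows why O(log d) nonadaptive linear measurements can beat reading the whole vector.

```python
# Recover a 1-sparse nonnegative signal of length d from 1 + ceil(log2 d)
# linear measurements: one total-mass test plus one test per bit position.
import math

def measure(x):
    d = len(x)
    bits = math.ceil(math.log2(d))
    total = sum(x)
    # measurement b: inner product with the indicator of "bit b set"
    tests = [sum(x[i] for i in range(d) if i >> b & 1) for b in range(bits)]
    return total, tests

def recover(total, tests):
    if total == 0:
        return None, 0.0  # the zero signal
    # bit b of the support index is 1 iff most of the mass passed test b
    index = sum(1 << b for b, t in enumerate(tests) if t > total / 2)
    return index, total

x = [0.0] * 32
x[13] = 2.5
print(recover(*measure(x)))  # (13, 2.5)
```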