id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cs/0608091 | Jan Van den Bussche | Floris Geerts, Peter Revesz, Jan Van den Bussche | On-line topological simplification of weighted graphs | This is the full techreport corresponding to the paper "On-line
maintenance of simplified weighted graphs for efficient distance queries" in
the proceedings of ACM-GIS 2006 | Proceedings ACM-GIS 2006, ACM Press | null | null | cs.DS cs.DB | null | We describe two efficient on-line algorithms to simplify weighted graphs by
eliminating degree-two vertices. Our algorithms are on-line in that they react
to updates on the data, keeping the simplification up-to-date. The supported
updates are insertions of vertices and edges; hence, our algorithms are
partially dynamic. We provide both analytical and empirical evaluations of the
efficiency of our approaches. Specifically, we prove an O(log n) upper bound on
the amortized time complexity of our maintenance algorithms, with n the number
of insertions.
| [
{
"version": "v1",
"created": "Wed, 23 Aug 2006 20:08:53 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Geerts",
"Floris",
""
],
[
"Revesz",
"Peter",
""
],
[
"Bussche",
"Jan Van den",
""
]
] |
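The degree-two elimination described in the first abstract can be sketched compactly. This is a hypothetical static version for simple weighted graphs (the paper's actual contribution is on-line maintenance under insertions, which this sketch does not attempt): a degree-two vertex v with neighbors a and b is bypassed by an edge a-b whose weight is the sum of the removed edges, so distances between surviving vertices are preserved.

```python
def simplify(adj):
    """Eliminate degree-two vertices from a simple weighted graph.

    adj: dict mapping vertex -> dict of neighbor -> edge weight.
    Repeatedly replaces a degree-two vertex v (neighbors a, b) by a
    direct edge a-b weighing the sum of the two removed edges, keeping
    the lighter edge if a-b already exists, so shortest-path distances
    between the remaining vertices are unchanged.
    """
    queue = [v for v in adj if len(adj[v]) == 2]
    while queue:
        v = queue.pop()
        # The vertex may have been removed, or its degree may have changed.
        if v not in adj or len(adj[v]) != 2:
            continue
        (a, wa), (b, wb) = adj[v].items()
        del adj[a][v]
        del adj[b][v]
        w = wa + wb
        if b in adj[a]:                # parallel shortcut already exists
            w = min(w, adj[a][b])
        adj[a][b] = adj[b][a] = w
        del adj[v]
        for u in (a, b):               # neighbors may now have degree two
            if len(adj[u]) == 2:
                queue.append(u)
    return adj
```

On the weighted path 1-2-3-4 (weights 1, 2, 3), both internal vertices are eliminated, leaving a single edge 1-4 of weight 6.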
cs/0608108 | Nicolas Brodu | Nicolas Brodu | Spherical Indexing for Neighborhood Queries | 9 pages, 10 figures. The source code is available at
http://nicolas.brodu.free.fr/en/programmation/neighand/index.html | null | null | null | cs.DS cs.CG | null | This is an algorithm for finding neighbors when the objects can freely move
and have no predefined position. The query consists of finding neighbors for a
center location and a given radius. Space is discretized in cubic cells. This
algorithm introduces a direct spherical indexing that gives the list of all
cells making up the query sphere, for any radius and any center location. It
can additionally take into account both cyclic and non-cyclic regions of
interest. Finding only the K nearest neighbors naturally benefits from the
spherical indexing by minimally running through the sphere from center to edge,
and reducing the maximum distance when K neighbors have been found.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2006 00:12:55 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Brodu",
"Nicolas",
""
]
] |
cs/0608124 | Philip Bille | Philip Bille and Inge Li Goertz | The Tree Inclusion Problem: In Linear Space and Faster | Minor updates from last time | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given two rooted, ordered, and labeled trees $P$ and $T$, the tree inclusion
problem is to determine if $P$ can be obtained from $T$ by deleting nodes in
$T$. This problem has recently been recognized as an important query primitive
in XML databases. Kilpel\"ainen and Mannila [\emph{SIAM J. Comput. 1995}]
presented the first polynomial time algorithm using quadratic time and space.
Since then several improved results have been obtained for special cases when
$P$ and $T$ have a small number of leaves or small depth. However, in the worst
case these algorithms still use quadratic time and space. Let $n_S$, $l_S$, and
$d_S$ denote the number of nodes, the number of leaves, and the depth
of a tree $S \in \{P, T\}$. In this paper we show that the tree inclusion
problem can be solved in space $O(n_T)$ and time $O(\min(l_P n_T,\; l_P l_T \log
\log n_T + n_T,\; \frac{n_P n_T}{\log n_T} + n_T \log n_T))$. This improves or
matches the best known time complexities while using only linear space instead
of quadratic. This is particularly important in practical applications, such as
XML databases, where the space is likely to be a bottleneck.
| [
{
"version": "v1",
"created": "Thu, 31 Aug 2006 12:23:37 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Sep 2007 11:28:07 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Feb 2009 11:57:45 GMT"
},
{
"version": "v4",
"created": "Wed, 7 Oct 2009 08:51:58 GMT"
},
{
"version": "v5",
"created": "Tue, 18 Jan 2011 15:38:52 GMT"
}
] | "2011-01-19T00:00:00" | [
[
"Bille",
"Philip",
""
],
[
"Goertz",
"Inge Li",
""
]
] |
cs/0609009 | Virginia Vassilevska | Virginia Vassilevska, Ryan Williams and Raphael Yuster | Finding heaviest H-subgraphs in real weighted graphs, with applications | 23 pages | null | null | null | cs.DS cs.DM | null | For a graph G with real weights assigned to the vertices (edges), the MAX
H-SUBGRAPH problem is to find an H-subgraph of G with maximum total weight, if
one exists. The all-pairs MAX H-SUBGRAPH problem is to find for every pair of
vertices u,v, a maximum H-subgraph containing both u and v, if one exists. Our
main results are new strongly polynomial algorithms for the all-pairs MAX
H-SUBGRAPH problem for vertex weighted graphs. We also give improved algorithms
for the MAX H-SUBGRAPH problem for edge weighted graphs, and various related
problems, including computing the first k most significant bits of the distance
product of two matrices. Some of our algorithms are based, in part, on fast
matrix multiplication.
| [
{
"version": "v1",
"created": "Mon, 4 Sep 2006 08:08:00 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Vassilevska",
"Virginia",
""
],
[
"Williams",
"Ryan",
""
],
[
"Yuster",
"Raphael",
""
]
] |
cs/0609031 | Sreyash Kenkre | Sreyash Kenkre and Sundar Vishwanathan | Approximation Algorithms for the Bipartite Multi-cut Problem | 11 pages | null | null | null | cs.CC cs.DS | null | We introduce the {\it Bipartite Multi-cut} problem. This is a generalization
of the {\it st-Min-cut} problem, is similar to the {\it Multi-cut} problem
(except for more stringent requirements) and also turns out to be an immediate
generalization of the {\it Min UnCut} problem. We prove that this problem is
{\bf NP}-hard and then present LP and SDP based approximation algorithms. While
the LP algorithm is based on the Garg-Vazirani-Yannakakis algorithm for {\it
Multi-cut}, the SDP algorithm uses the {\it Structure Theorem} of $\ell_2^2$
Metrics.
| [
{
"version": "v1",
"created": "Thu, 7 Sep 2006 18:10:39 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2006 06:46:19 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Kenkre",
"Sreyash",
""
],
[
"Vishwanathan",
"Sundar",
""
]
] |
cs/0609032 | Sumit Ganguly | Sumit Ganguly and Anirban Majumder | CR-precis: A deterministic summary structure for update data streams | 11 pages | null | null | IIT Kanpur, July 1 2006 | cs.DS | null | We present the \crprecis structure, which is a general-purpose, deterministic
and sub-linear data structure for summarizing \emph{update} data streams. The
\crprecis structure yields the \emph{first deterministic sub-linear space/time
algorithms for update streams} for answering a variety of fundamental stream
queries, such as (a) point queries, (b) range queries, (c) finding approximate
frequent items, (d) finding approximate quantiles, (e) finding approximate
hierarchical heavy hitters, (f) estimating inner-products, (g) near-optimal
$B$-bucket histograms, etc.
| [
{
"version": "v1",
"created": "Thu, 7 Sep 2006 19:21:01 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Sep 2006 05:51:43 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Oct 2006 07:37:25 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Ganguly",
"Sumit",
""
],
[
"Majumder",
"Anirban",
""
]
] |
cs/0609046 | Chih-Chun Wang | Chih-Chun Wang (1), Sanjeev R. Kulkarni (2), H. Vincent Poor (2) ((1)
Purdue University, (2) Princeton University) | Exhausting Error-Prone Patterns in LDPC Codes | submitted to IEEE Trans. Information Theory | null | null | null | cs.IT cs.DS math.IT | null | It is proved in this work that exhaustively determining bad patterns in
arbitrary, finite low-density parity-check (LDPC) codes, including stopping
sets for binary erasure channels (BECs) and trapping sets (also known as
near-codewords) for general memoryless symmetric channels, is an NP-complete
problem, and efficient algorithms are provided for codes of practical short
lengths n~=500. By exploiting the sparse connectivity of LDPC codes, the
stopping sets of size <=13 and the trapping sets of size <=11 can be
efficiently exhaustively determined for the first time, and the resulting
exhaustive list is of great importance for code analysis and finite code
optimization. The featured tree-based narrowing search distinguishes this
algorithm from existing ones for which inexhaustive methods are employed. One
important byproduct is a pair of upper bounds on the bit-error rate (BER) &
frame-error rate (FER) iterative decoding performance of arbitrary codes over
BECs that can be evaluated for any value of the erasure probability, including
both the waterfall and the error floor regions. The tightness of these upper
bounds and the exhaustion capability of the proposed algorithm are proved when
combining an optimal leaf-finding module with the tree-based search. These
upper bounds also provide a worst-case-performance guarantee which is crucial
to optimizing LDPC codes for extremely low error rate applications, e.g.,
optical/satellite communications. Extensive numerical experiments are conducted
that include both randomly and algebraically constructed LDPC codes, the
results of which demonstrate the superior efficiency of the exhaustion
algorithm and its significant value for finite length code optimization.
| [
{
"version": "v1",
"created": "Mon, 11 Sep 2006 00:50:16 GMT"
}
] | "2007-07-13T00:00:00" | [
[
"Wang",
"Chih-Chun",
""
],
[
"Kulkarni",
"Sanjeev R.",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
cs/0609083 | Chinh Hoang | C. T. Hoang, J. Sawada, X. Shu | k-Colorability of P5-free graphs | null | null | null | null | cs.DM cs.DS | null | A polynomial time algorithm that determines for a fixed integer k whether or
not a P5-free graph can be k-colored is presented in this paper. If such a
coloring exists, the algorithm will produce a valid k-coloring.
| [
{
"version": "v1",
"created": "Thu, 14 Sep 2006 21:30:23 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Hoang",
"C. T.",
""
],
[
"Sawada",
"J.",
""
],
[
"Shu",
"X.",
""
]
] |
cs/0609085 | Philip Bille | Philip Bille, Rolf Fagerberg, Inge Li Goertz | Improved Approximate String Matching and Regular Expression Matching on
Ziv-Lempel Compressed Texts | null | null | null | null | cs.DS | null | We study the approximate string matching and regular expression matching
problem for the case when the text to be searched is compressed with the
Ziv-Lempel adaptive dictionary compression schemes. We present a time-space
trade-off that leads to algorithms improving the previously known complexities
for both problems. In particular, we significantly improve the space bounds,
which in practical applications are likely to be a bottleneck.
| [
{
"version": "v1",
"created": "Fri, 15 Sep 2006 07:36:25 GMT"
},
{
"version": "v2",
"created": "Thu, 3 May 2007 11:07:06 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Bille",
"Philip",
""
],
[
"Fagerberg",
"Rolf",
""
],
[
"Goertz",
"Inge Li",
""
]
] |
cs/0609103 | Bodo Manthey | Bodo Manthey | Minimum-weight Cycle Covers and Their Approximability | To appear in the Proceedings of the 33rd Workshop on Graph-Theoretic
Concepts in Computer Science (WG 2007). Minor changes | null | null | null | cs.DS cs.CC cs.DM | null | A cycle cover of a graph is a set of cycles such that every vertex is part of
exactly one cycle. An L-cycle cover is a cycle cover in which the length of
every cycle is in the set L.
We investigate how well L-cycle covers of minimum weight can be approximated.
For undirected graphs, we devise a polynomial-time approximation algorithm that
achieves a constant approximation ratio for all sets L. On the other hand, we
prove that the problem cannot be approximated within a factor of 2-eps for
certain sets L.
For directed graphs, we present a polynomial-time approximation algorithm
that achieves an approximation ratio of O(n), where $n$ is the number of
vertices. This is asymptotically optimal: We show that the problem cannot be
approximated within a factor of o(n).
To contrast the results for cycle covers of minimum weight, we show that the
problem of computing L-cycle covers of maximum weight can, at least in
principle, be approximated arbitrarily well.
| [
{
"version": "v1",
"created": "Mon, 18 Sep 2006 13:22:39 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Feb 2007 16:14:32 GMT"
},
{
"version": "v3",
"created": "Wed, 2 May 2007 14:58:11 GMT"
}
] | "2009-09-29T00:00:00" | [
[
"Manthey",
"Bodo",
""
]
] |
cs/0609115 | Clemence Magnien | Matthieu Latapy, Clemence Magnien (LIP6 - CNRS and UPMC, France) | Measuring Fundamental Properties of Real-World Complex Networks | null | null | null | null | cs.NI cond-mat.stat-mech cs.DS | null | Complex networks, modeled as large graphs, received much attention during
these last years. However, data on such networks is only available through
intricate measurement procedures. Until recently, most studies assumed that
these procedures eventually lead to samples large enough to be representative
of the whole, at least concerning some key properties. This has crucial impact
on network modeling and simulation, which rely on these properties.
Recent contributions proved that this approach may be misleading, but no
solution has been proposed. We provide here the first practical way to
distinguish between cases where it is indeed misleading, and cases where the
observed properties may be trusted. It consists of studying how the properties
of interest evolve when the sample grows, and in particular whether they reach
a steady state or not.
In order to illustrate this method and to demonstrate its relevance, we apply
it to data-sets on complex network measurements that are representative of the
ones commonly used. The obtained results show that the method fulfills its
goals very well. We moreover identify some properties which seem easier to
evaluate in practice, thus opening interesting perspectives.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2006 13:38:41 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Mar 2007 11:36:47 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Latapy",
"Matthieu",
"",
"LIP6 - CNRS and UPMC, France"
],
[
"Magnien",
"Clemence",
"",
"LIP6 - CNRS and UPMC, France"
]
] |
cs/0609116 | Clemence Magnien | Matthieu Latapy (LIAFA - CNRS, Universite Paris 7) | Theory and Practice of Triangle Problems in Very Large (Sparse
(Power-Law)) Graphs | null | null | null | null | cs.DS cond-mat.stat-mech cs.NI | null | Finding, counting and/or listing triangles (three vertices with three edges)
in large graphs are natural fundamental problems, which have recently received much
attention because of their importance in complex network analysis. We provide
here a detailed state of the art on these problems, in a unified way. We note
that, until now, authors paid surprisingly little attention to space
complexity, despite its both fundamental and practical interest. We give the
space complexities of known algorithms and discuss their implications. Then we
propose improvements of a known algorithm, as well as a new algorithm, which
are time optimal for triangle listing and beat previous algorithms concerning
space complexity. They have the additional advantage of performing better on
power-law graphs, which we also study. We finally show with an experimental
study that these two algorithms perform very well in practice, allowing us to
handle cases that were previously out of reach.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2006 14:17:34 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Latapy",
"Matthieu",
"",
"LIAFA - CNRS, Universite Paris 7"
]
] |
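The triangle-listing problem discussed in the abstract above admits a short, well-known sketch: the "forward"-style approach, which orients each edge toward its higher-degree endpoint (ties broken by id) and intersects out-neighbor sets. This finds each triangle exactly once in roughly O(m^(3/2)) time; it is a generic illustration of the technique, not the paper's exact variant.

```python
from collections import defaultdict

def list_triangles(edges):
    """List each triangle once via the 'forward' orientation trick.

    Every edge is oriented from its lower-ranked to its higher-ranked
    endpoint (rank = (degree, vertex id)), giving an acyclic orientation
    in which each triangle has exactly one vertex with two out-edges.
    """
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    rank = {v: (d, v) for v, d in deg.items()}
    out = defaultdict(set)              # out[u]: higher-ranked neighbors of u
    for u, v in edges:
        if rank[u] < rank[v]:
            out[u].add(v)
        else:
            out[v].add(u)
    triangles = set()
    for u, nbrs in list(out.items()):
        for v in nbrs:
            # Common out-neighbors of u and v close a triangle u-v-w.
            for w in nbrs & out.get(v, set()):
                triangles.add(tuple(sorted((u, v, w))))
    return sorted(triangles)
```

On the complete graph K4 this returns its four triangles, each listed once.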
cs/0609118 | Prahladavaradan Sampath | Prahladavaradan Sampath | Duality of Fix-Points for Distributive Lattices | 7 pages | null | null | null | cs.DS cs.DM | null | We present a novel algorithm for calculating fix-points. The algorithm
calculates fix-points of an endo-function f on a distributive lattice, by
performing reachability computation on a graph derived from the dual of f; this is
in comparison to traditional algorithms that are based on iterated application
of f until a fix-point is reached.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2006 17:34:25 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Sampath",
"Prahladavaradan",
""
]
] |
cs/0609128 | Marcin Kaminski | Josep Diaz and Marcin Kaminski | Max-Cut and Max-Bisection are NP-hard on unit disk graphs | null | null | null | null | cs.DS cs.CC | null | We prove that the Max-Cut and Max-Bisection problems are NP-hard on unit disk
graphs. We also show that $\lambda$-precision graphs are planar for $\lambda >
1/\sqrt{2}$.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2006 18:17:12 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Diaz",
"Josep",
""
],
[
"Kaminski",
"Marcin",
""
]
] |
cs/0609153 | Pavel Dmitriev | Pavel Dmitriev, Carl Lagoze | Mining Generalized Graph Patterns based on User Examples | 11 pages, 11 figures. A short version of this paper appears in
proceedings of ICDM-2006 conference | null | null | null | cs.DS cs.LG | null | There has been a lot of recent interest in mining patterns from graphs.
Often, the exact structure of the patterns of interest is not known. This
happens, for example, when molecular structures are mined to discover fragments
useful as features in chemical compound classification task, or when web sites
are mined to discover sets of web pages representing logical documents. Such
patterns are often generated from a few small subgraphs (cores), according to
certain generalization rules (GRs). We call such patterns "generalized
patterns" (GPs). While being structurally different, GPs often perform the same
function in the network. Previously proposed approaches to mining GPs either
assumed that the cores and the GRs are given, or that all interesting GPs are
frequent. These are strong assumptions, which often do not hold in practical
applications. In this paper, we propose an approach to mining GPs that is free
from the above assumptions. Given a small number of GPs selected by the user,
our algorithm discovers all GPs similar to the user examples. First, a machine
learning-style approach is used to find the cores. Second, generalizations of
the cores in the graph are computed to identify GPs. Evaluation on synthetic
data, generated using real cores and GRs from biological and web domains,
demonstrates effectiveness of our approach.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2006 18:42:44 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Dmitriev",
"Pavel",
""
],
[
"Lagoze",
"Carl",
""
]
] |
cs/0610001 | Kunihiko Sadakane | Daisuke Okanohara, Kunihiko Sadakane | Practical Entropy-Compressed Rank/Select Dictionary | null | null | null | null | cs.DS | null | Rank/Select dictionaries are data structures for an ordered set $S \subset
\{0,1,...,n-1\}$ to compute $\rank(x,S)$ (the number of elements in $S$ which
are no greater than $x$), and $\select(i,S)$ (the $i$-th smallest element in
$S$), which are the fundamental components of \emph{succinct data structures}
of strings, trees, graphs, etc. In those data structures, however, only
asymptotic behavior has been considered and their performance for real data is
not satisfactory. In this paper, we propose four novel Rank/Select
dictionaries, esp, recrank, vcode and sdarray, each of which is small if the
number of elements in $S$ is small, and indeed close to $nH_0(S)$ ($H_0(S) \leq
1$ is the zero-th order \textit{empirical entropy} of $S$) in practice, and whose
query time is superior to that of previous structures. Experimental results reveal the
characteristics of our data structures and also show that these data structures
are superior to existing implementations in both size and query time.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2006 23:52:09 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Okanohara",
"Daisuke",
""
],
[
"Sadakane",
"Kunihiko",
""
]
] |
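The rank/select interface defined in the abstract above is easy to illustrate, though a toy version only: the sketch below answers both queries by binary search over a sorted list, and says nothing about the entropy-compressed layouts (esp, recrank, vcode, sdarray) the paper actually proposes.

```python
import bisect

def make_rank_select(S):
    """Build rank/select queries over a set S of non-negative integers.

    rank(x): number of elements of S that are no greater than x.
    select(i): the i-th smallest element of S (1-indexed).
    Both run in O(log |S|) time via binary search; real succinct
    dictionaries answer them in O(1) using far less space.
    """
    xs = sorted(S)
    def rank(x):
        return bisect.bisect_right(xs, x)
    def select(i):
        return xs[i - 1]
    return rank, select
```

For S = {1, 4, 7}: rank(4) counts the two elements 1 and 4, and select(3) returns the third smallest element, 7.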
cs/0610042 | Sergey Gubin | Sergey Gubin | A Polynomial Time Algorithm for The Traveling Salesman Problem | 8 pages. Simplified | Complementary to Yannakakis' Theorem, 22nd MCCCC, University of
Nevada, Las Vegas, 2008, p.8 | null | null | cs.DM cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ATSP polytope can be expressed by an asymmetric polynomial-size linear
program.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2006 13:15:12 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2006 17:34:51 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Sep 2008 05:02:16 GMT"
}
] | "2008-11-10T00:00:00" | [
[
"Gubin",
"Sergey",
""
]
] |
cs/0610046 | Daniel Lemire | Daniel Lemire | Streaming Maximum-Minimum Filter Using No More than Three Comparisons
per Element | to appear in Nordic Journal of Computing | Daniel Lemire, Streaming Maximum-Minimum Filter Using No More than
Three Comparisons per Element, Nordic Journal of Computing, Volume 13, Number
4, pages 328-339, 2006 | null | null | cs.DS | null | The running maximum-minimum (max-min) filter computes the maxima and minima
over running windows of size w. This filter has numerous applications in signal
processing and time series analysis. We present an easy-to-implement online
algorithm requiring no more than 3 comparisons per element, in the worst case.
Comparatively, no algorithm is known to compute the running maximum (or
minimum) filter in 1.5 comparisons per element, in the worst case. Our
algorithm has reduced latency and memory usage.
| [
{
"version": "v1",
"created": "Mon, 9 Oct 2006 22:09:42 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2006 02:01:49 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Feb 2007 21:18:33 GMT"
},
{
"version": "v4",
"created": "Tue, 13 Mar 2007 00:41:20 GMT"
},
{
"version": "v5",
"created": "Thu, 22 Mar 2007 01:19:11 GMT"
}
] | "2012-01-16T00:00:00" | [
[
"Lemire",
"Daniel",
""
]
] |
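The running max-min filter described above can be sketched with two monotonic double-ended queues of indices, one for candidate minima and one for candidate maxima. Each element enters and leaves each deque at most once, so the amortized cost is O(1) comparisons per element; this is a standard deque formulation in the spirit of the paper's wedge structure, not a line-by-line transcription of its algorithm.

```python
from collections import deque

def streaming_minmax(x, w):
    """Running (min, max) over sliding windows of width w.

    lo holds indices of x in increasing value order (front = window min);
    hi holds indices in decreasing value order (front = window max).
    Returns one (min, max) pair per full window x[i-w+1 .. i].
    """
    lo, hi = deque(), deque()
    out = []
    for i, v in enumerate(x):
        while lo and x[lo[-1]] >= v:   # drop dominated minima candidates
            lo.pop()
        lo.append(i)
        while hi and x[hi[-1]] <= v:   # drop dominated maxima candidates
            hi.pop()
        hi.append(i)
        if lo[0] <= i - w:             # evict indices that left the window
            lo.popleft()
        if hi[0] <= i - w:
            hi.popleft()
        if i >= w - 1:
            out.append((x[lo[0]], x[hi[0]]))
    return out
```

For x = [3, 1, 4, 1, 5, 9, 2, 6] and w = 3, the six windows yield (1,4), (1,4), (1,5), (1,9), (2,9), (2,9).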
cs/0610076 | Willy Valdivia-Granda | Willy Valdivia-Granda, William Perrizo, Edward Deckard, Francis Larson | Peano Count Trees (P-Trees) and Rule Association Mining for Gene
Expression Profiling of Microarray Data | null | 2002 International Conference in Bioinformatics. Bangkok, Thailand | null | null | cs.DS cs.IR q-bio.MN | null | The greatest challenge in maximizing the use of gene expression data is to
develop new computational tools capable of interconnecting and interpreting the
results from different organisms and experimental settings. We propose an
integrative and comprehensive approach including a super-chip containing data
from microarray experiments collected on different species subjected to hypoxic
and anoxic stress. A data mining technology called Peano count tree (P-trees)
is used to represent genomic data in multidimensions. Each microarray spot is
represented as a pixel with its corresponding red/green intensity feature bands.
Each band is stored separately in a reorganized bit-sequential (bSQ) file format.
Each bSQ is converted to a quadrant base tree structure (P-tree) from which a
superchip is represented as expression P-trees (EP-trees) and repression
P-trees (RP-trees). The use of association rule mining is proposed to
meaningfully organize signal transduction pathways, taking evolutionary
considerations into account. We argue that the genetic constitution of an
organism (K) can be represented by the total number of genes belonging to two
groups. The group X constitutes genes (X1,Xn) and they can be represented as 1
or 0 depending on whether the gene was expressed or not. The second group of Y
genes (Y1,Yn) is expressed at different levels: these genes may be highly
expressed, expressed, repressed, or highly repressed. However, many
genes of the group Y are species-specific and modulated by the products and
combinations of genes of the group X. In this paper, we introduce the bSQ and
P-tree technology; the biological implications of association rule mining using
X and Y gene groups and some advances in the integration of this information
using the BRAIN architecture.
| [
{
"version": "v1",
"created": "Thu, 12 Oct 2006 19:55:32 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Oct 2006 18:27:58 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Valdivia-Granda",
"Willy",
""
],
[
"Perrizo",
"William",
""
],
[
"Deckard",
"Edward",
""
],
[
"Larson",
"Francis",
""
]
] |
cs/0610119 | Elad Hazan | Elad Hazan | Approximate Convex Optimization by Online Game Playing | null | null | null | null | cs.DS | null | Lagrangian relaxation and approximate optimization algorithms have received
much attention in the last two decades. Typically, the running time of these
methods to obtain an $\epsilon$-approximate solution is proportional to
$\frac{1}{\epsilon^2}$. Recently, Bienstock and Iyengar, following Nesterov,
gave an algorithm for fractional packing linear programs which runs in
$\frac{1}{\epsilon}$ iterations. The latter algorithm requires solving a
convex quadratic program every iteration - an optimization subroutine which
dominates the theoretical running time.
We give an algorithm for convex programs with strictly convex constraints
which runs in time proportional to $\frac{1}{\epsilon}$. The algorithm does NOT
require solving any quadratic program, but uses gradient steps and elementary
operations only. Problems which have strictly convex constraints include
maximum entropy frequency estimation, portfolio optimization with loss risk
constraints, and various computational problems in signal processing.
As a side product, we also obtain a simpler version of Bienstock and
Iyengar's result for general linear programming, with similar running time.
We derive these algorithms using a new framework for deriving convex
optimization algorithms from online game playing algorithms, which may be of
independent interest.
| [
{
"version": "v1",
"created": "Thu, 19 Oct 2006 22:10:32 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Hazan",
"Elad",
""
]
] |
cs/0610125 | Radoslaw Hofman | Radoslaw Hofman | Report on article: P=NP Linear programming formulation of the Traveling
Salesman Problem | This version contains more figures and a clearer way to explain the
counterexample idea for k dimensions | null | null | null | cs.CC cs.DM cs.DS | This article presents counterexamples for three articles claiming that P=NP.
Articles for which it applies are: Moustapha Diaby "P = NP: Linear programming
formulation of the traveling salesman problem" and "Equality of complexity
classes P and NP: Linear programming formulation of the quadratic assignment
problem", and also Sergey Gubin "A Polynomial Time Algorithm for The Traveling
Salesman Problem"
| [
{
"version": "v1",
"created": "Fri, 20 Oct 2006 14:01:22 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Oct 2006 13:41:20 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Oct 2006 12:00:02 GMT"
},
{
"version": "v4",
"created": "Thu, 2 Nov 2006 11:19:24 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Hofman",
"Radoslaw",
""
]
] |
cs/0610128 | Daniel Lemire | Daniel Lemire and Owen Kaser | Hierarchical Bin Buffering: Online Local Moments for Dynamic External
Memory Arrays | null | ACM Transactions on Algorithms 4(1): 14 (2008) | 10.1145/1328911.1328925 | null | cs.DS cs.DB | http://creativecommons.org/publicdomain/zero/1.0/ | Local moments are used for local regression, to compute statistical measures
such as sums, averages, and standard deviations, and to approximate probability
distributions. We consider the case where the data source is a very large I/O
array of size n and we want to compute the first N local moments, for some
constant N. Without precomputation, this requires O(n) time. We develop a
sequence of algorithms of increasing sophistication that use precomputation and
additional buffer space to speed up queries. The simpler algorithms partition
the I/O array into consecutive ranges called bins, and they are applicable not
only to local-moment queries, but also to algebraic queries (MAX, AVERAGE, SUM,
etc.). With N buffers of size sqrt(n), time complexity drops to O(sqrt(n)). A
more sophisticated approach uses hierarchical buffering and has a logarithmic
time complexity (O(b log_b n)), when using N hierarchical buffers of size n/b.
Using Overlapped Bin Buffering, we show that only a single buffer is needed, as
with wavelet-based algorithms, but using much less storage. Applications exist
in multidimensional and statistical databases over massive data sets,
interactive image processing, and visualization.
| [
{
"version": "v1",
"created": "Sat, 21 Oct 2006 00:30:57 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Aug 2007 15:42:52 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Apr 2020 21:39:09 GMT"
}
] | "2020-04-28T00:00:00" | [
[
"Lemire",
"Daniel",
""
],
[
"Kaser",
"Owen",
""
]
] |
cs/0610155 | Ping Li | Ping Li, Trevor J. Hastie, Kenneth W. Church | Nonlinear Estimators and Tail Bounds for Dimension Reduction in $l_1$
Using Cauchy Random Projections | null | null | null | null | cs.DS cs.IR cs.LG | null | For dimension reduction in $l_1$, the method of {\em Cauchy random
projections} multiplies the original data matrix $\mathbf{A}
\in\mathbb{R}^{n\times D}$ with a random matrix $\mathbf{R} \in
\mathbb{R}^{D\times k}$ ($k\ll\min(n,D)$) whose entries are i.i.d. samples of
the standard Cauchy C(0,1). Because of the impossibility results, one cannot
hope to recover the pairwise $l_1$ distances in $\mathbf{A}$ from $\mathbf{B} =
\mathbf{AR} \in \mathbb{R}^{n\times k}$, using linear estimators without
incurring large errors. However, nonlinear estimators are still useful for
certain applications in data stream computation, information retrieval,
learning, and data mining.
We propose three types of nonlinear estimators: the bias-corrected sample
median estimator, the bias-corrected geometric mean estimator, and the
bias-corrected maximum likelihood estimator. The sample median estimator and
the geometric mean estimator are asymptotically (as $k\to \infty$) equivalent
but the latter is more accurate at small $k$. We derive explicit tail bounds
for the geometric mean estimator and establish an analog of the
Johnson-Lindenstrauss (JL) lemma for dimension reduction in $l_1$, which is
weaker than the classical JL lemma for dimension reduction in $l_2$.
Asymptotically, both the sample median estimator and the geometric mean
estimator are about 80% efficient compared to the maximum likelihood estimator
(MLE). We analyze the moments of the MLE and propose approximating the
distribution of the MLE by an inverse Gaussian.
| [
{
"version": "v1",
"created": "Fri, 27 Oct 2006 07:08:51 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Li",
"Ping",
""
],
[
"Hastie",
"Trevor J.",
""
],
[
"Church",
"Kenneth W.",
""
]
] |
cs/0610163 | Rajiv Ranjan Mr. | Rajiv Ranjan, Aaron Harwood and Rajkumar Buyya | A Taxonomy of Peer-to-Peer Based Complex Queries: a Grid perspective | null | null | null | null | cs.NI cs.DC cs.DS | null | Grid superscheduling requires support for efficient and scalable discovery of
resources. Resource discovery activities involve searching for the appropriate
resource types that match the user's job requirements. To accomplish this goal,
a resource discovery system that supports the desired look-up operation is
mandatory. Various kinds of solutions to this problem have been suggested,
including the centralised and hierarchical information server approach.
However, both of these approaches have serious limitations in regards to
scalability, fault-tolerance and network congestion. To overcome these
limitations, organising resource information using a Peer-to-Peer (P2P) network
model has been proposed.
structured P2P protocols, to support the Grid resource information system
(GRIS). In this paper, we identify issues related to the design of such an
efficient, scalable, fault-tolerant, consistent and practical GRIS system using
a P2P network model. We compile these issues into various taxonomies in
sections III and IV. Further, we look into existing works that apply P2P based
network protocols to GRIS. We think that this taxonomy and its mapping to
relevant systems would be useful for academic and industry based researchers
who are engaged in the design of scalable Grid systems.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2006 08:30:17 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Ranjan",
"Rajiv",
""
],
[
"Harwood",
"Aaron",
""
],
[
"Buyya",
"Rajkumar",
""
]
] |
cs/0610174 | Marko Samer | Marko Samer, Stefan Szeider | A Fixed-Parameter Algorithm for #SAT with Parameter Incidence Treewidth | 9 pages, 1 figure | null | null | null | cs.DS cs.CC cs.LO | null | We present an efficient fixed-parameter algorithm for #SAT parameterized by
the incidence treewidth, i.e., the treewidth of the bipartite graph whose
vertices are the variables and clauses of the given CNF formula; a variable and
a clause are joined by an edge if and only if the variable occurs in the
clause. Our algorithm runs in time O(4^k k l N), where k denotes the incidence
treewidth, l denotes the size of a largest clause, and N denotes the number of
nodes of the tree-decomposition.
| [
{
"version": "v1",
"created": "Tue, 31 Oct 2006 12:58:36 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Feb 2007 20:56:15 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Samer",
"Marko",
""
],
[
"Szeider",
"Stefan",
""
]
] |
cs/0611001 | Michael Elkin | Michael Elkin | A near-optimal fully dynamic distributed algorithm for maintaining
sparse spanners | null | null | null | null | cs.DS | null | In this paper we devise an extremely efficient fully dynamic distributed
algorithm for maintaining sparse spanners. Our results also include the first
fully dynamic centralized algorithm for the problem, with non-trivial bounds for
both incremental and decremental updates. Finally, we devise a very efficient
streaming algorithm for the problem.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2006 09:36:20 GMT"
}
] | "2009-09-29T00:00:00" | [
[
"Elkin",
"Michael",
""
]
] |
cs/0611008 | Radoslaw Hofman | Radoslaw Hofman | Why Linear Programming cannot solve large instances of NP-complete
problems in polynomial time | null | null | null | null | cs.CC cs.DM cs.DS cs.NA | null | This article discusses the ability of Linear Programming models to be used as
solvers of NP-complete problems. Integer Linear Programming is known to be
NP-complete, but non-integer Linear Programming problems can be solved
in polynomial time, which places them in the class P. During the past three
years, several articles have appeared that use LP to solve NP-complete
problems. These methods use a large number of variables (O(n^9)) and solve
correctly almost all instances that can be solved in reasonable time. Can they
solve infinitely large instances? This article gives an answer to this question.
| [
{
"version": "v1",
"created": "Thu, 2 Nov 2006 08:40:53 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Hofman",
"Radoslaw",
""
]
] |
cs/0611019 | Vincent Limouzy | Binh-Minh Bui-Xuan (LIRMM), Michel Habib (LIAFA), Vincent Limouzy
(LIAFA), Fabien De Montgolfier (LIAFA) | Algorithmic Aspects of a General Modular Decomposition Theory | null | null | null | null | cs.DS | null | A new general decomposition theory inspired from modular graph decomposition
is presented. This helps unify modular decomposition on different
structures, including (but not restricted to) graphs. Moreover, even in the
case of graphs, the terminology ``module'' not only captures the classical
graph modules but also allows one to handle 2-connected components,
star-cutsets, and other vertex subsets. The main result is that most of the
nice algorithmic tools developed for modular decomposition of graphs still
apply efficiently to our generalisation of modules. Besides, when an essential
axiom is satisfied, almost all the important properties can be retrieved. For
this case, an
algorithm given by Ehrenfeucht, Gabow, McConnell and Sullivan in 1994 is
decomposition problem.
| [
{
"version": "v1",
"created": "Sat, 4 Nov 2006 18:32:23 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Nov 2007 10:20:28 GMT"
}
] | "2007-11-20T00:00:00" | [
[
"Bui-Xuan",
"Binh-Minh",
"",
"LIRMM"
],
[
"Habib",
"Michel",
"",
"LIAFA"
],
[
"Limouzy",
"Vincent",
"",
"LIAFA"
],
[
"De Montgolfier",
"Fabien",
"",
"LIAFA"
]
] |
cs/0611023 | Surender Baswana | Surender Baswana | Faster Streaming algorithms for graph spanners | 16 pages | null | null | null | cs.DS | null | Given an undirected graph $G=(V,E)$ on $n$ vertices, $m$ edges, and an
integer $t\ge 1$, a subgraph $(V,E_S)$, $E_S\subseteq E$ is called a
$t$-spanner if for any pair of vertices $u,v \in V$, the distance between them
in the subgraph is at most $t$ times the actual distance. We present streaming
algorithms for computing a $t$-spanner with essentially optimal size-stretch
trade-offs for any undirected graph.
Our first algorithm is for the classical streaming model and works for
unweighted graphs only. The algorithm performs a single pass on the stream of
edges and requires $O(m)$ time to process the entire stream of edges. This
drastically improves the previous best single pass streaming algorithm for
computing a $t$-spanner, which requires $\Theta(mn^{\frac{2}{t}})$ time to
process the stream and computes spanner with size slightly larger than the
optimal.
Our second algorithm is for {\em StreamSort} model introduced by Aggarwal et
al. [FOCS 2004], which is the streaming model augmented with a sorting
primitive. The {\em StreamSort} model has been shown to be a more powerful and
still very realistic model than the streaming model for massive data sets
applications. Our algorithm, which works for weighted graphs as well, performs
$O(t)$ passes using only $O(\log n)$ bits of working memory.
Both of our algorithms require only elementary data structures.
| [
{
"version": "v1",
"created": "Mon, 6 Nov 2006 03:09:05 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Baswana",
"Surender",
""
]
] |
cs/0611088 | James Oravec | Lawrence L. Larmore and James A. Oravec | T-Theory Applications to Online Algorithms for the Server Problem | 19 figures 38 pages | null | null | null | cs.DS cs.DM | null | Although largely unnoticed by the online algorithms community, T-theory, a
field of discrete mathematics, has contributed to the development of several
online algorithms for the k-server problem. A brief summary of the k-server
problem, and some important application concepts of T-theory, are given.
Additionally, a number of known k-server results are restated using the
established terminology of T-theory. Lastly, a previously unpublished
3-competitiveness proof, using T-theory, for the Harmonic algorithm for two
servers is presented.
| [
{
"version": "v1",
"created": "Sat, 18 Nov 2006 19:50:57 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Larmore",
"Lawrence L.",
""
],
[
"Oravec",
"James A.",
""
]
] |
cs/0611098 | Christian Lavault | Christian Lavault (IRISA / INRIA Rennes) | Analysis of an Efficient Distributed Algorithm for Mutual Exclusion
(Average-Case Analysis of Path Reversal) | null | LNCS 634 (1992) 133-144 | null | null | cs.DC cs.DS | null | The algorithm analysed by Na\"{i}mi, Tr\'ehel and Arnold was the very first
distributed algorithm to solve the mutual exclusion problem in complete
networks by using a dynamic logical tree structure as its basic distributed
data structure, viz. a path reversal transformation in rooted n-node trees;
besides, it was also the first one to achieve a logarithmic average-case
message complexity. The present paper proposes a direct and general approach to
compute the moments of the cost of path reversal. It basically uses one-one
correspondences between combinatorial structures and the associated probability
generating functions: the expected cost of path reversal is thus proved to be
exactly $H_{n-1}$. Moreover, time and message complexity of the algorithm as
well as randomized bounds on its worst-case message complexity in arbitrary
networks are also given. The average-case analysis of path reversal and the
analysis of this distributed algorithm for mutual exclusion are thus fully
completed in the paper. The general techniques used should also prove valuable
and fruitful when adapted to the most efficient recent tree-based distributed
algorithms for mutual exclusion which require powerful tools, particularly for
average-case analyses.
| [
{
"version": "v1",
"created": "Mon, 20 Nov 2006 22:02:29 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Lavault",
"Christian",
"",
"IRISA / INRIA Rennes"
]
] |
cs/0611101 | Petteri Kaski | Andreas Bj\"orklund, Thore Husfeldt, Petteri Kaski, Mikko Koivisto | Fourier meets M\"{o}bius: fast subset convolution | null | null | null | null | cs.DS cs.DM math.CO | null | We present a fast algorithm for the subset convolution problem: given
functions f and g defined on the lattice of subsets of an n-element set N,
compute their subset convolution f*g, defined for all S\subseteq N by (f *
g)(S) = \sum_{T \subseteq S}f(T) g(S\setminus T), where addition and
multiplication are carried out in an arbitrary ring. Via M\"{o}bius transform
and inversion, our algorithm evaluates the subset convolution in O(n^2 2^n)
additions and multiplications, substantially improving upon the straightforward
O(3^n) algorithm. Specifically, if the input functions have an integer range
{-M,-M+1,...,M}, their subset convolution over the ordinary sum-product ring
can be computed in O^*(2^n log M) time; the notation O^* suppresses
polylogarithmic factors. Furthermore, using a standard embedding technique we
can compute the subset convolution over the max-sum or min-sum semiring in
O^*(2^n M) time. To demonstrate the applicability of fast subset convolution,
we present the first O^*(2^k n^2 + n m) algorithm for the minimum Steiner tree
problem in graphs with n vertices, k terminals, and m edges with bounded
integer weights, improving upon the O^*(3^k n + 2^k n^2 + n m) time bound of
the classical Dreyfus-Wagner algorithm. We also discuss extensions to recent
O^*(2^n)-time algorithms for covering and partitioning problems (Bj\"{o}rklund
and Husfeldt, FOCS 2006; Koivisto, FOCS 2006).
| [
{
"version": "v1",
"created": "Tue, 21 Nov 2006 08:34:30 GMT"
}
] | "2016-08-16T00:00:00" | [
[
"Björklund",
"Andreas",
""
],
[
"Husfeldt",
"Thore",
""
],
[
"Kaski",
"Petteri",
""
],
[
"Koivisto",
"Mikko",
""
]
] |
cs/0611107 | Adam L. Buchsbaum | Adam L. Buchsbaum, Emden R. Gansner, Cecilia M. Procopiuc, Suresh
Venkatasubramanian | Rectangular Layouts and Contact Graphs | 28 pages, 13 figures, 55 references, 1 appendix | null | null | null | cs.DS cs.DM | null | Contact graphs of isothetic rectangles unify many concepts from applications
including VLSI and architectural design, computational geometry, and GIS.
Minimizing the area of their corresponding {\em rectangular layouts} is a key
problem. We study the area-optimization problem and show that it is NP-hard to
find a minimum-area rectangular layout of a given contact graph. We present
$O(n)$-time algorithms that construct $O(n^2)$-area rectangular layouts for
general contact graphs and $O(n\log n)$-area rectangular layouts for trees.
(For trees, this is an $O(\log n)$-approximation algorithm.) We also present an
infinite family of graphs (rsp., trees) that require $\Omega(n^2)$ (rsp.,
$\Omega(n\log n)$) area.
We derive these results by presenting a new characterization of graphs that
admit rectangular layouts using the related concept of {\em rectangular duals}.
A corollary to our results relates the class of graphs that admit rectangular
layouts to {\em rectangle of influence drawings}.
| [
{
"version": "v1",
"created": "Tue, 21 Nov 2006 15:03:37 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Buchsbaum",
"Adam L.",
""
],
[
"Gansner",
"Emden R.",
""
],
[
"Procopiuc",
"Cecilia M.",
""
],
[
"Venkatasubramanian",
"Suresh",
""
]
] |
cs/0611114 | Ping Li | Ping Li | Very Sparse Stable Random Projections, Estimators and Tail Bounds for
Stable Random Projections | null | null | null | null | cs.DS cs.IT cs.LG math.IT | null | This paper will focus on three different aspects in improving the current
practice of stable random projections.
Firstly, we propose {\em very sparse stable random projections} to
significantly reduce the processing and storage cost, by replacing the
$\alpha$-stable distribution with a mixture of a symmetric $\alpha$-Pareto
distribution (with probability $\beta$, $0<\beta\leq1$) and a point mass at the
origin (with probability $1-\beta$). This leads to a significant
$\frac{1}{\beta}$-fold speedup for small $\beta$.
Secondly, we provide an improved estimator for recovering the original
$l_\alpha$ norms from the projected data. The standard estimator is based on
the (absolute) sample median, while we suggest using the geometric mean. The
geometric mean estimator we propose is strictly unbiased and is easier to
study. Moreover, the geometric mean estimator is more accurate, especially
non-asymptotically.
Thirdly, we provide an adequate answer to the basic question of how many
projections (samples) are needed for achieving some pre-specified level of
accuracy. \cite{Proc:Indyk_FOCS00,Article:Indyk_TKDE03} did not provide a
criterion that can be used in practice. The geometric mean estimator we propose
allows us to derive sharp tail bounds which can be expressed in exponential
forms with constants explicitly given.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2006 11:38:25 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Dec 2006 14:55:06 GMT"
}
] | "2007-07-13T00:00:00" | [
[
"Li",
"Ping",
""
]
] |
cs/0611116 | Mikhail Nesterenko | Mikhail Nesterenko and S\'ebastien Tixeuil | Discovering Network Topology in the Presence of Byzantine Faults | null | 13th Colloquium on Structural Information and Communication
Complexity (SIROCCO), LNCS Volume 4056 pp. 212-226, Chester, UK, July 2006 | 10.1007/11780823_17 | null | cs.DC cs.DS cs.OS | null | We study the problem of Byzantine-robust topology discovery in an arbitrary
asynchronous network. We formally state the weak and strong versions of the
problem. The weak version requires that either each node discovers the topology
of the network or at least one node detects the presence of a faulty node. The
strong version requires that each node discovers the topology regardless of
faults. We focus on non-cryptographic solutions to these problems. We explore
their bounds. We prove that the weak topology discovery problem is solvable
only if the connectivity of the network exceeds the number of faults in the
system. Similarly, we show that the strong version of the problem is solvable
only if the network connectivity is more than twice the number of faults. We
present solutions to both versions of the problem. The presented algorithms
match the established graph connectivity bounds. The algorithms do not require
the individual nodes to know either the diameter or the size of the network.
The message complexity of both programs is low polynomial with respect to the
network size. We describe how our solutions can be extended to add the property
of termination, handle topology changes and perform neighborhood discovery.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2006 18:25:43 GMT"
}
] | "2008-03-29T00:00:00" | [
[
"Nesterenko",
"Mikhail",
""
],
[
"Tixeuil",
"Sébastien",
""
]
] |
cs/0611117 | Mikhail Nesterenko | Mark Miyashita and Mikhail Nesterenko | 2FACE: Bi-Directional Face Traversal for Efficient Geometric Routing | null | null | null | null | cs.DC cs.DS cs.OS | null | We propose bi-directional face traversal algorithm $2FACE$ to shorten the
path the message takes to reach the destination in geometric routing. Our
algorithm combines the practicality of the best single-direction traversal
algorithms with a worst-case message complexity of $O(|E|)$, where $|E|$ is the
number of network edges. We apply $2FACE$ to a variety of geometric routing
algorithms. Our simulation results indicate that bi-directional face traversal
decreases the latency of message delivery two to three times compared to single
direction face traversal. The path thus selected approaches the shortest
possible route. This gain in speed comes with a similar message overhead
increase. We describe an algorithm which compensates for this message overhead
by recording the preferable face traversal direction. Thus, if a source has
several messages to send to the destination, the subsequent messages follow the
shortest route. Our simulation results show that with most geometric routing
algorithms the message overhead of finding the short route by bi-directional
face traversal is compensated within two to four repeat messages.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2006 19:28:31 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Miyashita",
"Mark",
""
],
[
"Nesterenko",
"Mikhail",
""
]
] |
cs/0611166 | Dimitris Kalles | Dimitris Kalles, Athanassios Papagelis | Lossless fitness inheritance in genetic algorithms for decision trees | Contains 23 pages, 6 figures, 12 tables. Text last updated as of
March 6, 2009. Submitted to a journal | null | null | null | cs.AI cs.DS cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When genetic algorithms are used to evolve decision trees, key tree quality
parameters can be recursively computed and re-used across generations of
partially similar decision trees. Simply storing instance indices at leaves is
enough for fitness to be piecewise computed in a lossless fashion. We show the
derivation of the (substantial) expected speed-up on two bounding case problems
and trace the attractive property of lossless fitness inheritance to the
divide-and-conquer nature of decision trees. The theoretical results are
supported by experimental evidence.
| [
{
"version": "v1",
"created": "Thu, 30 Nov 2006 15:20:15 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Mar 2009 22:43:31 GMT"
}
] | "2009-03-11T00:00:00" | [
[
"Kalles",
"Dimitris",
""
],
[
"Papagelis",
"Athanassios",
""
]
] |
cs/0612028 | Alexey Stepanov | Fedor V. Fomin, Serge Gaspers, Saket Saurabh, Alexey A. Stepanov | Using Combinatorics to Prune Search Trees: Independent and Dominating
Set | This paper has been withdrawn | null | null | null | cs.DS cs.DM | null | This paper has been withdrawn by the author.
| [
{
"version": "v1",
"created": "Tue, 5 Dec 2006 14:42:56 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Dec 2006 09:39:26 GMT"
}
] | "2008-07-07T00:00:00" | [
[
"Fomin",
"Fedor V.",
""
],
[
"Gaspers",
"Serge",
""
],
[
"Saurabh",
"Saket",
""
],
[
"Stepanov",
"Alexey A.",
""
]
] |
cs/0612031 | Andrew McGregor | Andrew McGregor and S. Muthukrishnan | Estimating Aggregate Properties on Probabilistic Streams | 11 pages | null | null | null | cs.DS cs.DB | null | The probabilistic-stream model was introduced by Jayram et al. \cite{JKV07}.
It is a generalization of the data stream model that is suited to handling
``probabilistic'' data where each item of the stream represents a probability
distribution over a set of possible events. Therefore, a probabilistic stream
determines a distribution over potentially a very large number of classical
"deterministic" streams where each item is deterministically one of the domain
values. The probabilistic model is applicable for not only analyzing streams
where the input has uncertainties (such as sensor data streams that measure
physical processes) but also where the streams are derived from the input data
by post-processing, such as tagging or reconciling inconsistent and poor
quality data.
We present streaming algorithms for computing commonly used aggregates on a
probabilistic stream. We present the first known one-pass streaming algorithm
for estimating the \AVG, improving on results in \cite{JKV07}. We present the
first known streaming algorithms for estimating the number of \DISTINCT items
on probabilistic streams. Further, we present extensions to other aggregates
such as the repeat rate, quantiles, etc. In all cases, our algorithms work with
provable accuracy guarantees and within the space constraints of the data
stream model.
| [
{
"version": "v1",
"created": "Tue, 5 Dec 2006 23:34:52 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"McGregor",
"Andrew",
""
],
[
"Muthukrishnan",
"S.",
""
]
] |
cs/0612033 | Andr\'e Kempe | Andr\'e Kempe | Acronym-Meaning Extraction from Corpora Using Multi-Tape Weighted
Finite-State Machines | 6 pages, LaTeX | null | null | 2006/019 (at Xerox Research Centre Europe, France) | cs.CL cs.DS cs.SC | null | The automatic extraction of acronyms and their meaning from corpora is an
important sub-task of text mining. It can be seen as a special case of string
alignment, where a text chunk is aligned with an acronym. Alternative
alignments have different cost, and ideally the least costly one should give
the correct meaning of the acronym. We show how this approach can be
implemented by means of a 3-tape weighted finite-state machine (3-WFSM) which
reads a text chunk on tape 1 and an acronym on tape 2, and generates all
alternative alignments on tape 3. The 3-WFSM can be automatically generated
from a simple regular expression. No additional algorithms are required at any
stage. Our 3-WFSM has a size of 27 states and 64 transitions, and finds the
best analysis of an acronym in a few milliseconds.
| [
{
"version": "v1",
"created": "Wed, 6 Dec 2006 10:13:12 GMT"
}
] | "2009-08-03T00:00:00" | [
[
"Kempe",
"André",
""
]
] |
cs/0612037 | Jerome Leroux | J\'er\^ome Leroux (LaBRI) | Least Significant Digit First Presburger Automata | null | null | null | null | cs.DS | null | Since 1969 \cite{C-MST69,S-SMJ77}, we know that any Presburger-definable set
\cite{P-PCM29} (a set of integer vectors satisfying a formula in the
first-order additive theory of the integers) can be represented by a
state-based symbolic representation, called in this paper Finite Digit Vector
Automata (FDVA). Efficient algorithms for manipulating these sets have been
recently developed. However, the problem of deciding whether an FDVA represents
such a set is a well-known hard problem, first solved by Muchnik in 1991 with a
quadruply-exponential time algorithm. In this paper, we show how to determine
in polynomial time whether a FDVA represents a Presburger-definable set, and we
provide in this positive case a polynomial time algorithm that constructs a
Presburger-formula that defines the same set.
| [
{
"version": "v1",
"created": "Wed, 6 Dec 2006 14:55:36 GMT"
}
] | "2016-08-16T00:00:00" | [
[
"Leroux",
"Jérôme",
"",
"LaBRI"
]
] |
cs/0612041 | Andr\'e Kempe | Andr\'e Kempe | Viterbi Algorithm Generalized for n-Tape Best-Path Search | 12 pages, 3 figures, LaTeX (+ .eps) | Proc. FSMNLP 2009, Pretoria, South Africa. July 21-24. (improved
version). | null | null | cs.CL cs.DS cs.SC | null | We present a generalization of the Viterbi algorithm for identifying the path
with minimal (resp. maximal) weight in a n-tape weighted finite-state machine
(n-WFSM), that accepts a given n-tuple of input strings (s_1,... s_n). It also
allows us to compile the best transduction of a given input n-tuple by a
weighted (n+m)-WFSM (transducer) with n input and m output tapes. Our algorithm
has a worst-case time complexity of O(|s|^n |E| log (|s|^n |Q|)), where n and
|s| are the number and average length of the strings in the n-tuple, and |Q|
and |E| the number of states and transitions in the n-WFSM, respectively. A
straight forward alternative, consisting in intersection followed by classical
shortest-distance search, operates in O(|s|^n (|E|+|Q|) log (|s|^n |Q|)) time.
| [
{
"version": "v1",
"created": "Thu, 7 Dec 2006 08:42:46 GMT"
}
] | "2009-08-03T00:00:00" | [
[
"Kempe",
"André",
""
]
] |
cs/0612052 | Jon Feldman | Jon Feldman, S. Muthukrishnan, Martin Pal, Cliff Stein | Budget Optimization in Search-Based Advertising Auctions | null | null | null | null | cs.DS cs.CE cs.GT | null | Internet search companies sell advertisement slots based on users' search
queries via an auction. While there has been a lot of attention on the auction
process and its game-theoretic aspects, our focus is on the advertisers. In
particular, the advertisers have to solve a complex optimization problem of how
to place bids on the keywords of their interest so that they can maximize their
return (the number of user clicks on their ads) for a given budget. We model
the entire process and study this budget optimization problem. While most
variants are NP hard, we show, perhaps surprisingly, that simply randomizing
between two uniform strategies that bid equally on all the keywords works well.
More precisely, this strategy gets at least 1-1/e fraction of the maximum
clicks possible. Such uniform strategies are likely to be practical. We also
present inapproximability results, and optimal algorithms for variants of the
budget optimization problem.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2006 17:33:54 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Feldman",
"Jon",
""
],
[
"Muthukrishnan",
"S.",
""
],
[
"Pal",
"Martin",
""
],
[
"Stein",
"Cliff",
""
]
] |
cs/0612055 | Milan Ruzic | Anna Pagh and Rasmus Pagh and Milan Ruzic | Linear Probing with Constant Independence | 13 pages | null | null | null | cs.DS cs.DB | null | Hashing with linear probing dates back to the 1950s, and is among the most
studied algorithms. In recent years it has become one of the most important
hash table organizations since it uses the cache of modern computers very well.
Unfortunately, previous analyses rely either on complicated and space-consuming
hash functions, or on the unrealistic assumption of free access to a truly
random hash function. Already Carter and Wegman, in their seminal paper on
universal hashing, raised the question of extending their analysis to linear
probing. However, we show in this paper that linear probing using a pairwise
independent family may have expected {\em logarithmic} cost per operation. On
the positive side, we show that 5-wise independence is enough to ensure
constant expected time per operation. This resolves the question of finding a
space and time efficient hash function that provably ensures good performance
for linear probing.
| [
{
"version": "v1",
"created": "Fri, 8 Dec 2006 22:50:24 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Pagh",
"Anna",
""
],
[
"Pagh",
"Rasmus",
""
],
[
"Ruzic",
"Milan",
""
]
] |
cs/0612058 | Eric Vigoda | Daniel Stefankovic, Santosh Vempala, Eric Vigoda | Adaptive Simulated Annealing: A Near-optimal Connection between Sampling
and Counting | null | null | null | null | cs.DS cs.DM | null | We present a near-optimal reduction from approximately counting the
cardinality of a discrete set to approximately sampling elements of the set. An
important application of our work is to approximating the partition function
$Z$ of a discrete system, such as the Ising model, matchings or colorings of a
graph. The typical approach to estimating the partition function $Z(\beta^*)$
at some desired inverse temperature $\beta^*$ is to define a sequence, which we
call a {\em cooling schedule}, $\beta_0=0<\beta_1<...<\beta_\ell=\beta^*$ where
Z(0) is trivial to compute and the ratios $Z(\beta_{i+1})/Z(\beta_i)$ are easy
to estimate by sampling from the distribution corresponding to $Z(\beta_i)$.
Previous approaches required a cooling schedule of length $O^*(\ln{A})$ where
$A=Z(0)$, thereby ensuring that each ratio $Z(\beta_{i+1})/Z(\beta_i)$ is
bounded. We present a cooling schedule of length $\ell=O^*(\sqrt{\ln{A}})$.
For well-studied problems such as estimating the partition function of the
Ising model, or approximating the number of colorings or matchings of a graph,
our cooling schedule is of length $O^*(\sqrt{n})$, which implies an overall
savings of $O^*(n)$ in the running time of the approximate counting algorithm
(since roughly $\ell$ samples are needed to estimate each ratio).
| [
{
"version": "v1",
"created": "Sun, 10 Dec 2006 20:00:38 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Stefankovic",
"Daniel",
""
],
[
"Vempala",
"Santosh",
""
],
[
"Vigoda",
"Eric",
""
]
] |
cs/0612060 | Sreyash Kenkre | Sreyash Kenkre, Sundar Vishwanathan | The Common Prefix Problem On Trees | 8 pages | null | null | null | cs.DS cs.CC | null | We present a theoretical study of a problem arising in database query
optimization, which we call the Common Prefix Problem. We present a
$(1-o(1))$ factor approximation algorithm for this problem, when the underlying
graph is a binary tree. We then use a result of Feige and Kogan to show that
even on stars, the problem is hard to approximate.
| [
{
"version": "v1",
"created": "Mon, 11 Dec 2006 12:32:02 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Kenkre",
"Sreyash",
""
],
[
"Vishwanathan",
"Sundar",
""
]
] |
cs/0612072 | Zoya Svitkina | S. Muthukrishnan, Martin Pal and Zoya Svitkina | Stochastic Models for Budget Optimization in Search-Based Advertising | null | null | null | null | cs.DS cs.GT | null | Internet search companies sell advertisement slots based on users' search
queries via an auction. Advertisers have to determine how to place bids on the
keywords of their interest in order to maximize their return for a given
budget: this is the budget optimization problem. The solution depends on the
distribution of future queries.
In this paper, we formulate stochastic versions of the budget optimization
problem based on natural probabilistic models of distribution over future
queries, and address two questions that arise.
[Evaluation] Given a solution, can we evaluate the expected value of the
objective function?
[Optimization] Can we find a solution that maximizes the objective function
in expectation?
Our main results are approximation and complexity results for these two
problems in our three stochastic models. In particular, our algorithmic results
show that simple prefix strategies that bid on all cheap keywords up to some
level are either optimal or good approximations for many cases; we show other
cases to be NP-hard.
| [
{
"version": "v1",
"created": "Thu, 14 Dec 2006 21:13:57 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Sep 2007 20:36:33 GMT"
}
] | "2007-09-24T00:00:00" | [
[
"Muthukrishnan",
"S.",
""
],
[
"Pal",
"Martin",
""
],
[
"Svitkina",
"Zoya",
""
]
] |
cs/0612074 | Zengjian Hu | Petra Berenbrink and Colin Cooper and Zengjian Hu | Energy Efficient Randomized Communication in Unknown AdHoc Networks | 15 pages. 1 figure | null | null | null | cs.DC cs.DS | null | This paper studies broadcasting and gossiping algorithms in random and
general AdHoc networks. Our goal is not only to minimise the broadcasting and
gossiping time, but also to minimise the energy consumption, which is measured
in terms of the total number of messages (or transmissions) sent. We assume
that the nodes do not know the network topology, and that they can only
send at a fixed power, meaning they cannot adjust the size of the area that
their messages cover. We believe that under these circumstances the number of
transmissions is a very good measure for the overall energy consumption.
For random networks, we present a broadcasting algorithm where every node
transmits at most once. We show that our algorithm broadcasts in $O(\log n)$
steps, w.h.p., where $n$ is the number of nodes. We then present an $O(d \log n)$
($d$ is the expected degree) gossiping algorithm using $O(\log n)$ messages per
node.
For general networks with known diameter $D$, we present a randomised
broadcasting algorithm with optimal broadcasting time $O(D \log (n/D) + \log^2
n)$ that uses an expected number of $O(\log^2 n / \log (n/D))$ transmissions
per node. We also show a tradeoff result between the broadcasting time and the
number of transmissions: we construct a network such that any oblivious
algorithmusing a time-invariant distribution requires $\Omega(\log^2 n / \log
(n/D))$ messages per node in order to finish broadcasting in optimal time. This
demonstrates the tightness of our upper bound. We also show that no oblivious
algorithm can complete broadcasting w.h.p. using $o(\log n)$ messages per node.
| [
{
"version": "v1",
"created": "Fri, 15 Dec 2006 03:43:39 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Berenbrink",
"Petra",
""
],
[
"Cooper",
"Colin",
""
],
[
"Hu",
"Zengjian",
""
]
] |
cs/0612088 | Julien Robert | Julien Robert and Nicolas Schabanel | Non-Clairvoyant Batch Sets Scheduling: Fairness is Fair enough | 12 pages, 1 figure | null | null | null | cs.DC cs.DS | null | Scheduling questions arise naturally in many different areas among which
are operating system design and compiling. In real-life systems, the
characteristics of the jobs (such as release time and processing time) are
usually unknown and unpredictable beforehand. The system is typically unaware
of the remaining work in each job or of the ability of the job to take
advantage of more resources. Following these observations, we adopt the job
model by Edmonds et al (2000, 2003) in which the jobs go through a sequence of
different phases. Each phase consists of a certain quantity of work and a
speed-up function that models how it takes advantage of the number of
processors it receives. We consider the non-clairvoyant online setting where a
collection of jobs arrives at time 0. We consider the metric setflowtime
introduced by Robert et al (2007). The goal is to minimize the sum of the
completion time of the sets, where a set is completed when all of its jobs are
done. If the input consists of a single set of jobs, this is simply the
makespan of the jobs; and if the input consists of a collection of singleton
sets, it is simply the flowtime of the jobs. We show that the non-clairvoyant
strategy EQUIoEQUI that evenly splits the available processors among the still
unserved sets and then evenly splits these processors among the still
uncompleted jobs of each unserved set, achieves a competitive ratio
(2+\sqrt{3}+o(1))\frac{\ln n}{\ln\ln n} for the setflowtime minimization and that
this is asymptotically optimal (up to a constant factor), where n is the size
of the largest set. For makespan minimization, we show that the non-clairvoyant
strategy EQUI achieves a competitive ratio of (1+o(1))\frac{\ln n}{\ln\ln n},
which is again asymptotically optimal.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2006 15:19:59 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Dec 2006 21:55:28 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Robert",
"Julien",
""
],
[
"Schabanel",
"Nicolas",
""
]
] |
cs/0612089 | Damien Woods | Damien Woods, Turlough Neary | On the time complexity of 2-tag systems and small universal Turing
machines | Slightly expanded and updated from conference version | FOCS 2006: 47th Annual IEEE Symposium on Foundations of Computer
Science, IEEE, pages 439-446, Berkeley, CA | 10.1109/FOCS.2006.58 | null | cs.CC cs.DS | null | We show that 2-tag systems efficiently simulate Turing machines. As a
corollary we find that the small universal Turing machines of Rogozhin, Minsky
and others simulate Turing machines in polynomial time. This is an exponential
improvement on the previously known simulation time overhead and improves a
forty year old result in the area of small universal Turing machines.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2006 15:59:45 GMT"
}
] | "2016-11-17T00:00:00" | [
[
"Woods",
"Damien",
""
],
[
"Neary",
"Turlough",
""
]
] |
cs/0612100 | Rob van Stee | Leah Epstein and Rob van Stee | Improved results for a memory allocation problem | null | null | null | null | cs.DS | null | We consider a memory allocation problem that can be modeled as a version of
bin packing where items may be split, but each bin may contain at most two
(parts of) items. A 3/2-approximation algorithm and an NP-hardness proof for
this problem were given by Chung et al. We give a simpler 3/2-approximation
algorithm for it which is in fact an online algorithm. This algorithm also has
good performance for the more general case where each bin may contain at most k
parts of items. We show that this general case is also strongly NP-hard.
Additionally, we give an efficient 7/5-approximation algorithm.
| [
{
"version": "v1",
"created": "Wed, 20 Dec 2006 13:39:18 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Epstein",
"Leah",
""
],
[
"van Stee",
"Rob",
""
]
] |
cs/0701011 | Michael Baer | Michael B. Baer | Infinite-Alphabet Prefix Codes Optimal for $\beta$-Exponential Penalties | 5 pages, 2 figures (with 3 illustrations total), accepted to ISIT
2007 | null | null | null | cs.IT cs.DS math.IT | null | Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set
of nonnegative integers. Although the countable number of inputs prevents usage
of the Huffman algorithm, there are nontrivial $P$ for which known methods find
a source code that is optimal in the sense of minimizing expected codeword
length. For some applications, however, a source code should instead minimize
one of a family of nonlinear objective functions, $\beta$-exponential means,
those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of
the $i$th codeword and $a$ is a positive constant. Applications of such
minimizations include a problem of maximizing the chance of message receipt in
single-shot communications ($a<1$) and a problem of minimizing the chance of
buffer overflow in a queueing system ($a>1$). This paper introduces methods for
finding codes optimal for such exponential means. One method applies to
geometric distributions, while another applies to distributions with lighter
tails. The latter algorithm is applied to Poisson distributions. Both are
extended to minimizing maximum pointwise redundancy.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2007 03:39:46 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Apr 2007 04:35:29 GMT"
}
] | "2007-07-13T00:00:00" | [
[
"Baer",
"Michael B.",
""
]
] |
cs/0701012 | Michael Baer | Michael B. Baer | $D$-ary Bounded-Length Huffman Coding | 5 pages, 2 figures, accepted to ISIT 2007 | null | null | null | cs.IT cs.DS math.IT | null | Efficient optimal prefix coding has long been accomplished via the Huffman
algorithm. However, there is still room for improvement and exploration
regarding variants of the Huffman problem. Length-limited Huffman coding,
useful for many practical applications, is one such variant, in which codes are
restricted to the set of codes in which none of the $n$ codewords is longer
than a given length, $l_{\max}$. Binary length-limited coding can be done in
$O(n l_{\max})$ time and O(n) space via the widely used Package-Merge
algorithm. In this paper the Package-Merge approach is generalized without
increasing complexity in order to introduce a minimum codeword length,
$l_{\min}$, to allow for objective functions other than the minimization of
expected codeword length, and to be applicable to both binary and nonbinary
codes; nonbinary codes were previously addressed using a slower dynamic
programming approach. These extensions have various applications -- including
faster decompression -- and can be used to solve the problem of finding an
optimal code with limited fringe, that is, finding the best code among codes
with a maximum difference between the longest and shortest codewords. The
previously proposed method for solving this problem was nonpolynomial time,
whereas solving this using the novel algorithm requires only $O(n (l_{\max}-
l_{\min})^2)$ time and O(n) space.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2007 03:42:09 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Apr 2007 04:31:52 GMT"
}
] | "2007-07-13T00:00:00" | [
[
"Baer",
"Michael B.",
""
]
] |
cs/0701020 | Sumit Ganguly | Sumit Ganguly | A nearly optimal and deterministic summary structure for update data
streams | Withdrawn | null | null | null | cs.DS | null | The paper has been withdrawn due to an error in Lemma 1.
| [
{
"version": "v1",
"created": "Thu, 4 Jan 2007 09:03:08 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jan 2007 15:11:59 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Ganguly",
"Sumit",
""
]
] |
cs/0701023 | Sergey Gubin | Sergey Gubin | A Polynomial Time Algorithm for 3-SAT | 9 pages. The version consolidates results and shares know-how | null | null | null | cs.CC cs.DM cs.DS cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article describes a class of efficient algorithms for 3SAT and their
generalizations to SAT.
| [
{
"version": "v1",
"created": "Thu, 4 Jan 2007 18:16:30 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Mar 2007 17:55:26 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Apr 2008 22:57:05 GMT"
},
{
"version": "v4",
"created": "Tue, 15 Jul 2008 07:28:03 GMT"
}
] | "2008-07-15T00:00:00" | [
[
"Gubin",
"Sergey",
""
]
] |
cs/0701045 | Iosif Pinelis | Iosif Pinelis | Polygon Convexity: Another O(n) Test | 14 pages; changes: (i) a test for non-strict convexity is added; (ii)
the proofs are gathered in a separate section; (iii) a more detailed abstract
is given | null | null | null | cs.CG cs.DS | null | An n-gon is defined as a sequence \P=(V_0,...,V_{n-1}) of n points on the
plane. An n-gon \P is said to be convex if the boundary of the convex hull of
the set {V_0,...,V_{n-1}} of the vertices of \P coincides with the union of the
edges [V_0,V_1],...,[V_{n-1},V_0]; if at that no three vertices of \P are
collinear then \P is called strictly convex. We prove that an n-gon \P with
n\ge3 is strictly convex if and only if a cyclic shift of the sequence
(\al_0,...,\al_{n-1})\in[0,2\pi)^n of the angles between the x-axis and the
vectors V_1-V_0,...,V_0-V_{n-1} is strictly monotone. A ``non-strict'' version
of this result is also proved.
| [
{
"version": "v1",
"created": "Mon, 8 Jan 2007 18:51:37 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Jan 2007 22:57:51 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Pinelis",
"Iosif",
""
]
] |
cs/0701079 | Yuriy Reznik | Yuriy A. Reznik | Practical Binary Adaptive Block Coder | null | null | null | null | cs.IT cs.DS math.IT | null | This paper describes design of a low-complexity algorithm for adaptive
encoding/decoding of binary sequences produced by memoryless sources. The
algorithm implements universal block codes constructed for a set of contexts
identified by the numbers of non-zero bits in previous bits in a sequence. We
derive a precise formula for asymptotic redundancy of such codes, which refines
a previous well-known estimate by Krichevsky and Trofimov, and provide
experimental verification of this result. In our experimental study we also
compare our implementation with existing binary adaptive encoders, such as
JBIG's Q-coder, and MPEG AVC (ITU-T H.264)'s CABAC algorithms.
| [
{
"version": "v1",
"created": "Thu, 11 Jan 2007 19:58:05 GMT"
}
] | "2007-07-13T00:00:00" | [
[
"Reznik",
"Yuriy A.",
""
]
] |
cs/0701083 | Marko Samer | Georg Gottlob, Marko Samer | A Backtracking-Based Algorithm for Computing Hypertree-Decompositions | 19 pages, 6 figures, 3 tables | ACM Journal of Experimental Algorithmics (JEA) 13(1):1.1-1.19,
2008. | 10.1145/1412228.1412229 | null | cs.DS cs.AI | null | Hypertree decompositions of hypergraphs are a generalization of tree
decompositions of graphs. The corresponding hypertree-width is a measure for
the cyclicity and therefore tractability of the encoded computation problem.
Many NP-hard decision and computation problems are known to be tractable on
instances whose structure corresponds to hypergraphs of bounded
hypertree-width. Intuitively, the smaller the hypertree-width, the faster the
computation problem can be solved. In this paper, we present the new
backtracking-based algorithm det-k-decomp for computing hypertree
decompositions of small width. Our benchmark evaluations have shown that
det-k-decomp significantly outperforms opt-k-decomp, the only exact hypertree
decomposition algorithm so far. Even compared to the best heuristic algorithm,
we obtained competitive results as long as the hypergraphs are not too large.
| [
{
"version": "v1",
"created": "Sun, 14 Jan 2007 01:14:25 GMT"
}
] | "2008-10-12T00:00:00" | [
[
"Gottlob",
"Georg",
""
],
[
"Samer",
"Marko",
""
]
] |
cs/0701114 | Ignacio Vega-Paez M en C | Ignacio Vega-Paez, Georgina G. Pulido, Jose Angel Ortega | The problem determination of Functional Dependencies between attributes
Relation Scheme in the Relational Data Model. El problema de determinar
Dependencias Funcionales entre atributos en los esquemas en el Modelo
Relacional | null | International Journal of Multidisciplinary Sciences and
Engineering, Vol. 2, No. 5, 2011, 1-4 | null | IBP-Memo 2006 07, Sep 2006 | cs.DB cs.DS | null | An alternative definition of functional dependence
among the attributes of a relation scheme in the Relational Model is given;
this definition is stated in terms of set theory. A theorem establishing the
equivalence of the two definitions is proved, and, on the basis of this
theorem, an algorithm for finding the functional dependencies among the
attributes is constructed. The algorithm is illustrated with a concrete example
| [
{
"version": "v1",
"created": "Wed, 17 Jan 2007 20:08:53 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Mar 2007 22:43:08 GMT"
}
] | "2011-12-09T00:00:00" | [
[
"Vega-Paez",
"Ignacio",
""
],
[
"Pulido",
"Georgina G.",
""
],
[
"Ortega",
"Jose Angel",
""
]
] |
cs/0701142 | Wolfgang Bein | Wolfgang Bein, Lawrence L. Larmore, R\"udiger Reischuk | Knowledge State Algorithms: Randomization with Limited Information | 17 pages, 2 figures | null | null | null | cs.DS | null | We introduce the concept of knowledge states; many well-known algorithms can
be viewed as knowledge state algorithms. The knowledge state approach can be
used to to construct competitive randomized online algorithms and study the
tradeoff between competitiveness and memory. A knowledge state simply states
conditional obligations of an adversary, by fixing a work function, and gives a
distribution for the algorithm. When a knowledge state algorithm receives a
request, it then calculates one or more "subsequent" knowledge states, together
with a probability of transition to each. The algorithm then uses randomization
to select one of those subsequents to be the new knowledge state. We apply the
method to the paging problem. We present optimally competitive algorithm for
paging for the cases where the cache sizes are k=2 and k=3. These algorithms
use only a very limited number of bookmarks.
| [
{
"version": "v1",
"created": "Tue, 23 Jan 2007 00:54:27 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Bein",
"Wolfgang",
""
],
[
"Larmore",
"Lawrence L.",
""
],
[
"Reischuk",
"Rüdiger",
""
]
] |
cs/0701153 | Richard Kr\'alovi\v{c} | Michal Fori\v{s}ek, Branislav Katreniak, Jana Katreniakov\'a,
Rastislav Kr\'alovi\v{c}, Richard Kr\'alovi\v{c}, Vladim\'ir Koutn\'y, Dana
Pardubsk\'a, Tom\'a\v{s} Plachetka, Branislav Rovan | Online Bandwidth Allocation | null | null | null | null | cs.DS cs.NI | null | The paper investigates a version of the resource allocation problem arising
in wireless networking, namely in the OVSF code reallocation process. In
this setting a complete binary tree of a given height $n$ is considered,
together with a sequence of requests which have to be served in an online
manner. The requests are of two types: an insertion request asks to
allocate a complete subtree of a given height, and a deletion request frees a
given allocated subtree. In order to serve an insertion request it might be
necessary to move some already allocated subtrees to other locations in order
to free a large enough subtree. We are interested in the worst case average
number of such reallocations needed to serve a request.
It was proved in previous work that the competitive ratio of the optimal
online algorithm solving this problem is between 1.5 and O(n). We partially
answer the question about its exact value by giving an O(1)-competitive online
algorithm.
The same model has been used in the context of memory management systems, and
analyzed for the number of reallocations needed to serve a request in the worst
case. In this setting, our result is a corresponding amortized analysis.
| [
{
"version": "v1",
"created": "Thu, 25 Jan 2007 11:52:29 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Forišek",
"Michal",
""
],
[
"Katreniak",
"Branislav",
""
],
[
"Katreniaková",
"Jana",
""
],
[
"Královič",
"Rastislav",
""
],
[
"Královič",
"Richard",
""
],
[
"Koutný",
"Vladimír",
""
],
[
"Pardubská",
"Dana",
""
],
[
"Plachetka",
"Tomáš",
""
],
[
"Rovan",
"Branislav",
""
]
] |
cs/0701164 | Jim Gray | Alexander S. Szalay, Jim Gray, George Fekete, Peter Z. Kunszt, Peter
Kukol, Ani Thakar | Indexing the Sphere with the Hierarchical Triangular Mesh | null | null | null | MSR-TR-2005-123 | cs.DB cs.DS | null | We describe a method to subdivide the surface of a sphere into spherical
triangles of similar, but not identical, shapes and sizes. The Hierarchical
Triangular Mesh (HTM) is a quad-tree that is particularly good at supporting
searches at different resolutions, from arc seconds to hemispheres. The
subdivision scheme is universal, providing the basis for addressing and for
fast lookups. The HTM provides the basis for an efficient geospatial indexing
scheme in relational databases where the data have an inherent location on
either the celestial sphere or the Earth. The HTM index is superior to
cartographical methods using coordinates with singularities at the poles. We
also describe a way to specify surface regions that efficiently represent
spherical query areas. This article presents the algorithms used to identify
the HTM triangles covering such regions.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2007 00:04:12 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Szalay",
"Alexander S.",
""
],
[
"Gray",
"Jim",
""
],
[
"Fekete",
"George",
""
],
[
"Kunszt",
"Peter Z.",
""
],
[
"Kukol",
"Peter",
""
],
[
"Thakar",
"Ani",
""
]
] |
cs/0701171 | Jim Gray | Jim Gray, Maria A. Nieto-Santisteban, Alexander S. Szalay | The Zones Algorithm for Finding Points-Near-a-Point or Cross-Matching
Spatial Datasets | null | null | null | MSR TR 2006 52 | cs.DB cs.DS | null | Zones index an N-dimensional Euclidian or metric space to efficiently support
points-near-a-point queries either within a dataset or between two datasets.
The approach uses relational algebra and the B-Tree mechanism found in almost
all relational database systems. Hence, the Zones Algorithm gives a
portable-relational implementation of points-near-point, spatial cross-match,
and self-match queries. This article corrects some mistakes in an earlier
article we wrote on the Zones Algorithm and describes some algorithmic
improvements. The Appendix includes an implementation of point-near-point,
self-match, and cross-match using the USGS city and stream gauge database.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2007 05:11:20 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Gray",
"Jim",
""
],
[
"Nieto-Santisteban",
"Maria A.",
""
],
[
"Szalay",
"Alexander S.",
""
]
] |
cs/0701174 | Dimitris Kalles | T. Hadzilacos, D. Kalles, D. Koumanakos, V. Mitsionis | A Prototype for Educational Planning Using Course Constraints to
Simulate Student Populations | Contains 9 pages, 3 figures, 1 table. Text updated as of February 27,
2009. Submitted to a journal | null | null | null | cs.AI cs.CY cs.DS cs.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distance learning universities usually afford their students the flexibility
to advance their studies at their own pace. This can lead to a considerable
fluctuation of student populations within a program's courses, possibly
affecting the academic viability of a program as well as the related required
resources. Providing a method that estimates this population could be of
substantial help to university management and academic personnel. We describe
how to use course precedence constraints to calculate alternative tuition paths
and then use Markov models to estimate future populations. In doing so, we
identify key issues of a large scale potential deployment.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2007 08:32:10 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Oct 2008 09:33:20 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Mar 2009 21:20:54 GMT"
}
] | "2009-03-12T00:00:00" | [
[
"Hadzilacos",
"T.",
""
],
[
"Kalles",
"D.",
""
],
[
"Koumanakos",
"D.",
""
],
[
"Mitsionis",
"V.",
""
]
] |
cs/0701185 | Frank Gurski | Frank Gurski | Graph Operations on Clique-Width Bounded Graphs | 30 pages, to appear in "Theory of Computing Systems" | null | 10.1007/s00224-016-9685-1 | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clique-width is a well-known graph parameter. Many NP-hard graph problems
admit polynomial-time solutions when restricted to graphs of bounded
clique-width. The same holds for NLC-width. In this paper we study the behavior
of clique-width and NLC-width under various graph operations and graph
transformations. We give upper and lower bounds for the clique-width and
NLC-width of the modified graphs in terms of the clique-width and NLC-width of
the involved graphs.
| [
{
"version": "v1",
"created": "Mon, 29 Jan 2007 14:36:25 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Nov 2014 14:02:50 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Jun 2016 16:45:38 GMT"
}
] | "2016-06-07T00:00:00" | [
[
"Gurski",
"Frank",
""
]
] |
cs/0701189 | Sebastien Tixeuil | Fredrik Manne, Morten Mjelde, Laurence Pilard, S\'ebastien Tixeuil
(LRI) | A New Self-Stabilizing Maximal Matching Algorithm | null | null | null | null | cs.DS cs.DC | null | The maximal matching problem has received considerable attention in the
self-stabilizing community. Previous work has given different self-stabilizing
algorithms that solves the problem for both the adversarial and fair
distributed daemon, the sequential adversarial daemon, as well as the
synchronous daemon. In the following we present a single self-stabilizing
algorithm for this problem that unites all of these algorithms in that it
stabilizes in the same number of moves as the previous best algorithms for the
sequential adversarial, the distributed fair, and the synchronous daemon. In
addition, the algorithm improves the previous best move complexities for the
distributed adversarial daemon from O(n^2) and O(delta m) to O(m) where n is
the number of processes, m is the number of edges, and delta is the maximum
degree in the graph.
| [
{
"version": "v1",
"created": "Tue, 30 Jan 2007 14:52:37 GMT"
}
] | "2016-08-14T00:00:00" | [
[
"Manne",
"Fredrik",
"",
"LRI"
],
[
"Mjelde",
"Morten",
"",
"LRI"
],
[
"Pilard",
"Laurence",
"",
"LRI"
],
[
"Tixeuil",
"Sébastien",
"",
"LRI"
]
] |
cs/0702025 | Markus P\"uschel | Markus Pueschel and Jose M. F. Moura | Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for
DCTs and DSTs | 31 pages, more information at http://www.ece.cmu.edu/~smart | IEEE Transactions on Signal Processing, Vol. 56, No. 4, pp.
1502-1521, 2008 | 10.1109/TSP.2007.907919 | null | cs.IT cs.DS math.IT | null | This paper presents a systematic methodology based on the algebraic theory of
signal processing to classify and derive fast algorithms for linear transforms.
Instead of manipulating the entries of transform matrices, our approach derives
the algorithms by stepwise decomposition of the associated signal models, or
polynomial algebras. This decomposition is based on two generic methods or
algebraic principles that generalize the well-known Cooley-Tukey FFT and make
the algorithms' derivations concise and transparent. Application to the 16
discrete cosine and sine transforms yields a large class of fast algorithms,
many of which have not been found before.
| [
{
"version": "v1",
"created": "Sun, 4 Feb 2007 23:44:34 GMT"
}
] | "2020-01-29T00:00:00" | [
[
"Pueschel",
"Markus",
""
],
[
"Moura",
"Jose M. F.",
""
]
] |
cs/0702029 | Mikkel Thorup | Mario Szegedy and Mikkel Thorup | On the variance of subset sum estimation | 20 pages, 1 figure | null | null | null | cs.DS | null | For high volume data streams and large data warehouses, sampling is used for
efficient approximate answers to aggregate queries over selected subsets.
Mathematically, we are dealing with a set of weighted items and want to support
queries to arbitrary subset sums. With unit weights, we can compute subset
sizes which together with the previous sums provide the subset averages. The
question addressed here is which sampling scheme we should use to get the most
accurate subset sum estimates.
We present a simple theorem on the variance of subset sum estimation and use
it to prove variance optimality and near-optimality of subset sum estimation
with different known sampling schemes. This variance is measured as the average
over all subsets of any given size. By optimal we mean there is no set of input
weights for which any sampling scheme can have a better average variance. Such
powerful results can never be established experimentally. The results of this
paper are derived mathematically. For example, we show that appropriately
weighted systematic sampling is simultaneously optimal for all subset sizes.
More standard schemes such as uniform sampling and
probability-proportional-to-size sampling with replacement can be arbitrarily
bad.
Knowing the variance optimality of different sampling schemes can help
deciding which sampling scheme to apply in a given context.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2007 15:55:41 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Szegedy",
"Mario",
""
],
[
"Thorup",
"Mikkel",
""
]
] |
cs/0702032 | Reid Andersen | Reid Andersen | Finding large and small dense subgraphs | 12 pages, no figures | null | null | null | cs.DS | null | We consider two optimization problems related to finding dense subgraphs. The
densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph of
highest average degree among all subgraphs with at least k vertices, and the
densest at-most-k-subgraph problem (DamkS) is defined similarly. These problems
are related to the well-known densest k-subgraph problem (DkS), which is to
find the densest subgraph on exactly k vertices. We show that DalkS can be
approximated efficiently, while DamkS is nearly as hard to approximate as the
densest k-subgraph problem.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2007 19:29:38 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Andersen",
"Reid",
""
]
] |
cs/0702043 | Chinh Hoang | Ch\'inh T. Ho\`ang, Marcin Kami\'nski, Vadim Lozin, J. Sawada, X. Shu | Deciding k-colourability of $P_5$-free graphs in polynomial time | null | null | null | null | cs.DS | null | The problem of computing the chromatic number of a $P_5$-free graph is known
to be NP-hard. In contrast to this negative result, we show that determining
whether or not a $P_5$-free graph admits a $k$-colouring, for each fixed number
of colours $k$, can be done in polynomial time. If such a colouring exists, our
algorithm produces it.
| [
{
"version": "v1",
"created": "Wed, 7 Feb 2007 15:29:32 GMT"
}
] | "2016-08-14T00:00:00" | [
[
"Hoàng",
"Chính T.",
""
],
[
"Kamiński",
"Marcin",
""
],
[
"Lozin",
"Vadim",
""
],
[
"Sawada",
"J.",
""
],
[
"Shu",
"X.",
""
]
] |
cs/0702049 | Gregory Gutin | Noga Alon, Fedor Fomin, Gregory Gutin, Michael Krivelevich and Saket
Saurabh | Parameterized Algorithms for Directed Maximum Leaf Problems | null | null | null | null | cs.DS cs.DM | null | We prove that finding a rooted subtree with at least $k$ leaves in a digraph
is a fixed parameter tractable problem. A similar result holds for finding
rooted spanning trees with many leaves in digraphs from a wide family $\cal L$
that includes all strong and acyclic digraphs. This settles completely an open
question of Fellows and solves another one for digraphs in $\cal L$. Our
algorithms are based on the following combinatorial result which can be viewed
as a generalization of many results for a `spanning tree with many leaves' in
the undirected case, and which is interesting on its own: If a digraph $D\in
\cal L$ of order $n$ with minimum in-degree at least 3 contains a rooted
spanning tree, then $D$ contains one with at least $(n/2)^{1/5}-1$ leaves.
| [
{
"version": "v1",
"created": "Thu, 8 Feb 2007 18:25:08 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Alon",
"Noga",
""
],
[
"Fomin",
"Fedor",
""
],
[
"Gutin",
"Gregory",
""
],
[
"Krivelevich",
"Michael",
""
],
[
"Saurabh",
"Saket",
""
]
] |
cs/0702054 | Christoph Durr | Christoph Durr and Nguyen Kim Thang | Nash equilibria in Voronoi games on graphs | null | null | null | null | cs.GT cs.DS | null | In this paper we study a game where every player is to choose a vertex
(facility) in a given undirected graph. All vertices (customers) are then
assigned to closest facilities and a player's payoff is the number of customers
assigned to it. We show that deciding the existence of a Nash equilibrium for a
given graph is NP-hard which to our knowledge is the first result of this kind
for a zero-sum game. We also introduce a new measure, the social cost
discrepancy, defined as the ratio of the costs between the worst and the best
Nash equilibria. We show that the social cost discrepancy in our game is
Omega(sqrt(n/k)) and O(sqrt(kn)), where n is the number of vertices and k the
number of players.
| [
{
"version": "v1",
"created": "Fri, 9 Feb 2007 12:12:17 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Apr 2007 14:11:36 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Durr",
"Christoph",
""
],
[
"Thang",
"Nguyen Kim",
""
]
] |
cs/0702056 | Hanene Mohamed | Hanene Mohamed (INRIA Rocquencourt) | A probabilistic analysis of a leader election algorithm | null | Fourth Colloquium on Mathematics and Computer Science Algorithms,
Trees, Combinatorics and Probabilities (2006) 225-236 | null | null | cs.DS | null | A {\em leader election} algorithm is an elimination process that divides
an initial group of n items recursively into two subgroups, eliminates one
subgroup, and continues the procedure until a subgroup is of size 1. In this
paper the biased case is analyzed. We are interested in the {\em cost} of the
algorithm, i.e. the number of operations needed until the algorithm stops.
Using a probabilistic approach, the asymptotic behavior of the algorithm is
shown to be related to the behavior of a hitting time of two random sequences
on [0,1].
| [
{
"version": "v1",
"created": "Fri, 9 Feb 2007 15:16:48 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Mohamed",
"Hanene",
"",
"INRIA Rocquencourt"
]
] |
cs/0702057 | Salman Beigi | Mohsen Bahramgiri, Salman Beigi | An Efficient Algorithm to Recognize Locally Equivalent Graphs in
Non-Binary Case | 21 pages, no figures, minor corrections | null | null | null | cs.DS | null | Let $v$ be a vertex of a graph $G$. By the local complementation of $G$ at
$v$ we mean to complement the subgraph induced by the neighbors of $v$. This
operator can be generalized as follows. Assume that each edge of $G$ has a
label in the finite field $\mathbf{F}_q$. Let $(g_{ij})$ be the set of labels
($g_{ij}$ is the label of edge $ij$). We define two types of operators. For the
first one, let $v$ be a vertex of $G$ and $a\in \mathbf{F}_q$, and obtain the
graph with labels $g'_{ij}=g_{ij}+ag_{vi}g_{vj}$. For the second, if $0\neq
b\in \mathbf{F}_q$, the resulting graph is a graph with labels $g''_{vi}=bg_{vi}$
and $g''_{ij}=g_{ij}$, for $i,j$ unequal to $v$. It is clear that if the field
is binary, the operators are just the local complementations we described.
The problem of whether two graphs are equivalent under local complementations
has been studied, \cite{bouchalg}. Here we consider the general case and
assuming that $q$ is odd, present the first known efficient algorithm to verify
whether two graphs are locally equivalent or not.
| [
{
"version": "v1",
"created": "Fri, 9 Feb 2007 15:42:46 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Jul 2007 21:01:46 GMT"
}
] | "2007-07-02T00:00:00" | [
[
"Bahramgiri",
"Mohsen",
""
],
[
"Beigi",
"Salman",
""
]
] |
cs/0702078 | Reid Andersen | Reid Andersen | A Local Algorithm for Finding Dense Subgraphs | 14 pages, no figures | null | null | null | cs.DS cs.CC | null | We present a local algorithm for finding dense subgraphs of bipartite graphs,
according to the definition of density proposed by Kannan and Vinay. Our
algorithm takes as input a bipartite graph with a specified starting vertex,
and attempts to find a dense subgraph near that vertex. We prove that for any
subgraph S with k vertices and density theta, there are a significant number of
starting vertices within S for which our algorithm produces a subgraph S' with
density theta / O(log n) on at most O(D k^2) vertices, where D is the maximum
degree. The running time of the algorithm is O(D k^2), independent of the
number of vertices in the graph.
| [
{
"version": "v1",
"created": "Tue, 13 Feb 2007 23:41:46 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Andersen",
"Reid",
""
]
] |
cs/0702113 | David Pritchard | David Pritchard and Ramakrishna Thurimella | Fast Computation of Small Cuts via Cycle Space Sampling | Previous version appeared in Proc. 35th ICALP, pages 145--160, 2008 | null | null | null | cs.DC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a new sampling-based method to determine cuts in an undirected
graph. For a graph (V, E), its cycle space is the family of all subsets of E
that have even degree at each vertex. We prove that with high probability,
sampling the cycle space identifies the cuts of a graph. This leads to simple
new linear-time sequential algorithms for finding all cut edges and cut pairs
(a set of 2 edges that form a cut) of a graph.
In the model of distributed computing in a graph G=(V, E) with O(log V)-bit
messages, our approach yields faster algorithms for several problems. The
diameter of G is denoted by Diam, and the maximum degree by Delta. We obtain
simple O(Diam)-time distributed algorithms to find all cut edges,
2-edge-connected components, and cut pairs, matching or improving upon previous
time bounds. Under natural conditions these new algorithms are universally
optimal --- i.e. an Omega(Diam)-time lower bound holds on every graph. We obtain
an O(Diam+Delta/log V)-time distributed algorithm for finding cut vertices; this
is faster than the best previous algorithm when Delta, Diam = O(sqrt(V)). A
simple extension of our work yields the first distributed algorithm with
sub-linear time for 3-edge-connected components. The basic distributed
algorithms are Monte Carlo, but they can be made Las Vegas without increasing
the asymptotic complexity.
In the model of parallel computing on the EREW PRAM our approach yields a
simple algorithm with optimal time complexity O(log V) for finding cut pairs
and 3-edge-connected components.
| [
{
"version": "v1",
"created": "Tue, 20 Feb 2007 03:00:33 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Apr 2007 04:26:00 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Jul 2007 04:43:05 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2008 19:17:34 GMT"
},
{
"version": "v5",
"created": "Wed, 21 Jul 2010 09:49:35 GMT"
}
] | "2010-07-22T00:00:00" | [
[
"Pritchard",
"David",
""
],
[
"Thurimella",
"Ramakrishna",
""
]
] |
cs/0702142 | Daniel Lemire | Daniel Lemire, Martin Brooks, Yuhong Yan | An Optimal Linear Time Algorithm for Quasi-Monotonic Segmentation | Appeared in ICDM 2005 | null | null | null | cs.DS cs.DB | null | Monotonicity is a simple yet significant qualitative characteristic. We
consider the problem of segmenting an array into up to K segments. We want
segments to be as monotonic as possible and to alternate signs. We propose a
quality metric for this problem, present an optimal linear time algorithm based
on a novel formalism, and experimentally compare its performance to a linear time
top-down regression algorithm. We show that our algorithm is faster and more
accurate. Applications include pattern recognition and qualitative modeling.
| [
{
"version": "v1",
"created": "Sat, 24 Feb 2007 02:29:36 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Lemire",
"Daniel",
""
],
[
"Brooks",
"Martin",
""
],
[
"Yan",
"Yuhong",
""
]
] |
cs/0702151 | Vladimir Braverman | Vladimir Braverman, Rafail Ostrovsky, Carlo Zaniolo | Succinct Sampling on Streams | null | null | null | null | cs.DS | null | A streaming model is one where data items arrive over a long period of time,
either one item at a time or in bursts. Typical tasks include computing various
statistics over a sliding window of some fixed time-horizon. What makes the
streaming model interesting is that as the time progresses, old items expire
and new ones arrive. One of the simplest and central tasks in this model is
sampling. That is, the task of maintaining up to $k$ uniformly distributed
items from a current time-window as old items expire and new ones arrive. We
call sampling algorithms {\bf succinct} if they use provably optimal (up to
constant factors) {\bf worst-case} memory to maintain $k$ items (either with or
without replacement). We stress that in many applications structures that have
{\em expected} succinct representation as the time progresses are not
sufficient, as small probability events eventually happen with probability 1.
Thus, in this paper we ask the following question: are Succinct Sampling on
Streams (or $S^3$-algorithms) possible, and if so for what models? Perhaps
somewhat surprisingly, we show that $S^3$-algorithms are possible for {\em all}
variants of the problem mentioned above, i.e. both with and without replacement
and both for one-at-a-time and bursty arrival models. Finally, we use $S^3$
algorithms to solve various problems in sliding windows model, including
frequency moments, counting triangles, entropy and density estimations. For
these problems we present \emph{first} solutions with provable worst-case
memory guarantees.
| [
{
"version": "v1",
"created": "Sun, 25 Feb 2007 17:20:48 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Feb 2007 22:12:14 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Apr 2008 16:30:01 GMT"
}
] | "2008-04-14T00:00:00" | [
[
"Braverman",
"Vladimir",
""
],
[
"Ostrovsky",
"Rafail",
""
],
[
"Zaniolo",
"Carlo",
""
]
] |
cs/0702156 | Philippe Robert | Fabrice Guillemin, Philippe Robert (INRIA Rocquencourt) | Analysis of Steiner subtrees of Random Trees for Traceroute Algorithms | null | Random Structures and Algorithms, 35(2):194-215, September 2009 | null | null | cs.NI cs.DS | null | We consider in this paper the problem of discovering, via a traceroute
algorithm, the topology of a network, whose graph is spanned by an infinite
branching process. A subset of nodes is selected according to some criterion.
As a measure of efficiency of the algorithm, the Steiner distance of the
selected nodes, i.e. the size of the spanning sub-tree of these nodes, is
investigated. For the selection of nodes, two criteria are considered: a node
is randomly selected with a probability that is either independent of the
depth of the node (uniform model) or, in the depth-biased model, exponentially
decaying with respect to its depth. The limiting behavior of the size of the
discovered subtree is investigated for both models.
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2007 13:42:28 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jun 2008 10:11:43 GMT"
}
] | "2009-08-25T00:00:00" | [
[
"Guillemin",
"Fabrice",
"",
"INRIA Rocquencourt"
],
[
"Robert",
"Philippe",
"",
"INRIA Rocquencourt"
]
] |
cs/0702159 | Fabiano C. Botelho | Fabiano C. Botelho, Rasmus Pagh, Nivio Ziviani | Perfect Hashing for Data Management Applications | 12 pages | null | null | RT.DCC.002/2007 | cs.DS cs.DB | null | Perfect hash functions can potentially be used to compress data in connection
with a variety of data management tasks. Though there has been considerable
work on how to construct good perfect hash functions, there is a gap between
theory and practice among all previous methods on minimal perfect hashing. On
one side, there are good theoretical results without experimentally proven
practicality for large key sets. On the other side, there are algorithms with
theoretically analyzed time and space usage that assume that truly random hash
functions are available for free, which is an unrealistic assumption. In this
paper we attempt to bridge this gap between theory and practice, using a number
of techniques from the literature to obtain a novel scheme that is
theoretically well-understood and at the same time achieves an
order-of-magnitude increase in performance compared to previous ``practical''
methods. This improvement comes from a combination of a novel, theoretically
optimal perfect hashing scheme that greatly simplifies previous methods, and
the fact that our algorithm is designed to make good use of the memory
hierarchy. We demonstrate the scalability of our algorithm by considering a set
of over one billion URLs from the World Wide Web of average length 64, for
which we construct a minimal perfect hash function on a commodity PC in a
little more than 1 hour. Our scheme produces minimal perfect hash functions
using slightly more than 3 bits per key. For perfect hash functions in the
range $\{0,...,2n-1\}$ the space usage drops to just over 2 bits per key (i.e.,
one bit more than optimal for representing the key). This is significantly
below what has been achieved previously for very large values of $n$.
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2007 20:56:41 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Botelho",
"Fabiano C.",
""
],
[
"Pagh",
"Rasmus",
""
],
[
"Ziviani",
"Nivio",
""
]
] |
cs/0703001 | Michael Coury | Michael D. Coury | Embedding Graphs into the Extended Grid | 4 pages, 2 figures | null | null | null | cs.DM cs.DS | null | Let $G=(V,E)$ be an arbitrary undirected source graph to be embedded in a
target graph $EM$, the extended grid with vertices on integer grid points and
edges to nearest and next-nearest neighbours. We present an algorithm showing
how to embed $G$ into $EM$ in both time and space $O(|V|^2)$ using the new
notions of islands and bridges. An island is a connected subgraph in the target
graph which is mapped from exactly one vertex in the source graph while a
bridge is an edge between two islands which is mapped from exactly one edge in
the source graph. This work is motivated by real industrial applications in the
field of quantum computing and a need to efficiently embed source graphs in the
extended grid.
| [
{
"version": "v1",
"created": "Wed, 28 Feb 2007 22:37:52 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Coury",
"Michael D.",
""
]
] |
cs/0703006 | Jingchao Chen | Jing-Chao Chen | XORSAT: An Efficient Algorithm for the DIMACS 32-bit Parity Problem | null | null | null | null | cs.DS | null | The DIMACS 32-bit parity problem is a satisfiability (SAT) problem hard to
solve. So far, EqSatz by Li is the only solver which can solve this problem.
However, this solver is very slow: it is reported to have spent 11855 seconds
solving a par32-5 instance on a Macintosh G3 300 MHz. The paper introduces a
new solver, XORSAT, which splits the original problem into two parts, a
structured part and a random part, and then solves them separately with WalkSAT
and an XOR equation solver. Based on our empirical observations, XORSAT is
surprisingly fast, approximately 1000 times faster than EqSatz. For a
par32-5 instance, XORSAT took 2.9 seconds, while EqSatz took 2844 seconds on an
Intel Pentium IV 2.66GHz CPU. We believe that this method, which differs
significantly from traditional methods, is also useful beyond this domain.
| [
{
"version": "v1",
"created": "Fri, 2 Mar 2007 01:38:16 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Chen",
"Jing-Chao",
""
]
] |
cs/0703010 | Jaroslaw Byrka | Jaroslaw Byrka and Karen Aardal | An optimal bifactor approximation algorithm for the metric uncapacitated
facility location problem | A journal version | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We obtain a 1.5-approximation algorithm for the metric uncapacitated facility
location problem (UFL), which improves on the previously best known
1.52-approximation algorithm by Mahdian, Ye and Zhang. Note that the
approximability lower bound by Guha and Khuller is 1.463.
An algorithm is a {\em ($\lambda_f$,$\lambda_c$)-approximation algorithm} if
the solution it produces has total cost at most $\lambda_f \cdot F^* +
\lambda_c \cdot C^*$, where $F^*$ and $C^*$ are the facility and the connection
cost of an optimal solution. Our new algorithm, which is a modification of the
$(1+2/e)$-approximation algorithm of Chudak and Shmoys, is a
(1.6774,1.3738)-approximation algorithm for the UFL problem and is the first
one that touches the approximability limit curve $(\gamma_f, 1+2e^{-\gamma_f})$
established by Jain, Mahdian and Saberi. As a consequence, we obtain the first
optimal approximation algorithm for instances dominated by connection costs.
When combined with a (1.11,1.7764)-approximation algorithm proposed by Jain et
al., and later analyzed by Mahdian et al., we obtain the overall approximation
guarantee of 1.5 for the metric UFL problem. We also describe how to use our
algorithm to improve the approximation ratio for the 3-level version of UFL.
| [
{
"version": "v1",
"created": "Fri, 2 Mar 2007 14:49:57 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Feb 2009 15:15:58 GMT"
}
] | "2009-02-04T00:00:00" | [
[
"Byrka",
"Jaroslaw",
""
],
[
"Aardal",
"Karen",
""
]
] |
cs/0703013 | Vincent Limouzy | Vincent Limouzy (LIAFA), Fabien De Montgolfier (LIAFA), Micha\"el Rao
(LIAFA) | NLC-2 graph recognition and isomorphism | submitted to WG 2007; 12p | In Lecture Notes In Computer Science - Graph-Theoretic Concepts
in Computer Science 33rd International Workshop, WG 2007, Dornburg, Germany,
June 21-23, 2007, Dornburg, Germany (2007) | 10.1007/978-3-540-74839-7_9 | null | cs.DS | null | NLC-width is a variant of clique-width with many applications in graph
algorithmics. This paper is devoted to graphs of NLC-width two. After giving new
structural properties of the class, we propose an $O(n^2 m)$-time algorithm,
improving Johansson's algorithm \cite{Johansson00}. Moreover, our algorithm is
simple to understand. The above properties and algorithm allow us to propose a
robust $O(n^2 m)$-time isomorphism algorithm for NLC-2 graphs. As far as we
know, it is the first polynomial-time algorithm.
| [
{
"version": "v1",
"created": "Sat, 3 Mar 2007 06:44:57 GMT"
}
] | "2007-12-11T00:00:00" | [
[
"Limouzy",
"Vincent",
"",
"LIAFA"
],
[
"De Montgolfier",
"Fabien",
"",
"LIAFA"
],
[
"Rao",
"Michaël",
"",
"LIAFA"
]
] |
cs/0703019 | Jean Cardinal | Jean Cardinal, Erik D. Demaine, Samuel Fiorini, Gwena\"el Joret,
Stefan Langerman, Ilan Newman, Oren Weimann | The Stackelberg Minimum Spanning Tree Game | v3: Referees' comments incorporated. A preliminary version appeared
in the proceedings of the 10th Workshop on Algorithms and Data Structures
(WADS 2007) | Algorithmica, vol. 59, no. 2, pp. 129--144, 2011 | 10.1007/s00453-009-9299-y | null | cs.GT cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a one-round two-player network pricing game, the Stackelberg
Minimum Spanning Tree game or StackMST.
The game is played on a graph (representing a network), whose edges are
colored either red or blue, and where the red edges have a given fixed cost
(representing the competitor's prices). The first player chooses an assignment
of prices to the blue edges, and the second player then buys the cheapest
possible minimum spanning tree, using any combination of red and blue edges.
The goal of the first player is to maximize the total price of purchased blue
edges. This game is the minimum spanning tree analog of the well-studied
Stackelberg shortest-path game.
We analyze the complexity and approximability of the first player's best
strategy in StackMST. In particular, we prove that the problem is APX-hard even
if there are only two different red costs, and give an approximation algorithm
whose approximation ratio is at most $\min \{k,1+\ln b,1+\ln W\}$, where $k$ is
the number of distinct red costs, $b$ is the number of blue edges, and $W$ is
the maximum ratio between red costs. We also give a natural integer linear
programming formulation of the problem, and show that the integrality gap of
the fractional relaxation asymptotically matches the approximation guarantee of
our algorithm.
| [
{
"version": "v1",
"created": "Mon, 5 Mar 2007 09:46:26 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Sep 2008 15:44:40 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Sep 2009 15:20:43 GMT"
}
] | "2011-03-07T00:00:00" | [
[
"Cardinal",
"Jean",
""
],
[
"Demaine",
"Erik D.",
""
],
[
"Fiorini",
"Samuel",
""
],
[
"Joret",
"Gwenaël",
""
],
[
"Langerman",
"Stefan",
""
],
[
"Newman",
"Ilan",
""
],
[
"Weimann",
"Oren",
""
]
] |
cs/0703020 | Gabriel Istrate | Anders Hansson, Gabriel Istrate | Counting preimages of TCP reordering patterns | null | null | null | null | cs.DS cs.DM math.CO | null | Packet reordering is an important property of network traffic that should be
captured by analytical models of the Transmission Control Protocol (TCP). We
study a combinatorial problem motivated by RESTORED, a TCP modeling methodology
that incorporates information about packet dynamics. A significant component of
this model is a many-to-one mapping B that transforms sequences of packet IDs
into buffer sequences, in a manner that is compatible with TCP semantics. We
show that the following hold:
1. There exists a linear time algorithm that, given a buffer sequence W of
length n, decides whether there exists a permutation A of 1,2,..., n such that
$A\in B^{-1}(W)$ (and constructs such a permutation, when it exists).
2. The problem of counting the number of permutations in $B^{-1}(W)$ has a
polynomial time algorithm.
We also show how to extend these results to sequences of IDs that contain
repeated packets.
| [
{
"version": "v1",
"created": "Mon, 5 Mar 2007 13:38:45 GMT"
}
] | "2008-11-04T00:00:00" | [
[
"Hansson",
"Anders",
""
],
[
"Istrate",
"Gabriel",
""
]
] |
cs/0703031 | P\'aid\'i Creed | Paidi Creed | Sampling Eulerian orientations of triangular lattice graphs | 23 pages | null | null | null | cs.DM cs.DS | null | We consider the problem of sampling from the uniform distribution on the set
of Eulerian orientations of subgraphs of the triangular lattice. Although it is
known that this can be achieved in polynomial time for any graph, the algorithm
studied here is more natural in the context of planar Eulerian graphs. We
analyse the mixing time of a Markov chain on the Eulerian orientations of a
planar graph which moves between orientations by reversing the edges of
directed faces. Using path coupling and the comparison method we obtain a
polynomial upper bound on the mixing time of this chain for any solid subgraph
of the triangular lattice. By considering the conductance of the chain we show
that there exist subgraphs with holes for which the chain will always take an
exponential amount of time to converge. Finally, as an additional justification
for studying a Markov chain on the set of Eulerian orientations of planar
graphs, we show that the problem of counting Eulerian orientations remains
#P-complete when restricted to planar graphs.
A preliminary version of this work appeared as an extended abstract in the
2nd Algorithms and Complexity in Durham workshop.
| [
{
"version": "v1",
"created": "Wed, 7 Mar 2007 12:34:03 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Creed",
"Paidi",
""
]
] |
cs/0703093 | Roman Vershynin | Roman Vershynin | Some problems in asymptotic convex geometry and random matrices
motivated by numerical algorithms | 12 pages, no figures. Based on the talk at the 2006 conference on
Banach Spaces and their applications in analysis | Banach spaces and their applications in analysis, 209--218, Walter
de Gruyter, Berlin, 2007 | null | null | cs.CG cs.DS cs.NA | null | The simplex method in Linear Programming motivates several problems of
asymptotic convex geometry. We discuss some conjectures and known results in
two related directions -- computing the size of projections of high dimensional
polytopes and estimating the norms of random matrices and their inverses.
| [
{
"version": "v1",
"created": "Mon, 19 Mar 2007 21:51:50 GMT"
}
] | "2016-12-23T00:00:00" | [
[
"Vershynin",
"Roman",
""
]
] |
cs/0703098 | Sergey Gubin | Sergey Gubin | Polynomial time algorithm for 3-SAT. Examples of use | 19 pages | null | null | null | cs.CC cs.DM cs.DS cs.LO | null | The algorithm checks the propositional formulas for patterns of
unsatisfiability.
| [
{
"version": "v1",
"created": "Wed, 21 Mar 2007 06:46:09 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Gubin",
"Sergey",
""
]
] |
cs/0703100 | Rajmohan Rajaraman | Guolong Lin and Rajmohan Rajaraman | Approximation Algorithms for Multiprocessor Scheduling under Uncertainty | 12 pages, 2 encapsulated postscript figures | null | null | null | cs.DC cs.CC cs.DS | null | Motivated by applications in grid computing and project management, we study
multiprocessor scheduling in scenarios where there is uncertainty in the
successful execution of jobs when assigned to processors. We consider the
problem of multiprocessor scheduling under uncertainty, in which we are given n
unit-time jobs and m machines, a directed acyclic graph C giving the
dependencies among the jobs, and for every job j and machine i, the probability
p_{ij} of the successful completion of job j when scheduled on machine i in any
given particular step. The goal of the problem is to find a schedule that
minimizes the expected makespan, that is, the expected completion time of all
the jobs.
The problem of multiprocessor scheduling under uncertainty was introduced by
Malewicz and was shown to be NP-hard even when all the jobs are independent. In
this paper, we present polynomial-time approximation algorithms for the
problem, for special cases of the dag C. We obtain an O(log(n))-approximation
for the case of independent jobs, an
O(log(m)log(n)log(n+m)/loglog(n+m))-approximation when C is a collection of
disjoint chains, an O(log(m)log^2(n))-approximation when C is a collection of
directed out- or in-trees, and an
O(log(m)log^2(n)log(n+m)/loglog(n+m))-approximation when C is a directed
forest.
| [
{
"version": "v1",
"created": "Wed, 21 Mar 2007 20:35:40 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Lin",
"Guolong",
""
],
[
"Rajaraman",
"Rajmohan",
""
]
] |
cs/0703109 | Daniel Lemire | Owen Kaser and Daniel Lemire | Tag-Cloud Drawing: Algorithms for Cloud Visualization | To appear in proceedings of Tagging and Metadata for Social
Information Organization (WWW 2007) | null | null | null | cs.DS | null | Tag clouds provide an aggregate of tag-usage statistics. They are typically
sent as in-line HTML to browsers. However, display mechanisms suited for
ordinary text are not ideal for tags, because font sizes may vary widely on a
line. As well, the typical layout does not account for relationships that may
be known between tags. This paper presents models and algorithms to improve the
display of tag clouds that consist of in-line HTML, as well as algorithms that
use nested tables to achieve a more general 2-dimensional layout in which tag
relationships are considered. The first algorithms leverage prior work in
typesetting and rectangle packing, whereas the second group of algorithms
leverage prior work in Electronic Design Automation. Experiments show our
algorithms can be efficiently implemented and perform well.
| [
{
"version": "v1",
"created": "Thu, 22 Mar 2007 14:54:48 GMT"
},
{
"version": "v2",
"created": "Mon, 7 May 2007 17:59:20 GMT"
}
] | "2009-04-22T00:00:00" | [
[
"Kaser",
"Owen",
""
],
[
"Lemire",
"Daniel",
""
]
] |
cs/0703132 | Leonid Peshkin | Leonid Peshkin | Structure induction by lossless graph compression | 10 pages, 7 figures, 2 tables published in Proceedings of the Data
Compression Conference, 2007 | In proceedings of the Data Compression Conference, 2007, pp 53-62,
published by the IEEE Computer Society Press | 10.1109/DCC.2007.73 | null | cs.DS cs.IT cs.LG math.IT | null | This work is motivated by the necessity to automate the discovery of
structure in vast and ever-growing collections of relational data commonly
represented as graphs, for example genomic networks. A novel algorithm, dubbed
Graphitour, for structure induction by lossless graph compression is presented
and illustrated by a clear and broadly known case of nested structure in a DNA
molecule. This work extends to graphs some well established approaches to
grammatical inference previously applied only to strings. The bottom-up graph
compression problem is related to the (non-bipartite) maximum cardinality
matching problem. The algorithm accepts a variety of graph
types including directed graphs and graphs with labeled nodes and arcs. The
resulting structure could be used for representation and classification of
graphs.
| [
{
"version": "v1",
"created": "Tue, 27 Mar 2007 05:46:31 GMT"
}
] | "2017-05-25T00:00:00" | [
[
"Peshkin",
"Leonid",
""
]
] |
cs/0703133 | Edith Elkind | Edith Elkind, Leslie Ann Goldberg, Paul W. Goldberg | Computing Good Nash Equilibria in Graphical Games | 25 pages. Short version appears in ACM EC'07 | null | null | null | cs.GT cs.DS cs.MA | null | This paper addresses the problem of fair equilibrium selection in graphical
games. Our approach is based on the data structure called the {\em best
response policy}, which was proposed by Kearns et al. \cite{kls} as a way to
represent all Nash equilibria of a graphical game. In \cite{egg}, it was shown
that the best response policy has polynomial size as long as the underlying
graph is a path. In this paper, we show that if the underlying graph is a
bounded-degree tree and the best response policy has polynomial size then there
is an efficient algorithm which constructs a Nash equilibrium that guarantees
certain payoffs to all participants. Another attractive solution concept is a
Nash equilibrium that maximizes the social welfare. We show that, while exactly
computing the latter is infeasible (we prove that solving this problem may
involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS
for finding such an equilibrium as long as the best response policy has
polynomial size. These two algorithms can be combined to produce Nash
equilibria that satisfy various fairness criteria.
| [
{
"version": "v1",
"created": "Tue, 27 Mar 2007 16:15:54 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Elkind",
"Edith",
""
],
[
"Goldberg",
"Leslie Ann",
""
],
[
"Goldberg",
"Paul W.",
""
]
] |
cs/0703145 | Sandeep Murthy | Sandeep Murthy | The Simultaneous Triple Product Property and Group-theoretic Results for
the Exponent of Matrix Multiplication | 14 pages | null | null | null | cs.DS cs.CC math.GR | null | We describe certain special consequences of certain elementary methods from
group theory for studying the algebraic complexity of matrix multiplication, as
developed by H. Cohn, C. Umans et al. in 2003 and 2005. The measure of
complexity here is the exponent of matrix multiplication, a real parameter
between 2 and 3, which has been conjectured to be 2. More specifically, a
finite group may simultaneously "realize" several independent matrix
multiplications via its regular algebra if it has a family of triples of
"index" subsets which satisfy the so-called simultaneous triple product
property (STPP), in which case the complexity of these several multiplications
does not exceed the rank (complexity) of the algebra. This leads to bounds for
the exponent in terms of the size of the group and the sizes of its STPP
triples, as well as the dimensions of its distinct irreducible representations.
Wreath products of Abelian groups with symmetric groups appear especially
important in this regard, and we give an example of such a group which shows
that the exponent is less than 2.84, and could possibly be as small as 2.02,
depending on the number of simultaneous matrix multiplications it realizes.
| [
{
"version": "v1",
"created": "Thu, 29 Mar 2007 02:55:17 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Mar 2007 02:17:28 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Apr 2007 13:39:15 GMT"
},
{
"version": "v4",
"created": "Tue, 3 Apr 2007 16:52:16 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Murthy",
"Sandeep",
""
]
] |
cs/0703146 | Sergey Gubin | Sergey Gubin | A Polynomial Time Algorithm for SAT | Update, 30 pages | null | null | MCCCC 23,24,25 | cs.CC cs.DM cs.DS cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The article presents the compatibility matrix method and illustrates it with the
application to P vs NP problem. The method is a generalization of descriptive
geometry: in the method, we draft problems and solve them utilizing the image
creation technique. The method reveals: P = NP = PSPACE
| [
{
"version": "v1",
"created": "Thu, 29 Mar 2007 07:36:30 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Mar 2007 17:51:21 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Feb 2009 06:23:54 GMT"
},
{
"version": "v4",
"created": "Mon, 7 May 2012 08:29:28 GMT"
}
] | "2012-05-08T00:00:00" | [
[
"Gubin",
"Sergey",
""
]
] |
cs/0703150 | Steven G. Johnson | Xuancheng Shao and Steven G. Johnson | Type-II/III DCT/DST algorithms with reduced number of arithmetic
operations | 9 pages | Signal Processing vol. 88, issue 6, p. 1553-1564 (2008) | 10.1016/j.sigpro.2008.01.004 | null | cs.NA cs.DS cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present algorithms for the discrete cosine transform (DCT) and discrete
sine transform (DST), of types II and III, that achieve a lower count of real
multiplications and additions than previously published algorithms, without
sacrificing numerical accuracy. Asymptotically, the operation count is reduced
from ~ 2N log_2 N to ~ (17/9) N log_2 N for a power-of-two transform size N.
Furthermore, we show that a further N multiplications may be saved by a certain
rescaling of the inputs or outputs, generalizing a well-known technique for N=8
by Arai et al. These results are derived by considering the DCT to be a special
case of a DFT of length 4N, with certain symmetries, and then pruning redundant
operations from a recent improved fast Fourier transform algorithm (based on a
recursive rescaling of the conjugate-pair split radix algorithm). The improved
algorithms for DCT-III, DST-II, and DST-III follow immediately from the
improved count for the DCT-II.
| [
{
"version": "v1",
"created": "Fri, 30 Mar 2007 00:53:48 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jan 2009 19:05:59 GMT"
}
] | "2009-09-29T00:00:00" | [
[
"Shao",
"Xuancheng",
""
],
[
"Johnson",
"Steven G.",
""
]
] |
cs/9301115 | Maggie McLoughlin | Donald E. Knuth | Context-free multilanguages | Abstract added by Greg Kuperberg | Theoretical Studies in Computer Science, Ginsburg Festschrift | null | Knuth migration 11/2004 1991 | cs.DS | null | This article is a sketch of ideas that were once intended to appear in the
author's famous series, "The Art of Computer Programming". He generalizes the
notion of a context-free language from a set to a multiset of words over an
alphabet. The idea is to keep track of the number of ways to parse a string.
For example, "fruit flies like a banana" can famously be parsed in two ways;
analogous examples in the setting of programming languages may yet be important
in the future.
The treatment is informal but essentially rigorous.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 1991 00:00:00 GMT"
}
] | "2008-02-03T00:00:00" | [
[
"Knuth",
"Donald E.",
""
]
] |
cs/9301116 | Maggie McLoughlin | Donald E. Knuth, Arvind Raghunathan | The problem of compatible representatives | null | SIAM J. Discrete Math. 5 (1992), no. 3, 422--427 | null | Knuth migration 11/2004 | cs.DS math.CO | null | The purpose of this note is to attach a name to a natural class of
combinatorial problems and to point out that this class includes many important
special cases. We also show that a simple problem of placing nonoverlapping
labels on a rectangular map is NP-complete.
| [
{
"version": "v1",
"created": "Wed, 1 Jul 1992 00:00:00 GMT"
}
] | "2008-02-03T00:00:00" | [
[
"Knuth",
"Donald E.",
""
],
[
"Raghunathan",
"Arvind",
""
]
] |
cs/9608105 | Maggie McLoughlin | Svante Janson and Donald E. Knuth | Shellsort with three increments | null | Random Structures Algorithms 10 (1997), no. 1-2, 125--142 | null | Knuth migration 11/2004 | cs.DS | null | A perturbation technique can be used to simplify and sharpen A. C. Yao's
theorems about the behavior of shellsort with increments $(h,g,1)$. In
particular, when $h=\Theta(n^{7/15})$ and $g=\Theta(h^{1/5})$, the average
running time is $O(n^{23/15})$. The proof involves interesting properties of
the inversions in random permutations that have been $h$-sorted and $g$-sorted.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 1996 00:00:00 GMT"
}
] | "2008-02-03T00:00:00" | [
[
"Janson",
"Svante",
""
],
[
"Knuth",
"Donald E.",
""
]
] |
cs/9801103 | Maggie McLoughlin | Donald E. Knuth | Linear probing and graphs | null | Algorithmica 22 (1998), no. 4, 561--568 | null | Knuth migration 11/2004 | cs.DS | null | Mallows and Riordan showed in 1968 that labeled trees with a small number of
inversions are related to labeled graphs that are connected and sparse. Wright
enumerated sparse connected graphs in 1977, and Kreweras related the inversions
of trees to the so-called ``parking problem'' in 1980. A~combination of these
three results leads to a surprisingly simple analysis of the behavior of
hashing by linear probing, including higher moments of the cost of successful
search.
| [
{
"version": "v1",
"created": "Thu, 15 Jan 1998 00:00:00 GMT"
}
] | "2007-05-23T00:00:00" | [
[
"Knuth",
"Donald E.",
""
]
] |