name | title | abstract | fulltext | keywords
---|---|---|---|---
273577 | Numerical Condition of Discrete Wavelet Transforms. | The recursive algorithm of a (fast) discrete wavelet transform, as well as its generalizations, can be described as repeated applications of block-Toeplitz operators or, in the case of periodized wavelets, multiplications by block circulant matrices. Singular values of a block circulant matrix are the singular values of some matrix trigonometric series evaluated at certain points. The norm of a block-Toeplitz operator is then the essential supremum of the largest singular value curve of this series. For all reasonable wavelets, the condition number of a block-Toeplitz operator thus is the lowest upper bound for the condition of corresponding block circulant matrices of all possible sizes. In the last section, these results are used to study conditioning of biorthogonal wavelets based on B-splines. | Introduction
Orthogonality is a very strong property. It might exclude
other useful properties like, for example, symmetry in the case of compactly supported
wavelets [6, 7]. Consequently, in many applications biorthogonal wavelets have
been used rather than the orthogonal ones. Stability of such bases has been studied
and conditions for Riesz bounds to be finite were established [2, 3, 4, 5]. However,
when dealing with applications, one would like to have some quantitative information
about sensitivity to such things like noise in the data or quantization. Some relevant
estimates can be found in the engineering literature on multirate filter banks, where
noise is modelled as a random process and its transmission through the system is
studied; see, e.g., [12]. However, most of these results concern particular designs and
implementations. Here we will use an alternative approach-we will look at discrete
wavelet transforms from the point of view of linear algebra.
For example, let us consider the process of image compression using wavelets
(see, e.g., [1, 11, 14]). The algorithm has three steps. First, the discrete wavelet
transform is applied to the image, then the resulting data is quantized and finally
it is coded in some efficient way. The purpose of the transform is to increase the
compressibility of the data and to restructure the data so that, after decompression, the error caused by quantization is less disturbing to a human viewer than if the image had been quantized directly without a transform. The encoded image can be manipulated
in different ways (e.g., transmitted over network) which can cause further distortions.
To decompress the image we just need to decode the data and to apply the inverse
transform. Let us denote the error vector that is added to the transformed data y
before the reconstruction by u, and let us suppose that we know the magnitude of this error. If x denotes the original image, the relative error in the reconstructed image can then be bounded in terms of the condition number of the transform matrix. If no further assumptions are imposed on the image and the type of the error, this estimate is the best possible. Also in other applications, the sensitivity to errors can be shown
to be naturally related to the condition number of the transform matrix with respect to solving a system of linear equations,
cond(A) = ‖A‖ ‖A^{-1}‖.   (1.1)
The condition number depends on the norm. For finite matrices we will use here the matrix 2-norm, which is induced by the Euclidean vector norm. When necessary, we will use the subscript 2 to emphasise that we deal with these norms. We will also speak of the condition number of an operator on l^2(Z); we define it again by (1.1), the norm being the operator norm induced by the norm of l^2(Z).
Due to the translational character of wavelet bases, matrices and operators involved
happen to have a characteristic structure-they are block circulant and block-
Toeplitz, respectively. This structure can be employed when the condition numbers
are computed; Fourier techniques can be used to transform them to a block diagonal
form. This then leads to studying the (point-wise) singular values of certain trigonometric matrix series. In Section 3 we study the finite case. The singular values of a block circulant matrix are shown to be the singular values of small matrices arising from the "block discrete Fourier transform" of the first block row of the block circulant matrix. In Section 4 we generalize this result to block-Toeplitz operators on l^2(Z). The situation is rather more complicated there, because the Fourier transform maps the discrete space l^2(Z) onto the function space L^2([0, 2π)). As the main result we show there that
‖C(A)‖ = ess sup_{ω ∈ [0, 2π)} σ_max(A(ω)),
where C(A) is the block-Toeplitz operator whose infinite matrix is generated by the strip A = (. . . , A_{-1}, A_0, A_1, . . .) (the A_n, n ∈ Z, being square blocks) and A(ω) = Σ_{n∈Z} A_n e^{inω}.
The proof is based on the point-wise singular value decomposition of A; along the way we have to overcome some difficulties arising from the requirement that the singular vector we construct have square integrable components. For reasonable wavelets, the curves of singular values of A have some smoothness, and the essential supremum and infimum become supremum and infimum, or even maximum and minimum. The condition number of C(A) is then the least upper bound on the condition of periodized wavelet transforms for all possible lengths of data. We also describe how some particular
properties of the wavelets imply a certain structure of the singular values. These
observations can be used to further improve the efficiency of computing the condition
numbers.
In the last section of this paper, we apply this technique to study conditioning
of biorthogonal B-spline wavelets constructed by Cohen, Daubechies and Feauveau
[5], nowadays probably the most often applied biorthogonal wavelets. We show there
that the condition number increases exponentially with the order of the spline. Conditioning
can be significantly improved by suitable scaling of the wavelet functions,
but, even for the optimal scaling, the growth has exponential character.
After finishing the first version of this paper, I became familiar with related
works by Keinert [8] and Strang [9]. While Strang's work concerns mostly Riesz
bounds for subspaces in a multiresolution analysis and wavelet decomposition, Keinert
concentrates on conditioning of finitely sized transforms and asymptotic estimates for
deep recursive transforms. He also presents a number of numerical experiments that
show how realistic these estimates are when some specific types of introduced errors
are considered (e.g., white noise). In this revised version I have tried to emphasise
results that are complementary to those of Keinert and Strang.
2. Translational and wavelet bases and the operators of the change of a basis. Let us consider some translation-invariant subspace of L^2(R) with a translational Riesz basis {u^{(k)}(· − nh) : k = 1, . . . , r, n ∈ Z}, generated by some r-tuple of functions u^{(1)}, . . . , u^{(r)}, h being the translation step. Let this subspace have another, similar, basis {v^{(k)}(· − nh) : k = 1, . . . , r, n ∈ Z}. Each of the functions v^{(k)}, k = 1, . . . , r, can be expressed in terms of the first basis; there exist sequences a^{(k,l)} ∈ l^2(Z), k, l = 1, . . . , r, such that each v^{(k)} is the corresponding combination of the shifted functions u^{(l)}.
Let us form from these coefficients r × r matrices A_n, n ∈ Z; a^{(k,l)}_n will be the element of A_n in the kth row and lth column. We denote by A the infinite strip of concatenated matrices A_n, n ∈ Z, and we define C(A) to be the infinite block-Toeplitz matrix generated by this strip.
We will also denote by C(A) the operator on l^2(Z) that can be represented by such a matrix. If a function of the subspace is expanded in both bases, C(A) maps its coefficient sequence with respect to one basis to the coefficient sequence {α_n}_{n∈Z} with respect to the other; that is, it is the operator of the change of basis.
For practical reasons (handling of finite data), periodized bases are often used in the wavelet context. If, for some integer N, we denote by u_per^{(k)} and v_per^{(k)} the Nh-periodizations of the basis functions, then the periodized families are bases for some subspace of L^2([0, Nh)), and the operator of the change of basis from the latter to the former can be represented by a block circulant matrix C_N(A), obtained from the strip A by periodization (summing the blocks A_n modulo N).
A multiresolution analysis is a sequence of embedded subspaces of L^2(R) generated by the translates of an appropriately dilated scaling function. In particular,
V_j = span{ 2^{j/2} φ(2^j x − n) : n ∈ Z }.
There are wavelet subspaces generated by a wavelet function,
W_j = span{ 2^{j/2} ψ(2^j x − n) : n ∈ Z },
and these subspaces satisfy V_j = V_{j−1} ⊕ W_{j−1}. The scaling and wavelet functions thus have to conform to the two-scale relations (2.1), which express φ and ψ as linear combinations of the translates of the dilated scaling function φ(2 ·), with coefficient sequences {h_n} and {g_n}, respectively.
In the (fast) discrete wavelet transform, we perform recursively the change of basis from {2^{j/2} φ(2^j x − n) : n ∈ Z} to {2^{(j−1)/2} φ(2^{j−1} x − n), 2^{(j−1)/2} ψ(2^{j−1} x − n) : n ∈ Z}. We can consider both bases to be generated by two functions each, so that r = 2; the common translation step here is h = 2^{1−j}. The recursive inverse transform thus can be associated with repeated applications of C(A), where the blocks A_n are built from the two-scale coefficient sequences, and the recursive transform itself can be seen as repetitive applications of a block-Toeplitz operator. As in the case of A, the
sequences {h̃_n}_{n∈Z} and {g̃_n}_{n∈Z} determine the biorthogonal counterparts of the scaling and wavelet functions, φ̃ and ψ̃, by relations analogous to (2.1).
Although the conditioning of this basic step of recursive transform is crucial, we
want to study also how the error cumulates in the recursive transform. Since all the
bases involved have translational character, we can use the same approach as for one
step for the transform of any finite depth; we can always find a common translation
step. For example, let us consider two steps of recursion. We perform, in fact the
change of basis in V j from f2 j=2 '(2
Z g to f2 (j \Gamma2)=2
Z
Z g. All these
bases can be considered to be translational bases with translation step
generated by four functions. We have
and
The infinite strip A thus will have four rows; the entries can be easily found by
recursive applications of (2.1). In particular, if we denote the sequences that form
rows of A by fb
0 , we have
b (1)
b (2)
An analogous approach can be used for generalizations of classical wavelet transforms, such as those based on more than one scaling and wavelet function and a general integer dilation parameter m ≥ 2 (multiwavelets, higher multiplicity wavelets), or non-stationary wavelets, where different block-Toeplitz operators are applied in the recursive algorithm. Wavelet packet transforms, where the wavelet spaces are also further decomposed, can be described in a similar way.
3. Numerical condition of block circulant matrices. Any circulant matrix
is unitarily similar to a diagonal matrix. This matrix has (up to scale) the discrete
Fourier transform of the first row of the original matrix on the diagonal and the
similarity matrix is the matrix of the discrete Fourier transform itself. This fact can
be generalized for block circulant matrices as follows.
Theorem 3.1. Each block circulant matrix is unitarily similar to a block diagonal matrix. In particular, C_N(A) is similar to a matrix with diagonal blocks equal to the values of the matrix trigonometric series A evaluated at the points 2πn/N, n = 0, . . . , N − 1.
Proof. Let ω_N be the primitive Nth root of unity. Let us first create the matrix of the "block discrete Fourier transform",
Ω_{r,N} = N^{−1/2} ( ω_N^{kn} I )_{k,n=0,...,N−1},   (3.1)
I being the r \Theta r identity matrix. Such a matrix is unitary and the r \Theta r block in
row and (n + 1)th block column
of\Omega r;N CN
(A)\Omega
An\Gammal+kN
being the Kronecker delta.
Since the singular values are preserved by unitary transformations and the singular
values of a block diagonal matrix are the singular values of the diagonal blocks,
the theorem above has the following corollary.
Corollary 3.2. A number σ is a singular value of C_N(A) if and only if it is a singular value of A(2πn/N) for some n ∈ {0, . . . , N − 1}.
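To make Corollary 3.2 concrete, the following sketch (Python with NumPy; an illustration, not code from the paper) builds the block circulant matrix generated by a few hypothetical 2×2 blocks and verifies that its singular values coincide with those of the symbol evaluated at the N grid points. The block layout and the sign of the exponent are assumptions; for real blocks either sign convention gives the same set of singular values.

```python
import numpy as np

def block_circulant(blocks, N):
    """N*r x N*r block circulant matrix whose first block row is A_0, ..., A_{N-1}
    (blocks beyond len(blocks) are zero)."""
    r = blocks[0].shape[0]
    row = np.zeros((r, N * r))
    for k, Ak in enumerate(blocks):
        row[:, k * r:(k + 1) * r] = Ak
    return np.vstack([np.roll(row, i * r, axis=1) for i in range(N)])

rng = np.random.default_rng(0)
blocks = [rng.standard_normal((2, 2)) for _ in range(3)]   # hypothetical blocks A_0, A_1, A_2
N = 8

sv_circ = np.sort(np.linalg.svd(block_circulant(blocks, N), compute_uv=False))

# singular values of the symbol A(2*pi*n/N) = sum_k A_k exp(i*k*2*pi*n/N)
sv_symbol = []
for n in range(N):
    An = sum(Ak * np.exp(1j * k * 2 * np.pi * n / N) for k, Ak in enumerate(blocks))
    sv_symbol.extend(np.linalg.svd(An, compute_uv=False))
sv_symbol = np.sort(np.array(sv_symbol))

print(np.allclose(sv_circ, sv_symbol))   # True: same singular values (Corollary 3.2)
```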
Let us recall here that the 2-norm of a matrix M equals its largest singular value, which we will denote σ_max(M). Similarly, σ_min(M) will stand for the smallest singular value, whose reciprocal is the 2-norm of M^{-1}.
Corollary 3.3. cond_2(C_N(A)) = max_n σ_max(A(2πn/N)) / min_n σ_min(A(2πn/N)), the maximum and minimum being taken over n = 0, . . . , N − 1.
If N_1 is a divisor of N, cond_2(C_{N_1}(A)) ≤ cond_2(C_N(A)), because all the singular values of C_{N_1}(A) are simultaneously singular values of C_N(A). This means that, for the recursive transform, we could estimate the condition in each step by the condition number of the largest block circulant matrix involved, applied in the first step of the recursion, since in each subsequent step an m-times smaller matrix is used, m being the dilation factor.
It would be useful to have some estimate completely independent of the size of the block circulant matrix. One such estimate is straightforward:
cond_2(C_N(A)) ≤ sup_{ω∈[0,2π)} σ_max(A(ω)) / inf_{ω∈[0,2π)} σ_min(A(ω)).   (3.2)
Notice that if the curves of the largest and smallest singular values are continuous (which happens, for example, for compactly supported wavelets, when A contains only a finite number of non-zero entries), this is the least upper bound for cond(C_N(A)) independent of N. We will show in the next section that for any reasonable wavelet the right hand side of (3.2) represents, in fact, the condition number of C(A).
4. Norm and condition number of block-Toeplitz operators. Similarly
as in the previous section, we will apply here a "block Fourier transform". However,
here the situation is a little more complicated than in the case of finite matrices.
Let us denote by l^2_r(Z) the Hilbert space of (column) vectors of length r with all components in l^2(Z). We can see this space also as a space of vector-valued sequences. The inner product is the sum over n ∈ Z of the Euclidean inner products of the r-vectors; the
subscripts determine entries of sequences, while superscripts determine entries of vectors. Similarly, L^2_r([0, 2π)) is the Hilbert space of r-vectors of square integrable functions on [0, 2π), with the inner product
⟨f, g⟩ = ∫_0^{2π} g(λ)^* f(λ) dλ.
To find the norm of the operator C(A) induced by the norm of l^2(Z), we employ Hilbert space isomorphisms of these spaces. First, there is a trivial isomorphism between l^2(Z) and l^2_r(Z). Second, the component-wise Fourier transform is a Hilbert space isomorphism of l^2_r(Z) onto L^2_r([0, 2π)). For a sequence c ∈ l^2(Z), the Fourier transform ĉ ∈ L^2([0, 2π)) is defined as
ĉ(λ) = (2π)^{−1/2} Σ_{k∈Z} c_k e^{−ikλ},
where the sum converges in the L^2([0, 2π)) sense. Since (2π)^{−1/2} e^{−ikλ}, k ∈ Z, is an orthonormal basis for L^2([0, 2π)), the inverse mapping is given by
c_k = (2π)^{−1/2} ∫_0^{2π} ĉ(λ) e^{ikλ} dλ,
and the Fourier transform as defined above is a Hilbert space isomorphism of l^2(Z) onto L^2([0, 2π)). The extension to the vector case is obvious.
Infinite Toeplitz matrices represent convolution operators. For sequences a, b ∈ l^2(Z), the convolution c = a ∗ b has entries c_l = Σ_{k∈Z} a_{l−k} b_k. Convolution operators are closely related to multipliers. The link is the Fourier transform.
Lemma 4.1. Let a, b ∈ l^2(Z) and let a ∗ b ∈ l^2(Z) or â b̂ ∈ L^2([0, 2π)). Then
(a ∗ b)^ = √(2π) â b̂.
Proof. For any l 2 Z
Z ,
d
Because the Fourier transform is a Hilbert space isomorphism,
A
The last term represents the lth entry of the inverse Fourier transform of
Theorem 4.2. The operator on l^2(Z) represented by C(A) is isomorphic with the matrix multiplier A(λ) = Σ_{k∈Z} A_k e^{ikλ} that maps L^2_r([0, 2π)) → L^2_r([0, 2π)), u(λ) → A(λ)u(λ).
Proof. By the former isomorphism, C(A) is isomorphic with the operator l 2
l 2
Z ), for which d, the image of c, is given by the formula
A k\Gammal c k ; l 2 Z
We will slightly abuse the notation and denote this operator also by C(A).
Since we assume that C(A) represents the change from one Riesz basis to another, the rows of A belong to l^2(Z) and the series Σ_{k∈Z} A_k e^{ikλ} converges component-wise in L^2([0, 2π)). A straightforward calculation shows that (4.1) can be extended to the matrix/vector case (the Fourier transform being defined component-wise). Because
fA \Gammak g k2ZZ
A(\Gamma-),
d
A(\Gamma-)b c(-);
Z ) or b
r ([0; 2-)). A convolution-type operator
thus becomes in the Fourier domain, indeed, the matrix multiplier A.
The norm of C(A) induced by the norm of l^2(Z) thus equals the norm of the matrix multiplier A as an operator L^2_r([0, 2π)) → L^2_r([0, 2π)). The following theorem gives formulae for the norm of a multiplier and its inverse.
Theorem 4.3. Let A be an r × r matrix multiplier with measurable components. Then
sup_{‖u‖=1} ‖Au‖ = ess sup_{λ∈[0,2π)} σ_max(A(λ)),   inf_{‖u‖=1} ‖Au‖ = ess inf_{λ∈[0,2π)} σ_min(A(λ)),
where the supremum and infimum are taken over u ∈ L^2_r([0, 2π)) of unit norm.
Proof. Let us set
ess sup
and let x 2 L 2
r ([0; 2-)). Then
r ([0;2-))
and hence
sup
r ([0;2-))
r ([0;2-
We need to show that we have the lowest upper bound, in other words, that for each
Let us take point-wise the singular value decomposition of A;
where V(-) and U(-) are r \Theta r unitary matrices and \Sigma(-) is the diagonal matrix with
the singular values of A(-) on the diagonal, in decreasing order. We will denote these
singular values oe j (-), To construct x ffl we need a path of right singular
vectors corresponding to the largest singular value, something like the first column of
U , but we have to ensure that this path is square integrable.
First, oe 1 is a measurable function, because A has measurable components
and the matrix norm is a continuous function of the entries. Let us define
Then C has measurable components and, for k ! +1, C(-) k \Gamma! P(-), where
and D(-) is a diagonal matrix with the elements on the diagonal equal to either 1 or 0;
if oe 1 (-) is of multiplicity m (m depending on -), then first m elements are 1 and all the
others are 0. Notice that P(-) is the orthogonal projector onto the subspaces spanned
by all right singular vectors corresponding to singular values oe 1
Because P is the limit of a sequence of matrices with measurable components, its
components are measurable too.
Now, for any ffl ? 0, the set
is a measurable set and -(S ffl ) ? 0. Since P(-) 6= 0 for any -, there exist j and a set
~
that p(-), the jth column of P(-), is non-zero for - 2 ~
Let us set
-( ~
Because x ffl has measurable components and j~x
have
r ([0; 2-)). A simple calculation shows that We have
consequently,
and
Finally,
-( ~
A
which finishes the proof of the first part of the theorem.
us concentrate on the second part of the statement. Let us denote
~
ess inf
min
Clearly, for any u, kuk
kAuk
Z 2-'
min
We now have to show that for every ffl ? 0 there exists x ffl ,
that
In order to do that we first need to construct a square integrable path of right singular
vectors corresponding to the path of the smallest singular values, oe r .
Let us take, again, point-wise singular value decomposition of A,
Now, for a positive integer k, let us consider a matrix A(-) A(-)
k I . We have
I
therefore such a matrix is invertible and the norm of the inverse is (oe r
If we set
oe r (-) 2 +k
I
then C k has measurable components and C k (-) l \Gamma! P(-), k ! +1, l ! +1, where
is, again, a diagonal matrix with the elements on the diagonal equal to either 1
or 0, but now, if oe r (-) is of multiplicity m, then the first r \Gamma m elements are 0 and
all the others are 1. The components of the matrix P are measurable functions and
the matrix P(-) is now the orthogonal projector onto the subspace spanned by right
singular vectors corresponding to the singular values oe r\Gammam+1
for any vector x(-) of unit norm from its range,
The rest of the proof would follow the lines of the proof of the first part, with S ffl being
chosen as
we would have
oe r (-) 2
-( ~
The norm of A induced by the norm of L^2_r([0, 2π)) thus is
ess sup_{λ∈[0,2π)} σ_max(A(λ));
the mapping is invertible if and only if
ess inf_{λ∈[0,2π)} σ_min(A(λ)) > 0,
and the norm of the inverse equals
1 / ess inf_{λ∈[0,2π)} σ_min(A(λ)).
Combining the results above we obtain the following theorem.
Theorem 4.4. The condition of the operator C(A) (in the norm induced by the norm of l^2(Z)) is
cond(C(A)) = ess sup_{λ∈[0,2π)} σ_max(A(λ)) / ess inf_{λ∈[0,2π)} σ_min(A(λ)).
For all wavelets of practical interest, A has only a finite number of non-zero entries, or at least the sequences forming its rows decay very fast. This implies some smoothness of the entries of A(λ) and, consequently, the essential supremum of σ_max and essential infimum of σ_min coincide with the supremum and infimum, respectively. As we already pointed out, cond(C(A)) then represents sup_N cond(C_N(A)).
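As a practical illustration of Theorem 4.4 (and of the bound (3.2)), the following sketch, written in Python with NumPy and not taken from the paper, approximates cond(C(A)) by sampling the singular value curves of the symbol A(ω) on a dense grid of [0, 2π). The way the 2×2 blocks are assembled from a lowpass/highpass filter pair is one common convention and is an assumption here; the Haar pair serves only as a sanity check, since it generates an orthogonal transform with condition 1.

```python
import numpy as np

# One common convention (an assumption): for dilation by 2, the r = 2 blocks of the
# strip A interleave the filters h and g two taps at a time,
#   A_n = [[h_{2n}, h_{2n+1}], [g_{2n}, g_{2n+1}]].
def strip_blocks(h, g):
    length = 2 * ((max(len(h), len(g)) + 1) // 2)
    h = np.pad(np.asarray(h, float), (0, length - len(h)))
    g = np.pad(np.asarray(g, float), (0, length - len(g)))
    return [np.array([[h[2 * n], h[2 * n + 1]],
                      [g[2 * n], g[2 * n + 1]]]) for n in range(length // 2)]

def cond_block_toeplitz(blocks, ngrid=2048):
    """Approximate cond(C(A)) = sup_w sigma_max(A(w)) / inf_w sigma_min(A(w))
    by sampling the symbol A(w) = sum_k A_k exp(i*k*w) on a grid of [0, 2*pi)."""
    smax, smin = 0.0, np.inf
    for w in np.linspace(0.0, 2 * np.pi, ngrid, endpoint=False):
        s = np.linalg.svd(sum(Ak * np.exp(1j * k * w) for k, Ak in enumerate(blocks)),
                          compute_uv=False)
        smax, smin = max(smax, s[0]), min(smin, s[-1])
    return smax / smin

# sanity check with the Haar filters: an orthogonal transform, condition 1
h = np.array([1.0, 1.0]) / np.sqrt(2.0)
g = np.array([1.0, -1.0]) / np.sqrt(2.0)
print(cond_block_toeplitz(strip_blocks(h, g)))   # ~1.0
```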
Let us make a few comments about the structure of the singular values of A(λ) in relation to some special properties of A. First, when the underlying bases comprise real functions, the entries of A are real and, consequently, A(2π − λ) is the complex conjugate of A(λ). This means that the singular values at 2π − λ and λ are the same and we can restrict our attention to the interval [0, π] only.
Another interesting effect is caused by all the scaling and wavelet functions and
their biorthogonal counterparts being compactly supported. This corresponds to the
fact that only a finite number of the square blocks, both in A and in the strip Ã that generates the inverse operator, are non-zero. It is well known, particularly in the filter bank context (see, e.g., [12], [13]), that this happens if and only if there exist a non-zero constant α and an integer p such that
det( Σ_k A_k z^{−k} ) = α z^{−p}
for any z ∈ C, z ≠ 0. Because the modulus of the determinant is the product of the singular values, the equation above implies that
Π_j σ_j(λ) = β
for some positive constant β independent of λ. This is particularly useful when A has only two rows. The singular values of A(λ) are then inversely proportional, and the maximum and minimum over λ occur at the same point. That is, cond(C(A)) = (sup_λ σ_max(A(λ)))² / β.
5. Alternative expression. Let the sequences that form the rows of A be b^{(1)}, . . . , b^{(r)}. Sometimes it is easier to deal with the Fourier series b^{(k)}(λ) = Σ_n b^{(k)}_n e^{inλ} than with A. We will see an example in Section 6, where we study the conditioning of biorthogonal spline wavelets. In these cases it is better to use a different matrix function.
Theorem 5.1. A number σ is a singular value of A(λ) if and only if it is a singular value of B(−λ/r), where B(λ) is the r × r matrix function whose kth row collects the values of the Fourier series b^{(k)} at the points λ, λ + 2π/r, . . . , λ + 2π(r−1)/r.
Proof. Using the notation introduced in the proof of Theorem 3.1, we have, for
any
2-l
r
r =r
r
a (s;k+1)
This is because the sum Σ_{s=0}^{r−1} ω_r^{sk} equals r if k is divisible by r and is 0 otherwise.
Consequently,
B(-)\Omega
rA(\Gammar-)D r (-);
where Ω_{1,r} is the r × r matrix of the discrete Fourier transform and D_r(λ) is the diagonal matrix with diagonal entries equal to e^{−ikλ}, k = 0, . . . , r − 1 (in a particular order). Since Ω_{1,r} is unitary and so is D_r(λ) (for any λ), the statement of the theorem holds.
Let us just point out here that, instead of considering each row of A separately, we could use block rows, each of them comprising, let us say, p rows. We would then obtain a similar result with some p × p matrices B_n instead of the scalars b_n; instead of Ω_{1,r} we would use Ω_{p,r/p} and, similarly, D_r(λ) would be replaced by a matrix with diagonal blocks equal to e^{−ikλ} I. This might be useful for the case of multiwavelets (more than one scaling function), when the two-scale equations analogous to (2.1) have matrix coefficients (see, e.g., [10]).
6. Conditioning of biorthogonal wavelets based on B-splines.
Biorthogonal wavelets based on B-splines were introduced by Cohen, Daubechies and
Feauveau in [5]. To the B-spline basis function of a particular order (which represents
a scaling function) there exists a whole family of possible biorthogonal counterparts
with different sizes of support and regularity. We will use here the notation of [2], where the sequences determining the scaling and wavelet functions through the two-scale equations of type (2.1) are given in terms of trigonometric polynomials m_0 and m̃_0.
Since the scaling function φ equals the B-spline of order n,
For any integer K such that 2K -
~
determines a biorthogonal scaling function; PK is the solution of Bezout problem
in particular,
The
corresponding to the wavelet filters are
then defined as
We have
and, because we deal with compactly supported real classical wavelets with dilations by 2, we are interested in the maximum of the condition number of B(λ) on [0, π/2].
The squares of the singular values of B(λ) are the eigenvalues of the matrix B(λ)B(λ)*, and they satisfy a quadratic equation whose coefficients are det(B(λ)B(λ)*) and tr(B(λ)B(λ)*). Fairly straightforward although somewhat tedious calculations give explicit expressions for these coefficients.
Theorem 6.1. The numerical condition of one level of the (fast) discrete wavelet transform based on the B-spline biorthogonal wavelets of order n defined above is at least 2^n, independently of the value of K.
Proof. Since m_0(π/2) and m̃_0(π/2) can be evaluated explicitly (cf. (6.1)), substituting λ = π/2 into the formulae above we obtain that the squares of the singular values of B(π/2) equal 2^n and 2^{−n}, respectively, and the condition of this matrix is 2^n. The condition hence must be at least 2^n.
Numerical experiments show that the condition number often equals 2 n . From
the point of view of conditioning, it is better to choose K smaller for low order splines
and larger for higher order splines; see Table 1 at the end of the paper.
Once the scaling filters m_0 and m̃_0 are given, (6.2) is not the only possibility for the corresponding wavelet filters. The entire freedom can be described as follows: the wavelet filter may be multiplied by α z^k, with k ∈ Z and α ≠ 0. The choice of k is, from the point of view of numerical condition, irrelevant, but the scaling by α can be used to improve the condition. In the case of the spline wavelets the improvement can be significant. However, it turns out that, whatever scaling we choose, we cannot beat exponential growth with the order of the spline.
Theorem 6.2. For any scaling factor α, the condition of one step of the discrete wavelet transform with the spline biorthogonal wavelet of order n is at least 2^{n/2}.
Proof. Instead of the condition of B(λ) we need to study here the condition of B_α(λ), defined as before but with the wavelet filter scaled by α as in (6.2). For λ = π/2 the singular values of B_α(π/2) are |α| 2^{n/2} and 2^{−n/2}, so its condition is |α| 2^n for |α| ≥ 2^{−n} and 1/(|α| 2^n) for |α| < 2^{−n}. On the other hand, at a second point the condition is |α| for |α| ≥ 1 and 1/|α| for |α| < 1. Combining these results we see that the condition of the wavelet transform cannot be smaller than max(|α| 2^n, 1/|α|) over the relevant range of |α|, and this lower bound is minimized at |α| = 2^{−n/2}. Consequently, whatever |α| we choose, the condition is at least 2^{n/2}.
The optimal scaling parameter is usually equal or close to 2^{−n/2}; see Tables 2 and 3. Notice that this is true especially for the wavelets that have condition number equal to 2^n. The condition of the optimally scaled wavelet then equals 2^{n/2} in most cases.
Figures 1-5 show some typical behaviour of the singular value curves as functions of the order of the spline, the parameter K, the scaling of the wavelet and the depth of the transform. There are some interesting details there, for example the presence of points where the plot looks almost as if two curves were intersecting each other, while in fact we have two different curves that have turning points and are well separated.
Acknowledgements
. The author was a postgraduate research scholar supported
by the Australian government. She thanks Jaroslav Kautsky for suggesting the topic
and many fruitful discussions.
References
Image coding using wavelet transform
in Wavelets: A Tutorial in Theory and Applications
A stability criterion for biorthogonal wavelet bases and their related subband coding scheme
Biorthogonal bases of compactly supported wavelets
Orthonormal bases of compactly supported wavelets
Numerical stability of biorthogonal wavelet transforms
Inner products and condition numbers for wavelets and filter banks
Short wavelets and matrix dilation equations
Compact image coding using wavelets and wavelet packets based on non-stationary and inhomogeneous multiresolution analyses
Multirate Systems and Filter Banks
Perfect reconstruction FIR filter banks: Some properties and factorizations
Cited by: Zhong-Yun Liu, Some properties of centrosymmetric matrices, Applied Mathematics and Computation, v.141 n.2-3, p.297-306, 5 September | Keywords: block-Toeplitz operators; translational bases; biorthogonal wavelets; numerical condition; block circulant matrices
273582 | Robust Solutions to Least-Squares Problems with Uncertain Data. | We consider least-squares problems where the coefficient matrices A,b are unknown but bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of solution and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial-time using semidefinite programming (SDP). We also consider the case when A,b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation. | Introduction
Consider the problem of finding a solution x to an overdetermined
set of equations Ax ≈ b, where the data matrices A ∈ R^{n×m}, b ∈ R^n are given. The Least Squares (LS) fit minimizes the residual ‖Δb‖ subject to Ax = b + Δb, resulting in a consistent linear model Ax = b + Δb that is closest to the original one (in the Euclidean norm sense). The Total Least Squares (TLS) solution described by Golub and Van Loan [17] finds the smallest error [ΔA Δb] subject to the consistency equation (A + ΔA)x = b + Δb. The resulting closest consistent linear model (A + ΔA)x = b + Δb is even more accurate than the LS one, since modifications of A are allowed.
Accuracy is the primary aim of LS and TLS, so it is not surprising that both
solutions may exhibit very sensitive behavior to perturbations in the data matrices (A, b). Detailed sensitivity analyses for the LS and TLS problems may be found in [12, 18, 2, 44, 22, 14]. Many regularization methods have been proposed to decrease sensitivity, and make LS and TLS applicable. Most regularization schemes for
LS, including Tikhonov regularization [43], amount to solve a weighted LS problem for
an augmented system. As pointed out in [18], the choice of weights (or regularization
parameter) is usually not obvious, and application-dependent. Several criteria for optimizing
the regularization parameter(s) have been proposed (see e.g. [23, 11, 15]). These
criteria are chosen according to some additional a priori information, of deterministic
or stochastic nature. The extensive surveys [31, 8, 21] discuss these problems and some
applications.
In contrast with the extensive work on sensitivity and regularization, relatively
little has been done on the subject of deterministic robustness of LS problems, in which
the perturbations are deterministic, and unknown-but-bounded (not necessarily small).
Some work has been done on a qualitative analysis of the problem, where the entries of the data matrices are unspecified except for their sign [26, 39]. In many papers mentioning least squares and robustness, the latter notion is understood in some stochastic sense; see, e.g., [20, 47, 37]. A notable exception concerns the field of identification, where the subject has been explored using a framework from control system analysis [40, 9], or using regularization ideas combined with additional a priori information [34, 42].
In this paper, we assume that the data matrices are subject to (not necessarily small) deterministic perturbations. First, we assume that the given model is not a single pair (A, b), but a family of matrices (A + ΔA, b + Δb), where Δ = [ΔA Δb] is an unknown-but-bounded matrix; precisely, ‖Δ‖ ≤ ρ, where ρ ≥ 0 is given. For fixed x, we define the worst-case residual as
r(A, b, ρ, x) := max_{‖[ΔA Δb]‖_F ≤ ρ} ‖(A + ΔA)x − (b + Δb)‖.   (1)
We say that x is a Robust Least Squares (RLS) solution if x minimizes the worst-case residual r(A, b, ρ, x).
of introducing bias. In our paper, we assume that the perturbation bound ae is known,
but in x3.5, we also show that TLS can be used as a preliminary step to obtain a value
of ae that is consistent with data matrices A; b.
In many applications, the perturbation matrices ΔA, Δb have a known structure. For instance, ΔA might have a Toeplitz structure inherited from A. In this case, the worst-case residual (1) might be a very conservative estimate. We are led to consider the following Structured RLS (SRLS) problem. Given matrices A_0, . . . , A_p, b_0, . . . , b_p, we define, for a perturbation vector δ ∈ R^p,
A(δ) = A_0 + Σ_{i=1}^p δ_i A_i,   b(δ) = b_0 + Σ_{i=1}^p δ_i b_i.   (2)
For ρ ≥ 0 and x ∈ R^m, we define the structured worst-case residual as
r_S(A, b, ρ, x) := max_{‖δ‖ ≤ ρ} ‖A(δ)x − b(δ)‖.   (3)
We say that x is a Structured Robust Least Squares (SRLS) solution if x minimizes the worst-case residual r_S(A, b, ρ, x).
Our main contribution is to show that we can compute the exact value of the optimal
worst-case residuals using convex, second-order cone or semidefinite programming
(SOCP or SDP). The consequence is that the RLS and SRLS problems can be solved
in polynomial-time, and great practical efficiency, using e.g. recent interior-point methods
[33, 46]. Our exact results are to be contrasted with those of Doyle et. al [9], which
also use SDP to compute upper bounds on the worst-case residual for identification
problems. In the preliminary draft [5], sent to us shortly after submission of this paper,
the authors provide a solution to an (unstructured) RLS problem, which is similar to
that given in x3.2.
Another contribution is to show that the RLS solution is continuous in the data
matrices A; b. RLS can thus be interpreted as a (Tikhonov) regularization technique for
ill-conditioned LS problems: the additional a priori information is ρ (the perturbation level), and the regularization parameter is optimal for robustness. Similar regularity
results hold for the SRLS problem.
We also consider a generalisation of the SRLS problem, referred to as the linear-
fractional SRLS problem in the sequel, in which the matrix functions A(δ), b(δ) in (2) depend rationally on the parameter vector δ. (We describe a robust interpolation problem that falls in this class in §7.6.) Using the framework of [9], we show that the problem
is NP-complete in this case, but that we may compute, and optimize, upper bounds on
the worst-case residual using SDP. In parallel with RLS, we interpret our solution as
one of a weighted LS problem for an augmented system, the weights being computed
via SDP.
The paper's outline is as follows. The next section is devoted to some technical lemmas. Section 3 is devoted to the RLS problem. In Section 4, we consider the SRLS problem. Section 5 studies the linear-fractional SRLS problem. Regularity results are given in Section 6. Section 7 shows numerical examples.
2. Preliminary results.
2.1. Semidefinite and second-order cone programs. We briefly recall some
important results on semidefinite programs (SDPs) and second-order cone programs
(SOCPs). These results can be found in e.g. [4, 33, 46].
A linear matrix inequality is a constraint on a vector x ∈ R^q of the form
F(x) = F_0 + x_1 F_1 + · · · + x_q F_q ⪰ 0,   (4)
where the symmetric matrices F_i = F_i^T ∈ R^{N×N}, i = 0, . . . , q, are given. The minimization problem
minimize c^T x subject to F(x) ⪰ 0,   (5)
with c ∈ R^q, is called a semidefinite program (SDP). SDPs are convex optimization problems and can be solved in polynomial time with e.g. primal-dual interior-point methods [33, 45].
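For readers who wish to experiment, here is a minimal numerical instance of an SDP in the form (5), written with the cvxpy modeling package; cvxpy and an SDP-capable solver are assumed to be installed, and the data matrices below are arbitrary (an illustration, not code from the paper).

```python
import numpy as np
import cvxpy as cp

# a tiny SDP in the form (5): minimize c^T x subject to F0 + x1*F1 + x2*F2 >= 0
c = np.array([1.0, 1.0])
F0 = np.array([[2.0, 0.5], [0.5, 1.0]])
F1 = np.array([[1.0, 0.0], [0.0, 0.0]])
F2 = np.array([[0.0, 0.0], [0.0, 1.0]])

x = cp.Variable(2)
lmi = F0 + x[0] * F1 + x[1] * F2 >> 0
prob = cp.Problem(cp.Minimize(c @ x), [lmi])
prob.solve()
print(x.value, prob.value)   # the LMI is active at the optimum
```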
The problem dual to problem (5) is
maximize −Tr F_0 Z subject to Z ⪰ 0, Tr F_i Z = c_i, i = 1, . . . , q,   (6)
where Z is a symmetric N × N matrix and c_i is the i-th coordinate of the vector c. When both problems are strictly feasible (that is, when there exist x, Z which satisfy the constraints strictly), the existence of optimal points is guaranteed [33, thm. 4.2.1], and both problems have equal optimal objectives. In this case, the optimal primal-dual pairs (x, Z) are those pairs such that x is feasible for the primal problem, Z is feasible for the dual one, and the complementarity condition F(x)Z = 0 holds.
A second-order cone programming (SOCP) problem is one of the form
minimize c^T x subject to ‖C_i x + d_i‖ ≤ e_i^T x + f_i, i = 1, . . . , L.   (7)
The dual of problem (7) is another SOCP, in dual variables (z_i, t_i), i = 1, . . . , L, one pair per cone constraint. Optimality conditions similar to those for SDPs can be obtained for SOCPs. SOCPs can be expressed as
SDPs, therefore they can be solved in polynomial-time using interior-point methods for
SDPs. However the SDP formulation is not the most efficient numerically, as special
interior-point methods can be devised for SOCPs [33, 28, 1].
Worst-case complexity results on interior-point methods for SOCPs and SDPs are given by Nesterov and Nemirovsky [33, p. 224, 236]. In practice, it is observed that the number of iterations is almost constant, independent of problem size [46]. For the SOCP, each iteration has polynomial complexity in the problem dimensions; for the SDP, we refer the reader to [33].
2.2. S-procedure. The following lemma can be found e.g. in [4, p.24]. It is
widely used, e.g. in control theory, and in connection with trust region methods in
optimization [41].
Lemma 2.1 (S-procedure). Let F_0, . . . , F_p be quadratic functions of the variable ξ ∈ R^n. The following condition on F_0, . . . , F_p:
F_0(ξ) ≥ 0 for all ξ such that F_i(ξ) ≥ 0, i = 1, . . . , p,
holds if there exist τ_1 ≥ 0, . . . , τ_p ≥ 0 such that, for every ξ,
F_0(ξ) − Σ_{i=1}^p τ_i F_i(ξ) ≥ 0.
When p = 1, the converse holds, provided that there is some ξ_0 such that F_1(ξ_0) > 0.
The next lemma is a corollary of the above result, in the case p = 1.
Lemma 2.2. Let T
real matrices of appropriate size. We have
for every \Delta, k\Deltak - 1, if and only if kT 4 k ! 1, and there exists a scalar - 0 such that
0:
Proof. If T 2 or T 3 equal zero, the result is obvious. Now assume
which in turn implies kT 4 k ! 1. Thus, for a given - , (10) holds if
and only if kT 4 k ! 1 and for every (u; p), we have
4 p. Since T 2 6= 0, the constraint q T q - p T p is qualified, that is,
satisfied strictly for some
Using
the S-procedure, we obtain that there exists - 2 R such that (10) holds if and only if
1, and for every (u; p) such that q T q - p T p, we have u
our proof by noting that for every pair (p; q),
only if p T p - q T q.
Next lemma is a "structured" version of the above, which can be traced back to [13].
Lemma 2.3. Let T
real matrices of appropriate size. Let D
be a subspace of R N \ThetaN , and denote by S (resp. G) the set of symmetric (resp. skew-
symmetric) matrices that commute with every element of D. We have
and (9) for every \Delta 2 D, k\Deltak - 1, if there exist S 2 S, G 2 G such that
If the condition is necessary and sufficient.
Proof. The proof follows the scheme of that of lemma 2.2, except that p T p - q T q
is replaced with p T G. Note that
0, the above result is a simple application of lemma 2.2 to the scaled matrices
2.3. Elimination lemma. The last lemma is proven in [4, 24].
Lemma 2.4 (Elimination). Given real matrices of appropriate
size, there exists a real matrix X such that
if and only if
~
U T W ~
where ~
U , ~
are orthogonal complements of U; V . If U; V are full column-rank, and (12)
holds, a solution X to the inequality (11) is
oe is any scalar such that Q ? 0 (the existence of which is
guaranteed by (12)).
3. Unstructured Robust Least-Squares. In this section, we consider the RLS
problem, which is to compute
For we recover the standard LS problem. For every ae ? 0, OE(A; b;
aeOE(A=ae; b=ae; 1), so we take in the sequel, unless otherwise stated. In the remainder
of this paper, OE(A; b) (resp. r(A; b; x)) denotes OE(A; b; 1) (resp. r(A; b; 1; x)). In
the definition above, the norm used for the perturbation bound is the Frobenius norm.
As seen shortly, the worst-case residual is the same when the norm used is the largest
singular value norm.
3.1. Optimizing the worst-case residual. The following results yield a numerically
efficient algorithm for solving the RLS problem in the unstructured case.
Theorem 3.1. When ρ = 1, the worst-case residual (1) is given by
r(A, b, x) = ‖Ax − b‖ + √(‖x‖² + 1).   (14)
The problem of minimizing r(A, b, x) over x has a unique solution x_RLS, referred to as the RLS solution. This problem can be formulated as the second-order cone program
minimize λ + τ subject to ‖Ax − b‖ ≤ λ, ‖[x^T 1]^T‖ ≤ τ.   (15)
Proof. Fix x ∈ R^m. Using the triangle inequality, we have
r(A, b, x) ≤ ‖Ax − b‖ + √(‖x‖² + 1).   (16)
Now choose the rank-one perturbation Δ = [ΔA Δb] = u [x^T −1] / √(‖x‖² + 1), where u = (Ax − b)/‖Ax − b‖ if Ax ≠ b, and u is any unit-norm vector otherwise. Since Δ is rank-one, we have ‖Δ‖_F = ‖Δ‖ = 1. In addition, we have
(A + ΔA)x − (b + Δb) = (Ax − b) + u √(‖x‖² + 1),
which implies that Δ is a worst-case perturbation (for both the Frobenius and maximum singular value norms), and that equality always holds in (16). Finally, unicity of the optimal x follows from the strict convexity of the worst-case residual.
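The construction used in the proof is easy to check numerically. The following sketch (Python with NumPy; an illustration, not part of the paper) evaluates the worst-case residual for a general bound ρ and builds the rank-one worst-case perturbation; the data are arbitrary.

```python
import numpy as np

def worst_case_residual(A, b, x, rho):
    """max over ||[dA db]||_F <= rho of ||(A + dA)x - (b + db)||
       = ||Ax - b|| + rho * sqrt(||x||^2 + 1)."""
    return np.linalg.norm(A @ x - b) + rho * np.sqrt(x @ x + 1.0)

def worst_case_perturbation(A, b, x, rho):
    """Rank-one [dA db] achieving the maximum above (the construction of the proof)."""
    r = A @ x - b
    u = r / np.linalg.norm(r) if np.linalg.norm(r) > 0 else np.eye(A.shape[0])[:, 0]
    v = np.concatenate([x, [-1.0]])
    E = rho * np.outer(u, v / np.linalg.norm(v))   # ||E||_F = rho
    return E[:, :-1], E[:, -1]                     # dA, db

rng = np.random.default_rng(1)
A, b = rng.standard_normal((6, 3)), rng.standard_normal(6)
x = np.linalg.lstsq(A, b, rcond=None)[0]
rho = 0.3
dA, db = worst_case_perturbation(A, b, x, rho)
print(np.linalg.norm((A + dA) @ x - (b + db)))   # equals the value below
print(worst_case_residual(A, b, x, rho))
```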
Using an interior-point primal-dual potential reduction method for solving the unstructured
RLS problem (15), the number of iterations is almost constant [46]. Furthermore, each iteration takes O((n + m)m²) operations. A rough summary of this analysis
is that the method has the same order of complexity as one SVD of A.
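Since (15) is a standard SOCP, it can also be passed directly to a conic modeling tool. The sketch below uses cvxpy (an assumed dependency; not the authors' code) and keeps the bound ρ explicit; it returns the RLS solution and the optimal worst-case residual.

```python
import numpy as np
import cvxpy as cp

def rls(A, b, rho=1.0):
    """RLS via the SOCP (15): minimize ||Ax - b|| + rho * ||[x; 1]|| over x."""
    x = cp.Variable(A.shape[1])
    worst_case = cp.norm(A @ x - b, 2) + rho * cp.norm(cp.hstack([x, np.ones(1)]), 2)
    prob = cp.Problem(cp.Minimize(worst_case))
    prob.solve()
    return x.value, prob.value

# arbitrary data; compare with the plain LS solution
rng = np.random.default_rng(2)
A, b = rng.standard_normal((10, 4)), rng.standard_normal(10)
x_rls, wc = rls(A, b, rho=0.5)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_rls), np.linalg.norm(x_ls))   # the RLS solution is "shrunk"
```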
3.2. Analysis of the optimal solution. Using duality results for SOCPs, we
have the following theorem.
Theorem 3.2. When ρ = 1, the (unique) solution x_RLS to the RLS problem is given by
x_RLS = (A^T A + μ I)^{−1} A^T b, with μ = λ/τ, if λ > 0, and x_RLS = A†b else,
where (λ, τ) are the (unique) optimal points for problem (15).
Proof. Using the results of x2.1, we obtain that the problem dual to (15) is
subject to A T z
Since both primal and dual problems are strictly feasible, there exist optimal points
for both of them. If - at the optimum, then
In this case, the optimal x is the (unique) minimum-norm solution to
Now assume - . Again, both primal and dual problems are strictly feasible,
therefore the primal and dual optimal objectives are equal:
Using
and
Replace these values in A T z to obtain the expression of the optimal x:
A T b; with
Remark 3.1. When - , The RLS solution can be interpreted as the solution
of a weighted LS problem for an augmented system:
A
I3
\Theta
where -). The RLS method amounts to compute the weighting
matrix \Theta that is optimal for robustness, via the SOCP (15). We shall encounter a
generalization of the above formula for the linear-fractional SRLS problem of x5.
Remark 3.2. It is possible to solve the problem when only A is perturbed (Δb = 0). In this case, the worst-case residual is ‖Ax − b‖ + ρ‖x‖, and the optimal x is determined by (17), with √(‖x‖² + 1) replaced by ‖x‖. (See the example in §7.2.)
3.3. Reduction to a one-dimensional search. When the SVD of A is available,
we can use it to reduce the problem to a one-dimensional convex differentiable problem.
The following analysis will also be useful in x6.
Introduce the SVD of A and a related decomposition for b:
Assume that - at the optimum of problem (15). From (18), we have
never feasible, we may define Multiplying by -, we obtain that
From
1). Thus, the optimal
worst-case residual is
where f is the following function
The function f is convex and twice differentiable on [θ_min, 1). If b ∉ Range(A), f is infinite at θ = 1; otherwise it is twice differentiable on the closed interval [θ_min, 1]. Therefore, the minimization of f can be done using standard Newton methods for differentiable optimization.
Theorem 3.3. When ρ = 1, the solution of the unstructured RLS problem can be computed by solving the one-dimensional convex differentiable problem (19), or by computing the unique real root inside [θ_min, 1] (if any) of the stationarity equation f'(θ) = 0.
The above theorem yields an alternative method for computing the RLS solution.
This method is similar to the one given in [5]. A related approach was used for quadratically
constrained LS problems in [19].
The above solution, which requires one SVD of A, has cost O(nm 2 +m 3 ). The SOCP
method is only a few times more costly (see the end of x3.1), with the advantage that
we can include all kinds of additional constraints on x (nonnegativity and/or quadratic
constraints, etc) in the SOCP (15), with low additional cost. Also, the SVD solution
does not extend to the structured case considered in x4.
3.4. Robustness of LS solution. It is instructive to know when the RLS and
LS solutions coincide, in which case we can say the LS solution is robust. This happens
if and only if the optimal θ in problem (19) is equal to 1. The latter implies that b = Ax for some x (that is, b ∈ Range(A)). In this case, f is differentiable at θ = 1, and its minimum over [θ_min, 1] is at θ = 1 only if
df/dθ (1) ≤ 0.
We thus obtain a necessary and sufficient condition (21) for the optimal θ to be equal to 1. If (21) holds, then the RLS and LS solutions coincide. Otherwise, the optimal θ < 1, and x is given by (17). We may write the latter condition in the case when the norm-bound
of the perturbation ρ is different from 1 as ρ > ρ_min, where ρ_min = ρ_min(A, b) is given by (22). Thus, ρ_min can be interpreted as the perturbation level that the LS solution allows. We note that, when b ∈ Range(A), the LS and TLS solutions also coincide.
Corollary 3.4. The LS, TLS and RLS solutions coincide whenever the norm-bound on the perturbation matrix ρ satisfies ρ ≤ ρ_min(A, b), where ρ_min(A, b) is defined in (22). Thus, ρ_min(A, b) can be seen as a robustness measure of the LS (or TLS) solution.
When A is full rank, the robustness measure ρ_min is nonzero, and decreases as the condition number of A increases.
Remark 3.3. We note that the TLS solution x_TLS is the most accurate, in the sense that it minimizes the distance function (see [18]), and the least robust, in the sense of the worst-case residual. The LS solution, x_LS, is intermediate (in the sense of accuracy and robustness). In fact, it can be shown that the worst-case residuals satisfy the corresponding ordering.
3.5. Robust and Total Least-Squares. The RLS framework assumes that the
data matrices (A; b) are the "nominal" values of the model, which are subject to unstructured
perturbation, bounded in norm by ae. Now, if we think of (A; b) as "mea-
sured" data, the assumption that (A; b) correspond to a nominal model may not be
judicious. Also, in some applications, the norm-bound ae on the perturbation may be
hard to estimate. The Total Least-Squares (TLS) solution, when it exists, can be used
in conjunction with RLS to address this issue.
Assume that the TLS problem has a solution. Let \DeltaA TLS , \Deltab TLS , x TLS be minimizers
of the TLS problem
minimize subject to
and let
ae
TLS finds a consistent, linear system that is closest (in Frobenius norm sense) to the
observed data (A; b). The underlying assumption is that the observed data (A; b) is the
result of a consistent, linear system which, under the measurement process, has been
subjected to unstructured perturbations, unknown but bounded in norm by ae TLS . With
this assumption, any point of the ball
can be observed, just as well as (A; b). Thus, TLS computes an "uncertain linear
system" representation of the observed phenomenon: is the nominal model,
and ae TLS is the perturbation level.
Once this uncertain system representation is constructed, choosing x_TLS as a "solution" to Ax ≈ b amounts to finding the exact solution to the nominal system. Doing so, we compute a very accurate solution (with zero residual), which does
not take into account the perturbation level ae TLS . A more robust solution is given by
the solution to the following RLS problem
The solution to the above problem coincides with the TLS one (that is, in our case, with x_TLS) when ρ_TLS ≤ ρ_min(A_TLS, b_TLS), a quantity which is strictly positive when A_TLS has full rank.
With standard LS, the perturbations that account for measurement errors are structured
(with To be consistent with LS, one should consider the following RLS
problem instead of (23):
k\Deltabk-ae LS
It turns out that the above problem yields the same solution as LS itself.
To summarize, RLS can be used in conjunction with TLS for "solving" a linear system Ax ≈ b. Solve the TLS problem to build an "uncertain linear system" representation of the observed data. Then, take the solution x_RLS to the RLS problem with the nominal matrices (A_TLS, b_TLS) and perturbation level ρ_TLS.
Note that computing the TLS solution (precisely, A TLS , b TLS and ae TLS ) only requires the
computation of the smallest singular value and associated singular subspace [17].
4. Structured Robust Least Squares. In this section, we consider the SRLS problem, which is to compute
φ_S(A, b, ρ) := min_x max_{‖δ‖ ≤ ρ} ‖A(δ)x − b(δ)‖,
where A(δ), b(δ) are defined in (2). As before, we assume with no loss of generality that ρ = 1, and denote r_S(A, b, 1, x) by r_S(A, b, x). Throughout the section, we use the following notation.
4.1. Computing the worst-case residual. We first examine the problem of
computing the worst-case residual r_S(A, b, x) for a given x ∈ R^m. With the above notation, we can write
‖A(δ)x − b(δ)‖² = δ^T F δ + 2 g^T δ + h,   (28)
where the symmetric matrix F, the vector g and the scalar h depend on x. Now let λ ≥ 0. Using the S-procedure (lemma 2.1), we have
λ ≥ δ^T F δ + 2 g^T δ + h for every δ, ‖δ‖ ≤ 1,
if and only if there exists a scalar τ ≥ 0 such that
[ τI − F   −g ;  −g^T   λ − τ − h ] ⪰ 0.
Using the fact that τ ≥ 0 is implied by τI ⪰ F (since F ⪰ 0), we may rewrite the above condition as
[ τI − F   −g ;  −g^T   λ − τ − h ] ⪰ 0.   (29)
The consequence is that the worst-case residual is computed by solving a SDP with
two scalar variables. A bit more analysis shows how to reduce the problem to a one-
dimensional, convex differentiable problem, and obtain the corresponding worst-case
perturbation.
Theorem 4.1. For every fixed x, the squared worst-case residual (for ρ = 1) can be computed by solving the SDP in the two variables (λ, τ)
minimize λ subject to (29),   (30)
or, alternatively, by minimizing a one-dimensional convex differentiable function of τ. If τ is optimal for problem (30), the associated stationarity equations in δ have a solution, any of which is a worst-case perturbation.
Proof. See Appendix A, where we also show how to compute a worst-case perturbation
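A direct way to evaluate the structured worst-case residual of Theorem 4.1 for a fixed x is to solve the S-procedure SDP numerically. The sketch below (cvxpy assumed installed; an illustration rather than the paper's code) uses the affine structure (2) and writes ‖A(δ)x − b(δ)‖ = ‖r_0 + Jδ‖, where r_0 = A_0 x − b_0 and the columns of J are the vectors A_i x − b_i; the F, g, h of (28) are then J^T J, J^T r_0 and r_0^T r_0 (with ρ absorbed into J).

```python
import numpy as np
import cvxpy as cp

def srls_worst_case_residual(A_blocks, b_blocks, x, rho=1.0):
    """Structured worst-case residual max_{||d|| <= rho} ||A(d)x - b(d)||, with
    A(d) = A_0 + sum_i d_i A_i and b(d) = b_0 + sum_i d_i b_i, computed exactly
    via the S-procedure SDP (a single quadratic constraint, so the bound is tight)."""
    r0 = A_blocks[0] @ x - b_blocks[0]
    J = rho * np.column_stack([Ai @ x - bi
                               for Ai, bi in zip(A_blocks[1:], b_blocks[1:])])
    p = J.shape[1]
    lam = cp.Variable()
    tau = cp.Variable(nonneg=True)
    # S-procedure LMI: [[tau*I - J'J, -J'r0], [-r0'J, lam - tau - r0'r0]] >= 0
    M = cp.bmat([[tau * np.eye(p) - J.T @ J, -(J.T @ r0).reshape(-1, 1)],
                 [-(J.T @ r0).reshape(1, -1), cp.reshape(lam - tau - r0 @ r0, (1, 1))]])
    cp.Problem(cp.Minimize(lam), [M >> 0]).solve()
    return float(np.sqrt(lam.value))

# arbitrary data: p = 2 perturbation parameters
rng = np.random.default_rng(4)
A_blocks = [rng.standard_normal((5, 3)) for _ in range(3)]   # A_0, A_1, A_2
b_blocks = [rng.standard_normal(5) for _ in range(3)]        # b_0, b_1, b_2
x = rng.standard_normal(3)
print(srls_worst_case_residual(A_blocks, b_blocks, x))
```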
4.2. Optimizing the worst-case residual. Using theorem 4.1, the expression of F, g, h given in (27), and Schur complements, we obtain the following result.
Theorem 4.2. When ρ = 1, the Euclidean-norm SRLS problem can be solved by computing an optimal solution (λ, τ, x) of the SDP
minimize λ subject to
[ λ − τ      0      (A_0 x − b_0)^T ;
  0          τ I    M(x)^T           ;
  A_0 x − b_0   M(x)   λ I            ]  ⪰ 0,
where M(x) is defined in (26) (its columns collect the vectors A_i x − b_i).
Remark 4.1. Straightforward manipulations show that the result is coherent with the unstructured case.
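Minimizing the worst-case residual over x simply adds x as a decision variable. A convenient Schur-complement form of the resulting SDP, matching the structure of Theorem 4.2 up to notation, is sketched below with cvxpy (assumed installed; an illustration rather than the paper's exact formulation):

```python
import numpy as np
import cvxpy as cp

def srls(A_blocks, b_blocks, rho=1.0):
    """SRLS: minimize over x the structured worst-case residual (rho = bound on ||d||),
    via  min lam  s.t.
      [[lam - tau,   0,      r0(x)^T],
       [   0,      tau*I,    M(x)^T ],
       [ r0(x),     M(x),    lam*I  ]] >= 0,
    where r0(x) = A_0 x - b_0 and the columns of M(x) are rho*(A_i x - b_i)."""
    n, m = A_blocks[0].shape
    p = len(A_blocks) - 1
    x, lam, tau = cp.Variable(m), cp.Variable(), cp.Variable(nonneg=True)
    r0 = A_blocks[0] @ x - b_blocks[0]
    Mx = cp.hstack([cp.reshape(rho * (A_blocks[i] @ x - b_blocks[i]), (n, 1))
                    for i in range(1, p + 1)])
    LMI = cp.bmat([
        [cp.reshape(lam - tau, (1, 1)), np.zeros((1, p)), cp.reshape(r0, (1, n))],
        [np.zeros((p, 1)), tau * np.eye(p), Mx.T],
        [cp.reshape(r0, (n, 1)), Mx, lam * np.eye(n)],
    ])
    cp.Problem(cp.Minimize(lam), [LMI >> 0]).solve()
    return x.value, float(lam.value)

rng = np.random.default_rng(5)
A_blocks = [rng.standard_normal((5, 3)) for _ in range(3)]
b_blocks = [rng.standard_normal(5) for _ in range(3)]
x_srls, worst_case = srls(A_blocks, b_blocks)
print(worst_case)
```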
Although the above SDP is not directly amenable to the more efficient SOCP
formulation, we may devise special interior-point methods for solving the problem.
These special-purpose methods will probably have much greater efficiency than general-purpose
SDP solvers. This study is left for the future.
Remark 4.2. The discussion of x3.5 extends to the case when the perturbations are
structured. TLS problems with (affine) structure constraints on perturbation matrices
are discussed in [7]. While the structured version of the TLS problem becomes very hard
to solve, the SRLS problem retains polynomial-time complexity.
5. Linear-Fractional SRLS. In this section, we examine a generalization of the
SRLS problem. Our framework encompasses the case when the functions A(ffi), b(ffi)
are rational. We show that the computation of the worst-case residual is NP-complete,
but that upper bounds can be computed (and optimized) using SDP. First, we need
to motivate the problem, and develop a formalism for posing it. This formalism was
introduced by Doyle and coauthors [9] in the context of robust identification.
5.1. Motivations. In some structured robust least-squares problems such as (3),
it may not be convenient to measure the perturbation size with Euclidean norm. Indeed,
the latter implies a correlated bound on the perturbation. One may instead consider a
SRLS problem, in which the bounds are not correlated, that is, the perturbation size
in (3) is measured by the maximum norm:
kffik 1-1
Also, in some RLS problems, we may assume that some columns of [A b] are perfectly
known, for instance the error [\DeltaA \Deltab] has the form
and otherwise unknown. More generally, we may be interested in SRLS problems, where
the perturbed data matrices write
A(\Delta) b(\Delta)
A b
are given matrices, and \Delta is a (full) norm-bounded matrix. In
such a problem, the perturbation is not structured, except via the matrices L; RA
(Note that a special case of this problem is solved in [5].)
Finally, we may be interested in SRLS problems in which the matrix functions
A(ffi), b(ffi) in (3) are rational functions of the parameter vector ffi. One example is given
in x7.6.
It turns out that the above three extensions can be addressed using the same
formalism, which we detail now.
5.2. Problem definition. Let D be a subspace of R N \ThetaN , A 2 R n\Thetam
R n\ThetaN , RA 2 R N \Thetam , R b 2 R N , D 2 R N \ThetaN . For every \Delta 2 D such that det(I \GammaD\Delta) 6= 0,
we define the matrix functions
A(\Delta) b(\Delta)
A b
For a given x we define the worst-case residual by
r D (A; b; ae; x) \Delta
\Delta2D; k\Deltak-ae
We say that x is a Structured Robust Least Squares (SRLS) solution if x minimizes the
worst-case residual above. As before, we assume no loss of generality, and
denote r D (A; b; 1; x) by r D (A; b; x).
The above formulation encompasses the three situations referred to in x5.1. First,
the maximum-norm SRLS problem (33) is readily transformed into problem (35), as
follows. Let n\ThetaN be such that [A i b i
R T
Problem (33) can be formulated as the minimization of (35), with D defined as above.
Also, we recover the case when the perturbed matrices write as in (34), when we
allow \Delta to be any full matrix (that is, In particular, we recover the
unstructured RLS problem of x3, as follows. Assume n ? m. We have
\Deltab \Theta
refers to dummy elements that are added to
the perturbation matrix in order to make it a square, n \Theta n matrix.) In this case, the
perturbation set D is R n\Thetan .
Finally, the case when A(ffi) and b(ffi) are rational functions of a vector ffi (well-
defined over the unit ball fffi j kffik 1 - 1g) can be converted (in polynomial time) into
the above framework (see e.g. [48] for a conversion procedure). We give an example of
such a conversion in x7.6.
5.3. Complexity analysis. In comparison with the SRLS problem of x4, the
linear-fractional SRLS problem offers two levels of increased complexity.
First, checking whether the worst-case residual is finite is NP-complete [6]. The
linear-fractional dependence (that is, D ≠ 0) is a first cause of increased complexity.
The SRLS problem above remains hard even when the matrices A(δ), b(δ) depend affinely on the perturbation elements (that is, when D = 0). Consider for instance the SRLS problem with D = 0 and in which the perturbation set is defined as in (36). In this case, the problem of computing the worst-case residual can be formulated as
max_{‖δ‖_∞ ≤ 1} ( δ^T F δ + 2 g^T δ + h ),   (37)
for appropriate F, g, h. The only difference with the worst-case residual defined in (28) is the norm used to measure the perturbation. Computing the above quantity is NP-complete (it is equivalent to a MAX CUT problem [36, 38]). The following lemma,
which we provide for the sake of completeness, is a simple corollary of a result by
Nemirovskii [32].
Lemma 5.1. The problem P(A;b; D; x):
Given a positive rational number -, matrices A; b; L; RA of appropriate
size, and an m-vector x, all with rational entries, and a linear
subset D, determine whether r D (A; b; x) -
is NP-complete.
Proof. See appendix B.
5.4. An upper bound on the worst-case residual. Although our problem is
NP-complete, we can minimize upper bounds in polynomial-time, using SDP. Introduce
the following linear subspaces:
R. The inequality - ? r D (A; b; x) holds if and only if, for every \Delta 2 D,
0:
Using Lemma 2.3, we obtain that - ? r D (A; b; x) holds if there exist S 2 S, G 2 G,
such that
G; x) =6 4 \Theta Ax \Gamma b
where
\Theta \Delta
Minimizing - subject to the above semidefinite constraint yields an upper bound for
r D (A; b; x). It turns out that the above estimate of the worst-case residual is actually
exact, in some "generic" sense.
Theorem 5.2. When ae = 1, an upper bound on the worst-case residual r D (A; b; x)
can be obtained by solving the SDP
- subject to S 2 G; (38):
The upper bound is exact when D = R N \ThetaN . If \Theta ? 0 at the optimum, the upper bound
is also exact.
Proof. See appendix C.
5.5. Optimizing the worst-case residual. Since x appears linearly in the constraint
(38), we may optimize the worst-case residual's upper bound using SDP. We may
reduce the number of variables appearing in the previous problem, using the elimination
lemma 2.4. Inequality in (38) can be written as in (11), with
\GammaR b
A
where \Theta is defined in (39).
Denote by N the orthogonal complement of [A T R T
. Using the elimination
lemma 2.4, we obtain an equivalent condition for (38) to hold for some x namely
G; \Theta ? 0; (N
\GammaR b
\Gammab \GammaR b -7 5 (N
For every -; S; G that are stricly feasible for the above constraints, an x that satisfies (38)
is given, when RA is full-rank, by
A
A
\Theta \Gamma1
A
RA
A
A
\Theta \Gamma1
(To prove this, we applied formula (13), and took oe !1.)
Theorem 5.3. When ae = 1, an upper bound on the optimal worst-case residual
can be obtained by solving the SDP
- subject to S 2 G; (38);
or, alternatively, the SDP
- subject to (41):
The upper bound is always exact when D = R N \ThetaN . If \Theta ? 0 at the optimum, the
upper bound is also exact. The optimal x is then unique, and given by (42) when RA is
full-rank.
Proof. See appendix C.
Remark 5.1. In parallel with the unstructured case (see remark 3.1), the linear-fractional SRLS solution can be interpreted as the solution of a weighted LS problem for an augmented system. Precisely, when Θ > 0, the linear-fractional SRLS solution solves a weighted LS problem for the system augmented with R_A and R_b. The SRLS method amounts to computing the weighting matrix Θ that is optimal for robustness.
Remark 5.2. Our results are coherent with the unstructured case: replace L by I,
R by [I 0] T , variable S by -I, and set G = 0. The parameter - of theorem 3.2 can be
interpreted as the Schur complement of -I \Gamma LSL T in the matrix \Theta.
Remark 5.3. We emphasize that the above results are exact (non conservative)
when the perturbation structure is full. In particular, we recover (and generalize) the
results of [5] in the case when only some columns of A are affected by otherwise unstructured
perturbations.
Remark 5.4. It is possible to use the approximation method of [16] to
obtain solutions (based on the SDP relaxations given in theorem 5.3) that have expected
value within 14% of the true value.
6. Link with regularization. The standard LS solution x LS is very sensitive to
errors in A; b when A is ill-conditioned. In fact, the LS solution might not be a continuous
function of A, b when A is nearly rank-deficient. This has motivated many researchers to look
for ways to regularize the LS problem, that is, to make the solution x unique and
continuous in the data matrices (A; b). In this section, we briefly examine the links of
our RLS and SRLS solution with regularization methods for standard LS.
Beforehand, we note that since all our problems are formulated as SDPs, we could
invoke the quite complete sensitivity analysis results obtained by Bonnans, Cominetti
and Shapiro [3]. The application of these general results to our SDPs is considered
in [35].
6.1. Regularization methods for LS. Most regularization methods for LS amount
to imposing an additional bound on the solution vector x. One way is to minimize
||Ax − b||² + Ω(x), where Ω is some squared norm (see [23, 43, 8]). Another way is to
use constrained least-squares (see [18, p. 561-571]).
In a classical Tikhonov regularization method, Ω(x) = μ||x||², where μ > 0 is
some "regularization" parameter. The modified value of x is obtained by solving an
augmented LS problem (45) and is given by
x = (A^T A + μI)^{-1} A^T b.   (46)
(Note that for every μ > 0, the above x is continuous in (A, b).)
The above expression also arises in the Levenberg-Marquardt method for optimiza-
tion, or in the Ridge regression problem [17]. As mentioned in [18], the choice of an
appropriate μ is problem-dependent and, in many cases, not obvious.
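For concreteness, a minimal numerical sketch of the augmented-system form of Tikhonov regularization described above; the data A, b and the value of μ are placeholders invented for the example, not values from the paper:

```python
import numpy as np

def tikhonov_ls(A, b, mu):
    """Solve min_x ||Ax - b||^2 + mu*||x||^2 via the augmented LS system.

    Equivalent to x = (A^T A + mu I)^{-1} A^T b, but avoids forming the
    normal equations explicitly.
    """
    rows, cols = A.shape
    A_aug = np.vstack([A, np.sqrt(mu) * np.eye(cols)])   # [A; sqrt(mu) I]
    b_aug = np.concatenate([b, np.zeros(cols)])          # [b; 0]
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Example with synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x_reg = tikhonov_ls(A, b, mu=0.1)
```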
In more elaborate regularization schemes of the Tikhonov type, the identity matrix
in (46) is replaced with a positive semidefinite weighting matrix (see for instance [31, 8]).
Again, this can be interpreted as a (weighted) least-squares method for an augmented
system.
6.2. RLS and regularization. Noting the similarity between (17) and (46), we
can interpret the (unstructured) RLS method as one of Tikhonov regularization. The
following theorem yields an estimate of the "smoothing effect" of the RLS method.
Note that improved regularity results are given in [35].
Theorem 6.1. The (unique) RLS solution x RLS and the optimal worst-case residual
are continuous functions of the data matrices A; b. Furthermore, if K is a compact set
of R n , and then for every uncertainty size ae ? 0, the function
R n\Thetam \Theta K \Gamma! [1 dK
is Lipschitzian, with Lipschitz constant 1
Theorem 6.1 shows that any level of robustness (that is, any norm bound ρ > 0 on the
perturbations) yields regularization. We describe in §7 some numerical examples
that illustrate our results.
Remark 6.1. In the RLS method, the Tikhonov regularization parameter μ is
chosen by solving a second-order cone problem, in such a way that μ is optimal for
robustness. The cost of the RLS solution is equal to the cost of solving a small number
of least-squares problems of the same size as the classical Tikhonov regularization
problem (45).
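For the unstructured RLS, in which the perturbation [ΔA, Δb] is bounded in norm by ρ, the worst-case residual has the well-known closed form ||Ax − b|| + ρ√(||x||² + 1); the sketch below minimizes this convex function directly with a general-purpose optimizer. The data are synthetic and the use of scipy's minimizer is an assumption for illustration; it is not the SOCP formulation solved in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def rls_solution(A, b, rho):
    """Unstructured RLS sketch: minimize ||Ax-b|| + rho*sqrt(||x||^2 + 1).

    This objective is the worst-case residual over all [dA, db] whose norm
    is at most rho; it is convex, so a local minimizer suffices here.
    """
    def worst_case_residual(x):
        return np.linalg.norm(A @ x - b) + rho * np.sqrt(x @ x + 1.0)

    x0 = np.zeros(A.shape[1])
    res = minimize(worst_case_residual, x0, method="BFGS")
    return res.x, worst_case_residual(res.x)

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 4))
b = rng.standard_normal(15)
x_rls, wc = rls_solution(A, b, rho=0.5)
```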
Remark 6.2. The equation that determines μ in the RLS method involves the
uncertainty size ρ. This choice has resemblance with Miller's choice [30], where μ is determined
recursively by a fixed-point equation.
Miller's formula arises in RLS when there is no perturbation in b (see remark 3.2). Thus,
Miller's solution corresponds to a RLS problem in which the perturbation affects only
the columns of A. We note that this solution is not necessarily regular (continuous).
Total least-squares (TLS) deserves a special mention here. When the TLS problem
has a solution, it is given by x_TLS = (A^T A − σ²I)^{-1} A^T b, where σ is the smallest singular
value of [A b]. This corresponds to (46) with μ = −σ². The negative value of μ implies
that the TLS is a "deregularized" LS, a fact noted in [17]. In view of our link between
regularization and robustness, the above is consistent with the fact that RLS trades off
the accuracy of TLS with robustness and regularity, at the expense of introducing bias
in the solution. See also remark 3.3.
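A minimal sketch of the classical TLS solution via the SVD of [A b], on synthetic data; it assumes the generic case in which the TLS solution exists and is unique:

```python
import numpy as np

def tls_solution(A, b):
    """Total least squares: the right singular vector of [A b] associated
    with the smallest singular value gives x = -v[:n]/v[n]; equivalently
    x = (A^T A - s^2 I)^{-1} A^T b with s = sigma_min([A b])."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # singular vector for sigma_min
    if abs(v[n]) < 1e-12:
        raise ValueError("TLS solution does not exist (last component is zero).")
    return -v[:n] / v[n]

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_tls = tls_solution(A, b)
```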
6.3. SRLS and regularization. Similarly, we may ask whether the solution to
the SRLS problem of x4 is continuous in the data matrices A as was the case
for unstructured RLS problems. We only discuss continuity of the optimal worst-case
residual with respect to problems, the coefficient matrices A
are fixed).
In view of Theorem 4.2, continuity holds if the feasible set of the SDP (32) is
bounded. Obviously, the objective - is bounded above by
Thus the variable - is also bounded, as (32) implies 0 -. With - bounded
above, we see that (32) implies that x is bounded if
bounded implies x bounded.
The above property holds if and only if [A T
Theorem 6.2. A sufficient condition for continuity of the optimal worst-case
residual (as a function of
6.4. Linear-fractional SRLS and regularization. Precise conditions for continuity
of the optimal upper bound on worst-case residual in the linear-fractional case
are not known. We may however regularize this quantity using a method described
in [29] for a related problem. For a given ffl ? 0, define the bounded set
ae
ffl I
oe
where S is defined in (37). It is easy to show that restricting the condition number of
variable S also bounds the variable G in the SDP (44). This yields the following result.
Theorem 6.3. An upper bound on the optimal worst-case residual can be obtained
by computing the optimal value -(ffl) of the SDP
min
- subject to S 2 G; (41):
The corresponding upper bound is a continuous function of [A b]. As ffl ! 0, the
corresponding optimal value -(ffl) has a limit, equal to the optimal value of SDP (44).
As noted in remark 5.1, the linear-fractional SRLS can be interpreted as a weighted
LS, and so can the above regularization method. Thus, the above method belongs to
the class of Tikhonov (or weighted LS) regularization methods referred to in 6.1, the
weighting matrix being optimal for robustness.
7. Numerical examples. The following numerical examples were obtained using
two different codes: for SDPs, we used the code SP [45], and a matlab interface to SP
called [10]. For the (unstructured) RLS problems, we used the second-order
cone program described in [28].
Fig. 1. Average, minimum and maximum number of iterations for various RLS problems using the
SOCP formulation. In the left figure, we show these numbers for values of n ranging from 100 to 1000.
For each value of n, the vertical bar indicates the minimum and maximum values obtained with 20
trials of A; b, with In the right figure, we show these numbers for values of m ranging from
11 to 100. For each value of n, the vertical bar indicates the minimum and maximum values obtained
with 20 trials of A; b, with 1000. For both plots, the plain curve is the mean value.
7.1. Complexity estimates of RLS. We first did "large-scale" experiments for
the RLS problem of x3. As mentioned in x2.1, the number of iterations is almost independent
of the size of the problem for SOCPs. We have solved problem (15) for
uniformly generated random matrices A and vectors b with various sizes of n; m. Figure
1 shows the average number of iterations as well as the minimum and maximum
number of iterations for various values of n and m. The experiments confirm that the
number of iterations is almost independent of problem size for the RLS problem.
7.2. LS, TLS and RLS. We now compare the LS, TLS and RLS solutions for
On the left and right plots in Fig. 2, we show the four points
signs and the corresponding linear fits for LS problems (solid line), TLS problems
(dotted line) and RLS problems for (dashed lines). The left plot gives the RLS
solution with perturbations [A+ΔA, b+Δb], whereas the right plot considers perturbations
in A only, [A+ΔA, b]. In both plots, the worst-case points for the RLS solution are
indicated by 0 2. As ae increases, the slope of the RLS solution
decreases, and goes to zero when ae ! 1. The plot confirms remark 3.3: the TLS
solution is the most accurate and the least robust, and LS is intermediate.
In the case when we have perturbations in A only (right plot), we obtain an instance
of a linear-fractional SRLS (with a full perturbation matrix), as mentioned in x5.1. (It
is also possible to solve this problem directly, as in x3.) In this last case of course, the
worst-case perturbation can only move along the A-axis.
Fig. 2. Least-squares (solid), total least-squares (dotted) and robust least-squares (dashed) solutions.
The signs + correspond to the nominal [A b]. The left plot gives the RLS solution with perturbations
[A+ΔA, b+Δb]; the right plot considers perturbations in A only, [A+ΔA, b]. The worst-case
perturbed points for the RLS solution are indicated by 0 2.
7.3. RLS and regularization. As mentioned in x6, we may use RLS to regularize
an ill-conditioned LS problem. Consider the RLS problem for
The matrix A is singular when
Fig. 3 shows the regularizing effect of the RLS solution. The left (resp. right) figure
shows the optimal worst-case residual (resp. norm of the RLS solution) as a function of the
parameter α, for various values of ρ. When ρ = 0, we obtain the LS solution. The latter
is not a continuous function of α, and both the solution norm and residual exhibit a
spike at the value of α for which A becomes singular. For ρ > 0, the RLS solution is smooth.
The spike is more and more flattened as ρ grows, which illustrates Theorem 6.1. For
ρ ≥ 1, the optimal worst-case residual becomes flat (independent of α).
7.4. Robustness of LS solution. The next example illustrates that sometimes
(precisely, if b ∈ Range(A)), the LS solution is robust, up to the perturbation level
ρ_min defined in (22). This "natural" robustness of the LS solution degrades as the
condition number of A grows. For ε_A > 0, consider an RLS problem in which the
smallest singular value of A equals ε_A.
We have considered six values of ε_A (which equals the inverse of the condition
number of A) from .05 to .55. Table 1 shows the values of ρ_min (as defined in (22)) for
Fig. 3. Optimal worst-case residual and norm of the RLS solution vs. α for various values of the perturbation
level ρ. For ρ = 0 the optimal residual and solution are discontinuous. The spike is
smoothed as more robustness is asked for (that is, when ρ increases). On the right plot some of the
curves are not visible.
Table 1
Values of ρ_min for various ε_A.
curve    1     2     3     4     5     6
ρ_min  0.06  0.34  0.78  1.12  1.28  1.35
the six values of " A . When the condition number of A grows, the robustness of the LS
solution (measured by ae min ) decreases.
The right plot of Fig. 4 gives the worst-case residual vs. the robustness parameter
ρ for the six values of ε_A. The plot illustrates that for ρ > ρ_min, the LS solution
differs from the RLS one. Indeed, for each curve, the residual remains
equal to zero as long as ρ ≤ ρ_min. For example, the curve labeled '1' quits the x-axis for
ρ ≥ ρ_min = 0.06.
The left plot of Fig. 4 corresponds to the RLS problem with ρ = 1, for the six values
of ε_A. This plot shows the various functions f defined in (20). For each value of
ε_A, the optimal argument of f (hence the RLS solution) is obtained by minimizing the function f.
The three smallest values of ε_A induce functions f (as defined in (20)) that attain their minimum
below 1. For the three others, the optimal value is 1. This means that ρ_min is smaller
than 1 in the first three cases and larger than 1 in the other cases. This is confirmed in
Table 1.
7.5. Robust identification. Consider the following system identification prob-
lem. We seek to estimate the impulse response h of a discrete-time system from its input
u and output y. Assuming that the system is single-input and single-output, linear, and
of order m, and that u is zero for negative time indices, y, u and h are related by the
Fig. 4. The left plot shows the function f (as defined in (20)) for the six values of ε_A (for ρ = 1).
The right plot gives the optimal RLS residuals versus ρ for the same values of ε_A. The labels
correspond to the values of ε_A given in Table 1.
convolution equations Uh = y, where h collects the impulse-response coefficients
and U is a lower triangular Toeplitz matrix whose first column is u. Assuming u and y are
known exactly leads to a linear equation in h, which can be computed with standard
LS.
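A minimal sketch of this nominal identification step, with synthetic placeholder data: build the lower triangular Toeplitz matrix U from u and recover h by LS.

```python
import numpy as np
from scipy.linalg import toeplitz

def identify_impulse_response(u, y, m):
    """Nominal LS identification: solve U h = y, where U is lower
    triangular Toeplitz with first column u (input is zero for t < 0)."""
    U = toeplitz(u, np.zeros(m))      # len(u)-by-m, lower triangular Toeplitz
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    return h

# Synthetic example with a known 3-tap impulse response.
rng = np.random.default_rng(3)
h_true = np.array([1.0, 0.5, -0.2])
u = rng.standard_normal(12)
y = toeplitz(u, np.zeros(3)) @ h_true
h_est = identify_impulse_response(u, y, m=3)
```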
In practice, however, both y and u are subject to errors. We may assume, for
instance, that the actual value of y is y + δy, and that of u is u + δu, where δy and δu are
unknown-but-bounded perturbations. The perturbed matrices U, y can then be written as
U + Σ_i δu_i U_i and y + Σ_i δy_i e_i, where e_i is the i-th column of the m × m identity matrix,
and the U_i are lower triangular Toeplitz matrices with first column equal to e_i.
We first assume that the sum of the input and output energies is bounded, that is,
||δ|| ≤ ρ. We address the problem of minimizing, over x, the worst-case residual
over all perturbations δ with ||δ|| ≤ ρ; this is problem (48).
As an example, we consider particular nominal values for u and y.
In Fig. 5, we have shown the optimal worst-case residual and that corresponding to
the LS solution, as given by solving problems (30) and (32), respectively. Since the LS
solution has zero residual (U is invertible), we can prove (and check on the figure) that
the worst-case residual grows linearly with ρ. In contrast, the RLS optimal worst-case
residual has a finite limit as ρ → ∞.
Fig. 5. Worst-case residuals of the LS and Euclidean-norm SRLS solutions for various values of the perturbation
level ρ. The worst-case residual for LS has been computed by solving problem (30), with x
fixed to the LS solution.
We now assume that the perturbation bounds on y; u are not correlated. For
instance, we consider problem (48), with the bound ||δ|| ≤ ρ replaced with separate bounds on ||δy|| and ||δu||_∞.
Physically, the above bounds mean that the output energy and peak input are bounded.
This problem can be formulated as minimizing the worst-case residual (35), with
[A
and \Delta has the following structure:
Here, the symbols \Theta denote dummy elements of \Delta that were added in order to work with
a square perturbation matrix. The above structure corresponds to the set D in (36),
with
Fig. 6. Upper and lower bounds on worst-case residuals for the LS and RLS solutions. The upper bound
for LS has been computed by solving the SDP (38), with x fixed to the LS solution. The lower bounds
correspond to the largest residuals ||U(δ_trial)x − y(δ_trial)|| among 100 trial points δ_trial.
In Fig. 6, we show the worst-case residual vs. ρ, the uncertainty size. We show the
curves corresponding to the values predicted by solving the SDP (43), with x variable
(RLS solution), and x fixed to the LS solution x_LS. We also show lower bounds on the
worst-case, obtained using 100 trial points. This plot shows that, for the LS solution,
our estimate of the worst-case residual is not exact, and the discrepancy grows linearly
with uncertainty size. In contrast, for the RLS solution the estimate appears to be
exact for every value of ρ.
7.6. Robust interpolation. The following example is a robust interpolation
problem that can be formulated as a linear-fractional SRLS problem. For given integers,
we seek a polynomial of a given degree, with coefficient vector x, that
interpolates given points (a_i, b_i), that is, p(a_i) = b_i for each i.
If we assume that the (a_i, b_i) are known exactly, we obtain a linear equation in the unknown
x, with a Vandermonde structure,
which can be solved via standard LS.
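A minimal sketch of this nominal (perturbation-free) Vandermonde fit; the interpolation points below are illustrative, not those of the paper's example:

```python
import numpy as np

def polynomial_ls_fit(a, b, degree):
    """Fit p(t) = x_0 + x_1 t + ... + x_degree t^degree to points (a_i, b_i)
    by solving the Vandermonde least-squares problem V x = b."""
    V = np.vander(a, N=degree + 1, increasing=True)   # V[i, j] = a_i**j
    x, *_ = np.linalg.lstsq(V, b, rcond=None)
    return x

a = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative interpolation abscissae
b = np.array([1.0, 2.0, 0.0, 5.0])
coeffs = polynomial_ls_fit(a, b, degree=3)   # exact interpolation here
```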
Now assume that the interpolation points are not known exactly. For instance, we
may assume that the b_i's are known, while the a_i's depend on perturbation parameters
δ_i that are unknown but bounded.
We seek a robust interpolant, that is, a solution x that minimizes the worst-case residual
over all perturbations δ with ||δ||_∞ ≤ ρ.
The above problem is a linear-fractional SRLS problem. Indeed, it can be shown
that
where
and, for each i,
. a i
. a i
. 1
(Note that det(I − DΔ) ≠ 0, since D is strictly upper triangular.)
In Fig. 7, we have shown the result for a particular set of nominal interpolation points.
The LS solution is very accurate (zero nominal residual: every point is interpolated ex-
actly), but has a (predicted) worst-case residual of 1.7977. The RLS solution trades off
this accuracy (only one point interpolated, and nominal residual of 0.8233) for robustness
(with a worst-case residual less than 1.1573). As ρ → ∞, the RLS interpolation
polynomial becomes more and more horizontal. (This is consistent with the fact that
we allow perturbations on the vector a only.) In the limit, the interpolation polynomial is
the solid line.
Fig. 7. Interpolation polynomials: LS and RLS solutions for ρ = 0.2. The LS solution interpolates the
points exactly, while the RLS one guarantees a worst-case residual error less than 1.1573. For ρ → ∞
the RLS solution is the zero polynomial.
8. Conclusions. This paper shows that several robust least-squares (RLS) problems
with unknown-but-bounded data matrices are amenable to (convex) second-order
cone or semidefinite programming (SOCP or SDP). The implication is that these RLS
problems can be solved in polynomial-time, and efficiently in practice.
When the perturbation enters linearly in the data matrices, and its size is measured
by Euclidean norm, or in a linear-fractional problem with full perturbation matrix \Delta,
the method yields the exact value of the optimal worst-case residual. In the other
cases we have examined (such as arbitrary rational dependence of data matrices on the
perturbation parameters), computing the worst-case residual is NP-complete. We have
shown how to compute, and optimize, using SDP, an upper bound on the worst-case
residual, that takes into account structure information.
In the unstructured case, we have shown that both the worst-case residual and
the (unique) RLS solution are continuous. The unstructured RLS can be interpreted
as a regularization method for ill-conditioned problems. A striking fact is that the
cost of the RLS solution is equal to a small number of least-squares problems arising
in classical Tikhonov regularization approaches. This method provides a rigorous way
to compute the optimal parameter from the data and associated perturbation bounds.
Similar (weighted) least-squares interpretations and continuity results were given for
the structured case.
In our examples, we have demonstrated the use of a SOCP code [27], and a general-purpose
semidefinite programming code, SP [45]. Future work could be devoted to
writing special code that exploits the structure of these problems, in order to further
increase the efficiency of the method. For instance, it seems that, in many problems,
the perturbation matrices are sparse, and/or have special (e.g., Toeplitz) structure.
The method can be used for several related problems.
Constrained RLS. We may consider problems where additional (convex) constraints
are added on the vector x. (Such constraints arise naturally in e.g.,
image processing). For instance, we may consider problem (1) with an addi-
tional linear (resp. quadratic convex) constraint (Cx) i - 0,
To solve such a problem, it suffices
to add the related constraint to corresponding SOCP or SDP formulation.
(Note that the SVD approach of x3.3 fails in this case.)
ffl RLS problems with other norms. We may consider RLS problems in which the
worst-case residual error in measured in other norms, such as the maximum
ffl Matrix RLS. We may of course, derive similar results when the constant term
b is a matrix. The worst-case error can be evaluated in a variety of norms.
ffl Error-in-Variables RLS. We may consider problems where the solution x is
also subject to uncertainty (due to implementation and/or quantization errors).
That is, we may consider a worst-case residual of the form
are given. We may compute (and optimize) upper bounds
on the above quantity using SDP. This subject is examined in [25].
Acknowledgments
. The authors wish to thank the anonymous reviewers for their
precious comments, which led to many improvements over the first version of this paper.
We are particularly indebted to the reviewer who pointed out the SOCP formulation
for the unstructured problem. We also thank G. Golub and R. Tempo for providing
us with some related references, and A. Sayed for sending us the preliminary draft [5].
The paper has also benefited from many fruitful discussions with S. Boyd, F. Oustry,
B. Rottembourg and L. Vandenberghe.
A. Proof of Theorem 4.1. Introduce the eigendecomposition of F and a related
decomposition for g:
writes
at the optimum, then there exists a nonzero vector
u such that (-I \Gamma F From inequality (29), we conclude that g T In
other words, - g)-controllable, and u is an eigenvector that proves this
uncontrollability. Using in (49), we obtain the optimal value of - in this case:
Thus, the worst-case residual can be computed as claimed in the theorem.
For every pair (-) that is optimal for problem (29), we can compute a worst-case
perturbation as follows. Define
We have - at the optimum if and only -
(that is, g)-controllable and the function f defined
in (31) satisfies
df
d-
0:
In this case, the optimal - satisfies
that is, kffi 1. Using this and (50), we obtain
F
This proves that ffi 0 is a worst-case perturbation.
at the optimum, then
df
d-
which implies that kffi 0 k - 1. Since - max there exists a vector u such that
loss of generality, we may assume that the vector
We have
F
This proves that ffi defined above is a worst-case perturbation.
In both cases seen above (- equals - worst-case perturbation is
any vector ffi such that
(We have just shown that the above equations always have a solution ffi when - is
optimal.) This ends our proof.
B. Proof of Lemma 5.1. We use the following result, due to Nemirovsky [32].
Lemma B.1. Let \Gamma(p; a) be a scalar function of positive integer p and p-dimensional
vector a, such that, first, \Gamma is well-defined and takes rational values from (0; kak \Gamma2 )
for all positive integers p and all p-dimensional vectors a with kak - 0:1, and second,
the value of this function at a given pair (p; a) can be computed in time polynomial in
p and the length of the standard representation of the (rational) vector a. Then the
problem
Given an integer p - 0 and a 2 R p , kak - 0:1, with rational positive
entries, determine whether
kffik 1-1
is NP-complete. Besides this, either (51) holds, or
kffik 1-1
where d(a) is the smallest common denominator of the entries of a.
To prove our result, it suffices to show that for some appropriate function \Gamma satisfying
to the conditions of lemma B.1, we can reduce, for any given p; a, problem
to ours, in polynomial time. Set
2a T a
(a T a
This function satisfies all requirements of lemma B.1, so problem P \Gamma (p; a) is NP-hard.
Given rational positive entries, set A, b, D and x as follows.
First, set D to be the set of diagonal matrices of R p\Thetap . Set
Finally, set A, b as in (34) and 1, the worst-case
residual for this problem is
r D (A; b;
kffik 1-1
kffik 1-1
Our proof is now complete.
C. Proof of Theorem 5.3. In this section, we only prove theorem 5.3. The proof
of theorem 5.2 follows the same lines. We start from problem (43), the dual of which is
the maximization of 2(b T w +R T
b u) subject to
and the linear constraints
A
G; TrG(Y
Since both primal and dual problems are strictly feasible, every primal and dual
feasible points are optimal if and only if ZF(-; G; is defined in (38)
(see [46]). One obtains, in particular,
G.
Using equation (58) and (55), we obtain
which implies that from equality of the primal and dual objectives (the trivial
case can be easily ruled out).
Assume that the matrix \Theta defined in (39) is positive-definite at the optimum. From
equations (57)-(59), we deduce that the dual variable Z is rank-one:
w
Using (57) and (59), we obtain
\Theta
From (55), it is easy to derive the expression (42) for the optimal x in the case when
\Theta ? 0 at the optimum, and RA is full-rank.
We now show that the upper bound is exact at the optimum in this case. If we use
condition (54), and the expression for Z; V deduced from (53), we obtain
This implies that there exists I, such that
\Theta ? 0, a straightforward application of lemma 2.3 shows that det(I \Gamma D\Delta) 6= 0, so we
obtain
(from (61)) and
(from (53)), we have 1=2. We can now compute
(from (55) and (60)).
Therefore,
(from
We obtain which proves that the matrix \Delta is a worst-case
perturbation.
--R
An efficient newton barrier method for minimizing a sum of euclidean norms
Pertubed optimization under the second order regularity hypothesis
Linear Matrix Inequalities in System and Control Theory
A new linear least-squares type model for parameter estimation in the presence of data uncertainties
Computing the real structured singular value is NP-hard
Structured total least squares and L 2 approximation problems
Image reconstruction and restoration: overview of common estimation problems
Unifying robustness analysis and system ID
LMITOOL: A front-end for LMI op- timization
Algorithms for the regularization of ill conditioned least-squares problems
Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics
Collinearity and total least squares
Optimization of weighting constant for regularization in least squares system identification
An analysis of the total least squares problem
Quadratically constrained least squares and quadratic prob- lems
The robust generalized least-squares estimator
Regularization methods for large-scale problems
Backward error and condition of structured linear systems
The application of constrained least-squares estimation to image restoration by digital computer
All controllers for the general H1 control problem: LMI existence conditions and state space formulas
Social Sciences
Synth'ese de diagrammes de r'eseaux d'antennes par optimisation convexe
On continuity/discontinuity in robustness indicators
Least squares methods for ill-posed problems with a prescribed bound
Several NP-hard problems arising in robust stability analysis
Interior point polynomial methods in convex programming: Theory and applications
and application of bounded parameter models
Robust solutions to uncertain semidefinite pro- grams
Checking robust nonsingularity is NP-hard
Matrix Anal.
a connection between robust control and identifica- tion
Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations
Solutions of Ill-Posed Problems
The total least squares problem: computational aspects and analysis
Robust estimation techniques in regularized image restora- tion
Robust and Optimal Control
--TR
--CTR
Michele Covell , Sumit Roy , Beomjoo Seo, Predictive modeling of streaming servers, ACM SIGMETRICS Performance Evaluation Review, v.33 n.2, p.33-35, September 2005
Jos F. Sturm , Shuzhong Zhang, On cones of nonnegative quadratic functions, Mathematics of Operations Research, v.28 n.2, p.246-267, May
Alexei R. Pankov , Konstantin V. Siemenikhin, Minimax estimation for singular linear multivariate models with mixed uncertainty, Journal of Multivariate Analysis, v.98 n.1, p.145-176, January 2007
Arvind Nayak , Emanuele Trucco , Neil A. Thacker, When are Simple LS Estimators Enough? An Empirical Study of LS, TLS, and GTLS, International Journal of Computer Vision, v.68 n.2, p.203-216, June 2006
Mohit Kumar , Regina Stoll , Norbert Stoll, Robust Solution to Fuzzy Identification Problem with Uncertain Data by Regularization, Fuzzy Optimization and Decision Making, v.3 n.1, p.63-82, March 2004
Jianchao Yao, Estimation of 2D displacement field based on affine geometric invariance and scene constraints, International Journal of Computer Vision, v.46 n.1, p.25-50, January 2002
Budi Santosa , Theodore B. Trafalis, Robust multiclass kernel-based classifiers, Computational Optimization and Applications, v.38 n.2, p.261-279, November 2007
Dimitris Bertsimas , Dessislava Pachamanova, Robust multiperiod portfolio management in the presence of transaction costs, Computers and Operations Research, v.35 n.1, p.3-17, January, 2008
Juan Liu , Ying Zhang , Feng Zhao, Robust distributed node localization with error management, Proceedings of the seventh ACM international symposium on Mobile ad hoc networking and computing, May 22-25, 2006, Florence, Italy
D. Goldfarb , G. Iyengar, Robust portfolio selection problems, Mathematics of Operations Research, v.28 n.1, p.1-38, February
Pannagadatta K. Shivaswamy , Chiranjib Bhattacharyya , Alexander J. Smola, Second Order Cone Programming Approaches for Handling Missing and Uncertain Data, The Journal of Machine Learning Research, 7, p.1283-1314, 12/1/2006
Ivan Markovsky , Sabine Van Huffel, Overview of total least-squares methods, Signal Processing, v.87 n.10, p.2283-2302, October, 2007
Mung Chiang, Geometric programming for communication systems, Communications and Information Theory, v.2 n.1/2, p.1-154, July 2005 | uncertainty;robustness;ill-conditioned problem;regularization;least-squares problems;second-order cone programming;robust identification;robust interpolation;semidefinite programming |
273585 | Locality of Reference in LU Decomposition with Partial Pivoting. | This paper presents a new partitioned algorithm for LU decomposition with partial pivoting. The new algorithm, called the recursively partitioned algorithm, is based on a recursive partitioning of the matrix. The paper analyzes the locality of reference in the new algorithm and the locality of reference in a known and widely used partitioned algorithm for LU decomposition called the right-looking algorithm. The analysis reveals that the new algorithm performs a factor of $\Theta(\sqrt{M/n})$ fewer I/O operations (or cache misses) than the right-looking algorithm, where $n$ is the order of the matrix and $M$ is the size of primary memory. The analysis also determines the optimal block size for the right-looking algorithm. Experimental comparisons between the new algorithm and the right-looking algorithm show that an implementation of the new algorithm outperforms a similarly coded right-looking algorithm on six different RISC architectures, that the new algorithm performs fewer cache misses than any other algorithm tested, and that it benefits more from Strassen's matrix-multiplication algorithm. | Introduction
. Algorithms that partition dense matrices into blocks and operate
on entire blocks as much as possible are key to obtaining high performance
on computers with hierarchical memory systems. Partitioning a matrix into blocks
creates temporal locality of reference in the algorithm and reduces the number of
words that must be transferred between primary and secondary memories. This paper
describes a new partitioned algorithm for LU-factorization with partial pivoting,
called the recursively-partitioned algorithm. The paper also analyzes the number
of data transfers in a popular partitioned LU-factorization algorithm, the so-called
right-looking algorithm, which is used in LAPACK [1]. The performance characteristics
of other popular partitioned LU-factorization algorithms, in particular Crout
and the left-looking algorithm used in the NAG library [4], are similar to those of the
right-looking algorithm so they are not analyzed.
The analysis of the two algorithms leads to two interesting conclusions. First,
there is a simple system-independent formula for choosing the block size for the right-
looking algorithm which is almost always optimal. Second, the recursively-partitioned
algorithm generates asymptotically less memory traffic between memories than the
right-looking algorithm, even if the block size for the right looking algorithm is chosen
optimally. Numerical experiments indicate that the recursively-partitioned algorithm
generates fewer cache misses and runs faster than the right-looking algorithm.
The recursively-partitioned algorithm computes the LU decomposition with partial
pivoting of an n-by-m matrix while transferring only \Theta(nm
words between primary and secondary memories, where M is the size of the primary
memory. The right-looking algorithm, on the other hand, transfers at least
words. The number of words actually transferred by conventional
algorithms depends on a parameter r, which is not chosen optimally in LAPACK.
(Parts of this research were performed while the author was a postdoctoral fellow at the IBM
T.J. Watson Research Center and a postdoctoral associate at the MIT Laboratory for Computer
Science. The work at MIT was supported in part by ARPA under Grant N00014-94-1-0985.
Author's address: Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304.)
The new algorithm is optimal in the sense that the number of words that
it transfers is asymptotically the same as the number transferred by partitioned (or
blocked) algorithms for matrix multiplication and solution of triangular systems (at
least when the number of columns is not very small compared to the size of primary
memory). The right looking algorithm achieves such performance only when
the matrix is so large that a few rows fill the primary memory.
The recursively-partitioned algorithm has other advantages over conventional
algorithms. It has no block-size parameter that must be tuned in order to
achieve high performance. Since it is recursive, it is likely to perform better when the
memory system has more than two levels, for example, on computer systems with two
levels of cache or with both cache and virtual memory.
To understand the main idea behind the new algorithm, let us look first at the
conventional right-looking LU factorization algorithm. The algorithm decomposes
the input matrix into dn=re blocks of at most r columns. Starting from the leftmost
block of columns, the algorithm iteratively factors a block of r columns using a column-
oriented algorithm. After a block is factored, the algorithm updates the entire trailing
submatrix. The parameter r must be carefully chosen to minimize the number of
words transferred between memories. If r is larger than M=n, many words must
be transferred when a block of columns is factored. If r is too small, many trailing
submatrices must be updated, and most of the updates require the entire trailing
submatrix to be read from secondary memory.
The main insight behind the recursively-partitioned algorithm is that there is no
need to update the entire trailing submatrix after a block of columns is factored.
After factoring the first column of the matrix, the algorithm updates just the next
column to the right, which enables it to proceed. Once the second column is factored,
we must apply the updates from the first two columns before we can proceed. The
algorithm updates two more columns and proceeds. Once four columns are factored,
they are used to update four more, and so on. In other words, the algorithm does not
look all the way to the right every time a few columns are factored. As we shall see
below, this short-sighted approach pays off.
From another point of view, the new algorithm is a recursive algorithm. We
know that the larger r (the number of columns in a block), the smaller the number
of data transfers required for updating trailing submatrices. The algorithm therefore
chooses the largest possible size, m=2. If that many columns do not fit within
primary memory, they are factored recursively using the same algorithm, rather than
being factored using a naive column oriented algorithm. Once the left m=2 columns
are factored, they are used to update the right m/2 columns, which are subsequently factored.
The rest of the paper is organized as follows. Section 2 describes and analyzes the
recursively-partitioned algorithm. Section 3 analyzes the block-column right-looking
algorithm. The actual performance of LAPACK's right-looking algorithm and the
performance of the recursively-partitioned algorithm are compared in Section 4 on
several high-end workstations. Section 5 concludes the paper with a discussion of the
results and of related research.
2. Recursively-Partitioned LU Factorization. The recursively-partitioned
algorithm is not only more efficient than conventional partitioned algorithms, but it
is also simpler to describe and analyze. This section first describes the algorithm, and
then analyzes the complexity of the algorithm in terms of arithmetic operations and
in terms of the amount of data transferred between memories during its execution.
The algorithm. The algorithm factors an n-by-m matrix A into an n-by-n permutation
matrix P, an n-by-m unit lower triangular matrix L (that is, L's upper triangle
is all zeros), and an m-by-m upper triangular matrix U, such that PA = LU. A is
treated as a block matrix
A = [A11 A12; A21 A22],
where A11 is a square matrix of order m/2.
1. If m = 1, factor P1 [A11; A21] = [L11; L21] U11 (that is, perform pivoting and scaling)
and return.
2. Else, recursively factor P1 [A11; A21] = [L11; L21] U11.
3. Permute [A'12; A'22] <- P1 [A12; A22].
4. Solve the triangular system L11 U12 = A'12 for U12.
5. A'' <- A'22 - L21 U12.
6. Recursively factor P2 A'' = L22 U22.
7. Permute L'21 <- P2 L21.
8. Return
P A = [L11 0; L'21 L22] [U11 U12; 0 U22],
where the overall permutation P combines P1 and P2.
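The following Python/NumPy sketch mirrors the recursive factorization just described, with the factors packed in place and pivoting recorded as a row permutation; it is an illustrative reimplementation, not the author's Fortran 90 code:

```python
import numpy as np

def recursive_lu(A):
    """Recursively partitioned LU with partial pivoting (illustrative sketch).

    Returns (perm, LU) such that A[perm] == L @ U, where L is the unit
    lower triangular matrix stored strictly below the diagonal of LU and
    U is the upper triangular part of LU.
    """
    A = np.asarray(A, dtype=float).copy()
    n, m = A.shape
    perm = np.arange(n)

    def factor(c, w):
        # Factor the block A[c:, c:c+w] in place.
        if w == 1:
            p = c + int(np.argmax(np.abs(A[c:, c])))  # pivot row
            if p != c:                                # swap whole rows
                A[[c, p], :] = A[[p, c], :]
                perm[[c, p]] = perm[[p, c]]
            A[c + 1:, c] /= A[c, c]                   # scale multipliers
            return
        k = w // 2
        factor(c, k)                                  # factor left half
        L11 = np.tril(A[c:c + k, c:c + k], -1) + np.eye(k)
        # Triangular solve for U12; the row swaps inside the left half were
        # applied to whole rows, so A12 is already permuted.
        A[c:c + k, c + k:c + w] = np.linalg.solve(L11, A[c:c + k, c + k:c + w])
        # Schur complement update of the trailing block of the right half.
        A[c + k:, c + k:c + w] -= A[c + k:, c:c + k] @ A[c:c + k, c + k:c + w]
        factor(c + k, w - k)                          # factor right half

    factor(0, m)
    return perm, A

# Sanity check on a random square example.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
perm, LU = recursive_lu(B)
L = np.tril(LU, -1) + np.eye(6)
U = np.triu(LU)
assert np.allclose(B[perm], L @ U)
```

Because the pivot swaps are applied to entire rows, steps 3 and 7 of the description (permuting the right-hand blocks and the previously computed L columns) happen implicitly.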
Complexity Analysis. It is not hard to see that the algorithm is numerically equivalent
to the conventional column-oriented algorithm. Therefore, the algorithm has the
same numerical properties as the conventional algorithm, and it performs the same number
of floating point operations, about nm² − m³/3 for n ≥ m. In fact, all the variants of the
LU-factorization algorithm discussed in this paper are essentially different schedules
for the same algorithm. That is, they all have the same data-flow graph.
We now analyze the number of words that must be transferred between the primary
and secondary memories for n - m. The size of primary memory is denoted by
M . For ease of exposition, we assume that the number of columns is a power of two.
We denote the number of words that the algorithm must transfer between memories
by IORP (n; m). We denote the number of words that must be transferred to solve an
n-by-n triangular linear system with m right hand sides where the solution overwrites
the right hand side by IOTS (n; m). We denote the number of words that must be
transferred to multiply an n-by-m matrix by an m-by-k matrix and add the
result to an n-by-k matrix by IOMM (n; m; k).
Since the factorization algorithm uses matrix multiplication and solution of triangular
linear system as subroutines, the number of I/O's it performs depends on the
number of I/O's performed by these subroutines. A partitioned algorithm for solving
triangular linear systems performs at most
IOTS (m; m) -
M=3
M=3
M=3
I/O's. The actual number of I/O's performed is smaller, since the real crossover
point is
M=2, not
M=3. Incorporating the improved bound into the analysis
complicates the analysis with little effect on the final outcome. The number of I/O's
performed by a standard matrix-multiplication algorithm is at most
IOMM (n; n; m) -
M=3
M=3
The bound for matrix multiplication holds for all values of n - m. The analysis
here assumes the use of a conventional triangular solver and matrix multiplication,
rather than so-called "fast" or Strassen-like algorithms. The asymptotic bounds for
fast matrix-multiplication algorithms are better [5].
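For intuition, the blocked (tiled) schedule behind bounds such as (2.2) re-reads each operand tile once per tile of the result, so the traffic scales like nmk/t when the tile size t is chosen so that three t-by-t tiles fit in primary memory (t on the order of √(M/3)). A minimal sketch, with the tile size as a free parameter:

```python
import numpy as np

def tiled_matmul_add(C, A, B, t):
    """C += A @ B computed tile by tile.

    With tile size t chosen so that three t-by-t tiles fit in fast memory,
    each element of A and B is read roughly once per tile of C, giving
    on the order of n*m*k / t word transfers.
    """
    n, m = A.shape
    m2, k = B.shape
    assert m == m2 and C.shape == (n, k)
    for i in range(0, n, t):
        for j in range(0, k, t):
            for p in range(0, m, t):
                C[i:i + t, j:j + t] += A[i:i + t, p:p + t] @ B[p:p + t, j:j + t]
    return C

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 6))
B = rng.standard_normal((6, 10))
C = np.zeros((8, 10))
tiled_matmul_add(C, A, B, t=4)
assert np.allclose(C, A @ B)
```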
We analyze the recursively-partitioned algorithm using induction. Initially, the
analysis does not take into account the permutation of rows that the algorithm
performs. We shall return to these permutations later in this section. The recurrence
that governs the total number of words that are transferred by the algorithm combines
the costs of the two recursive calls with those of the triangular solve and the matrix
multiplication in steps 4 and 5.
We first prove by induction that if 1/2 ≤ m/2 ≤ √(M/3), then IO_RP(n, m) ≤ 2nm(1 +
lg m). The base case m = 1 is clearly true. Assuming that the claim is true for m/2, for
2:5m 2+ 3nm
We now prove by induction that
IORP (n; m) - 2nm
mp
M=3
for m=2 -
M=3. The claim is true for the base case
M=3 since m=2 -
M=3 and since m=(2
1. Assuming that the claim is true for m=2, we have
IORP (n; m) - 2nm
mp
M=3
M=3
M=3
mp
M=3
M=3
nm 2p
M=3
M=3
mp
M=3
nm 2p
M=3
M=3
mp
M=3
nm 2p
M=3
mp
M=3
To bound the number of word transfers due to permutations we compute the number
of permutations a column undergoes during the algorithm. Each column is permuted
either in the factorization in Step 2 and in the permutation in Step 7, or in the
permutation in Step 3 and in the factorization in Step 6. It follows that each column
is permuted 1 + lg m times. If each word is brought from secondary memory, then
the total number of I/O's required for permutations is at most 2nm(1 + lg m). This
bound can be achieved by reading entire columns to primary memory
and permuting them in primary memory.
The following theorem summarizes the main result of this section.
Theorem 2.1. Given a matrix multiplication subroutine whose I/O performance
satisfies Equation (2.2) and a subroutine for solving triangular linear systems whose
I/O performance satisfies Equation (2.1), the recursively-partitioned LU decomposition
algorithm running on a computer with M words of primary memory computes
the LU decomposition with partial pivoting of an n-by-m matrix using at most
IORP (n; m) - 2nm
mp
M=3
I/O's.
3. Analysis of The Right-Looking LU Factorization. To put the performance
of the recursively-partitioned algorithm in perspective, we now analyze the
performance of the column-block right-looking algorithm. We first describe the algorithm
and then analyze the number of data transfers, or I/O's, it performs. While
the bounds we obtain are asymptotically tight, we focus on lower bounds in terms of
the constants. The number of I/O's required during the solution of triangular linear
systems is smaller than the number of I/O's required during the updates to the trailing
submatrix (a rank r update to a matrix), so we ignore the triangular solves in the
analysis.
Right-Looking LU. The algorithm factors an n-by-m matrix A such that PA = LU;
for simplicity, we assume that r divides m. The algorithm factors r columns in every iteration. In the kth
iteration we decompose A into
PA = [A11 A12 A13; A21 A22 A23; A31 A32 A33],
where A11 is a square matrix of order (k-1)r
and A22 is a square matrix of order r.
In the kth iteration the algorithm performs the following steps:
1. Factor Pk [A22; A32] = [L22; L32] U22.
2. Permute [A23; A33] <- Pk [A23; A33].
3. Permute [L21; L31] <- Pk [L21; L31].
4. Solve the triangular system L22 U23 = A23 for U23.
5. Update A33 <- A33 - L32 U23.
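A matching NumPy sketch of this block-column right-looking factorization; it is an illustrative reimplementation, not LAPACK's DGETRF, and the block size r is a free parameter:

```python
import numpy as np

def right_looking_lu(A, r):
    """Block-column right-looking LU with partial pivoting (sketch).

    Factors r columns at a time with an unblocked panel factorization, then
    applies a triangular solve and a rank-r update to the trailing
    submatrix.  Returns (perm, LU) such that A[perm] == L @ U.
    """
    A = np.asarray(A, dtype=float).copy()
    n, m = A.shape
    perm = np.arange(n)

    def panel(c, w):
        # Unblocked, column-by-column factorization of A[c:, c:c+w].
        for j in range(c, c + w):
            p = j + int(np.argmax(np.abs(A[j:, j])))
            if p != j:
                A[[j, p], :] = A[[p, j], :]   # swap whole rows (steps 2-3)
                perm[[j, p]] = perm[[p, j]]
            A[j + 1:, j] /= A[j, j]           # multipliers
            # update only the remaining columns of the panel
            A[j + 1:, j + 1:c + w] -= np.outer(A[j + 1:, j], A[j, j + 1:c + w])

    for c in range(0, m, r):
        w = min(r, m - c)
        panel(c, w)                            # step 1 (pivots applied to all rows)
        if c + w < m:
            L22 = np.tril(A[c:c + w, c:c + w], -1) + np.eye(w)
            # step 4: triangular solve for the U block right of the panel
            A[c:c + w, c + w:] = np.linalg.solve(L22, A[c:c + w, c + w:])
            # step 5: rank-w update of the entire trailing submatrix
            A[c + w:, c + w:] -= A[c + w:, c:c + w] @ A[c:c + w, c + w:]
    return perm, A

rng = np.random.default_rng(1)
B = rng.standard_normal((7, 7))
perm, LU = right_looking_lu(B, r=3)
L = np.tril(LU, -1) + np.eye(7)
U = np.triu(LU)
assert np.allclose(B[perm], L @ U)
```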
The number of I/O's required to factor an n-by-r matrix using the column-by-
column algorithm is
nr 2when M - nr=2, but only
when M - nr. To simplify the analysis, we ignore the range of M in which more than
half the matrix fits within primary memory but less than the entire matrix. (Using
one level of recursion leads to Θ(nr) I/O's in this range.) We use the facts that for
M=3
M=3
rs if r -
M=3
and that for r - s - t
2ts rs
M=3
trs
M=3
2ts if r -
M=3
(3.
The bound 2ts rs is an underestimate when M ! rs. We ignore this small
slack in the analysis.
The number of I/O's the algorithm performs depends on the relation of r to the
dimensions of the matrix and to the size of memory. If r is so small that M - nr, then
the updates to the trailing submatrix dominate the number of I/O's the algorithm
performs. The updates to the trailing submatrix require at least
I/O's. In particular, the first m=2r updates require at least
If r is larger, factoring the m=r blocks of r columns requires at least
r
nr 2= nmrI/O's. The number of I/O's required for the rank-r updates depends on the value of
r. If M=n - r -
M=3, then the total number of I/O's performed by the rank-r
updates is at least
Therefore, the number of I/O's performed by the algorithm is at least
nmr
which is minimized at
For m, the optimal value of r lies between
(The exact value might deviate slightly from this range, since the expression we derived
for the number of I/O's is only a lower bound). Substituting the optimal value of r,
we find that the algorithm performs at least
nm 1:5 \Gamma4
nm 1:5
I/O's in this range. If
then the value performance
than
m. If
M=3, then the value
M=3 yields better performance than
m.
If r is yet larger, r -
M=3, then the rank-r updates require
I/O's. In particular, the first m=2r updates require at least
M=3
nm 2p
M=3
M=3
nm 2p
M=3
I/O's. The total number of I/O's in this range, including both the updates and the
factoring of blocks of columns, is therefore at least
nm 2p
M=3
M=3;M=n. The number of I/O's is minimized by choosing the smallest
possible
M=3.
If the matrix is not very large compared to the size of main memory, n 2 =3 - M ,
it is also possible to choose r such that
M=n. In this case, the total
number of I/O's is at least
nm 2p
M=3
The analysis can be summarized as follows. A value of r close to max(M/n, √m)
is optimal for almost all cases. The only exception is for truly huge matrices, where
√(M/3) < √m. For such matrices,
r = √(M/3) is better than r = √m. Combining the
results, we obtain the following theorem.
Theorem 3.1. Given a matrix multiplication subroutine whose I/O performance
satisfies Equation (3.1) and a subroutine for solving triangular linear systems whose
I/O performance satisfies Equation (3.2), the right-looking LU decomposition algorithm
running on a computer with M words of primary memory computes the LU
decomposition with partial pivoting of an n-by-m matrix using at least
IORL (n; m) -!
M=3
I/O's.
The first case, r = M/n, leads to better performance only when more than √m
columns fit within primary memory. Although these are lower bounds, they are
asymptotically tight. The value 1/4 is a lower bound on the actual constant, which
is higher than that.
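A small helper capturing this rule of thumb; it is a sketch of the summary above, with M measured in matrix elements that fit in primary memory and the rounding an arbitrary choice:

```python
import math

def right_looking_block_size(n, m, M):
    """Heuristic block size r for the right-looking algorithm, following
    the analysis: r ~ max(M/n, sqrt(m)); for truly huge matrices, where
    sqrt(m) exceeds sqrt(M/3), use sqrt(M/3) in place of sqrt(m)."""
    r = max(M / n, min(math.sqrt(m), math.sqrt(M / 3)))
    return max(1, int(round(r)))

# Example: about 8 MB of primary memory holds roughly 1e6 double words.
print(right_looking_block_size(n=2000, m=2000, M=1_000_000))
```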
4. Experimental Results. We have implemented and tested the recursively-
partitioned algorithm 1 . The goal of the experiments was to determine whether the
recursively partitioned algorithm is more efficient than the right-looking algorithm in
practice. The results of the experiments clearly show that the recursively-partitioned
algorithm performs less I/O and is faster, at least on the computer on which
the experiments were conducted.
The results of the experiments complement our analysis of the two algorithms.
The analysis shows that the recursively-partitioned algorithm performs less I/O than
the right looking algorithm for most values of n and M . The analysis stops short of
demonstrating that one algorithm is faster than another in three respects. First, the
bounds in the analysis are not exact. Second, the analysis counts the total number
Our Fortran 90 implementation is available online by anonymous ftp from theory.lcs.mit.edu
as /pub/people/sivan/dgetrf90.f. The code can be compiled by many Fortran 77 compilers, including
compilers from IBM, Silicon Graphics, and Digital, by removing the RECURSIVE keyword
and using a compiler option that enables recursion (see [11] for details).
of I/O's in the algorithm, but the distribution of the I/O within the algorithm is
significant. Finally, the analysis uses a simplified model of a two-level hierarchical
memory that does not capture all the subtleties of actual memory systems. The
experiments show that even though our analysis is not exact in these respects, the
recursively-partitioned algorithm is indeed faster.
Three sets of experiments are presented in this section. The first set presents and
analyzes in detail experiments on IBM RS/6000 workstations. The goal of this set of
experiments is to establish that the recursively-partitioned algorithm is faster than the
right-looking algorithm. The second set of experiments show, in less detail, that the
recursively-partitioned algorithm outperforms LAPACK's right-looking algorithm on
a wide range of architectures. The goal of the second set of experiments is to establish
the robustness of the performance of the recursively-partitioned algorithm. The third
set of experiments shows that using Strassen's matrix multiplication algorithm speeds
up the recursively-partitioned algorithm, but does not seem to speed up the right-
looking algorithm.
Some of the technical details of the experiments, such as operating system ver-
sions, compiler versions, and compiler options are omitted from this paper. These
details are fully described in our technical report [11].
Detailed Experimental Analyzes. The first set of experiments was performed
on an IBM RS/6000 workstation with a 66.5 MHz POWER2 processor [14], 128
Kbytes 4-way set associative level-1 data-cache, a 1 Mbytes direct mapped level-2
cache, and a 128-bit-wide main memory bus. The POWER2 processor is capable of issuing
two double-precision multiply-add instructions per clock cycle. Both LAPACK's
right looking LU-factorization subroutine DGETRF and the recursively partitioned
algorithm were compiled by IBM's XLF compiler version 3.2. All the algorithms
used the BLAS from IBM's Engineering and Scientific Subroutine Library (ESSL).
On square matrices we have also measured the performance of the LU-factorization
subroutine DGEF from ESSL. The interface of this subroutine only allows for the
factorization of square matrices. The coding style and the data structures used in the
recursively-partitioned algorithm are the same as the ones used by LAPACK. In par-
ticular, permutations are represented in both algorithms as a sequence of exchanges.
In all cases, the array that contains the matrix to be factored was allocated statically
and aligned on a 16-byte boundary. The leading dimension of the matrix was equal
to the number of rows (no padding).
The performance of the algorithms was assessed using measurements of both running
time and cache misses. Time was measured using the machines real-time clock,
which has a resolution of one cycle. The number of cache misses was measured using
the POWER2 performance monitor [13]. The performance monitor is a hardware sub-system
in the processor capable of counting cache misses and other processor events.
Both the real-time clock and the performance monitor are oblivious to time sharing.
To minimize the risk that measurements are influenced by other processes, we ran
the experiments when no other users used the machine (but it was connected to the
network). We later verified that the measurements are valid by comparing the real-
time-clock measurements with the user time reported by AIX's getrusage system call
on an experiment by experiment basis. All measurements reported here are based on
an average of 10 executions.
We have coded two variants of the recursively-partitioned algorithm. The two
versions differ in the way permutations are applied to submatrices. In one version,
permutations are applied using LAPACK's auxiliary subroutine DLASWP. This sub-
Table
The performance in millions of operations per second (Mflops) and the number of cache misses
per thousand floating point operations (CM/Kflop) of five LU-factorization algorithms on an IBM
RS/6000 Workstation, on square matrices. The figures for LAPACK's DGETRF are those of the
block size r with the best running time, in upright letters, and those of the block size with the smallest
number of cache misses, in italics. The minimum number of cache misses does not generally coincide
with the minimum running time. See the text for a full description of the experiments.
Subroutine Mflops CM/Kflop Mflops CM/Kflop
LAPACK's DGETRF, row exchanges 178, 176 5.81, 5.65 170, 168 5.45, 5.29
Recursively-partitioned, row exchanges 201 3.76 186 4.14
LAPACK's DGETRF, permuting by columns 201, 199 2.94, 2.81 198, 195 3.11, 3.02
Recursively-partitioned, permuting by columns 222 1.61 223 1.59
ESSL's DGEF 228 2.15 221 3.42
routine, which is also used by LAPACK's right-looking algorithm, permutes the rows
of a submatrix by exchanging rows using the vector exchange subroutine DSWAP, a
level-1 BLAS. The second version permutes the rows of the matrix by applying the
entire sequence of exchanges to one column after another. The difference amounts to
swapping the inner and outer loops. This change was suggested by Fred Gustavson.
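The two strategies differ only in loop order; a small sketch of both, where ipiv is a LAPACK-style sequence of row exchanges invented for the example:

```python
import numpy as np

def apply_pivots_by_rows(A, ipiv):
    """Row-exchange order (as in DLASWP): for each pivot, swap two full
    rows.  When the matrix is stored by columns, each swap touches the two
    rows with a large stride between consecutive elements."""
    for i, p in enumerate(ipiv):
        if p != i:
            A[[i, p], :] = A[[p, i], :]
    return A

def apply_pivots_by_columns(A, ipiv):
    """Column-permuting order: apply the whole exchange sequence to one
    column at a time, giving stride-1 access within each column."""
    for j in range(A.shape[1]):
        col = A[:, j]
        for i, p in enumerate(ipiv):
            if p != i:
                col[i], col[p] = col[p], col[i]
    return A

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))
ipiv = np.array([3, 1, 5, 3, 4, 5])       # hypothetical exchange sequence
B = apply_pivots_by_rows(A.copy(), ipiv)
C = apply_pivots_by_columns(A.copy(), ipiv)
assert np.allclose(B, C)
```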
The first experiment, whose results are summarized in Table 4.1, was designed
to determine the effects of a complex hierarchical memory system on the partitioned
algorithms. Four facts emerge from the table.
1. The recursively partitioned algorithm performs less cache misses and delivers
higher performance than the right-looking algorithm. ESSL's subroutine
performs less cache misses than LAPACK but more than the recursively-
partitioned algorithm, but it achieves best or close to best performance.
2. Permuting one column at a time leads to less cache misses and faster execution
than exchanging rows. This is true for both the right-looking algorithm
and the recursively-partitioned algorithm. This is probably a result of the
advantage of the stride-1 access to the column in the column permuting over
the large stride access to rows in the row exchanges.
3. The performance in term of both time and cache misses of all the algorithms
except the recursively-partitioned with column permuting is worse when the
leading dimension of the matrix is a power of 2 than when it is not. The
performance of the recursively-partitioned algorithm with column permuting
improves by less than half a percent. The degradation in performance on a
power of 2 is probably caused by fact that the caches are not fully associative.
4. The running time depends on the measured number of cache misses, but not
completely. This can be seen both from the fact that ESSL's DGEF performs
more cache misses than the recursively partitioned algorithm, but it
is faster, and from the fact that the block size that leads to the minimum
number of cache misses in the DGETRF does not lead to the best running
time. The discrepancy can be caused by several factors that are not mea-
sured, including misses and conflicts in the level-2 cache, TLB misses, and
instruction scheduling. In all four cases in the table the minimum running
time is achieved with a value of r that is higher than the number that leads
to a minimum number of cache misses. For example, DGETRF
with row exchanges performed the least number of cache misses with one block size,
but the fastest running time was achieved with a larger one. This may mean that
Fig. 4.1. The performance in Mflops (on the left) and the number of cache misses per Kflop (on
the right) of LU factorization algorithms on an IBM RS/6000 Workstation. These graphs depict
the performance of the recursively-partitioned (PR) and right-looking (RL) algorithms on square
matrices. The optimal value of r was selected experimentally from powers of 2 between 2 and 256.
The dashed lines represent the performance of the recursively-partitioned algorithms with column
permuting (CP).
the cause of the discrepancy is misses in the level-2 cache, which is larger
than the level-1 cache and therefore may favor a larger block size (since more
columns fit in it).
In summary, the experiment shows that although the implementation details of the
memory system influence the performance of the algorithms, the recursively-parti-
tioned algorithm still emerges as faster than the right-looking one when they are
implemented in a similar way.
The second set of experiments was designed to assess the performance of the
algorithms over a wide range of input sizes. The performance and number of cache
misses of the algorithms are presented in Figure 4.1 on square matrices ranging in
order from 200 to 2000. The level-1 cache is large enough to store a matrix of order 128.
The following points emerge from the experiment.
1. Beginning with matrices of order 300, the recursively-partitioned
algorithm with column permuting is faster than the same algorithm with row
exchanges which is still faster than LAPACK's DGETRF with row exchanges
(we did not measure the performance of DGETRF with column permuting
in this experiment).
2. The performance of DGETRF with optimal block size r and with
essentially the same except at although the optimal block size clearly
leads to a smaller number of cache misses from
3. The recursively-partitioned algorithm performs less cache misses than ESSL's
DGEF on all input sizes, but it is not faster. As in the first experiment,
the experiment itself does not indicate what causes this phenomenon. We
speculate that it is caused by better instruction scheduling or fewer misses in
the level-2 cache.
The next experiment was designed to determine the sensitivity of the performance
of the right-looking algorithm to the block size r. We used the column permuting
strategy which proved more efficient in the previous experiments. The experiment
consists of running the algorithm on a range of block sizes on a square matrix of order
1007 and on a rectangular 62500-by-64 matrix. The factorization of a rectangular
Mflops
Block Size r
Cache
Reloads
Per
KFlop
Block Size r
Fig. 4.2. The performance in Mflops (on the left) and the number of cache misses per Kflop
(on the right) of the right-looking algorithm with column permuting with as a function of the block
size r. The order of the square matrix used is 1007. Note that the y-axes do not start from
zero.
Block Size r
Reloads
Per
KFlop
Block Size r
Fig. 4.3. The performance in Mflops (on the left) and the number of cache misses per Kflop
(on the right) of the right-looking algorithm with column permuting with as a function of the block
size r. The dimensions of the matrix are 62500 by 64. For comparison, the performance of the
recursively partitioned algorithm on this problem is 118 Mflops and 11:03 CM/Kflop.
matrix with arises as a subproblem in out-of-core LU factorization algorithms
that factor blocks of columns that fit within core. The specific dimensions of the
matrices were chosen so as to minimize the effects of conflicts in the memory system
on the results. The results for shown in Figures 4.2 show that the
minimum number of cache misses occurs at which is higher than
and that the best performance is achieved with an even higher value of r, 55. The
performance is not very sensitive to the choice of r, however, and all values between
about 50 and 70 yield essentially the same performance, 201 Mflops. The results for
matrices, shown in Figures 4.3, show that the minimum number of cache
misses occur at and the best performance occurs at happens
to coincide exactly with
m. The sensitivity to r is greater here than in the square
case, especially below the optimal value.
The last experiment in this set, presented in Figure 4.4, was designed to determine
whether the discrepancy between the optimal block size in terms of level-1 cache misses
LOCALITY OF REFERENCE IN LU DECOMPOSITION 13
Mflops
Block Size r
Cache
Reloads
Per
KFlop
Block Size r
Fig. 4.4. The performance in Mflops (on the left) and the number of cache misses per Kflop (on
the right) of the right-looking algorithm with column permuting with as a function of the block size r.
The order of the square matrix used is 1007. The machine used here has a bigger level-1 cache
and no level-2 cache than the machine used in all the other experiments. Compare to Figure 4.2.
For comparison, the performance of the recursively-partitioned algorithm on this problem on this
machine is 229 Mflops and 0.650 CM/Kflop.
and the optimal block size in terms of running time was caused by the level-2 cache.
The experiment repeats the last experiment for square matrices of order 1007, except
that the experiment was conducted on a machine with a 256-bit-wide main memory
bus, 256 Kbytes level-1 cache, and no level-2 cache. The two machines are identical
in all other respects. There is a discrepancy in optimal block sizes in Figure 4.4,
but it is smaller than the discrepancy in Figure 4.2. The experiment shows that the
discrepancy is not caused solely by the level-2 cache. It is not possible to determine
whether the smaller discrepancy in this experiment is due to the lack of level-2 cache
or to the larger level-1 cache.
Robustness Experiments. The second set of experiments show that the performance
advantage of the recursively partitioned algorithm, which was demonstrated
by the first set of experiments, is not limited to a single computer architecture. The
experiments accomplish this goal by showing that the recursively partitioned algorithm
outperforms the right looking algorithm on a wide range of architectures.
All the experiments in this set compare the performance of the recursively partitioned
algorithm with the performance of LAPACK's right-looking on two sizes of
square matrices, when the larger matrices do not fit
within main memory). These sizes were chosen so as to minimize the impact of cache
associativity on the results. Each measurement reported represents the average of the
best 5 out of 10 runs, to minimize the effect of other processes in the system. The
block size for the right-looking algorithm was LAPACK's default
We used the following machine configurations:
ffl A 66.5 MHz IBM RS/6000 workstation with a POWER2 processor, 128
Kbytes 4-way set associative data-cache, a 1 Mbytes direct mapped level-
cache. and a 128-bit-wide bus. We used the BLAS from IBM's ESSL.
ffl A 25 MHz IBM RS/6000 workstation with a POWER processor, 64 Kbytes
4-way set associative data-cache, and a 128-bit-wide bus. We used the BLAS
from IBM's ESSL.
ffl A 100 MHz Silicon Graphics Indy workstation with a MIPS R4600/R4610
14 S. TOLEDO
Table
The running time in seconds of LU factorization algorithms on several machines. For each
machine and each matrix order, the table shows the running times of the recursively-partitioned
(RP) algorithm and the right-looking (RL) algorithm with row exchanges and column permutations.
Some measurements are not available and marked as N/A because the amount of main memory is
insufficient to factor the larger matrix in core. See the text for a full description of the experiments.
Row Column Row Column
Exchanges Pivoting Exchanges Pivoting
Machine RL RP RL RP RL RP RL RP
IBM POWER 22.81 19.07 17.87 16.86 146.1 143.4 135.8 129.7
CPU/FPU pair, a 16 Kbytes direct mapped data cache, and a 64-bit-wide
bus. We used the SGI BLAS. This machine has only 32 Mbytes of main
memory, so the experiment does not include matrices of order
ffl A 250 MHz Silicon Graphics Onyx workstation with 4 MIPS R4400/R4010
CPU/FPU pairs, a 16 Kbytes direct mapped data cache per processor, a
4 Mbytes level-2 cache per processor, and a 2-way interleaved main memory
system with a 256-bit-wide bus. The experiment used only one processor.
We used the SGI BLAS.
ffl A 150 MHz DEC 3000 Model 500 with an Alpha 21064 processor, 8 Kbytes
direct mapped cache, and a 512 Kbytes level-2 cache. We used the BLAS from
DEC's DXML for IEEE floating point. A limit on the amount of physical
memory allocated to a process prevented us from running the experiment on
matrices of order
ffl A 300 MHz Digital AlphaServer with 4 Alpha 21164 processors, each with an
8 Kbytes level-1 data cache, a 96 Kbytes on-chip level-2 cache, and a 4 Mbytes
level-2 cache. The experiment used only one processor. We used the BLAS
from DEC's DXML for IEEE floating point.
The results, which are reported in Table 4.2, show that the recursively partitioned
algorithm consistently outperforms the right-looking algorithm. The results also show
that permuting columns is almost always faster than exchanging rows.
Experiments using Strassen's Algorithm. Performing the updates of the
trailing submatrix using a variant of Strassen's algorithm [10] improved the performance
of the recursively partitioned algorithm. We replaced the call to DGEMM,
the level-3 BLA subroutine for matrix multiply-add by a call to DGEMMB, a public
domain implementation 2 of a variant of Strassen algorithm [3]. (Replacing the calls to
DGEMM by calls to a Strassen matrix-multiplication subroutine in IBM's ESSL gave
similar results). DGEMMB uses Strassen's algorithm only when all the dimensions
of the input matrices are greater than a machine-dependent constant. The authors of
DGEMMB set this constant to 192 for IBM RS/6000 workstations.
In the recursively-partitioned algorithm with column permuting, the replacement
Available online from http://www.netlib.org/linalg/gemmw.
LOCALITY OF REFERENCE IN LU DECOMPOSITION 15
of DGEMM by DGEMMB reduced the factorization time on the POWER2 machine
to 2:99 seconds for 1007 and to 22:18 seconds for 2014. The factorization
times with the conventional matrix multiplication algorithm, reported in the first line
of
Table
4.2, are 3:05 and 23:45 seconds. The running time was reduced from 182:7 to
166:8 seconds on a matrix of order 4028. The change would have no effect on the
right-looking algorithm, since in all the matrices it multiplies at least one dimension
is r which was smaller than 192 in all the experiments.
A similar experiment carried out by Bailey, Lee, and Simon [2] showed that
Strassen's algorithm can accelerate the LAPACK's right-looking LU factorization on
a Cray Y-MP. The largest improvements in performance, however, occured when large
values of r were used. The fastest factorization of a matrix of order
example, was obtained with Such a value is likely to cause poor performance
on machines with caches. (The Cray Y-MP has no cache.) On the IBM POWER2 ma-
chine, which has caches, increasing r from 64 to 512 causes the factorization time with
a conventional matrix multiplication algorithm to increase from 30:8 seconds 54 sec-
onds. Replacing the matrix multiplication subroutine by DGEMMB with
reduces the solution time, but by less than 2 seconds.
5. Conclusions. The recursively-partitioned algorithm should be used instead
of the right-looking algorithm because it delivers similar or better performance without
parameters that must be tuned. No parameter to choose means that there is no
possibility of a poor choice, and hence the new algorithm is more robust. Section 4
shows that the performance of the right-looking algorithm can be sensitive to r, and
that the best performance does not always coincide with the block size that causes the
smallest number of cache misses. Choosing r can be especially difficult on machines
with more than two levels of memory. A recursive algorithm, on the other hand, is a
natural choice for hierarchical memory systems with more than two levels.
The recursively partitioned algorithm provides a good opportunity to use a fast
matrix multiplication algorithm, such as Strassen's algorithm. Since a significant
fraction of the work performed by the recursively partitioned algorithm is used to
multiply large matrices, the benefit of using Strassen's algorithm can be large. The
right-looking algorithm performs the same work by several multiplications of smaller
matrices, so the benefit of Strassen's algorithm should be smaller.
The analysis of the right-looking algorithm in Section 3 shows how the block size
r should be chosen. The value r -
m is optimal with two exceptions. When a single
row is too large to fit within primary memory, a value
M=3 leads to better
performance. When more than
columns fit within primary memory, r should
be set to M=n to minimize memory traffic. The extreme cases are the source of the
difficulty in choosing a good value of r for hierarchical memory systems with more
than two levels. In our experiments, the performance of the right-looking algorithm
on matrices with more rows than columns was very sensitive to the choice of r, but
it was not sensitive on large square matrices.
In the typical cases, when at least one row fits within primary memory, the
right-looking algorithm with an optimal choice of r performs a factor of \Theta(
more data transfers than the recursively partitioned algorithm. In our experiments
this factor led to a significant difference in both the number of cache misses and the
running time.
The conclusion that the value
m is often close to optimal shows that there
is a system-independent way to choose r. In comparison, the model implementation
of ILAENV, LAPACK's block-size-selection subroutine, uses a fixed value, 64,
S. TOLEDO
and LAPACK's User's Guide advises that system-dependent tuning of r could improve
performance. The viewpoint of the LAPACK designers seems to be that r is
a system-dependent parameter whose role is to hide the low bandwidth of the secondary
memory system during the updates of the trailing submatrices. Our analysis
here shows that the true role of r is to balance the number of data transfers between
the two components of the algorithm: the factorization of blocks of columns and the
updates of the trailing submatrices.
Designers of out-of-core LU decomposition codes often propose to use block-
column (or row) algorithms. Many of them propose to choose r = M=n so that
an entire block of columns fits within primary memory [4, 6, 7, 15]. This approach
works well when the columns are short and a large number of them fits within primary
memory, but the performance of such algorithms would be unacceptable when
only few columns fit within primary memory. Some researchers [7, 8, 9] suggest that
algorithms that use less primary memory than is necessary for storing a few columns
might have difficulty implementing partial pivoting. The analysis in this paper shows
that it is possible to achieve a low number of data transfers even when a single row
or column does not fit within primary memory.
Womble et al. [15] presented a recursively-partitioned LU decomposition algorithm
without pivoting. They claimed, without a proof, that pivoting can be incorporated
into the algorithm without asymptotically increasing the number of I/O's the
algorithm performs. They suggested that a recursive algorithm would be difficult to
implement, so they implemented instead a partitioned left-looking algorithm using
Toledo and Gustavson [12] describe a recursively-partitioned algorithm for out-
of-core LU decomposition with partial pivoting. Their algorithm uses recursion on
large submatrices, but switches to a left-looking variant on smaller submatrices (that
would still not fit within main memory). Depending on the size of main memory,
their algorithm can factor a matrix in 2=3 the amount of time used by an out-of-core
left-looking algorithm with a fixed block size.
6.
Acknowledgments
. Thanks to Rob Schreiber for reading several early versions
of this paper and commenting on them. Thanks to Fred Gustavson and Ramesh
Agarwal for helpful suggestions. Thanks to the anonymous referees for several helpful
comments.
--R
Using Strassen's algorithm to accelerate the solution of linear systems
GEMMW: A portable level 3 BLAS Winograd variant of Strassen's matrix-matrix multiply algorithm
A note on matrix multiplication in a paging environment
Solving systems of large dense linear equations
Matrix computations with Fortran and paging
Gaussian elimination is not optimal
Locality of reference in LU decomposition with partial pivoting
The design and implementation of SOLAR
The POWER2 performance monitor
POWER2: Next generation of the RISC System/6000 family
Beyond core: Making parallel computer I/O practical
--TR
--CTR
Bradley C. Kuszmaul, Cilk provides the "best overall productivity" for high performance computing: (and won the HPC challenge award to prove it), Proceedings of the nineteenth annual ACM symposium on Parallel algorithms and architectures, June 09-11, 2007, San Diego, California, USA
Florin Dobrian , Alex Pothen, The design of I/O-efficient sparse direct solvers, Proceedings of the 2001 ACM/IEEE conference on Supercomputing (CDROM), p.39-39, November 10-16, 2001, Denver, Colorado
Sivan Toledo , Eran Rabani, Very large electronic structure calculations using an out-of-core filter-diagonalization method, Journal of Computational Physics, v.180 n.1, p.256-269, July 20, 2002
Kang Su Gatlin , Larry Carter, Architecture-cognizant divide and conquer algorithms, Proceedings of the 1999 ACM/IEEE conference on Supercomputing (CDROM), p.25-es, November 14-19, 1999, Portland, Oregon, United States
Zizhong Chen , Jack Dongarra , Piotr Luszczek , Kenneth Roche, Self-adapting software for numerical linear algebra and LAPACK for clusters, Parallel Computing, v.29 n.11-12, p.1723-1743, November/December
Bjarne Stig Andersen , Jerzy Waniewski , Fred G. Gustavson, A recursive formulation of Cholesky factorization of a matrix in packed storage, ACM Transactions on Mathematical Software (TOMS), v.27 n.2, p.214-244, June 2001
Rezaul Alam Chowdhury , Vijaya Ramachandran, Cache-oblivious dynamic programming, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.591-600, January 22-26, 2006, Miami, Florida
Matteo Frigo , Volker Strumpen, The memory behavior of cache oblivious stencil computations, The Journal of Supercomputing, v.39 n.2, p.93-112, February 2007
Vladimir Rotkin , Sivan Toledo, The design and implementation of a new out-of-core sparse cholesky factorization method, ACM Transactions on Mathematical Software (TOMS), v.30 n.1, p.19-46, March 2004
Alexander Tiskin, Communication-efficient parallel generic pairwise elimination, Future Generation Computer Systems, v.23 n.2, p.179-188, February 2007
Lars Arge , Michael A. Bender , Erik D. Demaine , Bryan Holland-Minkley , J. Ian Munro, Cache-oblivious priority queue and graph algorithm applications, Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada
Jack Dongarra , Victor Eijkhout , Piotr uszczek, Recursive approach in sparse matrix LU factorization, Scientific Programming, v.9 n.1, p.51-60, January 2001
Siddhartha Chatterjee , Alvin R. Lebeck , Praveen K. Patnala , Mithuna Thottethodi, Recursive Array Layouts and Fast Matrix Multiplication, IEEE Transactions on Parallel and Distributed Systems, v.13 n.11, p.1105-1123, November 2002
Michael A. Bender , Ziyang Duan , John Iacono , Jing Wu, A locality-preserving cache-oblivious dynamic dictionary, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.29-38, January 06-08, 2002, San Francisco, California
Dror Irony , Gil Shklarski , Sivan Toledo, Parallel and fully recursive multifrontal sparse Cholesky, Future Generation Computer Systems, v.20 n.3, p.425-440, April 2004
Isak Jonsson , Bo Kgstrm, Recursive blocked algorithms for solving triangular systemsPart I: one-sided and coupled Sylvester-type matrix equations, ACM Transactions on Mathematical Software (TOMS), v.28 n.4, p.392-415, December 2002
Richard Vuduc , James W. Demmel , Jeff A. Bilmes, Statistical Models for Empirical Search-Based Performance Tuning, International Journal of High Performance Computing Applications, v.18 n.1, p.65-94, February 2004 | LU factorization;gaussian elimination;cache misses;partial pivoting;locality of reference |
273592 | A Survey of Combinatorial Gray Codes. | The term combinatorial Gray code was introduced in 1980 to refer to any method for generating combinatorial objects so that successive objects differ in some prespecified, small way. This notion generalizes the classical binary reflected Gray code scheme for listing n-bit binary numbers so that successive numbers differ in exactly one bit position, as well as work in the 1960s and 1970s on minimal change listings for other combinatorial families, including permutations and combinations.The area of combinatorial Gray codes was popularized by Herbert Wilf in his invited address at the SIAM Conference on Discrete Mathematics in 1988 and his subsequent SIAM monograph [Combinatorial Algorithms: An Update, 1989] in which he posed some open problems and variations on the theme. This resulted in much recent activity in the area, and most of the problems posed by Wilf are now solved.In this paper, we survey the area of combinatorial Gray codes, describe recent results, variations, and trends, and highlight some open problems. | Introduction
One of the earliest problems addressed in the area of combinatorial algorithms was that of
efficiently generating items in a particular combinatorial class in such a way that each item is
generated exactly once. Many practical problems require for their solution the sampling of a
random object from a combinatorial class or, worse, an exhaustive search through all objects
in the class. Whereas early work in combinatorics focused on counting, by 1960, it was clear
that with the aid of a computer it would be feasible to list the objects in combinatorial
classes [Leh64]. However, in order for such a listing to be possible, even for objects of
moderate size, combinatorial generation methods must be extremely efficient. A common
approach has been to try to generate the objects as a list in which successive elements
differ only in a small way. The classic example is the binary reflected Gray code [Gil58,
Gra53] which is a scheme for listing all n-bit binary numbers so that successive numbers
differ in exactly one bit. The advantage anticipated by such an approach is two-fold.
First, generation of successive objects might be faster. Although for many combinatorial
families, a straightforward lexicographic listing algorithm requires only constant average
time per element, for other families, such as linear extensions, such performance has only
been achieved by a Gray code approach [PR94]. Secondly, for the application at hand, it is
likely that combinatorial objects which differ in only a small way are associated with feasible
solutions which differ by only a small computation. For example in [NW78], Nijenhuis and
show how to use a binary Gray code to speed up computation of the permanent. Aside
from computational considerations, open questions in several areas of mathematics can be
posed as Gray code problems. Finally, and perhaps one of the main attractions of the area,
Gray codes typically involve elegant recursive constructions which provide new insights into
the structure of combinatorial families.
The term combinatorial Gray code first appeared in [JWW80] and is now used to refer
to any method for generating combinatorial objects so that successive objects differ in some
pre-specified, usually small, way. However, the origins of minimal change listings can be
found in the early work of Gray [Gra53], Wells [Wel61], Trotter [Tro62], Johnson [Joh63],
Lehmer [Leh65], Chase [Cha70], Ehrlich [Ehr73], and Nijenhuis and Wilf [NW78], and in the
work of campanologists [Whi83]. In his article on the origins of the binary Gray code, Heath
describes a telegraph invented by Emile Baudot in 1878 which used the binary reflected Gray
code [Hea72]. (According to Heath, Baudot received a gold medal for his telegraph at the
Universal Exposition in Paris in 1978, as did Thomas Edison and Alexander Graham Bell.)
Examples of combinatorial Gray codes include (1) listing all permutations of
that consecutive permutations differ only by the swap of one pair of adjacent elements
[Joh63, Tro62], (2) listing all k-element subsets of an n-element set in such a way that consecutive
sets differ by exactly one element [BER76, BW84, EHR84, EM84, NW78, Rus88a],
(3) listing all binary trees so that consecutive trees differ only by a rotation at a single node
[Luc87, LRR93], (4) listing all spanning trees of a graph so that successive trees differ only
by a single edge [HH72, Cum66] (5) listing all partitions of an integer n so that in successive
partitions, one part has increased by one and one part has decreased by one [Sav89], (6)
listing the linear extensions of certain posets so that successive elements differ only by a
transposition [Rus92, PR91, Sta92, Wes93], and (7) listing the elements of a Coxeter group
so that successive elements differ by a reflection [CSW89].
Gray codes have found applications in such diverse areas as circuit testing [RC81], signal
encoding [Lud81], ordering of documents on shelves [Los92], data compression [Ric86],
statistics [DH94], graphics and image processing [ASD90], processor allocation in the hyper-cube
hashing [Fal88], computing the permanent [NW78], information storage and
retrieval [CCC92], and puzzles, such as the Chinese Rings and Towers of Hanoi [Gar72].
In recent variations on combinatorial Gray codes, generation problems have been considered
in which the difference between successive objects, although fixed, is not required to
be small. An example is the problem of listing all permutations of so that consecutive
permutations differ in every location [Wil89]. The problem of generating all objects in a
combinatorial class, each exactly once, so that successive objects differ in a pre-specified
way, can be formulated as a Hamilton path/cycle problem: the vertices of the graph are
the objects themselves, two vertices being joined by an edge if they differ from each other
in the pre-specified way. This graph has a Hamilton path if and only if the required listing
of combinatorial objects exists. A Hamilton cycle corresponds to a cyclic listing in which
the first and last items also differ in the pre-specified way. But since the problem of determining
whether a given graph has a Hamilton path or cycle is NP-complete [GJ79], there
is no efficient general algorithm for discovering combinatorial Gray codes.
Frequently in Gray code problems, however, the associated graph possesses a great deal
of symmetry. Specifically, it may belong to the class of vertex transitive graphs. A graph
G is vertex transitive if for any pair of vertices u, v of G, there is an automorphism OE of G
with v. For example, permutations differing by adjacent transpositions give rise to
a vertex transitive graph, as do k-subsets of an n-set differing by one element. It is a well-known
open problem, due to Lov'asz, whether every undirected, connected, vertex transitive
graph has a Hamilton path [Lov70]. Thus, schemes for generating combinatorial Gray codes
in many cases provide new examples of vertex transitive graphs with Hamilton paths or
cycles, from which we hope to gain insight into the more general open questions. It is also
unknown whether all connected Cayley graphs (a subclass of the vertex transitive graphs)
are hamiltonian. For many Gray code problems, especially those involving permutations,
the associated graph is a Cayley graph.
Although many Gray code schemes seem to require strategies tailored to the problem at
hand, a few general techniques and unifying structures have emerged. The paper [JWW80]
considers families of combinatorial objects, whose size is defined by a recurrence of a particular
form, and some general results are obtained about constructing Gray codes for these
families. Ruskey shows in [Rus92] that certain Gray code listing problems can be viewed
as special cases of the problem of listing the linear extensions of an associated poset so
that successive extensions differ by a transposition. In the other direction, the discovery
of a Gray code frequently gives new insight into the structure of the combinatorial class
involved.
So, the area of combinatorial Gray codes includes many questions of interest in com-
binatorics, graph theory, group theory, and computing, including some well-known open
problems. Although there has been steady progress in the area over the past fifteen years,
the recent spurt of activity can be traced to the invited address of Herbert Wilf at the
SIAM Conference on Discrete Mathematics in San Francisco in June 1988, Generalized
Gray Codes, in which Wilf described some results and open problems. (These are also reported
in his SIAM monograph [Wil89].) All of the open problems on Gray codes posed
by Wilf in [Wil89] have now been solved, as well as several related problems, and it is our
intention here to follow up on this work. In this paper, we give a brief survey of the area
of combinatorial Gray codes, describe recent results, variations and trends, and highlight
some (new and old) open problems.
This paper is organized into sections as follows: 1. Introduction; 2. Binary Numbers
and Variations; 3. Permutations; 4. Subsets, Combinations, and Compositions; 5. Integer
Partitions; 6. Set Partitions and Restricted Growth Functions; 7. Catalan Families; 8.
Necklaces and Variations; 9. Linear Extension of Posets; 10. Acyclic Orientations; 11.
Cayley Graphs and Permutation Gray Codes; 12. Generalizations of de Bruijn Sequences;
13. Concluding Remarks.
In the remainder of this section, we discuss some notation and terminology which will
be used throughout the paper.
A Gray code listing of a class of combinatorial objects will be called max-min if the
first element on the list is the lexicographically largest in the class and the last element is
the lexicographically smallest. The Gray code is cyclic if the first and last elements on the
list differ in the same way prescribed for successive elements of the list by the adjacency
criterion.
In many situations, the graph associated with a particular adjacency criterion is bipar-
tite. If the sizes of the two partite sets differ by more than one, the graph cannot have a
Hamilton cycle and thus there is no Gray code listing of the objects corresponding to the
vertices, at least for the given adjacency criterion. In this case, we say that a parity problem
exists.
An algorithm to exhaustively list elements of a class C is called loop-free if the worst
case time delay between listing successive elements is constant; the algorithm is called
if, after listing the first element, the total time required by the algorithm to list all elements
is O(N ), where N is the total number of elements in the class C. The term CAT was coined
a. Binary Reflected b. Balanced c. Maximum Gap d. Non-composite
00000 11000 00000 10111 00000 00101 00000 01001
01000 10000 00111 00100 00111 00010 01011 00010
Figure
1: Examples of 5-bit binary Gray codes.
by Frank Ruskey to stand for constant amortized time per element.
Finally, we note that a Gray code for a combinatorial class is intrinsically bound to the
representation of objects in the class. If sets A and B are two alternative representations
of a class C under the bijections ff B, the closeness of ff(x) and
ff(y) need not imply closeness of fi(x) and fi(y). That is, Gray codes are not necessarily
preserved under bijection. Examples of this will be seen for several families, including
integer partitions, set partitions, and Catalan families.
Binary Numbers and Variations
A Gray code for binary numbers is a listing of all n-bit numbers so that successive
numbers (including the first and last) differ in exactly one bit position. The best known
example is the binary reflected Gray code [Gil58, Gra53] which can be described as follows.
If L n denotes the listing for n-bit numbers, then L 1 is the list 0, 1; for n ? 1, L n is formed
by taking the list for L n\Gamma1 and pre-pending a bit of '0' to every number, then following that
list by the reverse of L n\Gamma1 with a bit of '1' prepended to every number. So, for example,
shown in Figure 1(a).
Since the first and last elements of L n also differ in one bit position, the code is in fact a
cycle. It can be implemented efficiently as a loop-free algorithm [BER76]. Note that a
binary Gray code can be viewed as a Hamilton cycle in the n-cube.
In practice, Gray codes with certain additional properties may be desirable (see [GLN88]
for a survey). For example, note that as the elements of L n are scanned, the lowest order
times, whereas the highest order bit changes only twice, counting
the return to the first element. In certain applications, it is necessary that the number
of bit changes be more uniformly distributed among the bit positions, i.e., a balanced Gray
code is required. (See Figure 1(b) for an example from [VS80].) Uniformly balanced Gray
codes were shown to exist for n a power of two by Wagner and West [WW91]. For general
were suggested, but not proved, in [LS81, VS80, RC81].
Recently we have shown, using the Robinson-Cohn construction [RC81], that balanced
Gray codes exist for all n in the following sense: Let a = b2 n =nc or b2 n =nc \Gamma 1, so that a is
even. For each n ? 1 there is a cyclic n-bit Gray code in which each bit position changes
either a or a
In other applications, the requirement is to maximize the gap in a Gray code, which is
defined in [GLN88] to be the shortest maximal consecutive sequence of 0's (or 1's) among
all bit positions. (See Figure 1(c) for an example from [GLN88] in which the gap is 4, which
is best possible for report a construction in which GAP(n)/n
goes to 1 as n goes to infinity [GG]. Another variation, non-composite n-bit Gray codes,
requires that no contiguous subsequence correspond to a path in any k-cube for
Non-composite Gray codes have been constructed for all n [Ram90]. (See Figure 1(d) for
an example from [Ram90].)
A new constraint is considered in [SW95]. Define the density of a binary string to be
the number of 1's in the string. Clearly, no Gray code for binary strings can list them in
non-decreasing order of density. However, suppose the requirement is relaxed somewhat.
Call a Gray code monotone if it runs through the density levels two at a time, that is,
consecutive pairs of strings of densities those of densities
Figure
2: A monotone Gray code for
n. It is shown in [SW95] that monotone Gray codes can be constructed for all
n. An example for shown in Figure 2.
be the Boolean lattice of subsets of the set
inclusion, and let H n denote the Hasse diagram of B n . The correspondence
ng
is a bijection from n-bit binary numbers to subsets of [n] and, under this bijection, a binary
Gray code corresponds to a Hamilton path in H n .
The vertices of H n can be partitioned into level sets, contains
all of the i-element subsets of [n]. Then, a monotone Gray code is a Hamilton path in H n
in which edges between levels i and must precede edges between levels j and
Figure
3.)
Monotone Gray codes have applications to the theory of interconnection networks, providing
an embedding of the hypercube into a linear array which minimizes dilation in both
directions [SW95]. In Section 4 we discuss their relationship to the middle two levels problem
Fix a binary string ff and let B(n; ff) be the set of clean words for ff, i.e., the n-bit
strings which do not contain ff as a contiguous substring. Does the subgraph of the n-cube
induced by B(n; ff) have a Hamilton path, i.e., is there a Gray code for B(n; ff)? Squire has
shown that the answer is yes if ff can be written as is a string with
the property that no nontrivial prefix of fi is also a suffix of fi; otherwise there are parity
@
@ @
@
@
@ @\Phi \Phi \Phi \Phi \Phi \Phi \Phi \Phi
\Gamma\Phi \Phi \Phi \Phi \Phi \Phi \Phi \Phi \Delta
Figure
3: The Hamilton path in H 5 corresponding to the monotone Gray code in Figure 2.
problems for infinitely many n [Squ96].
It is natural to consider an extension of binary Gray codes to m-ary Gray codes. It was
shown in [JWW80], using a generalization of the binary reflected Gray code scheme, that
it is always possible to list the Cartesian product of finite sets so that successive elements
differ only in one coordinate. A similar result is obtained in [Ric86] where each coordinate
i is allowed to assume values in some fixed range results
on clean words to m-ary Gray codes [Squ96], but leaves open the case when m is odd.
Another listing problem for binary numbers, posed by Doug West, involves a change
in the underlying graph. View an n-bit string as a subset of ng under the natural
bijection g. Call two sets adjacent if they differ only in that
one element increases by 1, one element decreases by 1, or the element '1' is deleted. The
problem is to determine whether there is a Hamilton path in the corresponding graph, called
the augmentation graph. When n(n \Gamma 1)=2 is even, a parity argument shows there is no
Hamilton path. Otherwise, the question is open for n ? 7.
Consider another criterion: two binary strings are adjacent if they differ either by (1)
a rotation one position left or right or (2) by a negation of the last bit. The underlying
graph is the shuffle-exchange network and a Hamilton path would be a "Gray code" for
binary strings respecting this adjacency criterion. The existence of a Hamilton path in the
shuffle-exchange graph, a long-standing open problem, was recently established by Feldman
and Mysliwietz [FM93].
In [Fre79], Fredman considers complexity issues involved in generating arbitrary subsets
of the set of n-bit strings so that successive strings differ only in one bit. He calls these
quasi-Gray codes and establishes bounds and trade-offs on the resources required to generate
successors using a decision assignment tree model of computation.
Permutations
Algorithms for generating all permutations of
surveyed by Sedgewick in [Sed77]. Efficiency considerations provided the motivation for
several early attempts to generate permutations in such a way that successive permutations
Figure
4: Johnson-Trotter scheme for generating permutations by adjacent transpositions.
Figure
5: Generating permutations by derangements, due to Lynn Yarbrough.
differ only by the exchange of two elements. Such a Gray code for permutations was shown to
be possible in several papers, including [Boo65, Boo67, Hea63, Wel61], which are described
in [Sed77]. One disadvantage of these algorithms is that the elements exchanged are not
necessarily in adjacent positions.
It was shown independently by Johnson [Joh63] and Trotter [Tro62] that it is possible
to generate permutations by transpositions even if the two elements exchanged are required
to be in adjacent positions. The recursive scheme, illustrated in Figure 4, inserts into each
permutation on the list for the element 'n' in each of the n possible positions, moving
alternately from right to left, then left to right.
A contrary approach to the problem is to require that permutations be listed so that each
one differs from its predecessor in every position, that is, by a derangement. This problem
was posed independently in [Rab84, Wil89]. The existence of such a list when n 6= 3
was established in [Met85] using Jackson's theorem [Jac80] and a constructive solution was
presented in [EW85]. A simpler construction, ascribed to Lynn Yarbrough, is discussed
in [RS87]. Yarbrough's solution is illustrated in Figure 5 and works as follows. Take
each permutation on the Johnson-Trotter list for append an 'n', and rotate the
resulting permutation, one position at a time, through its n possible cyclic shifts. As a
final twist, swap the last two cyclic shifts. It is straightforward to argue that successive
permutations differ in every position, using the property of the Johnson-Trotter list that
successive permutations differ by adjacent transpositions.
To generalize the problems of generating permutations, at one extreme, by adjacent
transpositions, and at the other extreme, by derangements, consider the following. Given
n and k satisfying n - k - 2, is it possible to list all permutations so that successive
permutations differ in exactly k positions? This is shown to be possible, unless
[Put89] and in [Sav90], where the listing is cyclic. It was shown further in [RS94a] that the
positions (in which successive permutations differ) could be required to be contiguous.
Putnam claims in [Put90] that when k is even (odd) all permutations (even permutations)
can be generated by k-cycles of elements in contiguous positions. (Putnam's k-cycles need
not be of the form (i;
An interesting question arose in connection with a problem on Hamilton cycles in Cayley
graphs (see Section 11.) Is it possible to generate permutations by "doubly adjacent"
transpositions, i.e., so that successive transpositions are of neighboring pairs? Pair
is considered to be a neighbor of (i
The Johnson-Trotter scheme satisfies this requirement for n - 3, but not for Such a
listing was shown to be possible by Chris Compton in his Ph.D. thesis [Com90]. It might be
hoped that this could result in a very efficient permutation generation algorithm: it would
become unnecessary to decide which of the adjacent pairs to transpose, only whether
the next transposition is to the left or right of the current one. However, in its current form,
Compton's algorithm is not practical, and is quite complex, even with the simplifications
in [CW93].
The problem of generating all permutations of a multiset by adjacent interchanges was
introduced by Lehmer as the Motel Problem [Leh65]. He shows that, because of parity
problems, this is not always possible. It becomes possible, however, if the interchanged
elements are not required to be adjacent and Ko and Ruskey give a CAT algorithm to
generate multiset permutations according to this criterion [KR92].
4 Subsets, Combinations, and Compositions
Since there is a bijection between the subsets of an n-element set (an n-set) and the n-bit
binary numbers, any binary Gray code defines a Gray code for subsets: two binary numbers
differing in one bit correspond to two subsets differing by the addition or deletion of one
element.
For the subclass of combinations ( k-subsets of an n-set for fixed k), several Gray codes
have been surveyed in [Wil89]. As observed in [BER76], a Gray code for combinations
can be extracted from the binary reflected Gray code for n-bit numbers: delete from the
binary reflected Gray code list all those elements corresponding to subsets which do not
have exactly k elements. That which remains is a list of all k-subsets in which successive
sets differ in exactly one element (see Figure 6(a) and compare to Figure 1(a)). The same
list is generated by the revolving door algorithm in [NW78] and it can be described by a
a. Revolving Door b. Strong Minimal Change c. Adjacent Interchange
Figure
Examples of Gray codes for combinations.
simple recursive expression.
A more stringent requirement is to list all k-sets with the strong minimal change property
[EM84]. That is, if a k-set is represented as a sorted k-tuple of its elements, successive k-sets
differ in only one position (see Figure 6(b)). Eades and McKay have shown that such a
listing is always possible. An earlier solution was reported by Chase in [Cha70].
Perhaps the most restrictive Gray code which has been proposed for combinations is to
generate k-subsets of an n-set so that successive sets differ in exactly one element and this
element has either increased or decreased by one. This is called the adjacent interchange
property since if the sets are represented as binary n-tuples, successive n-tuples may differ
only by the interchange of a 1 and a 0 in adjacent positions (see Figure 6(c)). However, this
is not always possible: it was shown that k-subsets of an n-set can be generated by adjacent
interchanges if (i) k=0, 1, n, or is even and k is odd. In all other cases, parity
problems prevent adjacent interchange generation [BW84, EHR84, HR88, Rus88a]. It was
shown by Chase [Cha89] and by a simpler construction in [Rus93] that combinations can
be generated so that successive elements differ either by an adjacent transposition or by the
transposition of two bits that have a single '0' bit between them.
There are several open problems about paths between levels of the Hasse diagram of
the Boolean lattice, B n . The most notorious is the middle two levels problem which is
attributed in [KT88] to Dejter, Erd-os, and Trotter and by others to H'avel and Kelley.
The middle two levels of B 2k+1 have the same number of elements and induce a bipartite,
vertex transitive graph on the k- and k + 1- element subsets of [2k 1]. The question is
whether there is a Hamilton cycle in the middle two levels of B 2k+1 . At first glance, it
would appear that one could take a Gray code listing of the k-subsets, in which successive
elements differ in one element, and, by taking unions of successive elements, create a list
of 1-subsets. Alternating between the lists would give a walk in the middle two levels
graph, but, unfortunately, not a Hamilton path, at least not for any known Gray code on
k-subsets.
The graph formed by the middle two levels is a connected, undirected, vertex-transitive
graph. Thus, either it has a Hamilton path, or it provides a counterexample to the Lov'asz
conjecture. One approach to this problem which has been considered is to try to form a
Hamilton cycle as the union of two edge-disjoint matchings. In [DSW88], it was shown
that a Hamilton cycle in the middle two levels cannot be the union of two lexicographic
matchings. However, other matchings may work and new matchings in the middle two
levels have been defined [KT88, DKS94].
The largest value of k for which a Hamilton cycle is known to exist is
Figure
7 for an example when 3. This unpublished work was done by Moews and
Reid using a computer search [MR]. To speed up the search, they used a necklace-based
approach, gambling that there would be a Hamilton path through necklaces which could
be lifted to a Hamilton cycle in the original graph. We feel that a focus on the middle two
levels of the necklace poset, as described in Section 8, is a promising approach to the middle
two levels problem.
Is there at least a good lower bound on the length of a longest cycle in the middle two
levels of the Boolean lattice? Since this graph is vertex-transitive, a result of Babai [Bab79]
shows that there is a cycle of length at least (3N(k)) 1=2 , where N(k) is the total number of
vertices in the middle two levels of B 2k+1 . A result of Dejter and Quintana gives a cycle of
length
This was improved in [Sav93] to
Figure
7: A Hamilton cycle in the middle two levels of B 7 .
In a welcome breakthrough, Felsner and Trotter showed the existence of cycles of
length at least 0:25N(k) [FT95]. The monotone Gray code, described in Section 1, contains
as a subpath, a path in the middle two levels of length at least 0:5N(k) [SW95]. In [SW95],
this was strengthened to get 'nearly Hamilton' cycles in the following sense: for every ffl ? 0,
there is an h - 1 so that if a Hamilton cycle exists in the middle two levels of B 2k+1 for
h, then there is a cycle of length at least (1 \Gamma ffl)N (k) in the mid-levels of B 2k+1 for
all k - 1. Since Hamilton cycles are known for 1 - k - 11, the construction guarantees a
cycle of length at least 0:839N(k) in the middle two levels of B 2k+1 for all k - 1.
A variation on this problem is the antipodal layers problem: for which values of k is
there a Hamilton path among the k-sets and sets of ng for all n, where two
sets are joined by an edge if and only if one is a subset of the other? Results for limited
values of k and n are given in [Hur94] and [Sim].
A composition of n into k parts is a sequence nonnegative integers whose
sum is n. This is traditionally viewed as a placement of n balls into k boxes. Nijenhuis and
asked in the first edition of [NW78] (p. 292, problem 34) whether it was possible to
a. P (7;
Figure
8: Gray codes for various families of integer partitions
generate the k-compositions of n so that each is obtained from its predecessor by moving
one ball from its box to another. Knuth solved this in 1974 while reading the galleys of the
book and in [Kli82], Klingsberg gives a CAT implementation of Knuth's Gray code.
Combinations and compositions can be simultaneously generalized as follows. Let
denote the set of all ordered t-tuples
t. If m i - s for is the set of
s-combinations of a t-element set. If X is the multiset consisting of m i copies of element
is the collection of s-element submultisets, or s-
combinations of X . In [Ehr73], Ehrlich provides a loopless algorithm to generate multiset
combinations so that successive elements differ in only two positions, but not necessarily
by just \Sigma1 in those positions. It is shown in [RS95] that a Gray code still exists when
the two position can change by only \Sigma1, thereby generalizing Gray code results for both
combinations and compositions.
5 Integer Partitions
A partition of an integer n is a sequence of positive integers x 1
satisfying Algorithms for generating integer partitions in standard
orders such as lexicographic and antilexicographic were presented in [FL80] and [NW78].
The performance of the algorithms in [FL80] is analyzed in [FL81].
An integer partition in standard representation, can also be written
as a list of pairs (y are the distinct integers appearing in the
sequence -, and m i is the number of times y i appears in -. Ruskey notes in [Rus95] that a
lexicographic listing of partitions in this ordered pairs representation has the property that
successive elements of the list differ at most in the last three ordered pairs.
asked the following question regarding a Gray code for integer partitions in the
standard representation there a way to list the partitions of
an integer n in such a way that consecutive partitions on the list differ only in that one part
has increased by 1 and one part has decreased by 1? may decrease to 0
or a 'part' of size 0 may increase to 1.) Yoshimura demonstrated that this was possible for
integers In [Sav89], it is shown constructively to be possible for all n.
The result is a bit more general: for all n - k - 1, there is a way to list the set P (n; k), of
all partitions of n into integers of size at most k, in Gray code order. Unless (n;
the Gray code is max-min. As a consequence, each of the following can also be listed in
Gray code order for all n, partitions of n whose largest part
is k, (2) all partitions of n into k or fewer parts, and (3) all partitions of n into exactly k
parts. See Figure 8(a) for a Gray code listing of P(7; 6). Exponents in the figure indicate
the number of multiple copies.
The approach in [Sav89], was to decompose the partitions problem, P (n; k), into sub-problems
of two forms, a 'P ' form, which was the same form as the original problem, and
a new 'M ' form. It was then shown that the P and M forms could be recursively defined
in terms of (smaller versions of) both forms, thereby yielding a doubly recursive construction
of the partitions Gray code. The algorithm has been implemented [Bee90] and can be
modified to run in time O(jP (n; k)j).
This strategy has been applied to yield Gray codes for other families of integer partitions.
be the set of all partitions of n into parts of size at most k, in which the parts
are required to be congruent to 1 modulo ffi . When just P (n; k). When
2, the elements of P ffi (n; are the partitions of n into odd parts of size at most k. It
is shown in [RSW95] that P ffi (n; can be listed so that between successive partitions, one
part increases by ffi (or ffi ones may appear) and another part decreases by ffi (or ffi ones may
disappear.) (See Figure 8(b).) The Gray code is max-min unless (n;
where a max-min Gray code is impossible.
For the case of D(n; k), the set of partitions of n into odd parts of size at most k, the
same strategy can be applied, but the construction becomes more complex. Surprisingly, it
is still possible to list D(n; in Gray code order: between successive partitions one part
increases by two and one part decreases by two [RSW95]. (See Figure 8(c).) The Gray
code is max-min unless (n; or (12; 6), in which cases a max-min Gray code is
impossible. One observation that follows from this work is that although there are bijections
between the sets of partitions of n into odd parts and partitions of n into distinct parts, no
bijection can preserve such Gray codes.
The same techniques can be used to investigate Gray codes in other families of integer
partitions, but each family has its own quirks: a small number of cases must be handled
specially and subsets needed for linking recursively listed pieces can become empty. Never-
theless, we conjecture that each of the following families has a Gray code enumeration, for
arbitrary values of the parameters n, partitions of n into (a) distinct
odd parts, (b) distinct parts congruent to 1 modulo ffi , (c) at most t copies of each part,
(d) parts congruent to 1 modulo ffi , at most t copies of each part, and (e) exactly d distinct
parts.
6 Set Partitions and Restricted Growth Functions
A set partition is a decomposition of ng into a disjoint union of nonempty subsets
called blocks. Let S(n) denote the set of all partitions of ng. For example, S(4) is
shown in Figure 9(a). The restricted growth functions (RG functions) of length n, denoted
R(n), are those strings a of non-negative integers satisfying a
There is a well-known bijection between S(n) and R(n). For - 2 S(n), order the blocks
of - according to their smallest element, for example, the blocks of
would be ordered f1; 2; 7g, f3; 5; 6; 8g, f4; 10; 11g, f9g. Label
the blocks of - in order by 0; :. The bijection assigns to - the string a
(a)S(4) (b)L(4) in (c) Knuth's (d) modified (e)Ehrlich's
lexicographic Gray code Knuth algorithm
order
Figure
9: Listings of S(4) and R(4).
is the label of the block containing i. The associated string for -
above is 2. For 4, the bijection is illustrated in the first two columns
of
Figure
9.
In [Kay76], Kaye gives a CAT implementation of a Gray code for S(n), attributed to
Knuth in [Wil89]. This was another problem posed by Nijenhuis and Wilf in their book
[NW78] (p. 292, problem 25) and solved by Knuth while reading the galleys. In this Gray
code, successive set partitions differ only in that one element moved to an adjacent block
Figure
9(c).) However, the associated RG functions may differ in many positions. Ruskey
[Rus95] describes a modification of Knuth's algorithm in which one element moves to a block
at most two away between successive partitions and the associated RG functions differ only
in one position by at most two (Figure 9(d).)
Call a Gray code for RG functions strict if successive elements differ in only one position
and in that position by only \Sigma1. Strict Gray codes for R(n) were considered in an early
paper of Ehrlich where it was shown that for infinitely many values of n, they do not exist
[Ehr73]. Nevertheless, Ehrlich was able to find an efficient listing algorithm for R(n) (loop-
free) which has the following interesting property: successive elements differ in one position
and the element in that position can change by 1, or, if it is the largest element in the string,
it can change to 0. Conversely, a 0 can change to a the the largest value v in the string or
to v + 1. For example,
conversely. In the associated list of set partitions, this change corresponds to moving one
element to an adjacent block in the partition, where the first and last blocks are considered
adjacent (Figure 9(e).)
Ehrlich's results are generalized in [RS94b] to the set of restricted growth tails, T (n; k),
which are strings of non-negative integers satisfying a 1 - k and a i - 1+maxfa
1g. (These are a variation of the T (n; m) used in [Wil85] for ranking and unranking set
partitions.) Note that T (n; R(n). Because of parity problems, for all k there are
infinitely many values of n for which T (n; has no strict Gray code, that is, one in which
only one position changes by 1. However, Gray codes satisfying Ehrlich's relaxed criterion
are constructed and they can be made cyclic or max-min, properties not possessed by the
earlier Gray codes.
Consider now set partitions into a fixed number of blocks. For
(n) be the set of partitions of ng into exactly b blocks. The bijection between
S(n) and R(n) restricts to a bijection between S b (n) and
The Ehrlich paper presents a loop-free algorithm for generating S b (n) in which successive
partitions differ only in that two elements have moved to different blocks [Ehr73]. Ruskey
describes a Gray code for R b (n) (and a CAT implementation) in which successive elements
differ in only one position, but possibly by more than 1 in that position [Rus93]. It is shown
in [RS94b] that in general, R b (n) does not have a strict Gray code, even under the relaxed
criterion of Ehlich.
It remains open whether there are strict Gray codes for R(n) and T (n; when the
parity difference is 0.
7 Catalan Families
In several families of combinatorial objects, the size is counted by the Catalan numbers,
defined for n - 0 by
These include binary trees on n vertices [SW86], well-formed sequences of 2n parentheses
[SW86], and triangulations of a labeled convex polygon with
bijections are known between most members of the Catalan family, a Gray code for one
member of the family gives implicitly a listing scheme for every other member of the family.
However, the resulting lists may not look like Gray codes, since bijections need not preserve
minimal changes between elements.
The problem of generating all binary trees with a given number of nodes was considered
in several early papers, including [RH77], [Zak80], and [Zer85]. However, Gray codes in
the Catalan family were first considered in [PR85], where binary trees were represented as
strings of balanced parentheses. It was shown in [PR85] that strings of balanced parentheses
could be listed so that consecutive strings differ only by the interchange of one left and one
right parenthesis. For example '(()())(())' could follow `(()(())())'. The same problem
was considered in [RP90] with the additional restriction that only adjacent left and right
parentheses could be interchanged. For example, now '(()())(())' could not follow `(()(())())',
but could follow '((()))(())'. The result of [RP90] is that all balanced strings of n pairs of
parentheses can be generated by adjacent interchanges if and only if n is even or n < 5,
and for these cases, a CAT algorithm is given.
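For comparison with the Gray codes of [PR85] and [RP90], the following sketch lists well-formed parenthesis strings in ordinary lexicographic order, so consecutive strings may differ in many positions; it is an illustration written for this survey, not the CAT algorithm of [RP90].

def balanced(n):
    # All well-formed strings of n pairs of parentheses, lexicographic order.
    def rec(s, opened, closed):
        if opened == n and closed == n:
            yield s
            return
        if opened < n:
            yield from rec(s + '(', opened + 1, closed)
        if closed < opened:
            yield from rec(s + ')', opened, closed + 1)
    yield from rec('', 0, 0)

# There are C_4 = 14 such strings for n = 4.
assert sum(1 for _ in balanced(4)) == 14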
A different minimal change criterion, focusing on binary trees, was considered in [Luc87]
and [LRR93]: list all binary trees on n nodes so that consecutive trees differ only by a left
or right rotation at a single node. The rotation operation is common in data structures
where it is used to restructure binary search trees, while preserving the ordering properties.
It was shown that such a Gray code is always possible and it can be generated efficiently
[LRR93]. With a more intricate construction, Lucas was able to show that the associated
graph was hamiltonian [Luc87], giving a cyclic Gray code.
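The rotation operation itself is elementary; the sketch below shows the usual left and right rotations on a pointer-based binary tree (illustrative code, not the Gray code construction of [Luc87, LRR93]).

class Node:
    # Minimal binary tree node; key is optional and only for illustration.
    def __init__(self, key=None, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    # Right rotation at y: the left child becomes the subtree root.
    x = y.left
    y.left = x.right
    x.right = y
    return x

def rotate_left(x):
    # Inverse of rotate_right.
    y = x.right
    x.right = y.left
    y.left = x
    return y

# usage: rotating right at the root of the left chain c-b-a gives root b.
c = Node('c', left=Node('b', left=Node('a')))
b = rotate_right(c)
assert b.key == 'b' and b.left.key == 'a' and b.right.key == 'c'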
It so happens that under a particular bijection between binary trees with n nodes and
the set of all triangulations of a labeled convex polygon with n + 2 vertices, rotation in a
binary tree corresponds to the flip of a diagonal in the triangulation [STT88]. So, the results
of [Luc87, LRR93] also give a listing of all triangulations of a polygon so that successive
triangulations differ only by the flip of a single diagonal.
8 Necklaces and Variations
An n-bead, k-color necklace is an equivalence class of k-ary n-tuples under rotation. Figure
lists the lexicographically smallest representatives of the n-bead, k-color necklaces
for (n; asked if it is possible to generate necklaces effi-
ciently, possibly in constant time per necklace. A proposed solution, the FKM algorithm
of Fredricksen, Kessler, and Maiorana, had no proven upper bound better than O(nk n )
[FK86, FM78]. In [WS90] a new algorithm was presented with time complexity O(n N_k^n),
where N_k^n is the number of n-bead necklaces in k colors. Subsequently, a tight analysis of
the original FKM algorithm showed that it could, in fact, be implemented to run in time
O(N_k^n), giving an optimal solution [RSW92].
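The recursive scheme underlying the FKM approach is short enough to sketch; the version below generates the lexicographically smallest representative of every k-ary n-bead necklace and is written for illustration, in the spirit of, but not taken from, the implementations analyzed in [WS90] and [RSW92].

def necklaces(n, k):
    # Lexicographically smallest representatives of k-ary n-bead necklaces.
    a = [0] * (n + 1)
    out = []
    def gen(t, p):
        if t > n:
            if n % p == 0:
                out.append(tuple(a[1:n + 1]))
        else:
            a[t] = a[t - p]
            gen(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                gen(t + 1, t)
    gen(1, 1)
    return out

# Example: the six 4-bead binary necklaces 0000, 0001, 0011, 0101, 0111, 1111.
assert len(necklaces(4, 2)) == 6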
Neither of the algorithms above gives a Gray code for necklaces. Can representatives of
all binary n-bead necklaces be listed so that successive strings differ only in one bit position
(as in Figure 10)? A parity argument shows that this is impossible for even n, but for odd
n the question remains open. However, in the case of necklaces with a fixed number of 1's,
Wang showed, with a very intricate construction, how to construct a Gray code in which
successive necklace representatives differ only by the swap of a 0 and a 1 [Wan94, WS94] (Figure 11.)
It remains open whether necklaces with a fixed number of 1's can be generated in
constant amortized time, either by a modification the FKM algorithm, by a Gray code, or
by any other method.
The Gray code adjacency criterion can be generalized to necklaces with k ≥ 2 colors
by requiring that successive necklaces differ in exactly one position and in that position by
only 1. We conjecture that this can be done if and only if nk is odd. (Parity problems
prevent a Gray code when nk is even.)
Figure 10: Examples of Gray codes for necklaces: (a) 5-bead binary, (b) 7-bead binary, (c) 3-bead ternary.
For necklaces of fixed weight, when is it possible to list all n-bead k-color necklaces
of weight w so that successive necklaces differ in exactly two positions, one of which has
increased by one and the other, decreased by one? We know of no counterexamples.
To construct a slightly different set of objects, call two k-ary strings equivalent if one is
a rotation or a reversal of the other. The equivalence classes under this relation are called
bracelets. Lisonek [Lis93] shows how to modify the necklace algorithm of [WS90] to generate
bracelets. We know of no Gray code for bracelets and it is open whether it is possible to
generate bracelets in constant amortized time. When beads have distinct colors, bracelets
are the rosary permutations of [Har71, Rea72].
Define a new relation R on n-bead binary necklaces by xRy if some member of x becomes
a member of y by changing a 0 to a 1 in one bit position. The reflexive transitive closure
of R is a partial order and the resulting poset is the necklace poset. For k ≥ 0 and n = 2k + 1,
the middle two levels of this poset, consisting of necklaces of density k and k + 1, have the
same number of elements. Does the bipartite subgraph induced by the middle two levels
have a Hamilton path? This graph is not vertex-transitive, but it may encapsulate the
"hard part" of the middle two levels problem described in Section 4.
A necktie of n bands in k colors is an equivalence class of k-ary n-tuples under reversal.
Figure 11: Examples of Gray codes for binary necklaces with a fixed number of ones: 7 beads with 4 ones, 9 beads with 3 ones, and 8 beads with 4 ones.
If a necktie is identified with the lexicographically smallest element in its equivalence class,
Wang [Wan93] shows that for n ≥ 3, a Gray code exists if and only if either n or k is odd.
For this result, two neckties are adjacent if and only if they differ only in one position and
in that position by ±1 modulo k. Further results on neckties appear in [RW94].
9 Linear Extension of Posets
A partially ordered set (S, ⪯) is a set S together with a binary relation ⪯ on S which is
reflexive, transitive, and antisymmetric. A linear extension of a poset is a permutation
x_1, x_2, …, x_n of the elements of the poset which is consistent with the partial order, that
is, if x_i ⪯ x_j in the partial order, then i ≤ j. The problem of efficiently generating all
the linear extensions of a poset, in any order, has been studied in [KV83, KS74, VR81].
The area of Gray codes for linear extensions of a poset was introduced by Frank Ruskey in
[Rus88b, PR91] as a setting in which to generalize the study of Gray codes for combinatorial
objects. For example, if the Hasse diagram of the poset consists of two disjoint chains, one
of length m and the other of length n, then there is a one-to-one correspondence between
the linear extensions of the poset and the combinations of m objects chosen from m + n.
If the poset consists of a collection of disjoint chains, the linear extensions correspond to
multiset permutations. Other examples are described in [Rus92].
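Before turning to Gray codes, it may help to see the objects generated naively; the sketch below lists all linear extensions by repeatedly appending any element whose predecessors have already been placed (covering relations suffice as input). It is written for this survey and is not the transposition listing of [Rus88b, PR91].

def linear_extensions(elements, relations):
    # relations is a set of pairs (a, b) meaning a precedes b in the poset.
    preds = {e: {a for (a, b) in relations if b == e} for e in elements}
    ext, remaining = [], set(elements)
    def rec():
        if not remaining:
            yield tuple(ext)
            return
        for e in sorted(remaining):
            if not (preds[e] & remaining):   # all predecessors already placed
                remaining.remove(e)
                ext.append(e)
                yield from rec()
                ext.pop()
                remaining.add(e)
    yield from rec()

# Two disjoint chains 1<2 and 3<4 have C(4, 2) = 6 linear extensions,
# matching the correspondence with combinations mentioned above.
assert len(list(linear_extensions({1, 2, 3, 4}, {(1, 2), (3, 4)}))) == 6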
To study the existence of Gray codes, Ruskey constructs a transposition graph corresponding
to a given poset. The vertices are the linear extensions of the poset, two vertices
being joined by an edge if they differ by a transposition. The resulting graph is bipartite. In
[Rus88b], Ruskey makes the conjecture that whenever the parity difference is at most one,
the graph of the poset has a Hamilton path. The conjecture is shown to be true for some
special cases in [Rus92], including posets whose Hasse diagram consists of disjoint chains
and for series parallel posets in [PR93]. The techniques which have been successful so far
involve cutting and linking together listings for various subposets in rather intricate ways.
In many cases where it is known how to list linear extensions by transpositions, it is
also possible to require adjacent transpositions, although possibly with a more complicated
construction [PR91, RS93, Sta92, Wes93]. It has been shown that if the linear extensions
of a poset Q, with jQj even, can be listed by adjacent transpositions, then so can the linear
extensions of QjP , for any poset P [Sta92], where QjP represents the union of posets P
and Q with the additional relations p ⪯ q for all p ∈ P and q ∈ Q.
However, most problems in this area remain open. For example, even if the Hasse
diagram of the poset consists of a single tree, the parity difference may be greater than one.
This makes an inductive approach difficult. If the Hasse diagram consists of two trees, each
with an odd number of vertices, the parity difference will be at most one, but it is unknown
whether the linear extensions can be listed in this case. The problem is also open for posets
whose Hasse diagram is a grid or tableau tilted ninety degrees [Rus].
Calculating the parity difference itself can be difficult and Ruskey [Rus] has several
examples of posets for which the parity difference is unknown. (Some parity differences are
calculated in [KR88].) Recently, Stachowiak has shown that computing the parity difference
is #P-complete [Sta]. Even counting the number of linear extensions of a poset is an open
problem for some specific posets, for example, the Boolean lattice [SK87]. Brightwell and
Winkler have recently shown that the problem of counting the number of linear extensions
of a given poset is #P-complete [BW92]. On the brighter side, Pruesse and Ruskey [PR94]
have found a CAT algorithm for listing linear extensions so that successive extensions differ
by one or two adjacent transpositions and Canfield and Williamson [CW95] have shown
how to make it loop-free.
In [PR93], Pruesse and Ruskey consider antimatroids, of which posets are a special
case. Analogous to the case of linear extensions of a poset, they show that the sets of
an antimatroid can be listed so that successive sets differ by at most two elements. In
particular, this gives a listing of the ideals of a poset so that successive ideals differ only in
one or two elements.
10 Orientations
For an undirected graph G, an acyclic orientation of G is a function on the edges of G which
assigns a direction (u; v) or (v; u) to each edge uv of G in such a way that the resulting
digraph has no directed cycles. Consider the problem of listing the acyclic orientations of G
so that successive list elements differ only by the orientation of a single edge. It is not hard
to see that when G is a tree with n edges, such a listing corresponds to an n bit binary Gray
code; when G is K n , an acyclic orientation corresponds to a permutation of the vertices and
the Johnson-Trotter Gray code for permutations provides the required listing for acyclic
orientations.
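Since the Johnson-Trotter listing is invoked here and again in the Cayley graph section below, a compact version of that algorithm is sketched next (a direction is attached to each value and the largest mobile value is moved at each step); the variable names are ours.

def johnson_trotter(n):
    # Permutations of 1..n so that successive ones differ by an adjacent swap.
    perm = list(range(1, n + 1))
    dirs = [-1] * n                       # -1 points left, +1 points right
    yield tuple(perm)
    while True:
        mobile, mi = -1, -1
        for i in range(n):
            j = i + dirs[i]
            if 0 <= j < n and perm[i] > perm[j] and perm[i] > mobile:
                mobile, mi = perm[i], i
        if mobile == -1:
            return
        j = mi + dirs[mi]
        perm[mi], perm[j] = perm[j], perm[mi]
        dirs[mi], dirs[j] = dirs[j], dirs[mi]
        for i in range(n):                # reverse direction of larger values
            if perm[i] > mobile:
                dirs[i] = -dirs[i]
        yield tuple(perm)

# usage: the listing for n = 3 begins 123, 132, 312, ...
assert list(johnson_trotter(3))[:3] == [(1, 2, 3), (1, 3, 2), (3, 1, 2)]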
Denote by AO(G) the graph whose vertices are the acyclic orientations of G, two vertices
adjacent if and only if the corresponding orientations differ only in the orientation of a single
edge. The graph AO(G) is bipartite and is connected as long as G is simple. Edelman asked,
whenever the partite sets have the same size, whether AO(G) is hamiltonian. It is shown
in [SSW93] that the answer is yes for several classes of graphs, including trees, odd length
cycles, complete graphs, odd ladder graphs, and chordal graphs. On the other hand, the
parity difference is shown to be more than one for several cases, including cycles of even
length and the complete bipartite graphs K_{m,n} with m, n > 1 and m or n even. The
problem appears to be difficult and it is even open whether AO(K m;n ) is hamiltonian when
mn is odd. However, the square of AO(G) is hamiltonian for any G [PR95, SZ95, Squ94c],
which means that acyclic orientations can be listed so that successive elements differ in the
orientations of at most two edges.
The problem of counting acyclic orientations is #P-complete [Lin86] and it is an open
question whether there is a CAT algorithm to generate them. The fastest listing algorithm
known, due to Squire [Squ94b], requires O(n) average time per orientation, where n is the
number of vertices of the graph.
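Absent a CAT algorithm, acyclic orientations of small graphs can at least be enumerated by brute force, as in the sketch below: try both directions for every edge and keep the acyclic results. This is exponential and purely illustrative, unlike Squire's O(n) average-time algorithm [Squ94b].

from itertools import product

def acyclic_orientations(vertices, edges):
    def has_cycle(arcs):
        adj = {v: [] for v in vertices}
        for u, v in arcs:
            adj[u].append(v)
        color = {v: 0 for v in vertices}   # 0 unvisited, 1 on stack, 2 done
        def dfs(u):
            color[u] = 1
            for w in adj[u]:
                if color[w] == 1 or (color[w] == 0 and dfs(w)):
                    return True
            color[u] = 2
            return False
        return any(color[v] == 0 and dfs(v) for v in vertices)
    for choice in product((0, 1), repeat=len(edges)):
        arcs = [(u, v) if c == 0 else (v, u) for (u, v), c in zip(edges, choice)]
        if not has_cycle(arcs):
            yield arcs

# K_3 has 3! = 6 acyclic orientations, matching the permutation correspondence.
assert sum(1 for _ in acyclic_orientations([1, 2, 3], [(1, 2), (2, 3), (1, 3)])) == 6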
The linear extensions and acyclic orientations problems can be simultaneously generalized
as follows. For a simple undirected graph G and a subset R of the edges of G, fix an
acyclic orientation σ_R of the edges of R. Let AO_R(G) be the subgraph of AO(G) induced by
the acyclic orientations of G which agree with σ_R on R. Is this bipartite graph hamiltonian
whenever the parity difference allows?
When R is empty, AO_R(G) is the acyclic orientations graph of G. AO_R(G) becomes the
linear extensions adjacency graph of an n-element poset P when G = K_n and R
and σ_R are defined by the covering relations in P. In contrast to the situation for linear
extensions and acyclic orientations, the square of AOR (G) is not necessarily hamiltonian.
Counterexamples appear in [Squ94c] and [PR95].
11 Cayley Graphs and Permutation Gray Codes
Many Gray code problems for permutations are best discussed in the setting of Cayley
graphs. Given a finite group G and a set X of elements of G, the Cayley graph of G,
C[G; X ], is the undirected graph whose vertices are the elements of G and in which there
is an edge joining u and v if and only if v = ux for some x ∈ X ∪ X^{−1}. Equivalently,
uv is an edge in C[G; X] if and only if u^{−1}v or v^{−1}u is in X. C[G; X] is always vertex transitive
and is connected if and only if X ∪ X^{−1} generates G. It is an open question whether every
Cayley graph is hamiltonian. (There are generating sets for which the Cayley digraph is not
hamiltonian [Ran48].) This is a special case of the more general conjecture of Lovász that
every connected, undirected, vertex-transitive graph has a Hamilton path [Lov70]. Results
on Hamilton cycles are surveyed in [Als81] for vertex transitive graphs and in [Gou91] for
general graphs. A survey of Hamilton cycles in Cayley graphs can be found in [WG84] and
in the recent update of Curran and Gallian [CG96]. We focus here on a few recent questions
which arose in the context of Gray codes.
Suppose the group G is S n , the symmetric group of permutations of n symbols, and let
X be a generating set of S n . Then a Hamilton cycle in the Cayley graph C[G; X ] can be
regarded as a Gray code for permutations in which successive permutations differ only by
a generator in X . Even in the special case of G = S n , it is still open whether every Cayley
graph of S n has a Hamilton cycle. One of the most general results on the hamiltonicity of
Cayley graphs for permutations was discovered by Kompel'makher and Liskovets in 1975.
First note that for S_n generated by the basis {(1 2), (2 3), …, (n−1 n)}, the Johnson-
Trotter algorithm from Section 3 for generating permutations by adjacent transpositions
gives a Hamilton cycle in C[G; X ]. Kompel'makher and Liskovets generalized this result
to show that if X is any set of transpositions generating S_n, then C[S_n; X] is hamiltonian
[KL75]. Independently, and with a much simpler argument, Slater showed that these graphs
have Hamilton paths [Sla78]. Tchuente [Tch82] extended the results of [KL75, Sla78] to
show that the Cayley graph of S n , on any generating set X of transpositions, is not only
hamiltonian, but Hamilton-laceable, that is, for any two vertices u; v of different parity
there is a Hamilton path which starts at u and ends at v.
It is unknown whether any of these results generalize to the case when X is a generating
set of involutions (elements of order 2) for S n . An involution need not be a transposition:
for example the product of disjoint transpositions is an involution. In perhaps the simplest
nontrivial case, when S n is generated by three involutions, it is easy to show that if any two
of the generators commute, then the Cayley graph is hamiltonian. (Cayley graphs arising
in change ringing frequently have this property [Ran48, Whi83].) However if no two of the
three involutions commute, it is open whether the Cayley graph is hamiltonian. As a specific
example, we have not been able to determine whether there is a Gray code for permutations
in which successive permutations may differ only by one of the three operations: (i) exchange
positions 1 and 2, (ii) reverse the sequence, and (iii) reverse positions 2 through n of the
sequence.
Conway, Sloane, and Wilks have a related result on Gray codes for reflection groups: if
G is an irreducible Coxeter group (a group generated by geometric reflections) and if X is
the canonical basis of reflections for the group, then C[G; X ] is hamiltonian [CSW89]. This
result makes use of the fact that in any set of three or more generators from this basis,
there is always some pair of generators which commute (the Coxeter diagram for the basis
is a tree). It is straightforward to show for groups G and H , with generating sets X and Y ,
respectively, that if C[G; X ] and C[H; Y ] are hamiltonian, and at least one of G and H have
even order, then C[G × H; X × Y] is hamiltonian. As noted in [CSW89], since any reflection
group R is a direct product of irreducible Coxeter groups G_1 × G_2 × ⋯ × G_t, with X_i the
canonical basis for G_i, the Cayley graph of R with respect to the basis X_1 × X_2 × ⋯ × X_t
is hamiltonian.
This result has an interesting geometric interpretation. We can associate with a finite
reflection group a tessellation of a surface in n-space by a spherical (n − 1)-simplex. The
spherical simplices of the tesselation correspond to group elements, and the boundary shared
by two simplices corresponds to a reflection in the bounding hyperplane. Thus, a Hamilton
cycle in the Cayley graph corresponds to a traversal of the surface, visiting each simplex
exactly once.
It seems likely that if there do exist non-hamiltonian Cayley graphs of S n , there will
be examples in which the vertices have small degree, such as S n generated by three involutions
as described above. As a candidate counterexample, Wilf suggested the group
of permutations generated by the two cycles (1 2) and (1 2 3 ⋯ n). Compton
and Williamson were able to find a Hamilton cycle in this graph using their Gray code for
generating permutations by doubly adjacent transpositions, described in Section 3 [CW93].
The results of [KL75, Sla78] were generalized in a different way in [RS93]. It was shown
there that when n ≥ 5, for any generating set X of transpositions of S_n and for any
transposition τ ∈ X, permutations in S_n can be listed so that successive permutations
differ only by a transposition in X and every other transposition is τ. That is, the perfect
matching in C[S_n; X] defined by τ is contained in a Hamilton cycle. One application of this
result is to Cayley graphs for the alternating group, A n , consisting of all even permutations
n. For example, letting n), the result in
[RS93] implies that the Cayley graph of A n with respect to the generating set
is hamiltonian. This result was obtained earlier with a direct
argument by Gould and Roth [GR87].
12 Generalizations of de Bruijn Sequences
A de Bruijn sequence of order n is a circular binary sequence of length 2 n in which every
n-bit number appears as a contiguous subsequence. This provides a Gray code listing of
binary sequences in which successive elements differ by a rotation one position left followed
by a change of the last element. It is known that these sequences exist for all n and the
standard proof shows that a de Bruijn sequence of order n corresponds to an Euler tour in
the de Bruijn digraph whose vertices are the binary (n − 1)-tuples, with one edge for each
binary n-tuple x_1 x_2 ⋯ x_n, directed from vertex x_1 ⋯ x_{n−1} to vertex x_2 ⋯ x_n.
This result has been generalized for k-ary n-tuples [Fre82], for higher dimensions (de
Bruijn tori) [Coc88, FFMS85], and for k-ary tori [HI95]. It is known also, for any m
satisfying 1 ≤ m ≤ 2^n, that there is a cyclic binary sequence of length m in which no n-tuple
appears more than once [Yoe62]. So, the de Bruijn graph contains cycles of all lengths m with
1 ≤ m ≤ 2^n. Results on de Bruijn cycles have been applied to random number generation in
information theory [Gol64] and in computer architecture, where the de Bruijn graph is
recognized as a bounded degree derivative of the shuffle-exchange network [ABR90].
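The classical construction is compact enough to sketch: concatenating, in lexicographic order, the Lyndon words whose lengths divide n yields a k-ary de Bruijn sequence of order n. The code below is an illustrative version of this idea and is not the analyzed implementation of [FM78, RSW92].

def de_bruijn(k, n):
    # k-ary de Bruijn sequence of order n, returned as a list of k**n symbols.
    a = [0] * (n + 1)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

# Example: the order-3 binary de Bruijn sequence 00010111; read cyclically,
# every 3-bit string appears exactly once.
assert de_bruijn(2, 3) == [0, 0, 0, 1, 0, 1, 1, 1]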
Chung, Diaconis, and Graham generalized the notion of a de Bruijn sequence for binary
numbers to universal cycles for other families of combinatorial objects [CDG92]. Universal
cycles for combinations were studied by Hurlbert in [Hur90] and some interesting problems
remain open. A universal cycle of order n for permutations is a circular sequence x_1 x_2 ⋯ x_N
of length N = n! of symbols from {1, 2, …, M} in which every permutation of {1, …, n}
is order isomorphic to some contiguous subsequence x_{i+1} x_{i+2} ⋯ x_{i+n}, where order isomorphic
means that x_{i+j} < x_{i+l} if and only if π_j < π_l for 1 ≤ j, l ≤ n. As an example, the sequence
123415342154213541352435 is a universal cycle of order 4 with M = 5. In [CDG92], the
goal is to choose M as small as possible to guarantee the existence of a universal cycle.
It is clear that M must satisfy M ≥ n + 1 for n ≥ 3. It is conjectured in [CDG92] that
M = n + 1 is always sufficient, although the best upper bound they were able to obtain was
larger.
As another approach, we can relax the constraint on the length of the sequence, while
requiring M = n, and ask for the shortest circular sequence of symbols from {1, …, n}
which contains every permutation as a contiguous subsequence at least once. Jacobson and
West have a simple construction for such a sequence of length 2n! [JW].
13 Concluding Remarks
This paper has included a sampling of Gray code results in several areas, particularly those
which have appeared since the survey of Wilf [Wil89], in which many of these problems
were posed. Good references for early work on Gray codes are [Ehr73] and [NW78]. For
a comprehensive treatment of Gray codes and other topics in combinatorial generation, we
look forward to the book in preparation by Ruskey [Rus95]. Additional information on
Gray codes also appears in the survey of Squire [Squ94a]. In [Gol93], Goldberg considers
generating combinatorial structures for which achieving even polynomial delay is hard. For
surveys on related material, see [Als81] for long cycles in vertex transitive graphs, [Gou91]
for hamiltonian cycles, [WG84] and the recent update [CG96] for Cayley graphs, and [Sed77]
for permutations.
Acknowledgements
I am grateful to Herb Wilf for collecting and sharing such an intriguing
array of 'Gray code' problems. His work, as well as his enthusiasm, has been inspiring.
I would also like to thank Frank Ruskey, my frequent co-author, for many provoking dis-
cussions, and for his constant supply of interesting problems. Both have provided helpful
comments on earlier versions on this manuscript. For additional comments and suggestions,
I am grateful to Donald Knuth and to an anonymous referee.
--R
Group action graphs and parallel architectures.
The search for long paths and cycles in vertex-transitive graphs and digraphs
A data structure based on Gray code encoding for graphics and image processing.
Long cycles in vertex-transitive graphs
Implementation of an algorithm to list Gray code sequences of partitions.
Efficient generation of the binary reflected Gray code.
Algorithm 6.
Permutation of the elements of a vector (Algorithm 29) and Fast permutation of the elements of a vector (Algorithm 30).
Balanced Gray codes.
Gray codes with restricted density.
Counting linear extensions is
Symbolic Gray codes as a data allocation scheme for two disc systems.
Universal cycles for combinatorial structures.
Hamiltonian cycles and paths in Cayley graphs and digraphs - a survey
Algorithm 382: Combinations of M out of N objects.
Combination generation and Graylex ordering.
Toroidal tilings from de Bruijn-good cyclic sequences
Hamilton cycles in the Cayley graph of Sn and a doubly adjacent Gray code.
Subcube allocation and task migration in hypercube machines.
Gray codes for reflection groups.
Hamilton circuits in tree graphs.
Doubly adjacent Gray codes for the Symmetric group: how to braid n strands.
A combinatorial problem.
Gray codes for randomization procedures.
An explicit 1-factorization in the middle of the Boolean lattice
Long cycles in revolving door graphs.
Lexicographic matchings cannot form hamiltonian cycles.
Loopless algorithms for generating permutations
Some Hamilton paths and a minimal change algorithm.
An algorithm for generating subsets of fixed size with a strong minimal change property.
Problem 1186: Solution I.
Gray codes for partial match and range queries.
On de Bruijn arrays.
An algorithm for generating necklaces of beads in two colors.
A binary tree representation and related algorithms for generating integer partitions.
An analysis of two related loop-free algorithms for generating integer partitions
Necklaces of beads in k colors and k-ary de Bruijn sequences
The shuffle-exchange network has a hamiltonian path
Observations on the complexity of generating quasi-gray codes
A survey of full length nonlinear shift register cycle algorithms.
Colorings of diagrams of interval orders and α-sequences
Curious properties of the Gray code and how it can be used to solve puzzles.
Personal communication.
Gray codes and paths on the n-cube
Computers and Intractability
Gray codes with optimized run lengths.
Digital Communications with Space Applications.
Efficient Algorithms for Listing Combinatorial Structures.
Updating the hamiltonian problem - a survey
Cayley digraphs and (1
Pulse code communications.
Generation of rosary permutations expressed in hamiltonian circuits.
Permutations by interchanges.
Origins on the binary code.
On the tree graph of a matroid.
On the de Bruijn torus problem.
An efficient implementation of the Eades
Universal Cycles: On Beyond de Bruijn.
The antipodal layers problem.
Hamilton cycles in regular 2-connected graphs
Generation of permutations by adjacent transpositions.
Personal communication.
Combinatorial Gray codes.
A Gray code for set partitions.
Sequential generation of arrangements by means of a basis of transpositions.
A Gray code for compositions.
Solution of some multi-dimensional lattice path parity difference recurrence relations
Generating permutations of a bag by interchanges.
A structured program to generate all topological sorting arrangements.
Explicit matchings in the middle levels of the Boolean lattice.
On the generation of all topological sortings.
The machine tools of combinatorics.
Permutation by adjacent interchanges.
Hard enumeration problems in geometry and combinatorics.
Generating bracelets
A Gray code based ordering for documents on shelves: Classification for browsing and retrieval.
Problem 11.
A technique for generating Gray codes.
The rotation graph of binary trees is hamiltonian.
Gray code generation for MPSK signals.
A problem in arrangements.
Problem 1186.
Electronic mail communication (via J.
Combinatorial Algorithms for Computers and Calculators.
Binary tree Gray codes.
Generating the linear extensions of certain posets by adjacent transpositions.
Gray codes from antimatroids.
Generating linear extensions fast.
The prism of the acyclic orientation graph is hamiltonian.
A Gray code variant: sequencing permutations via fixed points.
Combinatorial Gray code and a generalization of the Johnson- Trotter algorithm to contiguous k-cycles
Problem 1186.
A new method of generating Hamilton cycles on the n-cube
A campanological problem in group theory.
Counting sequences.
Note on the generation of rosary permutations.
Generating binary trees lexicographically.
Data compression and Gray-code sorting
Generating binary trees by transpositions.
Generating all permutations by graphical derangements.
Hamilton cycles which extend transposition matchings in Cayley graphs of Sn
Gray codes for set partitions and restricted growth tails.
A Gray code for combinations of a multiset.
Generating necklaces.
Gray code enumeration of families of integer partitions.
private communication.
Adjacent interchange generation of combinations.
Research problem 90.
Generating linear extensions of posets by transpositions.
Simple combinatorial Gray codes constructed by reversing sublists.
Combinatorial Generation.
Generating neckties: algorithms
Gray code sequences of partitions.
Generating permutations with k-differences
Long cycles in the middle two levels of the Boolean lattice.
Permutation generation methods.
Hamiltonian bipartite graphs.
The number of linear extensions of subset ordering.
Generating all permutations by graphical transpositions.
Combinatorial Gray codes and efficient generation.
Generating the acyclic orientations of a graph.
Two new Gray codes for acyclic orientations.
Gray codes for A-free strings
Gray code results for acyclic orientations.
Finding parity difference by involutions.
Hamilton paths in graphs of linear extensions for unions of posets.
Rotation distance
Constructive Combinatorics.
Monotone Gray codes and the middle two levels problem.
A note on the connectivity of acyclic orientations graphs
Generation of permutations by graphical exchanges.
An algorithm to generate all topological sorting arrangements.
A technique for generating specialized Gray codes.
A note on Gray codes for neckties
A Gray Code for Necklaces of Fixed Density.
Generation of permutations by transposition.
Generating linear extensions by adjacent transpositions.
A survey - hamiltonian cycles in Cayley graphs
Ringing the changes.
Combinatorics for Computer Science.
Generalized Gray codes.
Combinatorial Algorithms: An Update.
A new algorithm for generating necklaces.
Gray codes for necklaces of fixed density
Construction of uniform Gray codes.
Binary ring sequences.
Ranking and unranking algorithms for trees and other combinatorial ob- jects
Lexicographic generation of ordered trees.
Generating binary trees using rotations.
--TR
--CTR
Elizabeth L. Wilmer , Michael D. Ernst, Graphs induced by Gray codes, Discrete Mathematics, v.257 n.2-3, p.585-598, 28 November
Colin Murray , Carsten Friedrich, Visualisation of satisfiability using the logic engine, proceedings of the 2005 Asia-Pacific symposium on Information visualisation, p.147-152, January 01, 2005, Sydney, Australia
Khaled A. S. Abdel-Ghaffar, Maximum number of edges joining vertices on a cube, Information Processing Letters, v.87 n.2, p.95-99, 31 July
Tadao Takaoka , Stephen Violich, Combinatorial generation by fusing loopless algorithms, Proceedings of the 12th Computing: The Australasian Theroy Symposium, p.69-77, January 16-19, 2006, Hobart, Australia
V. V. Kuliamin, Test Sequence Construction Using Minimum Information on the Tested System, Programming and Computing Software, v.31 n.6, p.301-309, November 2005
Vincent Vajnovszki, A loopless algorithm for generating the permutations of a multiset, Theoretical Computer Science, v.307 n.2, p.415-431, 7 October
James Korsh , Paul Lafollette, A loopless Gray code for rooted trees, ACM Transactions on Algorithms (TALG), v.2 n.2, p.135-152, April 2006
Gerard J. Chang , Sen-Peng Eu , Chung-Heng Yeh, On the (n,t)-antipodal Gray codes, Theoretical Computer Science, v.374 n.1-3, p.82-90, April, 2007
Kenneth A. Ross, Selection conditions in main memory, ACM Transactions on Database Systems (TODS), v.29 n.1, p.132-161, March 2004
Jean Pallo, Generating binary trees by Glivenko classes on Tamari lattices, Information Processing Letters, v.85 n.5, p.235-238, March | cayley graphs;hamilton cycles;permutations;combinations;linear extensions;vertex-transitive graphs;de Bruijn sequences;boolean lattice;set partitions;acyclic orientations;gray codes;restricted growth functions;catalan families;integer partitions;necklaces;compositions;binary strings |
273702 | Fully Discrete Finite Element Analysis of Multiphase Flow in Groundwater Hydrology. | This paper deals with the development and analysis of a fully discrete finite element method for a nonlinear differential system for describing an air-water system in groundwater hydrology. The nonlinear system is written in a fractional flow formulation, i.e., in terms of a saturation and a global pressure. The saturation equation is approximated by a finite element method, while the pressure equation is treated by a mixed finite element method. The analysis is carried out first for the case where the capillary diffusion coefficient is assumed to be uniformly positive, and is then extended to a degenerate case where the diffusion coefficient can be zero. It is shown that error estimates of optimal order in the $L^2$-norm and almost optimal order in the $L^\infty$-norm can be obtained in the nondegenerate case. In the degenerate case we consider a regularization of the saturation equation by perturbing the diffusion coefficient. The norm of error estimates depends on the severity of the degeneracy in diffusivity, with almost optimal order convergence for nonsevere degeneracy. Implementation of the fractional flow formulation with various nonhomogeneous boundary conditions is also discussed. Results of numerical experiments using the present approach for modeling groundwater flow in porous media are reported. | Introduction
. In this paper we develop and analyze a fully-discrete finite element
procedure for solving the flow equations for an air-water system in groundwater
hydrology,
∂(φ ρ_α s_α)/∂t + ∇·(ρ_α u_α) = f_α,   α = w, a,   (1.1)
u_α = −(k k_{rα}/μ_α)(∇p_α − ρ_α g),   α = w, a,   (1.2)
where w and a denote the water and air phases, Ω is the porous medium, φ and k are the porosity and absolute
permeability of the porous system, ρ_α, s_α, p_α, u_α, and μ_α are the density, saturation,
pressure, volumetric velocity, and viscosity of the α-phase, f_α is the source/sink term,
k_{rα} is the relative permeability of the α-phase, and g is the gravitational, downward-
pointing, constant vector.
Flow simulation in groundwater reservoirs has been extensively studied in past
years (see, e.g., [26], [28] and the bibliographies therein). However, in most previous
works the air-phase equation is eliminated by the assumption that the air-phase
remains essentially at atmospheric pressure. This assumption, as mentioned in [13],
is reasonable in most cases because the mobility of air is much larger than that of
water, due to the viscosity difference between the two fluids. When the air-phase
pressure is assumed constant, the air-phase mass balance equation can be eliminated
and thus only the water-phase equation remains. Namely, the Richards equation is
used to model the movement of water in groundwater reservoirs. However, it provides
* Department of Mathematics and the Institute for Scientific Computation, Texas A&M Uni-
versity, College Station, Partly supported by the Department of Energy under contract
DE-ACOS-840R21400. email: zchen@isc.tamu.edu, ewing@ewing.tamu.edu.
no information on the motion of air. If contaminant transport is the main concern
and the contaminant can be transported in the air-phase, the air-phase needs to be
included to determine the advective component of air-phase contaminant transport
[7]. Furthermore, the dynamic interaction between the air and water phases is also
important in vapor extraction systems. Hence in these cases the coupled system of
nonlinear equations for the air-water system must be solved. It is the purpose of this
paper to develop and analyze a finite element procedure for approximating
the solution of the coupled system of nonlinear equations for the air-water system in
groundwater hydrology.
In petroleum reservoir simulation the governing equations that describe fluid flow
are usually written in a fractional flow formulation, i.e., in terms of a saturation and
a global pressure [1], [8]. The main reason for this fractional flow approach is that
efficient numerical methods can be devised to take advantage of many physical properties
inherent in the flow equations. However, this pressure-saturation formulation
has not yet achieved application in groundwater hydrology. In petroleum reservoirs
total flux type boundary conditions are conveniently imposed and often used, but in
groundwater reservoirs boundary conditions are very complicated. The most commonly
encountered boundary conditions for a groundwater reservoir are of first-type
(Dirichlet), second-type (Neumann), third-type (mixed), and "well" type [8]. The
problem of incorporating these nonhomogeneous boundary conditions into the fractional
flow formulation has been a challenge [12]. In particular, in using the fractional
flow approach a difficulty arises when the Dirichlet boundary condition is imposed for
one phase (e.g. air) and the Neumann type is used for another phase (e.g. water).
This paper follows the fractional flow formulation. Based on this approach, we
develop a fully-discrete finite element procedure for the saturation and pressure equa-
tions. The saturation equation is approximated by a Galerkin finite element method,
while the pressure equation is treated by a mixed finite element method. It is well
known that the physical transport dominates the diffusive effects in incompressible
flow in petroleum reservoirs. In the air-water system studied here, the transport
again dominates the entire process. Hence it is important to obtain good approximate
velocities. This motivates the use of the parabolic mixed method, as in [17], in the
computation of the pressure and the velocity. Also, due to its convection-dominated
feature, more efficient approximate procedures should be used to solve the saturation
equation. However, since this is the first time to carry out an analysis for the
present problem, it is of some importance to establish that the standard finite element
method for this model converges at an asymptotically optimal rate for smooth
problems. Characteristic Petrov-Galerkin methods based on operator splitting [20],
transport diffusion methods [32], and other characteristic based methods will be considered
in forthcoming papers.
The main part of this paper deals with an asymptotical analysis for the fully
discrete finite element method for the first-type and second-type boundary conditions
p_α = p_{αD} on Γ_1 × J,   (1.3)
u_α · ν = d_α on Γ_2 × J,   (1.4)
where p_{αD} and d_α are given functions, Γ_1 and Γ_2
being disjoint with ∂Ω = Γ̄_1 ∪ Γ̄_2, and
ν is the outer unit normal to ∂Ω. We point out that petroleum reservoir simulation
is different from groundwater reservoir simulation. The flow of two incompressible
fluids (e.g. water and oil) is usually considered in the former case, while the latter
system consists of the air and water phases. Consequently, the finite element analyses
for these two cases differ. As shown here, compressibility and combination of the
boundary conditions (1.3) and (1.4) complicate error analyses. Indeed, if optimality is
to be preserved for the finite element method, the standard error argument just fails
unless we work with higher order time-differentiated forms of error equations, which
require properly scaling initial conditions. Also, we mention that a slightly compressible
miscible displacement problem was treated in [14], [18], [23], [33]; however, only
the single phase was handled, gravitational terms were omitted, and total flux type
boundary conditions were assumed. Furthermore, the so-called "quadratic" terms in
velocity were neglected. The dropping of these quadratic terms may not be valid near
wells, and so the miscible displacement model was oversimplified both physically and
mathematically. The analysis of this paper includes these terms. Finally, only the
Raviart-Thomas mixed finite element spaces [34] have been considered in these earlier
papers. We are here able to discuss all existing mixed spaces.
The error analysis is given first for the case where the capillary diffusion coefficient
is assumed to be uniformly positive. In this case, we show error estimates of optimal
order in the L 2 -norm and almost optimal order in the L 1 -norm. Then we treat a
degenerate case where the diffusion coefficient vanishes for two values of saturation.
In the degenerate case we consider a regularization of the saturation equation by
perturbing the diffusion coefficient to obtain a nondegenerate problem with smooth
solutions. It is shown that the regularized solutions converge to the original solution
as the perturbation parameter goes to zero with specific convergence rates given. The
norm of error estimates depends on the severity of the degeneracy in diffusivity, with
almost optimal order convergence for the degeneracy under consideration.
The rest of this paper is concerned with implementation of the fractional flow
formulation with various nonhomogeneous boundary conditions. We show that all
the commonly encountered boundary conditions can be incorporated in the fractional
flow formulation. Normally the "global" boundary conditions are highly nonlinear
functions of the physical boundary conditions for the original two flow phases. This
means that we have to iterate on these global boundary conditions as part of the solution
process. We here develop a general solution approach to handle these boundary
conditions. Results of numerical experiments using the present approach for modeling
groundwater flow are reported here.
The paper is organized as follows. In §2, we define a fractional flow formulation for
equations (1.1)-(1.4). Then, in §3 we introduce weak forms of the pressure-saturation
equations, and in §4 a fully-discrete finite element procedure for solving these equa-
tions. An asymptotical analysis is given in §5 and §6 for the nondegenerate case
and the degenerate case, respectively. Finally, in §7 we discuss implementation of
various nonhomogeneous boundary conditions and present the results of numerical
experiments.
2. A pressure-saturation formulation. In addition to (1.1)-(1.4), we impose
the customary property that the fluid fills the volume:
s_w + s_a = 1,   (2.1)
and define the capillary pressure function p_c by
p_c(s_w) = p_a − p_w.   (2.2)
Introduce the phase mobilities
λ_α = k_{rα}/μ_α,   α = w, a,
and the total mobility
λ = λ_w + λ_a.
To devise our numerical method, it is important to choose a reasonable set of dependent
variables. Since the capillary pressure p_c(s_w) is unbounded when s_w is equal to the water residual saturation [3],
p_w cannot generally be expected to lie in any Sobolev space. Air being a continuous
phase implies that p a is well behaved. Hence, as mentioned in the introduction, we
define the global pressure [1] with s = s_w:
p = p_a − ∫_{s_c}^{s} (λ_w/λ)(ξ) (dp_c/dξ)(ξ) dξ = p_a − ∫_{p_c(s_c)}^{p_c(s)} (λ_w/λ)(p_c^{−1}(ξ)) dξ.   (2.3)
The integral in the right-hand side of (2.3) is well defined [1], [8].
As usual, assume that ae ff depends on p [8]. Then we define the total velocity
where
g:
Now it can be easily seen that
(2.5a)
where q
Consequently,
Equations (1.1) and (1.2) can be manipulated using (2.1)-(2.6) to have the pressure
equation
@t
a
ff=wae ff
OEs ff
@ae ff
@t
and the saturation equation
OE @sw
@t
\Gammas w
@t
ae w
OEs w
@t
Terms of the form u_α · ∇ρ_α, α = w, a, are neglected in compressible miscible
displacement problems [14], [18], [23], [33]. The dropping of these terms may not
be valid near wells. Also, if they are neglected, the model may not be qualitatively
equivalent to the usual formulation of two phase flow. Hence we keep them in this
paper. However, the water phase is usually assumed to be incompressible. With the
incompressibility of the water phase and the following notation:
c(s;
ae a
dae a
dp
ds
ae a
dae a
dp
~
ae w
ae a
dae a
dp
ae)
f(s; p) =ae a
dae a
dp k-w q a (rp c \Gamma ~
f a
ae a
ae w
@t
equations (2.7) and (2.8) can be now written as
c(s; p) @p
@t
OE @s
@t
@t
The boundary conditions for the pressure-saturation equations become
(D(s)rs
where s D and pD are the transforms of pwD and paD by (2.2) and (2.3), and ~
The model is completed by specifying the initial conditions
The later analysis for the nondegenerate case in §5 is given under a number of
assumptions. First, the solution is assumed smooth; i.e., the external source terms
are smoothly distributed, the coefficients are smooth, the boundary and initial data
satisfy the compatibility condition, and the domain has at least the regularity required
for a standard elliptic problem to have H
2(\Omega\Gamma671/64748/ y and more if error estimates
of order bigger than one are required. Second, the coefficients a(s), OE, and c(s; p) are
assumed bounded below positively:
Finally, the capillary diffusion coefficient D(s) is assumed to satisfy
While the phase mobilities can be zero, the total mobility is always positive [31].
The assumptions (2.18) and (2.19) are physically reasonable. Also, the present analysis
obviously applies to the incompressible case where c(s; In this case, the analysis
is simpler since we have an elliptic pressure equation instead of the parabolic equation
(2.9). Thus we assume condition (2.20) for the compressible case under consideration.
Next, although the reasonableness of the assumption (2.21) is discussed in [16], the
diffusion coefficient D(s) can be zero in reality. It is for this reason that section six
is devoted to consideration of the case where the solution is not required smooth and
the assumption (2.21) is removed. As a final remark, we mention that for the case
where point sources and sinks occur in a porous medium, an argument was given in
[22] for the incompressible miscible displacement problem and can be extended to the
present case.
3. Weak forms. To handle the difficulty associated with the inhomogeneous
Neumann boundary condition (2.13) in the analysis of the mixed finite element method,
let d be such that d \Delta
d and introduce the change of variable equations
(2.9)-(2.11). Then the homogeneous Neumann boundary condition holds for ~ u. Thus,
without loss of generality, we assume that ~
To be compatible, we also require
that this homogeneous condition holds when
In the two-dimensional case, let
while it is accordingly defined in the three-dimensional case as follows:
Also, set
The weak form of (2.9)-(2.11) on which the finite element procedure is based is given
below. Let is the time interval of interest. The mixed formulation
for the pressure is defined by seeking a pair of maps
2(\Omega\Gamma such that
(3.1a)
(c(s; p) @p
@t
the inner products (\Delta; \Delta) are to be interpreted to be in L
or (L 2
(\Omega\Gamma4 d , as appropriate, and h\Delta; \Deltai
denotes the duality between H 1=2
H \Gamma1=2 (\Gamma 1 ). The weak form for the saturation s : J
OE
@s
@t
where the boundary condition (2.15) is used. Finally, to treat the nonzero initial
conditions imposed on s and p in (2.16) and (2.17), we introduce the following transformations
7in (3.1) and (3.2):
where
we have zero initial conditions for s, p, and u. Hence, without loss
of generality again, we assume that
The reason for introducing these transformations to have zero initial conditions is to
validate equation (5.15) later.
4. Fully-discrete finite element procedures.
Let\Omega be a polygonal domain.
For partitions into ele-
ments, say, simplexes, rectangular parallelepipeds, and/or prisms. In both partitions,
we also need that adjacent elements completely share their common edge or face. Let
be a standard C 0 -finite element space associated with T h such
that
where hK is the norm in the Sobolev space W k;q (K)
(we omit K when K
=\Omega and kvk
(\Omega\Gamma be the Raviart-Thomas-Nedelec [34], [29], the Brezzi-Douglas-Fortin-Marini
[5], the Brezzi-Douglas-Marini [6] (if 2), the Brezzi-Douglas-Dur'an-Fortin [4] (if
or the Chen-Douglas [11] mixed finite element space associated with the
partition T hp of index such that the approximation properties below are satisfied:
kr
where h p;K for the first two spaces,
for the second two spaces, and both cases are included in the last space.
Finally, let ft n g nT
n=0 be a quasi-uniform partition of J with t
set \Deltat
We are now in a position to introduce our finite element procedure.
The fully-discrete finite element method is given as follows. The approximation
procedure for the pressure is defined by the mixed method for a pair of maps fu n
(ff(s
(4.5a)
(c(s
(4.5b)
and the finite element method for the saturation is given for s n
h )rs n
@t
The initial conditions satisfy
After startup, for equations (4.5) and (4.6) are computed as
follows. First, using s
h , and (4.5), evaluate fu n
g. Since it is linear, (4.5)
has a unique solution for each n [10], [27]. Next, using s
h g, and (4.6),
calculate s n
h . Again, (4.6) has a unique solution for \Deltat n sufficiently small for each n
[39].
We end this section with a remark. While the backward Euler scheme is used
in (4.5b) and (4.6), the Crank-Nicolson scheme and more accurate time stepping
procedures (see, e.g., [21]) can be used. The present analysis applies to these schemes.
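To indicate the flavor of one time step, the sketch below applies backward Euler with continuous piecewise-linear elements to a drastically simplified one-dimensional analogue of the saturation equation (no convection or gravity, a lumped mass matrix, homogeneous Neumann boundary conditions, and the diffusion coefficient lagged at the previous time level). It is not the scheme (4.5)-(4.6); all names and parameter values are illustrative.

import numpy as np

def backward_euler_step(s_old, dt, h, phi, D, q):
    # One backward Euler step for phi * ds/dt = d/dx( D(s) ds/dx ) + q on [0, 1].
    n = len(s_old)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n - 1):                       # element [x_e, x_{e+1}]
        De = D(0.5 * (s_old[e] + s_old[e + 1]))  # lagged diffusion coefficient
        for i in (e, e + 1):                     # lumped mass and load terms
            A[i, i] += phi * h / (2.0 * dt)
            b[i] += phi * h / (2.0 * dt) * s_old[i] + 0.5 * h * q(i * h)
        A[e, e] += De / h
        A[e, e + 1] -= De / h
        A[e + 1, e] -= De / h
        A[e + 1, e + 1] += De / h
    return np.linalg.solve(A, b)

# usage: advance a smooth profile ten steps with a perturbed degenerate coefficient
if __name__ == "__main__":
    n = 41
    h, dt, phi = 1.0 / (n - 1), 1.0e-3, 0.3
    x = np.linspace(0.0, 1.0, n)
    s = 0.5 + 0.4 * np.cos(np.pi * x)
    D_eps = lambda v: max(v * (1.0 - v), 1.0e-6)   # D_eps(s) = max{D(s), eps}
    q = lambda _x: 0.0
    for _ in range(10):
        s = backward_euler_step(s, dt, h, phi, D_eps, q)
    print(float(s.min()), float(s.max()))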
5. An error analysis for the fully-discrete scheme. In this section we give a
convergence analysis for the finite element procedure (4.5) and (4.6) under assumption
(2.21). As usual, it is convenient to use an elliptic projection of the solution of
into the finite element space M h . Let ~ defined by
Then it follows from standard results of the finite element method [15], [30], [37] that
(5.3a)
1. The same result applies to the time-
differentiated forms of (5.1) [40]:
@t
@t
@s
@t
As for the analysis of the mixed finite element method, we use the the following
two projections instead of the elliptic projections introduced in [14] and [18]. So the
present analysis is different from and in fact simpler than those in [14] and [18]. Each
of our mixed finite element spaces [4]-[6], [11], [29], [34] has the property that there
are projection operators \Pi
such that
kr
and (see, e.g., [9], [19])
(r
Note that, by (3.3) and (4.7),
Finally, we prove some bounds of the projections ~ s and ~
p. Let be the
interpolant of s in M h . Then we see, by (4.1), (5.3b), the approximation property of
s, and an inverse inequality in M h , that
ks
ks
ks \Gamma sk 0;1
ks
where fl is given as in (5.3b). This implies that k~sk 1;1 is bounded for sufficiently
smooth solutions since k - 1. The same argument applies to k@~s=@tk 1;1 . Next, note
that, by the approximation property of the projection P h [27],
These bounds on ~
are used below.
We are now ready to prove some results. Below " is a generic positive constant
as small as we please.
5.1. Analysis of the mixed method. We first analyze the mixed method
(4.5). We set The following error
equation is obtained by subtracting (4.5) from (3.1) at applying (5.8) and
(ff(s
@t
@t
Below C i indicates a generic constant with the given dependencies.
Lemma 5.1. Let (u; p) and solve (3.1) and (4.5), respectively. Then
ks
@t
@t
@t
Proof. Set add the resulting equations at
use (3.3), (4.7), and (5.12) to see that
where
@t
@t
Then (5.15) can be easily seen.
Lemma 5.2. Let (u; p) and and (4.5), respectively. Then
ae
@t
ks
\Theta k@'
@t
\Psi \Deltat n
oe
@t
@t
@t
Proof. Difference equations (5.13) and (5.14) with respect to n, set
in the resulting equations, divide by \Deltat n , and add to obtain
where
\Deltat n
\Deltat n
\Deltat n
\Deltat n
@t
@t
@t
@t
\Deltat n
\Deltat n
\Deltat n
\Deltat n
\Deltat n
Observe that the left-hand side of (5.16) is larger than the quantity
(5.17)2\Deltat n
where
We estimate the new term T n
2 in detail. Other terms can be bounded by a simpler
argument. To estimate T n
2 , we write
\Gamma\Phi
\Deltat n
\Deltat n
\Deltat n
Note that
[A(s
@s
(bs
@A
@s
(bs
@A
where
and similar inequalities hold for b
s
Consequently, with -
see that
@s
(bs
@p@s
(p
\Deltat
(bp
\Deltat
so that
ks n\Gamma2
where
and an analogous inequality holds for b s
Also, we see that
[A(s
\Phi @A
@s
(p
which implies that
ks
Next, it can be easily seen that
ks
Finally, since
we find that
Hence T n
2 can be bounded in terms of T n
Other terms are bounded as
follows:
ks n\Gamma2
ks n\Gamma2
ks
ks
@t
@t
ks
ks n\Gamma2
can be bounded as in (5.19), e.g.,
kbs
ks
ks
ks
ks
apply these inequalities and (5.17)-(5.20), multiply (5.16) by \Deltat n , sum n, and
properly arrange terms to complete the proof of the lemma.
The error equations (5.13) and (5.14) are usually exploited to derive error estimates
in the parabolic mixed finite element method [18], [27]. To handle the difficulty
arising from the combination of the Dirichlet boundary condition (1.3) and the non-linearity
of the differential system (2.9)-(2.11), we must use their time-differentiated
forms, as mentioned before. Also, the three terms T n
care of the quadratic
terms in the velocities, which require more regularity on u than those without
these quadratic terms, as seen from Lemma 5.2.
5.2. Analysis of the saturation equation. We now turn to analyzing the
finite element method (4.6).
Lemma 5.3. Let s and s h solve (3.2) and (4.6), respectively. Then
ae
ks
@t
ks
0;1 \Deltat n
oe
@t
@t
Proof. Subtract (4.6) from (3.2) at use (5.1) at set the test
function to see that
where
@t
@t
The left-hand side of (5.21) is bigger than the quantity
\Gamma2\Deltat n (D(s n\Gamma2
defined by
and is bounded by
Next, it can be easily seen that
@t
To avoid an apparent loss of a factor h in B n
use summation by
parts on these items. We work on B n
3 in detail, and other quantities can be estimated
similarly. Applying summation by parts in n and the fact that - we see that
\Psi
so that, using the same argument as for (5.18),
3 \Deltat n
ks
ks
oe
where kbs n
can be estimated as in (5.20). The term
7 \Deltat n has the
same bound as in (5.25). Also, we find that
4 \Deltat n
oe
and
5 \Deltat n
ks
ks
oe
multiply (5.21) by \Deltat n , sum n, and use (5.22)-(5.27) to complete the proof of
the lemma.
5.3. L^2 estimates. We now prove the main result in this section. Define
K2Thp
@t
K2Thp
@t
@s
@t
Theorem 5.4. Let (u; p; s) and
respectively. Then, if the parameters \Deltat, h p , and h satisfy
we have
ks
@t
@t
Proof. Take a 1)-multiple of the inequality in Lemma 5.3, add the resulting
inequality and the inequality in Lemma 5.2, and use (5.3)-(5.7), (5.15), and the
extension of the solution for t - 0 to obtain
ae
\Theta (k@-
oe
where In deriving (5.29), we required that the " appearing in Lemma
5.3 be sufficiently small that (C 1 increases C 2 , but not C 1 . Observe
that, by (5.12),
The same result holds for - fl and fi fl . Combine (5.29), (5.30), and an inverse inequality
to see that
ae
\Theta (k@-
oe
We now make the induction hypothesis that
. Note that, by (5.12), (5.32) holds trivially for
(5.32), (5.31) becomes
ae
\Theta (k@-
oe
Using (5.28), we choose the discretization parameters so small that
Then it follows from (5.33) that
ae
oe
which, together with Gronwall's inequality, implies that
where
for \Deltat not too large. Consequently, the induction argument is completed and the
theorem follows.
We remark that, if h and h p are of the same order as they tend to zero, then
since k - k + 1. Since k - 1,
3:
Also, if k - 2, we see that
3:
Thus, for (5.28) to be satisfied, we assume that k - 2. This excludes the mixed
finite element spaces of lowest order, i.e., k 1. The lowest order case has to be
treated using different techniques. If the nonlinear coefficients ff(s) and c(s; p) in (4.5)
are projected into the finite element space W h , the technique developed in [10] can be
used to handle the lowest order case. We shall not pursue this here.
5.4. L^∞ estimates. The main objective of this paper is to establish the
L^2 estimates given in Theorem 5.4. For completeness, we end this section with
a statement of L^∞-estimates for the errors in the two-dimensional
case.
Theorem 5.5. Assume that (p; s) and (p h ; s h ) satisfy (3.1), (3.2) and (4.5),
(4.6), respectively, and the parameters h p and h satisfy (5.28). Then
(\Omega\Gamma/
ks
Proof. First, it follows from the approximation property of the projection P h [27]
that
Also, from [27, Lemma 1.2] and (5.13), we see that
so that, by Theorem 5.4,
This, together with (5.37), implies (5.35). Finally, apply the embedding inequality
(5.3b), and (5.34) to obtain (5.36).
6. Finite elements for a degenerate problem. In this section we consider
a degenerate case where the diffusion coefficient D(s) can be zero. Since the pressure
equation is the same as before, we here focus on the saturation equation. For simplicity
we neglect gravity. Then the saturation equation (2.11) can be written as
@s
@t
@t
2\Omega \Theta J:
For technical reasons we only consider the Neumann boundary condition (2.15):
@\Omega \Theta J;
and the initial condition is given by
We impose the following conditions on the degeneracy
of D(s):
where the fi i are positive constants and ff j and - satisfy the conditions:
2:
Difficulties arise when trying to derive error estimates for the approximate solution
of (6.1) and (6.2) with D(s) satisfying the condition (6.3). To get around this
problem, we consider the perturbed diffusion coefficient D_ε(s) defined by [13], [24],
[35], [38]:
D_ε(s) = max{D(s), ε}. Since the coefficient D_ε(s) is bounded away from
zero, the previous error analysis applies to the perturbed problem:
OE
@t
@t
2\Omega \Theta J;
(6.4a)
(D
@\Omega \Theta J;
(6.4c)
We now state a result on the convergence of s - to s as - tends to zero. Its proof
is given in [24] for the case where dw j 0 and the right-hand side of (6.1) is zero, and
can be easily extended to the present case.
Theorem 6.1. Assume that D(s) satisfies (6.3) and there is a constant C ? 0
such that
where
Z sD(-)d-:
Then there is C independent of -, s, and - such that
As shown in [24], the requirement (6.5) is reasonable. We now consider a fully-
discrete finite element method for (6.4). Let M h be the standard C 0 piecewise linear
polynomial space associated with T h ; due to the roughness of the solution to (6.1) and
(6.2), no improvements in the asymptotic convergence rates result from taking higher
order finite element spaces. Also, we extend the domain of D - and q w as follows:
ae D - (1) if -
and
Now the finite element solution s n
to (6.4) is given by
(6.7a)
where P h is the L 2 -projection onto M h . The following theorem states the convergence
of s h to s. For (6.8) below to be satisfied, we see from (6.6) that the perturbation
parameter - need to satisfy the relation
Theorem 6.2. Let s and s h solve (6.1), (6.2) and (6.7), respectively, and let the
hypotheses of Theorem 6.1 be satisfied. Then there is C independent of -, s, and -
such that
The proof can be carried out as in [25], [35], and [38]; we omit the details.
7. Simulation with various boundary conditions. Let
∂Ω be the union of four
disjoint regions Γ_1, Γ_2, Γ_3, and Γ_4. As
mentioned in the introduction, the most commonly encountered boundary conditions
for the two-pressure equations are of first-type, second-type, third-type, and well type.
Then we consider for
(7.3a)
(7.4a)
are given functions, d j is an arbitrary scaling constant,
and - is the outer unit normal to @ Note that \Gamma 1 is of the first type, \Gamma 2 is of the
third type (it reduces to the second type as - ff j 0), \Gamma 3 is of the well type, and on \Gamma 4
we have the Dirichlet condition for the air phase and the Neumann condition for the
water phase. Let \Gamma Then the
global boundary conditions for the pressure-saturation equations (2.9)-(2.11) become
(7.7a)
where pD and s D are the transforms of pwD and paD by (2.2) and (2.3), and
Z pc (s)q a
c (-)
Z pc (s)q a
c (-)
Z pc (s)q w
c (-)
We now incorporate the boundary conditions (7.5)-(7.10) in the finite element
scheme given in (4.5) and (4.6). The constraint V h ae V says that the normal components
of the members of V h are continuous across the interior boundaries in T hp .
Following [2], [9], we relax this constraint on V h by introducing Lagrange multipliers
over interior boundaries. Since the mixed space V h is finite dimensional and defined
locally on each element K in T hp , let V h . Then we define
~
for each K 2 T hp
ae
e
for each j;
oe
and W h and M h are given as before. The mixed finite element solution of the pressure
equation is fu n
(c(s
(ff(s
and the finite element method for the saturation is given for s n
satisfying
h )rs n
@t
. The computation of these equations can be carried out as in
(4.5) and (4.6). Note that the last equation in the unconstrained mixed formulation
above enforces the continuity requirement on u h , so in fact . It is well known
[2], [9] that the linear system arising from this unconstrained mixed formulation leads
to a symmetric, positive definite system for the Lagrange multipliers, which can be
easily solved. Also, the introduction of the Lagrange multipliers makes it easier to
incorporate the boundary conditions (7.5)-(7.10).
We now present a numerical example. The relative permeability functions are
taken as follows:
where s rw and s ra are the irreducible saturations of the water and air phases, respec-
tively. The capillary pressure function is of the form
where fl and \Theta are functions of the irreducible saturations. The water and air viscosities
and densities are set to be 1cP and 0:8cP , and 100kg=m 3 and 1:3kg=m 3 ,
respectively. The permeability rate is two-dimensional domain of 4m
width by 1m depth is simulated. Finally, the boundary of the domain is divided into
the following segments:
A uniform partition
of\Omega into rectangles with \Deltay is taken, and the time
step \Deltat is required to satisfy (5.28). The Raviart-Thomas space of lowest-order over
rectangles is chosen. Tables 1 and 2 describe the errors and convergence orders for the
pressure and saturation at time respectively. Experiments at other times
and on finer meshes are also carried out; similar results are observed and not reported
here.
Table
1. Convergence of p h at
Table
2. Convergence of s h at
From
Table
1, we see that the scheme is first-order accurate both in L 2 and L 1
norms for the pressure, i.e., optimal order. Table 2 shows that the scheme is almost
optimal order for the saturation. Thus the numerical experiments in the two tables
are in agreement with our earlier analytic results.
--R
On the solvability of boundary value problems for degenerate two-phase porous flow equations
Mixed and nonconforming finite element methods: implementation
Dynamics of Fluids in Porous Media
one dimensional simulation and air phase velocities
Mathematical Models and Finite Elements for Reservoir Simula- tion
Analysis of mixed methods using conforming and nonconforming finite element methods
Multiphase flow simulation with various boundary conditions
Mixed finite element methods for compressible miscible displacement in porous media
The Finite Element Method for Elliptic Problems
The approximation of the pressure by a mixed method in the simulation of miscible displacement
Characteristic Petrov-Galerkin subdomain methods for two phase immiscible flow
Efficient time-stepping methods for miscible displacement problems in porous media
Galerkin methods for miscible displacement problems with point sources and sinks-unit mobility ratio case
Timestepping along characteristics for a mixed finite element approximation for compressible flow of contamination from nuclear waste in porous media
A priori estimates and regularization for a class of porous medium equations
Fundamentals of Soil Physics
estimates for some mixed finite element methods for parabolic type problems
Mixed finite elements in
Fundamentals of Numerical Reservoir Simulation
On the transport-diffusion algorithm and its application to the Navier-Stokes equations
An implicit diffusive numerical procedure for a slightly compressible miscible displacement problem in porous media
A mixed finite element method for second order elliptic problems
Numerical Methods for flow through porous media I
Maximum norm stability and error estimates in parabolic finite element equations
Optimal L 1 estimates for the finite element method on irregular meshes
A near optimal order approximation to a class of two sided nonlinear degenerate parabolic partial differential equations
Galerkin Finite Element Methods for Parabolic Problems
A priori L 2 error estimates for Galerkin approximation to parabolic partial differential equations
--TR
--CTR
M. Afif , B. Amaziane, On convergence of finite volume schemes for one-dimensional two-phase flow in porous media, Journal of Computational and Applied Mathematics, v.145 n.1, p.31-48, 1 August 2002
E. Abreu , J. Douglas, Jr. , F. Furtado , D. Marchesin , F. Pereira, Three-phase immiscible displacement in heterogeneous petroleum reservoirs, Mathematics and Computers in Simulation, v.73 n.1, p.2-20, 6 November 2006
Z. Chen , R. E. Ewing, Degenerate Two-Phase Incompressible Flow IV: Local Refinement and Domain Decomposition, Journal of Scientific Computing, v.18 n.3, p.329-360, June | error estimate;air-water system;finite element;porous media;numerical experiments;mixed method;compressible flow;time discretization |
273705 | Convergence of a Multigrid Method for Elliptic Equations with Highly Oscillatory Coefficients. | Standard multigrid methods are not so effective for equations with highly oscillatory coefficients. New coarse grid operators based on homogenized operators are introduced to restore the fast convergence rate of multigrid methods. Finite difference approximations are used for the discretization of the equations. Convergence analysis is based on the homogenization theory. Proofs are given for a two-level multigrid method with the homogenized coarse grid operator for two classes of two-dimensional elliptic equations with Dirichlet boundary conditions. | Introduction
Consider the multigrid method arising from the finite difference approximations to elliptic
equations with highly oscillatory coefficients of the following type
@
a ffl
where a ffl
strictly positive, continuous and 1-periodic in
each component of y. Also, the operator L ffl is uniformly elliptic. That is, there exist two
positive constants q and Q independent of ffl, such that
for any - i is assumed to be very small, representing the length of the
oscillations. These equations have important practical applications, for example, in the
study of elasticity and heat conduction for composite materials. One major mathematical
technique to deal with these equations is homogenization theory. The theory associates the
original equation with its microstructure to some macrostructure effective equation that
does not have oscillatory coefficients [2]. By homogenization, as ffl gets small, the solution
of (1.1) will converge to the solution u(x) of the following homogenized equation,
are constants, given by the following expressions,
)dy:
Here - j (y) is 1-periodic in y and satisfies
@
@
a ik (y):
Also, the homogenized operator L - retains the ellipticity property of the operator L ffl .
Multigrid methods are usually not so effective when applied to (1.1). Standard construction
of coarse grid operators may produce operators with different properties than those
of the fine grid operators [1, 3, 12]. In order to restore the high efficiency of the multigrid
method, a new operator for the coarser grid operator is developed [5, 6]. This operator is
called a homogenized coarse grid operator, based on the homogenized form of the equa-
tion. For full multigrid or with more general coefficients, the homogenized operator can be
numerically calculated from the finer grids based on the local solution of the so called cell
problem [5]. For numerical examples on model problems and on the approximation of heat
conduction in composite materials [6].
One difficulty for these problems is that the smaller eigenvalues do not correspond to
very smooth eigenfunctions. It is thus not easy to represent these eigenfunctions on the
coarser grids. After classical smoothing iterations on the fine grid, we know that the high
frequency eigenmodes of the errors are reduced, and only the low frequency eigenmodes are
significant. Partially following [8], one may realize that the low frequency eigenmodes can
be approximated by the corresponding homogenized eigenmodes. This is the reason why
effective or homogenized operators are useful when defining the coarse grid operator.
In this paper, using homogenized coarse grid operators, the convergence of the two
level method applied to two classes of (1.1) with Dirichlet boundary conditions is analyzed.
In chapter 2, we consider the equation with coefficient oscillatory in x direction In
chapter 3, we consider the equation with coefficient oscillatory diagonally. We show that
as both ffl and h go to zeros, our two level multigrid method converges when the number
of smoothing iteration fl is large enough as a function of h. We also require the ratio h=ffl
not to belong to a small resonance set. More precisely the convergence is proved under the
following conditions:
ffl For the first case in chapter 2,
ffl For the second case in chapter 3,
if h belongs to the set S(ffl; h 0 ) of Diophantine number,
kh
In [4], Engquist
called it the convergence essentially independent of ffl.
The purpose of this paper is to present new analysis in order to give a theoretical
explanation of the computational results presented in [5, 6]. The bounds on fl given above
are overly pessimistic compared to the numerical experiments but the h dependence in fl
exists also in the computations [5, 6]. The effect of not requiring h 2 S is also seen in the
numerical tests [5, 9].
If the coarse grid operator is defined, i.e., by direct arithmetic averaging an eigenmode
analysis of the type given in section 2 and 3 produces the estimate
difference between the correctly homogenized coarse grid operators and other operators is
qualitatively consistent with the computational results of [5, 6]. The O(h \Gamma2 ) estimate means
that there is no multigird effect and the convergence is only produced by the smoothing
iterations.
The l 2 difference between the inverse of the analytic operator L ffl and that of the corresponding
homogenized operator L - is of the order O(ffl) [2]. This indicates that an eigenmode
analysis of the type used in this paper cannot give estimates better than
This is close to the estimate in one space dimension fl - Ch \Gamma6=5 ln h in [9].
In special cases, it is possible to design prolongation, restriction and coarse grid operators
under that the resulting method corresponds to a direct solver [7]. This type of
algorithm and methods based on special discretizations with built in a priori knowledge of
the oscillatory behavior is outside the scope of this paper.
Since in the sequel of the paper the following lemma is often cited, we introduce it here
first.
Lemma 1.1 [4] Suppose g(x; y) 2 C 3 ([0; 1] \Theta [0; 1]) and is 1-periodic in y. Let x
We always denote the domain [0; 1] \Theta [0; 1] by -
(0; 1) by \Omega\Gamma and
-\Omega =\Omega by @ We
discretize the domain by the same number of grid points N with equal step size
both in x- and y-directions. The step size h is chose to belong to the set of S(ffl; h 0 ). And,
the ratio of h to the wavelength ffl is fixed to be an strictly irrational number.
-\Omega h denotes
the set of grid points (ih; jh) 2 -
\Omega\Gamma\Omega h for (ih; jh) 2 \Omega\Gamma and
@\Omega h for (ih; jh) 2 @
present some constants, independent of ffl and h. D i
are standard forward and
backward finite difference operators in x direction; D j
are for y direction. k \Delta k h
represents the discrete L 2 \Gammanorm, indexed from 1 to N \Gamma 1.
Oscillation Along a Coordinate Direction
2.1 Model Equation
Consider a special case of (1.1), a two-dimensional elliptic problem with coefficients oscillatory
in x-direction only,
@
@x a ffl (x) @OE ffl
@
@y a ffl (x) @OE ffl
@y
where a ffl (x) is a strictly positive continuous function, and a ffl
the operator satisfies the property of (1.2). From (1.3), the corresponding homogenized
equation of (2.1) is:
\Gamma-
0 a(x)dx are the harmonic and arithmetical averages
respectively.
As ffl goes to zero, we know that the solution OE ffl of (2.1) converges to the solution OE of
(2.2).
Now, consider a corresponding discretized equation of (2.1),
where a
Denote the discretization of the homogenized operator \Gamma- @ 2
a @ 2
in (2.2) by
The operator of the two level method by using
the homogenized coarse grid operator [5, 6] can be expressed as
For simplicity, in the sequel of the paper, I H
h and I h
H always denote the weighting restriction
and bilinear interpolation respectively, and the smoothing operator S,
where ff is the inverse of the largest eigenvalue of L ffl;h , has order of h \Gamma2 . LH is taken to be
the corresponding homogenized operator L -;H .
2.2 Convergence Analysis
Instead of the operator M , we consider a simplified operator, denoted by M 1 ,
Theorem 2.1 If the ratio of h to ffl is fixed, h belongs to the set of Diophantine number
defined in (1.6), then there exist two constants C and ae 0 such that
whenever
Let's introduce some lemmate first, which are used in the proof of Theorem 2.1.
Lemma 2.1 Assume
Then, Z i is bounded, and satisfies
Proof. By Lemma 1.1, taking
Z 11
a(y)
Hence,
Z 11
a(y)
Z 11
a(y)
Z i is bounded for satisfies the following equation,
a
Proof. Directly applying Lemma 1.1, we can establish the result. 2
Lemma 2.3 Assume U ij satisfies
where
ij ) is a normalized eigenpair of L ffl;h . That is,
Then,
(D i
Proof. Multiplied by U ij ,
added by parts, (2.9) then follows.
For (2.10), note first for any grid function U ij , vanishing on
@\Omega h , we have
(D i
(D i
(D i
Multiply D i
both sides of (2.8), and take summation,
Then, we can establish
for some constant C. Analogously, we can get
(2.11), (2.12) and (2.13) complete the proof. 2
Lemma 2.4 Assume OE h
are defined in Lemma 2.3. Then,
Proof. Introduce the following discrete function
where Z i is defined in Lemma 2.1. Such G ij vanishes at boundary, i.e.,
By calculation, we have
where (i;
2\Omega h , and j i is defined in Lemma 2.2. Multiply G ij on both sides, and take
summation,
a
a
(D i
(D j
(D j
\Theta
(D i
By Lemma 2.3, we have
(D i
Hence, v u u u t
(D i
By Poincare inequality,
By Lemma 2.1 and Lemma 2.3,
Hence,
Meanwhile, the inequality tells us that
The proof of lemma is completed. 2
We are now able to show Theorem 2.1.
Proof of Theorem 2.1. Denote the eigenvalues of L ffl;h and \Delta h (Laplacian operator) by - ffl
respectively, where by dividing the set of eigenvalues into
two subsets, say f-
we can split the complete eigenspace of L ffl;h into two orthogonal subspaces. Namely, the
space of low frequency expanded by the eigenfunctions whose corresponding eigenvalues
belong to the first set, and that of high frequency expanded by the eigenfunctions whose
corresponding eigenvalues belong to the second set. By minimax principle of eigenvalues,
it is easy to see that
for some constants c and C.
For any normalized vector - such that
where
Thus,
In the following analysis, we consider (2.17) in two steps.
Step 1: Low frequency subspace.
I
-;h L ffl;h )OE ffl
By Lemma 2.4,
-;h L ffl;h )OE ffl
The corresponding eigenvalue of Laplacian operator \Delta h can be explicitly expressed as
By Taylor expansion, it follows that
Hence,
I
By the constraint \Sigma
I
Since the ratio of h to ffl is fixed to be an irrational number, h has the same order as ffl. We
have
I 1 - Chk 3
Therefore, in order to make I 1 ! 1, it's sufficient to have
Step 2: High frequency subspace.
I
-;h L 1ffl;h )k h kL2
-;h L2
I
r
k 2), we have
I
For I 2 - 1, it is sufficient to have
Combining (2.21) and (2.24), we have
The proof is completed. 2
Now, we are ready to show the main result.
Theorem 2.2 There exists constant C, the operator M defined by (2.5) satisfies
whenever h belongs to the set S(ffl; h 0 ) of Diophantine number, and
Before we carry out the proof, we need the following lemma. Let
Lemma 2.5 For some constant C,
Proof. Since L -;h and are the homogenized operators defined respectively on
fine and coarser grid with constant coefficients, they are well behaved [11, 12]. Furthermore,
Therefore,
(2.26)Proof of Theorem 2.2. Note that
Therefore,
Since ae(M) - kMk h , by Theorem 2.1 and Lemma 2.5, the rest of the proof can easily
Oscillation Along the Diagonal Direction
3.1 Model Equation
Consider another special case of (1.1), a two-dimensional elliptic problem with coefficients
oscillatory diagonally,
@
@x
a ffl
@
@y
a ffl
@y
where a ffl (x) is a strictly positive continuous function, and a ffl
Also, the operator has the property of (1.2). From (1.3), the corresponding homogenized
equation of (3.1) is:
As ffl goes to zero, we know that the solution OE ffl of (3.1) converges to the solution OE of
(3.2).
Now, consider a corresponding discretized equation of (3.1),
where
Here, we assume the discretized coefficients have the following property,
a
Denote the discretization of the homogenized operator
a
in (3.2) by
(D i
(D i
k=1a k0
k=1 a k0 .
The operator of the two level method can be expressed as
where ff is the inverse of the largest eigenvalue of L ffl;h , has order of h \Gamma2 . And
3.2 Convergence Analysis
We still first consider the simplified operator M 1 defined in (2.6).
Theorem 3.1 If the ratio of h to ffl is strictly irrational, h belongs to the set of Diophantine
number defined in (1.6), then there exist two constants C and ae 0 such that
whenever
In order to prove Theorem 3.1, we introduce the following lemmate first.
Lemma 3.1 Define two discrete functions on
k=1a kj
k=1a k0
are bounded, and satisfy
for (i;
2\Omega h .
Proof. Notice that by the assumption of the coefficients,
a kj
Applying the operator L ffl;h to Z 1
ij as follows,
k=1a kj
By Lemma 1.1,
k=1a kj
Hence, Z 1
ij is bounded for (i;
-\Omega h , and satisfies
The result can be deduced similarly for Z 2
ij . We can also show that Z 2
ij , and satisfies
This proves the lemma. 2
Remark. The explicit forms of Z 1
ij and Z 2
ij depend on a ffl being a function of x \Gamma y.
For the general angular dependences a ffl (ffx + fiy), these forms would not be possible.
Lemma 3.2 Assume discrete function defined as
a
bounded, and has the following properties
Proof. Using the symmetric properties of the coefficients, we can establish the results
similarly as in Lemma 3.1. 2
Lemma 3.3 Assume U ij satisfies
(3.
where
ij ) is a normalized eigenpair of L ffl;h . Then, we have
and
Proof. First, we observe that
Applying U ij to the following equation and taking summation, we have
then follows. For (3.12), since U ij vanishes at the boundary, it satisfies2
(D i
(D i
(D i
Multiplying D i
to (3.10), we get
(D i
(D i
(D i
(D i
By the uniformly ellipticity property of the homogenized operator (3.4), we have
An similar argument gives
(3.13), (3.14) and (3.15) complete the proof of lemma. 2
Lemma 3.4 Assume that U ij defined in Lemma 3.3 satisfies the following boundary conditions
Then,
(D i
(D i
Proof. Since
we have
(D i
(D i
Then,
(D i
(D i
(D i
Combined (3.12) with the following relation,
the rest of the proof follows. 2
Theorem 3.2 Assume OE h
are defined in Lemma 3.3 and Lemma 3.4. Then,
Proof. First, we introduce the following discrete functions
for (i;
\Omega h . By the assumption (3.16), G ij vanishes at boundary, i.e.,
For
ij , we have
(D i
(D j
Then,
For
ij , we establish similarly
(D i
(D j
Then,
a
a
(b ij+1 Z 2
For simplicity, we introduce another two operators, L 1 and L by
(D i
and
a
a
Observe that
from (3.19), (3.20) and (3.21), we get
(b ij+1 Z 2
Meanwhile, by Lemma 3.2, taking summations by parts, we have
(D j
\Gammaffl
\Gammaffl
By the symmetric property of the coefficients, a
Proceeding in the same way as before, we obtain
Combining (3.23), (3.24), (3.25) and (3.26), we get
(D j
Further,
[a
(D i
(D i
(D i
(D i
The exact same order for the last three terms in (3.22) can be established similarly. Con-
sequently, from (3.27) and (3.28),
(D i
By Poincare inequality,
By Lemma 3.1 and Lemma 3.3,
which implies,
We hence complete the proof. 2
Remark. The result in (3.18) here is consistent with the result established in [8] for the
continuous case.
Proof of Theorem 3.1. The procedure is exactly the same as that in Theorem 2.1, except
different inequalities estimated. By Theorem 3.2, we have
I
instead of (2.20). Therefore, in order to make I 1 ! 1, we set
instead of (2.21). For I 2 - 1, it is sufficient to have
Combining (3.31) and (3.32), we have
what we have done in previous section, we consequently establish the following main
Theorem.
Theorem 3.3 There exists constant C, the operator M defined by (2.5) satisfies
whenever the step size h belongs to the set S(ffl; h 0 ) of Diophantine numbers, and
The analysis of the proof strongly indicates us the role of homogenization, which plays in
the convergence process. If, for example, the coarse grid operator is replaced by its averaged
operator in one dimensional problem [5], the direct estimate for multigrid convergence rate
is not asymptotically better than just using the damped Jacobi smoothing operator. This
follows from the effect of the oscillations on the low eigenmodes. The homogenized coarse
grid operator reduces the number of smoothing operation from O(h \Gamma2 ) to O(h \Gamma6=5 ln h),
when the step size h belongs to the set S(ffl; h 0 ) of Diophantine numbers. In [9], it has
also been shown that the number of smoothing iteration needed for the convergence of
the multigird method with the average coarse grid operator guarantees the one with the
homogenized coarse grid operator.
The theoretical results established in this paper seem a little bit disappointing. From a
number of numerical experiments [5, 6], we can get much faster convergence rate in practice
than that required in the theoretical results. However, numerical results do indicate that
the convergent rate depends on the grid size h for these types of equations with oscillatory
coefficients [5, 6].
There are some inequalities in the implementation of the proof, which are potential to
be improved so that a sharper convergent rate is possible. One of them is to enlarge the
space of low eigenmodes, which can be approximated by the corresponding homogenized
eigenmodes. Such as to improve (3.18) to (2.14), which we think is the sharpest inequality
one can establish. We established the same inequality for the one dimensional case as
in the two dimensional case oscillatory along a coordinate direction [9]. However,
the portion of the eigenmodes that can be approximated by the homogenized ones in later
case is relatively much smaller than the previous one. That's why we obtain O(h \Gamma4=3 ln h)
for the number of smoothing iterations instead of O(h \Gamma6=5 ln h) for the later case, although
there have the same inequality (2.14) for the space of low eigenmodes.
Nevertheless, from the analysis of homogenization, we understand that there always
exists a boundary layer [2, 10], which makes it hard to get the first lower order correction
of the eigenfunctions. The case we discussed in chapter 2, which is equivalently to one
dimensional problem, doesn't have such a boundary layer. We hence get an estimate
as in (2.14). For the case in chapter 3, all we can establish is (3.18), which consists of
the result established in [8] for the continuous case. And, it also defines us a smaller
low eigenspace. However, numerical examples tells us that there are also some difference
between these two cases. That a complete understanding of the first lower order correction
for the eigenfunctions is required to further improve the estimates.
--R
The Multi-Grid Method for the Diffusion Equation with Strongly Discontinuous Coefficients
Asymptotic Analysis for Periodic Structure
Computation of Oscillatory Solutions for Partial Differential Equations
Multigrid Methods For Differential Equations With Highly Oscillatory Coefficients.
New Coarse Grid Operators of Multigrid Methods for Highly Oscillatory Coefficient Elliptic Problems.
Grid Transfer Operators for Highly Variable Coefficient Problems.
Homogenization of Elliptic Eigenvalue Problems: Part 1
Multigrid Method for Elliptic Equation with Oscillatory Coefficients.
First Order Corrections to the Homogenized Eigenvalues of a Periodic Composite Medium.
New York: springer- Verlag
Multigrid Methods
--TR | convergence;oscillation;homogenization theory;elliptic equation;finite difference;multigrid method |
273720 | Multidimensional Interpolatory Subdivision Schemes. | This paper presents a general construction of multidimensional interpolatory subdivision schemes. In particular, we provide a concrete method for the construction of bivariate interpolatory subdivision schemes of increasing smoothness by finding an appropriate mask to convolve with the mask of a three-direction box spline Br,r,r of equal multiplicities. The resulting mask for the interpolatory subdivision exhibits all the symmetries of the three-direction box spline and with this increased symmetry comes increased smoothness. Several examples are computed (for in terms of the refinement mask are established and applied to the examples to estimate their smoothness. | iii. The function ' is in some H-older class C ff for suitable ff.
The function ' is fundamental, if i holds, and it is refinable, if ii holds. The sequence h is
called the refinement mask of the function '.
In that sense the paper is a continuation of [25] where we considered compactly supported
fundamental solutions given as linear combination of B-splines in the univariate setting and of
box splines in the multivariate setting. While those fundamental solutions exhibit nice symmetry,
regularity and approximation properties, they fail to satisfy a refinement relation, which precludes
their use in subdivision schemes. In fact, it was proven in [19] for the univariate case that there
are no compactly supported piecewise polynomial functions (splines) which are refinable and fundamental
except for piecewise constant or piecewise linear with integer knots. On the otherhand,
the univariate refinable and fundamental functions given in [7] and [8] are convolutions of B-splines
with some distributions. Univariate refinable and fundamental functions can be derived also as the
autocorrelations of refinable functions constructed in [5] in the context of wavelets; again these functions
have the form of convolutions of B-splines with distributions. This indicates that multivariate
functions satisfying i-iii may not necessarily be splines, but they could be convolutions of box
splines with some distributions. It is our goal here to provide in the multivariate setting compactly
supported and refinable fundamental solutions with the nice properties of symmetry, regularity and
approximation. The functions given here are convolutions of box splines with distributions.
Some compactly supported interpolatory subdivision schemes have already been given in the
literature. In particular, we mention the work of [9], [12], [13] and [14]. The butterfly subdivision
scheme given by [12] was the first example of bivariate C 1 refinable and fundamental function '.
An improvement of the smoothness analysis of the scheme can be found in [14]. Several continuous
bivariate refinable and fundamental functions are also given in [9]. Applications of interpolatory
subdivision schemes to the generation of surfaces can be found in [9], [12], [13], [14] and [23], and
applications to wavelets decompositions and image compression can be found in [10] and [11]. The
use of fundamental solutions for cardinal interpolation to obtain fundamental solutions for Hermite
interpolation on the lattice was discussed in [15]. Connections of fundamental and refinable functions
to refinable functions having orthogonal shifts was discussed in [20] and [22].
We first establish some notation and some consequences of the refinement relation (1.1). For
a finite sequence a, the symbol of a is the trigonometric polynomial ea on IR s with extension to a
Laurent polynomial, e
A, on C s as defined by the equations
For a compactly supported continuous function ' on IR s , the symbol takes the form
Introduction and Method
the last equality by the Poisson summation formula. In other words, it is the symbol of the sequence
restricted to ZZ s .
Upon taking Fourier transforms, the refinement relation (1.1) is equivalent to
When ZZ s
represents the vertices of the unit cube in IR s , a standard argument yields
-2ZZ se h(
or, in terms of the symbols,
se
Before we apply relation (1.5) to the problem at hand, we introduce another set of polynomials
e
2 , for a sequence a and relate them to the polynomials e
A
. We can decompose
e
A modulo ZZ s
2 as follows:
se
A - (z) :=
Then
e
A
Hence, we see that the polynomials e
A
2 , are obtained from the polynomials
2 g under the action of the unitary matrix U := f(\Gamma1) - 2 ZZ s
A
z - e
s:
Now, if the function ' is a fundamental solution for cardinal interpolation so that i. holds,
then from (1.2) and (1.3), e
se
se
Multidimensional Interpolatory Subdivision Schemes 3
Thus, the equation
is necessary in order that the function ' be a compactly supported and refinable fundamental
solution for cardinal interpolation.
Our point of view will be to try to define an appropriate polynomial e
H satisfying (1.8) and so
that the subdivision mask derived from the coefficients of e h will converge to the desired fundamental
function '. We hope to do this with good estimates on the smoothness of the resulting fundamental
solution as well. To this end we take our cue from the construction of compactly supported refinable
functions in the univariate case in [5] where the function e h takes the factored form
The left hand factor is the symbol of the refinement mask for the cardinal B-spline of order N .
It is this left hand factor that gives the smoothness to the resulting refinable functions while the
contribution of the trigonometric polynomial factor G(y) takes away from that smoothness. An
appropriately chosen G not only gives the basic orthogonality properties of the refinable functions,
but also does not lessen the smoothness too much.
For several variables, the appropriate generalization of the cardinal B-splines are the box splines
defined for a given s \Theta n matrix \Xi of full rank with integer entries. The basic facts and much of the
notation concerning box splines are taken from [2]; the reader is referred to [2] for the appropriate
references. In the case of the univariate cardinal spline, the number N plays several roles: it is the
order of the spline (one more than the degree of its polynomial pieces); the interval [0 : :N ] is the
support of the spline; and, the spline belongs to C N \Gamma2 , or even finer, its 1st derivative is in
For the multivariate box splines, these three things are encoded in the direction matrix \Xi,
but in more complicated ways. The (total) degree of the polynomial pieces of the box spline does
not exceed n \Gamma s. The support of the box spline is the polyhedron
where the summation runs over the columns of the matrix \Xi and as the notation indicates, we shall
apply set notation to \Xi as a set of its columns. Finally, the box spline belongs to C
is the minimum number of columns that can be discarded from \Xi to obtain a
matrix of rank ! s.
A box spline B \Xi satisfies the refinement equation
e
Y
In what follows, we shall drop the subscript indication of dependence on the direction set \Xi unless
it is needed to resolve ambiguities.
In the univariate case, the shifts of any cardinal B-spline form a Riesz basis, but this is no
longer the case for box splines in higher dimensions. However, there is an easy to check criterion
for when the shifts of a box spline do form a Riesz basis; namely, when the direction set \Xi is
4 1. Introduction and Method
a unimodular matrix (all bases of columns from \Xi have determinant \Sigma1). The last condition is
equivalent to there being no y 2 IR s at which all of the functions b
vanish. The
latter fact in turn implies that the symbol for the autocorrelation function B au := B B(\Gamma \Delta ) is
positive for . In this case, (1.5) reads
where e
B, e
M are the symbols associated with the box spline B, its autocorrelation and
its refinement mask m respectively. That observation is the basic step in the proof of
Proposition(1.11). If the direction matrix \Xi defining the box spline
neither set of Laurent polynomials
2 g and
have common zeros in (Cnf0g) s . Here f
M is the symbol for the refinement mask m of the box spline.
Proof. Since e
B au is positive for implies that the
first set of Laurent polynomials in the Proposition have no common zeros for
The relation (1.7) then implies that the second set of Laurent polynomials in the Proposition have
no common zeros for . But from (1.6) and (1.9) it follows that the second set
of Laurent polynomials can only have common zeros where z = exp(\Gammaiy). Hence, again by (1.7),
the first set of Laurent polynomials can only have common zeros where one of the components of
z is zero. -
We shall use this Proposition to find candidates for e
H that satisfy (1.8). By Bezout's theorem,
there exist Laurent polynomials e
such that
We set Q :=
-2ZZ sz \Gamma- e
m(y); or e
In that case,
e
se
so that
e
se
and (1.8) is satisfied.
The proceeding paragraph outlines a general construction of the mask. There are many possibilities
for the polynomials e
and choices of ff 2 IN s in (1.12). Which ones give rise to the
refinement mask of a regular fundamental solution for cardinal interpolation with the properties
we desire? We present the two dimensional case in detail in the next section.
Multidimensional Interpolatory Subdivision Schemes 5
Once a construction of the mask is carried out, the Fourier transform of the corresponding
refinable function ' with mask h can be represented by
If e the infinite product converges in the sense of distributions. In this case the function
' is a compactly supported distribution.
The function ' can be constructed iteratively. Begin with one of the simplest continuous
examples of a ', say the hat function ' 0 := B 1;1;1 (the piecewise linear box spline having value 1
at (1; 1)) and define recursively
h(j)'
This is called the cascade algorithm. If ' n converges to ' uniformly, the function ' can be obtained
through this interative process and ' is a compactly supported L1 (IR s ) function. If ' is stable,
converges to ' uniformly. Recall that a compactly supported continuous function ' is stable,
if there is C ? 0, so that
With the refinement mask h satisfying e we have a function ', at least in the distributional
sense, which satisfies equation (1.1). It is still a matter to check whether the obtained
refinement mask defines a continuous function ', whether the resulting refinable function is the
fundamental solution for cardinal interpolation and whether the corresponding subdivision scheme
converges uniformly. Since only the mask is at hand, all criteria used to test the above properties
should be in terms of the refinement mask h and it should be possible to implement the criteria for
reasonable examples.
We first of all require that ' be continuous. Regularity or smoothness criteria that can be
applied to our case are discussed in the x3. All the examples we shall construct in x4 will be better
than C 1 .
Once we know that the resulting function ' is continuous, then we must show that it is
fundamental. As a corollary of the characterization of the stability and orthonormality of refinable
functions in terms of refinement mask, the results in [20] provide necessary and sufficient conditions
to test whether a mask that determines a continuous compactly supported ' is also fundamental:
Theorem(1.16). ([20] Proposition 4.1) Suppose ' is a compactly supported continuous function
satisfying the refinement equation (1.1) with the mask sequence h supported in [\Gamma(N \Gamma1) : :(N \Gamma1)] s .
Then ' is fundamental if and only if the ffi sequence is the unique eigenvector (up to constant
multiples) of the matrix
IH :=
corresponding to a simple eigenvalue 1.
For the examples of our construction given in Section 4, the task of determining the eigenvalues
and eigenvectors can be carried out numerically; for example, using MATLAB. Once we know that
' is fundamental, then it is automatically stable, since there is C ? 0 so that
6 2. The Bivariate Construction
(see [17]). Consequently ' n converges to ' uniformly (e.g. see [3]).
Suppose that the mask h defines a fundamental and refinable continuous function '. An interpolatory
subdivision scheme is defined as follows: Let c := fc ff : ff 2 ZZ s g be a sequence of control
points. The subdivision operator S is defined by
This gives us new control points,
The new control point sequence c 1 is determined linearly from c by 2 s different convolution
rules, and sequence c 1 consists of 2 s different copies of the original control point sequence c which
are mixed together. Since the scaling factor is 2, the new control polygon is parameterized, so that
the points c 1
ff correspond to the finer grid 2
c(fi), the new control
point sequence interpolates the previous one. Continuing this process, we get the control point
sequences c corresponding to the grids 2 \Gamman ZZ s . This process is called an interpolatory
subdivision scheme. The subdivision scheme is said to converge for an arbitrary control point
sequence c 2 ' 1 (ZZ s ), if there exists a continuous function f so that
lim
ck
The limit function f is
c ff '(
In particular, if c then the limit function is '.
It was shown in [3] that if ' is stable, the subdivision scheme converges. Therefore, the interpolatory
subdivision schemes discussed in this paper converge. Interested readers should consult
[3] for details.
Finally, we remark that since the refinable functions constructed here
possible to construct prewavelets from them. Interested readers should consult [17], [18] and [24]
for details of the construction of prewavelets.
2. The Bivariate Construction
Here we detail a construction in the two-dimensional case. The box splines that have stable
shifts are those defined for direction matrices based on the three directions
1). We shall take these directions to be given with equal multiplicities r. These
box splines will be denoted by B r;r;r . The symbol of the mask in this case takes the form
The box spline B r;r;r belongs to C 2r\Gamma2 . In case r = 1, the box spline B 1;1;1 is itself interpolatory
so there is no need to find a suitable multiplier e
Q for the symbol. Hence, we assume that r - 2 in
the sequel.
Multidimensional Interpolatory Subdivision Schemes 7
It has been our experience [25] that out of the many possible fundamental solutions that one
can obtain through the use of Bezout's theorem, most do not have practical (or aesthetic) value
because they have large variation (and often large max norm) over their support. The method of
construction here leads to examples of fundamental solutions of increasing smoothness with the
classic shape: centrally symmetric with value equal 1 at the origin. In fact, from the cases we have
computed, it appears that with greater symmetry comes greater smoothness. Our object in the end
will be to preserve the well-known symmetry structure of the box spline [1].
The existence of a function e
Q implies that there is some square that will contain the support of
the mask h for e
H. A smaller square means smaller support for the mask h and consequently, smaller
support for the refinable function. Initially, we try to fit the support of the mask h for e
H into a
square of side length 4r \Gamma 1. It turns out that for our examples this is possible. Later, we will impose
further conditions to make its support look like that of (twice) the support of the box spline within
that square. We consider the even lattice in the somewhat smaller square S
central point 2). Choosing this central point for the ff in (1.12), we find polynomials
e
such that
The idea of the construction is simply this: The masks of f
each occupy a smaller rectangle
S - in the lower right hand corner of S. The mask rectangle S - (with its values) is permitted to
shift in the three directions long as the shifted rectangles remain within
S. Each of the distinct such shifts are assigned an unknown coefficient which multiplies each of the
entries in the mask. In this way, the points in 2ZZ are assigned values which are a
linear combination of the values of the masks for the f
(the values in ZZ 2 n2ZZ 2 are
automatically zero). This leads to a linear system of equations when we ask that the resulting mask
be zero everywhere except at should be equal 1.
Our first goal is to analyze this system, even when some additional symmetry conditions
inherited from f
are imposed. First, we observe from (2.1) and the definition of f
M - that
2:
Therefore, the upper right corner of the rectangle S 0;0 can be shifted to the even lattice points in
the rectangle [2r . There are (r \Gamma 1) 2 such shifts. The new mask formed when these shifts
are added together will be the mask for the polynomial@ X
a 0;0 (j)z 2jA f
and the candidate for e
Q 0;0 is the first factor. Similar reasoning applies to the other rectangles S -
8 2. The Bivariate Construction
and we find that the candidates for the e
are:
e
a 0;0 (j)z 2j
e
a 1;0 (j)z 2j
e
a 0;1 (j)z 2j
e
a 1;1 (j)z 2j :
There are (r \Gamma unknowns in the above equations which equals
precisely the number of even lattice points in [0 . Thus, the resulting system of equations
has a solution if it is consistent and then each such solution satisfies (2.2).
In general, there may be infinitely many solutions because of the symmetries inherent in the
masks of the f
First, observe that f
M is invariant under the interchange of z(1) and z(2). Thus,
f
f
For any given set of solutions e
to (2.2), we define
e
e
Then the e
- also provide solutions for (2.2) as does the combination e
- )=2 and the
latter also satisfy
e
e
Next we observe that f
also satisfies the reciprocal relation
z
M(z
This implies that
Again, given any set of solutions e
Multidimensional Interpolatory Subdivision Schemes 9
Observe that if the e
then so do the e
- . Furthermore,
-2ZZ 2z 4r\Gamma2 z \Gamma2-\Gamma2- f
so that the e
- also satisfy (2.2). Moreover, the monomials comprising e
still correspond to shifts
of S - that would remain in S. Finally, the combinations e
with the property that
Assume now that the e
are solutions of (2.2) that satisfy both (2.7) and (2.10). We have
shown that they can be obtained from any solution by the above procedure. Define
Then the relations e
e
Q(z
follow from (2.7) and (2.10).
We are now in a position to define a suitable e
With this definition, e
H is symmetric in the components of the variable z = (z(1); z(2)) and is
reciprocal
H(z
These two properties imply immediately that the associated mask of coefficients h is symmetric
through the direction and is symmetric through the origin:
Furthermore,
e
so that the necessary condition (1.8) is fulfilled. Moreover, since
we find that
2. The Bivariate Construction
The maximum support square for the mask of any f
while that for the
mask of any e
. Thus, the support of the mask for f
Q is in
and
We now restrict ourselves to consider the periodic symbol e h :=
The reciprocal relation (2.13) implies that e h is a real cosine polynomial. From (2.14) we have
2 nf0g, we also have
This gives four linear equations for the
, from which easily follows
In particular, since e
implies that
The above construction does lead to solutions that provide fairly smooth convergent interpolatory
subdivision schemes. But as it stands, the solutions are not uniquely determined since we
have not taken into account the full symmetries of the box spline. Here we use the symmetries of
the "centered" form of the box spline B r;r;r (see [1]), translated to our setting. The symmetries of
the mask f
M will be generated by the two matrices
and G 2 :=
acting on the exponents of z through the two relations
M(z); and z(2) r f
For later purposes, we note that the complete set of symmetries are embodied in the matrices
G :=
'oe
The symmetries for the box spline were derived by mapping the three directions into permutations
of themselves. This has significance for the problem at hand because the three directions are in fact
the non-zero elements of ZZ 2
2 n0; for all G 2 G:
Multidimensional Interpolatory Subdivision Schemes 11
The first of relations (2.17) just combines the interchange of z(1) and z(2) and the reciprocal
relation and so has already been taken into account in our construction above. The second relation
is a much stronger requirement that will restrict further the shifts of the S - that will be permitted,
thus reducing the number of unknowns. The restrictions will depend on the parity of r.
We first check how the second relation in (2.17) translates to the f
r even:
r odd:
The symmetry on e
H imposed by the matrix G 2 acting on its exponents is
This, together with the definition of e
H , (2.12), and the second relation in (2.17), makes the following
requirement of e
Q:
As was the case with f
places restrictions on the e
r even:
re
r odd:
If we require that the e
retain the form (2.4), then substituting that form into the right
hand side of (2.22) places further restrictions on which monomials may have non-zero coefficients.
Carrying this out, we find that the e
should have the form:
r even:
e
\Gamma( r
a 0;0 (j)z 2j ;
e
\Gamma( r
e
a 0;1 (j)z 2j ;
e
\Gamma( r
a 1;1 (j)z 2j ;
r odd:
e
e
e
e
12 2. The Bivariate Construction
First observe that in either case there are coefficients in (2.23).
Next, we note that the relations (2.19) put similar restrictions on the possible nonzero coefficients
of z(1) 2j(1) z(2) 2j(2) for the f
f
is even, or
f
r is even, or
f
is even, or
f
is even, or
Thus, under these restrictions the monomials z(1) 2j(1) z(2) 2j(2) in
coefficients have indices that satisfy
Hence, there are equations when these coefficients are compared in (2.2) with
the e
Once again, a solution exists if the system is consistent.
The two operations (2.6) and (2.9) both produce functions of the type (2.23) if the original
functions are of that type. Hence as before, there exist solutions e
of (2.2) that satisfy both (2.7)
and (2.10). It is possible to make the solution satisfy (2.21) as well by making use of
r even:
e
e
e
e
r odd:
e
e
e
e
If the e
have the form (2.23) and already satisfy (2.7), (2.10), and (2.2), then the functions
e
e
e
e
will be of the form (2.23) and satisfy (2.2), (2.7), (2.10) and (2.22).
We summarize the findings so far in the following theorem:
Multidimensional Interpolatory Subdivision Schemes 13
Theorem(2.25). Among the solution sets f e
-2ZZ 2, of
having the form (2.23), there is one that satisfies (2.7), (2.10) and (2.22). For that solution set the
Laurent polynomial
e
Q(z) with e
is invariant under the action of the group G acting on its exponents:
e
G:
The mask sequence h corresponding to e
H is supported in
and satisfies
G:
The relation (2.27) together with (2.18) gives a very strong symmetry on the mask h. (The
reader may find it very helpful to consult the masks of section 4 while reading this.) Since
G;
as a set, the numbers fh(2j are the same for each - 2 ZZ 2
2 nf0g. These numbers are
arranged symmetrically in the six cones generated by the lines
on hexagonal rings of lattice points about the origin:
A consequence of this is that the mask h is symmetric on its support along any of the lines
const. For example, if const - 0, then
The relations (2.16) were a result of f
f
which implies e
e
Comparing coefficients in these equations and taking into account
ffi(j), we arrive at further relations for the mask h on the lines
j(2)=const
j(2)=2const
j(1)=const
j(1)=2const
j(1)\Gammaj(2)=const
j(1)\Gammaj(2)=2const
14 3. Regularity of refinable functions
In particular, the sum of entries in the interior of each of the 6 cones is 0.
The final symmetry comes from the fact that f
e
3. Regularity of refinable functions
In this section we will provide some criteria for the regularity of refinable functions in terms of
their masks. These criteria are useful for the estimation of the regularity of the refinable functions
derived from interpolatory subdivision schemes constructed in the next section.
Unvariate counterparts of our regularity theorems and more can be found in [6, x7.1.3], [16],
[4] and references cited therein. The proofs of some of these results is carried to the multivariate
case and the dilation matrix 2I in order to provide estimates for our examples. This can be done
without encountering many difficulties; however, we include the proofs for the sake of completeness.
The function
for some constant independent of x.
The number ff is related to weighted L p exponents - p defined as
Z
In this paper, we only use - 1 and - 2 . The regularity order ff is related to - 1 and - 2 by the inequality
The idea here is the usual Littlewood-Paley technique. The domain of the Fourier transform
of ' is broken into the pieces C n := 2 n TT s
Z
then Z
Hence, log 2, or when
To estimate
R
'(w)j p dw, we consider the operator on L 2 (TT s ) defined in terms of a eg with
the symbol of a finite mask
Wf :=
This is the Fourier transform of the transition operator W for the mask g as defined in [20]:
Wa :=
Multidimensional Interpolatory Subdivision Schemes 15
As discussed in [20], if g is supported on [\Gamma(N the space of all
sequences supported on [\Gamma(N is an invariant subspace of W . The restriction of W
to ' N is called the restricted transition operator. The restricted transition operator W on ' N can
be represented by the matrix
G :=
(cf. Theorem 1.16). We only use the restricted operator in this section and assume that g is supported
on . Similarly, the space b
the space of Fourier transforms
of all sequences in ' N , is an invariant subspace of c W .
Proposition (3.2). Let b
' n be defined inductively by
Then for any f 2 L 2 (TT s ), we have
and
Z
Z
\Gamman w)b' n
Z
\Gamman w)\Pi n
with equality whenever both f - 0 and eg - 0.
Proof. For
For general n, using the fact that
3. Regularity of refinable functions
we fiind that
c
-2ZZ se g(w=2
\Gamman (w
Inequality, (3.4) follows directly by (3.3), since
Z
Z
\Gamman (w
Z
\Gamman w)b' n (w)jdw:
A similar result was proved in [21].
Suppose that eg satisfies
Define the space of trigonometric polynomials
Since D fi
2 nf0g, we have that
ae and for all jfij - ae.
Hence, the space V ae is an invariant subspace of the restricted transition operator c W defined by
(3.1).
Proposition(3.6). Suppose e
satisfies conditions (3.5) and eg - 0. Let - be the spectral radius of
c
. For the function ' g defined by
there is a constant C such that
Z
Multidimensional Interpolatory Subdivision Schemes 17
Proof. For any given ffi ? 0, there exists a norm k \Delta k on V ae so that for any f 2 V ae ,
Hence
Z
Since all the choices of the constants in this proof do not depend on n, for simplicity, we denote all
the constants by C even though the value of C may change with each occurance.
Cjwj, the function b
' g is bounded on TT s . Hence,
Z
Z
ae, we have f 2 V ae . Note that
Z
Z
\Gamman w)b' n (w)jdw
Z
\Gamman w)b' n (w)jdw
Z
It was proved in [20] that if ' g is fundamental, then - ! 1. Hence, the condition
whenever the mask g is the mask for a convergent interpolatory subdivision scheme.
Theorem(3.7). Suppose e g satisfies conditions (3.5) where e g is derived from an interpolatory
subdivision scheme with mask h as follows:
eg :=
fails to hold.
Let - be the spectral radius of c W j V ae
, then the fundamental solution ' for the interpolatory subdivision
scheme is in C ff\Gammaffl for any ffl ? 0 and
log -= log 2; if e h - 0 or;
log -=2 log fails to hold.
Proof. If e h - 0, then ' Proposition 3.6 and the result follows from the Littlewood-
Paley technique mentioned at the beginning of this section with
Similarly, if e h - 0 fails, then we use the autocorrelation function ' au := '( \Delta ) '(\Gamma \Delta ) of '. The
function ' au is refinable with the refinement mask h au := 2 \Gammas h( \Delta ) h(\Gamma \Delta ). Hence e h
nonnegative. Thus, ' Proposition 3.6 and the result follows from the Littlewood-Paley
technique mentioned at the beginning of this section with 2. -
The above regularity results only depended on the property (3.5) and the nonnegativity of the
mask eg and did not require any factorization of the mask eg. However, factorization can be useful
3. Regularity of refinable functions
both to establish (3.5) in some cases and to isolate a nonnegative part. This is the case for the
bivariate construction discussed in the last section where we have the factorization
and b
' is the product of the box spline B r;r;r 2 C 2r\Gamma2 with the Fourier transform of a distribution.
Since D fi e
2 nf0g, we have that
e h, and
Hence, the space V 2r\Gamma1 is an invariant subspace of the restricted transition operator c
W defined by
(3.1) with e
e h, while V 4r\Gamma1 is an invariant subspace of c
W if the autocorrelation function is
used.
In applications, we use either the matrix IH defined in Theorem 1.16 or the matrix
IH au :=
as an operator on the restricted sequence spaces to check the regularity of '. Since c W is the
Fourier transform of the operator IH (respectively, IH au ) on the restricted sequence space, the
eigenvalues of IH (respectively, IH au ) are the eigenvalues of c
W and the Fourier transform of the
eigenvectors of IH (respectively, IH au ) are the corresponding eigenvectors of c
W . This observation
together with Theorem 3.7 allows the following procedure to estimate the regularity of '. First find
all eigenvalues and eigenvectors of IH (respectively, IH au ), then throw out all those eignvalues whose
Fourier transform of the corresponding eigenvectors are not in V ae , where
. Note that the Fourier transform of the eigenvector v := (v fi ) of IH (respectively,
IH au ) is not in V ae if and only if
0-j-j-ae
The maximum absolute value of the remaining eigenvalues is the - to be used in Theorem 3.7.
For the examples of our construction in the next section, the task of determining which of
the largest eigenvectors are definitely NOT in V ae can be carried out numerically using MATLAB;
simply order the eigenvalues according to decreasing modulus and proceed down the list checking
(3.8) until a numerically significant drop in value occurs.
The problem with this procedure is that as the matrices get large, it becomes quite difficult
to carry out the numerical procedures. The complexity of using IH au is substantially more than in
using IH, yet we are forced to use it since e h will fail to be positive for odd r. The next observation
will be useful for our particular examples.
In the case e
mjeq, where e
m is the mask of a box spline B \Xi and e
is a finitely supported mask of some distribution with e , we can define an operator, c W 2 \Gammas eq ,
as in (3.1) but using q. If the support of the mask q is in
consider the operator c W 2 \Gammas eq restricted to b ' N .
Multidimensional Interpolatory Subdivision Schemes 19
mjeq where e
m is the refinement mask of a box spline B \Xi , and the
trigonometric polynomial e q is nonnegative,
e g satisfies (3.5). Let - be the spectral
radius of the restricted transition operator c W 2 \Gammas eq , i.e. the spectral radius of the matrix
restricted to V ae . Then ' 2 C m(\Xi)+s\Gamma1\Gammalog -= log 2\Gammaffl , where ffl ? 0 is arbitrary.
Proof. We note that
e q
e q
(cf. [2, p. 102]). Hence, just as in the proof of Proposition 3.6 and using Theorem 3.7, we have
Z
Z
Y
e
dw
Z
(recall that the value of C may change with each occurance). Hence, that ' 2 C m(\Xi)+s\Gamma1\Gammalog -= log 2\Gammaffl
follows from the Littlewood-Paley technique. -
The factorization in Corollary 3.9 does not require that we take the largest box spline factor,
just that the remaining factor be nonnegative.
4. Examples
We present the bivariate examples corresponding to decreasing
detail) and discuss their regularity. We note that we have chosen the three direction mesh used in
box spline theory. It is an easy transformation to convert this to the hexagonal mesh used by several
people in wavelet analysis. One should realize that the three-direction symmetry of our examples
would transform to a beautiful hexagonal symmetry in the hexagonal mesh.
2. The box spline B 2;2;2 has continuous second partial derivatives and its third partial
derivatives are in L1 (IR 2 ). If we apply our method to obtain e h, then the support of the mask
h 2;2;2 is expected to be in the square [\Gamma3 : Indeed, we obtain a mask exhibiting the full
symmetries of the box spline
4. Examples
where the entry at the origin is distinguished by boldface type. Here we note that there are only
nonzero entries and only the 6 nonzero entries in the cone are required since the rest
follow from the symmetries. If we use Theorem 1.16 on the 49 \Theta 49 matrix IH 2;2;2 determined by
h 2;2;2 we find that it has 1 as a simple eigenvalue with ffi as its eigenvector. Hence, the function
defined via (1.14) is a fundamental solution for cardinal interpolation. That ' 2;2;2 enjoys
the classic shape of a fundamental solution and the symmetries of the tree direction box spline is
clearly seen in Figure 4.1.
Figure
(4.1). The function ' 2;2;2 on [\Gamma3 : :3] 2 and its level lines \Gamma:1
3. The box spline B 3;3;3 belongs to C 4 (IR 2 ). The support of the mask h 3;3;3 will be in
the square Again we obtain a mask exhibiting the full symmetries of the box spline
Multidimensional Interpolatory Subdivision Schemes 21
where the entry at the origin is distinguished by boldface type. Only the entries in the cone
are needed to describe the matrix with the symmetries are taken into consideration. The eigenvalues
and eigenvectors of the 121 \Theta 121 sparse matrix IH 3;3;3 are easily determined by MATLAB. Again,
1 is a simple eigenvalue with ffi as its eigenvector. Hence, the function ' 3;3;3 defined via (1.14) is a
fundamental solution for cardinal interpolation by Theorem 1.16(see Figure 4.2).
-5 -4 -3 -2
-5
Figure
(4.2). The function ' 3;3;3 on [\Gamma5 : :5] 2 and its level curves \Gamma:1
4. The box spline B 4;4;4 belongs to C 6 (IR 2 ). The support of the mask h 4;4;4 will be in
the square [\Gamma7 : . Again, the mask exhibits the full symmetries of the box spline
22 4. Examples6 6
\Gamma350 12950 \Gamma48650 36050 \Gamma48650 12950 \Gamma350 0
where the entry at the origin is distinguished by boldface type. Again, analysing the matrix IH 4;4;4 ,
we find that the resulting function ' 4;4;4 is fundamental (see, Figure 4.3).
-226
-226
Figure
(4.3). The function ' 4;4;4 on [\Gamma7 : :7] 2 and its level lines [\Gamma:1
Multidimensional Interpolatory Subdivision Schemes 23
8. As is already apparent in the case 4, the masks soon become too large to
give fully in the text. We have given the three cases full so that the symmetries alluded
to in Section 2 can be firmly established in the minds of the reader. We give the masks of the four
remaining cases in a more compact form, using the cones of symmetry. We use the first quadrant
with 1 in the lower left corner sitting at the origin, the rest of the diagonal is left blank and one
mask is given in the cone and the other in 0 - k ! j. The two included tables give the
masks for 8. While the mask for 8 is a 31 \Theta 31 matrix which is quite large
for numerical purposes, it does compare favorably with using a fundamental box spline interpolant
of comparable smoothness (for example, the fundamental solution corresponding to B 3;3;3 ) which
has exponential decay and so must be truncated appropriately for use.
The matrices IH grow more quickly and for is already an 941 \Theta 941 matrix. The
computations still show that the eigenvalue 1 is simple with ffi as its eigenvector. The computations
of the sums (3.8) do show the effects of the added complexity, but are none-the-less useful for the
estimation of regularity.
Regularity. The examples for sufficient variety to test various smoothness
criteria. We use Theorem 3.7 and Corollary 3.9 to estimate the regularity of our examples. We have
computed three separate quantities.
The first quantity is the spectral radius of c W j V
for e in all cases, even though by
Theorem 3.7 it applies only in the cases when e h - 0. It turns out that e h - 0 holds if and only if r is
even, since exp(i(r \Gamma 2) \Delta )eq - 4 in each case. We suspect that this quantity gives the best estimate
for regularity even in the case of odd r when it has not been shown to apply.
The second computation yields an estimate for the spectral radius of c
for
the size of the matrix IH au quickly impedes the computation and we were successful in carrying it
out only for 4. In every case, the estimate for the smoothness is smaller than that obtained
using the first quantity, although this estimate is valid in all cases.
Finally, we use the factorization e
in Corollary 3.9. In this
case, is the spectral radius of c W j V 2r\Gamma3
for
qj.
The results of the computations are listed in the following table: Recall that ' r;r;r belongs to
C ff\Gammaffl where ff is estimated by \Gamma log(-)= log(2) when r is even, by \Gamma log(-)=(2
and by 2 \Gamma log(-)= log(2) for r odd.
e
qj
3 2.8301e+00 2.1751e+00 2.6100e+00
6 4.7767e+00 ?? NA
--R
Bivariate cardinal interpolation by splines on a three-direction mesh
splines
A new technique to estimate the regularity of a refinable function
Orthonormal bases of compactly supported wavelets
Ten Lectures on Wavelets
Interpolation through an iterative scheme
Symmetric iterative interpolation processes
A butterfly subdivision scheme for surface interpolation with tension control
"Multivariate Approximation and Interpolation,"
Using parameters to increase smoothness of curves and surfaces generated by subdivision
Hermite interpolation on the lattice hZZ d
Subdivision schemes in L p spaces
Using the refinement equation for the construction of pre- wavelets II: Powers of two
Multiresolution and wavelets
Complete characterization of refinable splines
Stability and orthonormality of multivariate refinable functions
Biorthogonal wavelet bases on IR d
Interpolatory subdivision schemes and wavelets
Smooth surface interpolation over arbitrary triangulations by subdivision algorithms
Wavelets and pre-wavelets in low dimensions
--TR
--CTR
Yun-Zhang Li, On the holes of a class of bidimensional nonseparable wavelets, Journal of Approximation Theory, v.125 n.2, p.151-168, December
Guiqing Li , Weiyin Ma, Interpolatory ternary subdivision surfaces, Computer Aided Geometric Design, v.23 n.1, p.45-77, January 2005
Bin Han , Rong-Qing Jia, Quincunx fundamental refinable functions and Quincunx biorthogonal wavelets, Mathematics of Computation, v.71 n.237, p.165-196, January 2002
Bin Dong , Zuowei Shen, Construction of biorthogonal wavelets from pseudo-splines, Journal of Approximation Theory, v.138 n.2, p.211-231, February 2006 | subdivision schemes;box splines;wavelets;interpolatory subdivision schemes;interpolation |
273734 | Quasi-Optimal Schwarz Methods for the Conforming Spectral Element Discretization. | The spectral element method is used to discretize self-adjoint elliptic equations in three-dimensional domains. The domain is decomposed into hexahedral elements, and in each of the elements the discretization space is the set of polynomials of degree N in each variable. A conforming Galerkin formulation is used, the corresponding integrals are computed approximately with Gauss--Lobatto--Legendre (GLL) quadrature rules of order N, and a Lagrange interpolation basis associated with the GLL nodes is used. Fast methods are developed for solving the resulting linear system by the preconditioned conjugate gradient method. The conforming finite element space on the GLL mesh, consisting of piecewise Q1 or P1 functions, produces a stiffness matrix Kh that is known to be spectrally equivalent to the spectral element stiffness matrix KN. Kh is replaced by a preconditioner $\tilde{K}_h$ which is well adapted to parallel computer architectures. The preconditioned operator is then $\tilde{K}_h^{-1} K_N$.Techniques for nonregular meshes are developed, which make it possible to estimate the condition number of $\tilde{K}_h^{-1} K_N$, where $\tilde{K}_h$ is a standard finite element preconditioner of Kh , based on the GLL mesh. Two finite element--based preconditioners: the wirebasket method of Smith and the overlapping Schwarz algorithm for the spectral element method are given as examples of the use of these tools. Numerical experiments performed by Pahl are briefly discussed to illustrate the efficiency of these methods in two dimensions. | Introduction
. In the past decade, many preconditioners have been developed
for the large systems of linear equations arising from the finite element discretization
of elliptic self-adjoint partial differential equations; see e.g. [6], [14], [27]. A specially
challenging problem is the design of preconditioners for three dimensional problems.
More recently, spectral element discretizations of such equations have been proposed,
and their efficiency has been demonstrated; see [15], [16], and references therein. In
large scale problems, long range interactions of the basis elements produce quite dense
and expensive factorizations of the stiffness matrix, and the use of direct methods is
not economical due to the large memory requirements [12].
Early work on preconditioners for these equations was done by Pavarino [20], [21],
[19]. His algorithms are numerically scalable (i.e., the number of iterations is independent
of the number of substructures) and quasi-optimal (the number of iterations
grows slowly with the degree of the polynomials.) However, each application of the
preconditioner can be very expensive.
Several iterative substructuring methods which preserve scalability and quasi-
optimality was introduced by Pavarino and Widlund [22], [24]. These preconditioners
can be viewed as block-Jacobi methods after transforming the matrix to a particular
basis. The subspaces used are the analogues of those proposed by Smith [28] for
piecewise linear finite element discretizations. The bounds for the condition number
of the preconditioned operator grows only slowly with the polynomial degree, and are
independent of the number of substructures.
Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York,
N.Y. 10012. Electronic mail address: casarin@cims.nyu.edu. This work has been supported in part by
a Brazilian graduate student fellowship from CNPq, and in part by the U. S. Department of Energy
under contract DE-FG02-92ER25127.
The tensorial character of the spectral element matrix can be exploited when evaluating
its action in a vector [16], but does not help when evaluating the action of the
inverse of blocks of this matrix, as in the case of the above preconditioners. Following
Pahl [17], based on the work of Deville and Mund [7] and of Canuto [5], the above
constructions give rise to different and spectrally equivalent preconditioners using the
same block partitioning of the finite element matrix generated by Q 1 elements on
the hexahedrals of the Gauss-Lobatto-Legendre (GLL) mesh. This observation and
experiments for a model problem in two dimensions were made by Pahl [17], who
demonstrated experimentally that this preconditioner is very efficient. Thus, high order
accuracy can be combined with efficient and inexpensive low-order preconditioning.
We remark that similar ideas also appear in [18] and [26].
The analysis of Schwarz preconditioners for piecewise linear finite elements for
the h-method, has relied upon shape regularity of the mesh [10], [9], [3], which clearly
does not hold for the GLL meshes. We extend the analysis to such meshes, deriving
estimates for these finite element preconditioners of spectral element methods.
We give polylogarithmic bounds on the condition number of the preconditioned
operators for iterative substructuring methods, and a result analogous to the standard
bound for overlapping Schwarz algorithms. Then, by applying Canuto's result, [5],
we propose and analyze a new overlapping preconditioner that depends only on the
spectral element matrix. We also give a new proof of one of the estimates in [23] by
using the same equivalence.
The remainder of this paper is organized as follows. The next section contains
some notation and a precise definition of the discrete problem. Our motivation and
strategy are presented in detail in Section 3. In Section 4 we give the statement and
proofs of our technical results. In the two remaining sections we formulate and analyze
our algorithms.
2. Differential and Discrete Model Problems.
Let\Omega be a bounded polyhedral
region in R 3 with diameter of order 1. We consider the following elliptic self-adjoint
Find
(1)
where
Z\Omega
Z\Omega
fv dx for f 2 L
This problem is discretized by the spectral element method (SEM) as follows;
see [16]. We
triangulate\Omega into non-overlapping substructures
of diameter
on the order of H .
Each\Omega i is the image of the reference cube
a mapping F is an isotropic dilation and G i a C 1 mapping
such that its derivative and inverse of the derivative are uniformly bounded by a
constant close to one. Moreover, we suppose that the intersection between the closure
of two substructures is either empty, a vertex, a whole edge or a whole face. Each
substructure\Omega i is called a distorted cube. We notice that some additional properties of
the mappings F i are required to guarantee an optimal convergence rate. We refer to [2],
problem 2 and the references therein for further detail on this issue, but remark that
affine mappings are covered by the available convergence theory for these methods.
We assume for simplicity that k(x) has the constant value k i in each
with possibly large jumps occurring only across substructure boundaries. This point
is important only in the analysis of iterative substructuring algorithms in Section 5,
where our estimates are independent of the jumps of k(x).
We define the space P N (
-\Omega ) as the space of QN functions, i.e. polynomials of
degree at most N in each of the variables separately. The space P
is the space
of functions v N such that v N ffi F i belongs to P N (
-\Omega ). The conforming space P N
H 1(\Omega\Gamma is the space of continuous functions the restrictions of which
belong to
The discrete L 2 inner product is defined by
(2)
are, respectively, the Gauss-Lobatto-Legendre (GLL) quadrature
points and weights in the interval [\Gamma1; +1]; see [2].
The discrete problem is: find uN 2 P N
0(\Omega\Gamma9 such that
aQ
The functions OE N
of P N(\Omega\Gamma that are one at the GLL node j and zero at the other
nodes form the nodal basis of this space which gives rise in the standard way to the
linear system KN b. Note that the mass matrix of this nodal basis generated
by the discrete L 2 inner product 2 is diagonal. The analysis of the SEM method
just described and experimental evidence show that it achieves very good accuracy
for reasonably small N for a wide range of problems; see [2], [16], [15] and references
therein. The practical application of this approach for large scale problems, however,
depends on fast and reliable solution methods for the system KN b. The condition
number of KN is very large even for moderate values of N [2]. Our approach is to
solve this system by a preconditioned conjugate gradient algorithm. The following
low-order discretization is used to define several preconditioners in the next sections.
The GLL points define a triangulation T - h of
-\Omega into parallelepipeds, and on this
triangulation we define the space P - h ( -
\Omega\Gamma of continuous piecewise trilinear
tions. The spaces P
are defined analogously to P
The finite element discrete problem associated with (1) is: Find u
0(\Omega\Gamma5 such
that
The standard nodal basis f -
in P - h ( -
\Omega\Gamma is mapped by the F j , 1
basis for P h(\Omega\Gamma4 This basis also gives rise to a system K h in the standard way.
We use the following notations: x - y , z - u, and v i w to express that there
are positive constants C and c such that
Here and elsewhere c and C are moderate constants independent of H or N .
Let - h be the distance between the first two GLL points in the interval [\Gamma1; +1];
- h is proportional to 1=N 2 [2], and the sides h of an element K belonging
to T - h satisfy
depending on the location of K inside
-\Omega . The triangulation is therefore non-regular.
In the case of a region of diameter H; such as a
we use a norm
with weights generated by dilation starting from a region of unit diameter,
3. General Setup and Simplifications. In this section, we give our plan of
study.
uN be a function belonging to P N ( -
\Omega\Gamma3 and let -
I h
uN be the function of
-\Omega ) for which
uN
for every GLL point xG in -
and
where a -
Q is given by (2) and (3) with see [5] and [2]. We remark that the key
point of these results is the stability of the interpolation operator at the GLL nodes
for functions of
proved by Bernardi and Maday [1], [2].
Consider now a function v defined in a substructure
-\Omega with diameter of order H .
Changing variables to the reference substructure by - using simple
estimates on the Jacobian of F i , we obtain
2(\Omega
where the dimension d is equal to 1, 2, or 3.
These estimates can be interpreted as spectral equivalences of the stiffness and
mass matrices generated by the norms introduced above. Indeed, the nodal basis f -
is mapped by interpolation at the GLL nodes to a nodal basis of P N (
can be written as
u is the vector of nodal values of both -
uN or - u h , and -
K h and -
KN are the stiffness
matrices corresponding to j
and a -
Therefore, if K (i)
h and K (i)
N are the stiffness matrices generated by the basis fOE h
and fOE N
respectively, for all nodes j in the closure
and
a
(\Delta; \Delta), then
where u is the vector of nodal values, by (9), (8), and (5). The stiffness matrices KN
and K h are formed by subassembly [9],
h
for any nodal vector u, where u (i) are the sub-vectors of nodal values
an analogous
expression is true for KN . These last two relations imply
for any vector u. All these matrix equivalences and their analogues in terms of norms
are hereafter called FEM-SEM equivalence.
The equivalence (11) shows that K h is an optimal preconditioner for KN in terms
of number of iterations. However, the solution of systems K h expensive to
be used as an efficient preconditioner for large scale problems, which typically involve
many substructures.
We next show that the same reasoning applies to the Schur complements S h
and SN , i.e., the matrices obtained by eliminating the interior nodes of
in a
classical way; see [9]. Let uN be Q-discrete (piecewise) harmonic if aQ
for all i and all v N belonging to P N
The definition of h-discrete (piecewise)
harmonic functions is analogous. It is easy to see that u T SN
and uN are respectively Q and h-discrete harmonic and
u is the vector of the nodal values on the interfaces of the substructures.
The matrices S h and SN are spectrally equivalent. Indeed, by the subassembly
equation (10), it is enough to verify the spectral equivalence for each substructure
separately. For the
where I h
N is the interpolation at the nodes of T h , H h is the h-discrete harmonic extension
of the interface values, and the
subscript\Omega i indicates the restriction of the
bilinear form to this substructure. Here, we have used FEM-SEM equivalence and
the well known minimizing property of the discrete harmonic extension. The reverse
inequality is obtained in an analogous way.
This equivalence means that S h is also an optimal preconditioner for SN . As
before, the action of the inverse of S h is too expensive to produce an efficient preconditioner
for large problems.
In his Master's thesis [17], Pahl proposed the use of easily invertible finite element
preconditioners B h and S h;WB for K h and S h , respectively. If the condition number
with a moderately increasing function C(N ), then a simple Rayleigh quotient argument
shows that -(B \Gamma1
analogously for S h;WB and SN . Since the
evaluation of the action of B h
h;WB is much cheaper, these are very efficient
preconditioners.
Our goal is to establish (13) and its analogue for S h and S \Gamma1
h;WB . We note that the
triangulation T h is non-regular, and that all the bounds of this form established in the
literature require some kind of inverse condition, or regularity of the triangulation,
which does not hold for the GLL mesh.
4. Technical Results. This section presents the technical lemmas needed to
prove our results. As it is clear from the start, we draw heavily upon the results,
techniques, and organization of Dryja, Smith, and Widlund [9].
4.1. Some estimates for non-regular triangulations. We state here all the
estimates necessary to extend the technical tools developed in [9] to the case of non-regular
hexahedral triangulations. We let -
3 be the reference element, and
K be its image under an affine mapping F . K ae
-\Omega is an element of the triangulation
sides proportional to h 1 , h 2 and h 3 . The function u is a piecewise trilinear
defined in K. Notice that in this subsection we use hats to represent
functions and points of -
K.
The first result concerns the expressions of the L 2 and H 1 norms in terms of the
nodal values. Let - e i be one of the coordinate directions of -
K, and let - a, - b, - c and -
d
be the nodes in one of the faces that is perpendicular to -
a
etc. be the
corresponding points on the parallel face. The notation x i denotes a generic node of
K, and a; b; are the images of -a and - b, etc.
Lemma 1.
x=a;b;c;d
Proof. These expressions follow by changing variables, and by using the equivalence
of norms in the finite dimensional space Q
K).
In the next lemma we give a bound on the gradient of a trilinear function in terms
of bounds on the difference of the values at the nodes (vertices).
Lemma 2. Let u be trilinear in the element K such that ju(a)\Gammau(b)j - Cdist(a; b)=r
for some constants C and r, and for any two vertices a and b of the element K. Then
r
Proof. The functions u and u x can be written as
fy
The values of u x at the vertices belonging to the face are clearly bounded
by C=r. This implies estimates for the coefficients of u x and then the desired estimate.
The other derivatives of u are treated analogously.
Lemma 3. Let u be a trilinear function defined in K, and let # be a C 1 function
such that jr#j - C=r, and j#j - C for some constants C and r. Then
Here C is independent of all the parameters, and I - h is the interpolation to a Q 1
function of the values in the vertices of K.
Proof. By equation (15), and letting h 1 , h 2 , and h 3 be the sides of the element K:
x=a;b;c;d
Each term in the sum above can be bounded by
The bound on r# implies that
I - h (#u)jj 2
x=a;b;c;d
x=a;b;c;d
since # is bounded.
4.2. Technical tools. We introduce notations related to certain geometrical ob-
jects, since the iterative substructuring algorithms are based on subspaces directly
related to the interiors of the substructures, the faces, edges and vertices.
be the union of two
which share a common face,
wirebasket of the
which is the union of all the edges
and vertices of this subdomain. We note that a face in the interior of the
region\Omega is
common to exactly two substructures, an interior edge is shared by more than two,
and an interior vertex is common to still more substructures. All the substructures,
faces, and edges are regarded as open sets.
The following simple standard reductions greatly simplify our analysis in the next
sections.
The preconditioner S h;WB that we use is defined by subassembly of the matrices
h;WB , see Section 5. Therefore we can restrict our analysis to one substructure. The
results for the whole region follow by a standard Rayleigh quotient argument. It is
also enough to estimate the preconditioning of -
S h by -
these results can
be translated into results for each substructure by the equivalences (7), (8), and (5).
The assumption that the fF i g M
are arbitrary smooth mappings improves the
flexibility of the triangulation, but does not make the situation essentially different
from the case of affine mappings. This is seen from the estimates in Section 3, where
we only used properties of the derivative of F i . Therefore, without loss of generality,
we assume, from now on, that the F i are affine mappings.
In some of the following results, we state the result for substructures of diameter
proportional to H , but prove the theorem only for a reference substructure. The
introduction of the scaling factors into the final formulas by the methods and results
of Section 3 are routine.
For a proof of Lemma 4 and a general discussion, see Bramble and Xu [4].
Lemma 4. Let Q H u h be the L 2 projection of the finite element function u h onto
the coarse space
and
We remark that these bounds are not necessarily independent of the values K i of
the coefficient. To guarantee that, one has to work with weighted norms, and insist
that the coefficients k i satisfy the quasi-monotone condition [8], [25].
Lemma 5. Let - u h
be the average value of u h on W j ; the wirebasket of the
and
Similar bounds also hold for an individual substructure edge.
Proof. In the reference substructure, we know that P - h ae V - h , where V - h is a
standard space defined on a shape regular triangulation that includes
. This can be done by refining appropriately all the elements of T - h with sides bigger
than, say, 3 - h=2.
Now we apply the well-known result for shape regular triangulations, lemma 4.3
in [9], to get both estimates, recalling that in the reference substructure - h i 1=N 2 .
In the abstract Schwarz convergence theory, the crucial point in the estimate of the
rate of convergence of the algorithm is to demonstrate that all functions in the finite
element space can be decomposed into components belonging to the subspaces such
that the sum of the resulting energies are uniformly, or almost uniformly, bounded
with respect to the parameters H and N . The main technique for deriving such a
decomposition is the use of a suitable partition of unity. In the next two lemmas, we
explicitly construct such a partition.
Lemma 6. Let F k be the common face
k be the function
in P h
(\Omega\Gamma that is equal to one at the interior nodes of F k , zero on the remainder
of
1(\Omega
The same bound also holds for the other
Proof. We define the functions -
and -
in the reference cube; ' F k
and #F k
are
obtained, as usual, by mapping, see subsection 3. We construct a function -
having
Fig. 1. One of the segments CCk
the same boundary values as - ' F k
, and then prove the bound for the former. The
standard energy minimizing property of discrete harmonic extensions then implies the
bound for -
. The six functions which correspond to the six faces of the cube also
form a partition of unity at all nodes at the closure of the substructure except those
on the wirebasket; this property is used in the next lemma.
We divide the substructure into twenty four tetrahedra by connecting its center
C to all the vertices and to all the six centers C k of the faces, and by drawing the
diagonals of the faces of
Fig 1.
The function -
associated to the face F k is defined as being 1=6 at the point C.
The values at the centers of the faces are defined by -
is the
Kronecker symbol. -
is defined to be linear at the segments CC j for 6. The
values inside each subtetrahedron formed by a segment CC j and one edge of the cube
are defined to be constant on the intersection of any plane through that edge, and is
given by the value, already known, at the segment CC j . The values at the edge of the
cube belonging to this subtetrahedron are then modified to be equal to zero. Next,
the whole function -
is modified to be a piecewise Q 1 function by interpolating at
the vertices of all the GLL nodes of the reference cube.
We claim that jr -
x is a point belonging to any element K
that does not touch any edge of the cube, and r is the distance between the center of
K and the closest edge of the cube. Let ab be a side of K. We analyze in detail the
situation depicted in Fig 2, where ab is parallel to CC k . Let e be the intersection of
the plane containing these two segments with the edge of the cube that is closest to
ab.
(a)j - D, by construction of -
, where D is the size of the
radial projection of ab on CC k . By similarity of triangles, we may write:
where r 0 is the distance between e and the midpoint of ab. Here we have used that the
distance between e and CC k is of order 1. If the segment ab is not parallel to CC k ,
the difference j -
(a)j is even smaller, and (18) is still valid. Notice that r 0
is within a multiple of 2 of r. Therefore Lemma 2 implies that jr -
In order to estimate the energy of -
, we start with the elements K that touch
one of the edges of the face F k . Let h 3 be the largest side of one of these elements.
Since the nodal values of -
at K are 0, 1, and 1=6,
Fig. 2. Geometry underlying equation (18)
a
r
by a simple use of equation (15). By summing over K, we conclude that the energy
of -
is bounded independently of N for the union of all elements that touch one of
the edges of the face F k .
To estimate the contribution to the energy from the rest of the substructure, we
consider one subtetrahedron at a time and introduce cylindrical coordinates using the
substructure edge, that belongs to the subtetrahedron, as the z-axis. The bound now
follows from the bound on the gradient given above and elementary considerations.
We refer to [9] for more details.
The following lemma corresponds to Lemma 4.5 in [9]. This lemma and the
previous one are the keys to avoiding H 1=2estimates and extension theorems.
Lemma 7. Let #F k
(x) be the function introduced in the proof of Lemma 6, let F k
be a face of the
I h denote the interpolation operator associated
with the finite element space P h and the image of the GLL points under the mapping
I h (# F k
and
Proof. The first part is trivial from the construction of -
made in the previous
lemma. For the second part, we first estimate the sum of the energy of all the elements
K that touch the wirebasket. The nodal values of the interpolator I - h ( -
in such
an element are 0,0,0,0, -
(c)-u(c) and -
lies between 0 and
1. Moreover, we denote by h 3 the side of K that is larger than the other two sides
Note that this larger side is parallel to the closest wirebasket edge.
using equation (15), we obtain:
Then, by using the expression of the L 2 norm in the two segments that are parallel to
the edge, and lemma 5, we have:
where the sum is taken over all elements K that touch the boundary of the face F k .
We next bound the energy of the interpolant for the other elements. Since r -
C=r where r is the distance between the element K and the nearest edge of -
(see
the proof of the previous lemma), Lemma 3 implies that
Kae
Kae
where the sum is taken over all elements K that do not touch the edges of
-\Omega .
The bound of the first term in the sum is trivial, and to bound the second term
we partition the elements of
-\Omega into groups, in accordance to the closest edge of
the exact rule for the assignment of the elements that are halfway between is of no
importance. For each edge of the wirebasket, we use a local cylindrical coordinate
system with the z axis coinciding with the edge, and the radial direction, normal to
the edge. In cylindrical coordinates, we estimate the sum by an integral
Kae
-\Omega r \Gamma2 jj-ujj 2
Z C
Z
Z
z
drd'dz:
The integral with respect to z can be bounded by using Lemma 5. We obtain
Kae
-\Omega r \Gamma2 jj-ujj 2
Z C
and thus
Kae
This proof is an adaptation of an argument given in [9] for shape regular meshes.
Note that equation (16) replaces the use of the inverse inequality, which cannot be
used here because of the bad aspect ratios of the elements. Equation (16) is analogous
to the L 2 bound of the derivative of a product in terms of L 2 norms of the functions
and L 1 norms of the gradients, which cannot be applied directly to our case because
we have the interpolation operator I h .
Lemma 8. Let -
W k be the averages of u h on @F k ; and W k , respectively.
Then,
The proofs are direct consequences of the Cauchy-Schwarz inequality.
Lemma 9. Let u h be zero on the mesh points of the faces
of\Omega j and discrete
harmonic
This result follows by estimating the energy norm of the zero extension of the
boundary values by means of equation (15) and by noting that the harmonic extension
has a smaller energy.
5. Iterative Substructuring Algorithms. The first algorithm we analyze is
a wirebasket based method, based on Algorithm 6.4 in [9]. This is a block-diagonal
preconditioner after transforming the original matrix to a convenient basis.
According to the abstract framework of Schwarz methods [9], we only need to
prescribe spaces whose union is the whole space, and the corresponding bilinear forms.
Each internal face F k generates a local space VF k
of all the h-discrete harmonic
functions that are zero at all the interface nodes that do not belong to this face. Notice
that the functions belonging to VF k
have support in the union of the two substructures
and\Omega j that share the face F k . The bilinear form used for this space is just a(\Delta; \Delta).
We also define a wirebasket subspace that is the range of the following interpolation
operator:
I h
Here, ' k is the discrete harmonic extension of the standard nodal basis functions OE k ,
W h is the set of nodes in the union of all the wirebaskets, and - u h
@F k is the average of
u h on @F k . The bilinear form for this coarse subspace is given by
These subspaces and bilinear forms define, via the Schwarz framework, a preconditioner
of S h that we call S h;WB .
Theorem 1. For the preconditioner S h;WB , we have
where the constant C is independent of the N , H, and the values k i of the coefficient.
Proof. We apply word by word the proof of theorem 6.4 in [9] to the matrix S h ,
using now the tools developed in Section 4. This gives
The harmonic FEM-SEM equivalence (12) and a Rayleigh quotient argument complete
the proof, as explained in Section 3.
We do not give the complete proof here because it would be a mere restatement
of the proof in [9].
The next algorithm is obtained from the previous one by the discrete harmonic
FEM-SEM equivalence, by which we find a preconditioner SN;WB from the preconditioner
studied above. Each face subspace related to a face F k is composed
of the set of all Q-discrete harmonic functions that are zero at all the interface nodes
that do not belong to the interior of the face F k .
The wirebasket subspaces are defined as before, by prescribing the values at the
GLL nodes on a face to be equal to the average of the function on the boundary of the
face. The bilinear forms used for the face and wirebasket subspaces are aQ and b 0 (\Delta; \Delta),
respectively. Notice that this is the wirebasket method based on GLL quadrature
given in [24].
The following lemma shows the equivalence of the two functions uN and u h with
respect to the bilinear form b 0 (\Delta; \Delta).
Lemma 10. Let u h be a Q 1 finite element function on the GLL mesh of the
interval I = [\Gamma1; +1], and let uN be its polynomial interpolant. Then
Proof. We prove only the - part. The inequality without the infimum is valid
for the constant c r that realizes the inf in the right hand side by the FEM-SEM
equivalence. By taking the inf in the left hand side we preserve the inequality.
Theorem 2. For the preconditioner SN;WB , we have
where the constant is independent of the parameters H, N and the the values k i of the
coefficient.
Proof. In this proof, the functions with indices h and N are all discrete harmonic
functions with respect to the appropriate norms, related in the same way as uN and u h ,
i.e.
According to Section 3, it is enough to analyze one substructure
\Omega i at a time, and prove the following equivalence:
1(\Omega
We prove only the - part, and the other inequality is analogous. Lemma 10 gives
an upper bound of the first term in the left hand side by the corresponding term in
the right hand side.
Each term in the sum on the left hand side can be bounded by
The first term of this expression can be bounded by the corresponding term on the
right hand side by interpolation and the harmonic FEM-SEM equivalence. The second
term is bounded by
where c h;W i
is the average of u h over W i . Here we used the estimate on the energy
norm of ' h;F k
which implies a similar estimate of ' N;F k
. Applying the Cauchy-Schwarz
inequality, as in lemma 8, and the FEM-SEM equivalence, we can bound this last
expression in terms of the first term in the right hand side of equation (19).
The polynomial analogues of the lemmas in Section 4 can be proved using the
harmonic FEM-SEM equivalence. This provides a theory for polynomials, which is
completely parallel to the one we have presented, that can be used to prove this
theorem directly. A variation of this approach is taken in [22] and [24], but without
the use of the FEM-SEM equivalence.
6. Overlapping Schwarz Algorithms. We now consider the additive overlapping
Schwarz methods, which are presented for instance in [10]. We recall that an
abstract framework, theorem 3.1 in [10], is available for the analysis of this type of
algorithm. Here we only discuss the additive version, but the analysis also applies in a
standard way to the multiplicative variant, which is more effective in many practical
problems.
In the abstract framework for the additive Schwarz methods, a preconditioner B h
for K h can be defined by specifying a set of local spaces together with a coarse space.
We can also provide approximate solvers for the elliptic problem restricted to each of
the proposed subspaces. Here we only work with exact solvers, since the extension to
inexact solvers is straightforward by using the abstract framework.
The
domain\Omega is covered by
substructures\Omega i , which are the original spectral
elements. We enlarge each of them to produce overlapping
i , in such a
way that the boundary
i does not cut through any element of the triangulation
generated by the GLL nodes. The overlap ffi is defined as the minimum distance
between the boundaries
i . When ffi is proportional to H the overlap is called
generous, and when ffi is comparable to the size of the Q 1 elements it is called a small
overlap. For the sake of simplicity, we again restrict our analysis to the case when all
the mappings F j are affine mappings. The general situation is treated similarly.
The local spaces are given by P h
i ), the set of functions in P h
that vanish at
all the nodes on or outside
. The coarse space is a Q 1 finite element space given
by the mesh generated by the vertices and edges of the
subregions\Omega i . Each subregion
\Omega i is then one element of this coarse finite element space. We note that this coarse
mesh is regular by assumption. This construction is completely parallel to that of
Section 2.1 of [11] for this particular choice of subregions. This setting incorporates
the small and the generous overlap preconditioners. We use the bilinear form a(\Delta; \Delta)
for the coarse and local spaces.
Theorem 3. For this additive Schwarz algorithm, the condition number of the
preconditioned operator satisfies:
The constant C is independent of the parameters H, N , and ffi .
Proof. As before, we follow the proof of the analogous theorem, theorem 3 in [11].
The proof follows word by word, except for the estimate of aK
where I h is the interpolation operator, f' i g is a partition of unity, w h is a finite
element function, and aK is just the restriction of a(\Delta; \Delta) to one element.
In this case it is known that j' i
and the rest of the proof follows without any change.
Remark 1. Even though the theory does not rule out the possibility of growth of
the constant when the coefficient k has large jumps, such a growth is very moderate in
numerical experiments; see e.g [13]. We note also that when the overlap is generous,
the method is optimal in the sense that the condition number is uniformly bounded
with respect to the parameters of the problem; see [19] for early work on this type of
preconditioner. Our results and techniques allow a very flexible choice of subregions.
We now apply FEM-SEM equivalence to the subspaces used to define B h;AS ; this
is the same technique used to derive the preconditioner SN;WB from S h;WB . The
coarse space is the same, and the local spaces are defined by
N (v N
where I h
N (v N ) interpolates v N at the GLL points and belongs to P h .
These subspaces and the use of the bilinear forms aQ (\Delta; \Delta) and a(\Delta; \Delta) for the local
and coarse spaces, respectively, define our preconditioner BN;AS . Theorem 3 and
a simple application of the FEM-SEM equivalence for each one of the local spaces
immediately give:
Theorem 4. The condition number of the preconditioned operator satisfies:
--R
Polynomial interpolation results in sobolev spaces.
Approximations Spectrales de Probl'emes aux Limites Elliptiques
The construction of preconditioners for elliptic problems by substructuring
Some estimates for a weighted L 2 projection.
Stabilization of spectral methods by finite element bubble functions.
Voigt, editors. Fifth International Symposium on Domain Decomposition Methods for Partial Differential Equations
Multilevel Schwarz methods for elliptic problems with discontinuous coefficients in three dimensions.
Widlund Schwarz analysis of iterative substructuring algorithms for elliptic problems in three dimensions.
Additive Schwarz methods for elliptic finite element problems in three dimensions.
Widlund Domain decomposition algorithms with small overlap.
Parallel Domain Decomposition for Incompressible Fluid Dynamics.
Experiences with domain decomposition in three di- mensions: Overlapping Schwarz methods
Domain Decomposition Methods in Science and Engi- neering
Analysis of iterative methods for the steady and unsteady Stokes problem: Application of spectral element dis- cretization
Spectral element methods for the Navier-Stokes equations
Schwarz type domain decomposition methods for spectral element discretiza- tions
Preconditioning legendre spectral collocation approximation to elliptic problems.
Domain Decomposition Algorithms for the p-version Finite Element Method for Elliptic Problems
Additive Schwarz methods for the p-version finite element method
Some Schwarz algorithms for the p-version finite element method
Iterative substructuring methods for spectral elements in three dimensions.
Iterative substructuring methods for spectral elements: Problems in three dimensions based on numerical quadrature.
Widlund A polylogarithmic bound for an iterative substructuring method for spectral elements in three dimensions.
Preconditioned conjugate gradient methods for spectral elements in three dimensions.
Finite element preconditioning for legendre spectral collocation approximations to elliptic equations and systems.
editors. Domain Decomposition Methods in Science and Engineering: The Sixth International Conference on Domain Decomposition
A domain decomposition algorithm for elliptic problems in three dimensions.
--TR
--CTR
James W. Lottes , Paul F. Fischer, Hybrid Multigrid/Schwarz Algorithms for the Spectral Element Method, Journal of Scientific Computing, v.24 n.1, p.613-646, July 2005
Luca F. Pavarino , Elena Zampieri, Overlapping Schwarz and Spectral Element Methods for Linear Elasticity and Elastic Waves, Journal of Scientific Computing, v.27 n.1-3, p.51-73, June 2006
Dan Stefanica, FETI and FETI-DP Methods for Spectral and Mortar Spectral Elements: A Performance Comparison, Journal of Scientific Computing, v.17 n.1-4, p.629-638, December 2002
V. Korneev , J. E. Flaherty , J. T. Oden , J. Fish, Additive Schwarz algorithms for solving hp-version finite element systems on triangular meshes, Applied Numerical Mathematics, v.43 n.4, p.399-421, December 2002
Marcello Manna , Andrea Vacca , Michel O. Deville, Preconditioned spectral multi-domain discretization of the incompressible Navier-Stokes equations, Journal of Computational Physics, v.201 n.1, p.204-223, 20 November 2004 | schwarz methods;preconditioned conjugate gradients;iterative substructuring;domain decomposition;spectral element method |
273914 | Theory of neuromata. | A finite automatonthe so-called neuromaton, realized by a finite discrete recurrent neural network, working in parallel computation mode, is considered. Both the size of neuromata (i.e., the number of neurons) and their descriptional complexity (i.e., the number of bits in the neuromaton representation) are studied. It is proved that a constraint time delay of the neuromaton output does not play a role within a polynomial descriptional complexity. It is shown that any regular language given by a regular expression of length n is recognized by a neuromaton with &THgr;(n) neurons. Further, it is proved that this network size is, in the worst case, optimal. On the other hand, generally there is not an equivalent polynomial length regular expression for a given neuromaton. Then, two specialized constructions of neural acceptors of the optimal descriptional complexity &THgr;( | Introduction
Neural networks [7] are models of computation motivated by our ideas about brain
functioning. Both their computational power and their efficiency have been traditionally
investigated [4, 14, 15, 19, 21] within the framework of computer science.
One less commonly studied task which we will be addressing is the comparison
of the computational power of neural networks with the traditional finite models
of computation, such as recognizers of regular languages. It appears that a finite
This research was supported by GA -
CR Grant No. 201/95/0976.
discrete recurrent neural network can be used for language recognition in parallel
mode: at each time step one bit of an input string is presented to the network
via an input neuron and an output neuron signals, possibly with a constant time
delay, whether the input string which has been read, so far, belongs to the relevant
language. In this way, a language can be recognized by a neural acceptor
(briefly, by a neuromaton). It is clear that the neuromata recognize just the regular
languages [12].
A similar definition of a neural acceptor appeared in [2, 8], where the problem of
language recognition by neural networks has been explored in the context of finite
automata. It was shown in [2] that every m-state deterministic finite automaton
can be realized as a discrete neural net with O(m 3
neurons and that at least
neurons are necessary for such construction. This upper and lower
bound was improved in [6, 8] by showing that \Theta(m 1
neurons suffice and that,
in the worst case, this network size cannot be decreased assuming either at most
O(logm)-time simulation delay [6] or polynomial weights [8]. Moreover, several
experiments to train the (second-order) recurrent neural networks from examples
to behave like deterministic finite automata either for a practical exploitation or
for a further rule extraction have also been done [5, 13, 20, 22] using the standard
neural learning heuristics back-propagation.
In the present paper we relate the size of neural acceptors to the length of regular
expressions that on one hand are known to possess the same expressive power as
finite automata, but on the other hand they represent a tool whose descriptional
efficiency can exceed that of deterministic finite automata.
First, in section 2 and 3, respectively, we will introduce the basic formalism for
dealing with regular languages and neuromata. We will prove that a constant time
delay of the neuromaton output does not play a role within a linear neuromata
size. Therefore, we will restrict ourselves to neuromata which respond only one
computational time step after the input string is read. Then, in section 4 we will
prove that any regular language described by a regular expression of a length n
can be recognized by a neuromaton consisting of O(n) neurons. Subsequently,
in section 5 we will show that, in general, this result cannot be improved because
there is a regular language given by a regular expression of length n requiring neural
acceptors of
n). Therefore, the respective neuromaton construction from a
regular expression is size-optimal. This can be used, for example, for constructive
neural learning when a learning algorithm for regular expressions from example
strings [3] is first employed. On the other hand, in section 6 this construction is
proven not to be efficiently reversible because there exists a neuromaton for which
every equivalent regular expression is of an exponential length.
Next, in section 7 we will present two specialized constructions of neural acceptors
for single n-bit string recognition that both require O(n 1
neurons and either
O(n) connections with constant weights or O(n 1
weights of the size
O(2
The number of bits required for the entire string acceptor description in
both cases is proportional to the length of the string. This means that these automata
constructions are optimal from the descriptional complexity point of view.
They can be exploited as a part of a more complex neural network design, for exam-
ple, for the construction of a cyclic neural network, with O(2 n
neurons and edges,
which computes any boolean function [9].
In section 8 we will introduce the concept of Hopfield languages as the languages
that are recognized by the so-called Hopfield acceptors (Hopfield neuromata) which
are based on symmetric neural networks (Hopfield networks). Hopfield networks
have been studied widely outside of the framework of formal languages, because of
their convergence properties. From the formal language theoretical point of view we
will prove an interesting fact, namely that the class of Hopfield languages is strictly
contained in the class of regular languages. Hence, they represent a natural proper
subclass of regular languages.
Furthermore, we will formulate the necessary and sufficient, so-called, Hopfield
condition stating when a regular language is a Hopfield language. In section 9 we
will show a construction of a Hopfield neuromaton with O(n) neurons for a regular
language satisfying the Hopfield condition. Thus, we will obtain a complete characterization
of the class of Hopfield languages. As far as, the closure properties of
Hopfield languages are concerned, we will show that the class of Hopfield languages
is closed under union, intersection, concatenation and complement and that it is
not closed under iteration.
Finally, in section 10 we investigate the complexity of the emptiness problem
for regular languages given by neuromata or by Hopfield acceptors. We will prove
that both problems are PSPACE-complete. This is a somewhat surprising because
the identical problems for regular expressions, deterministic and non-deterministic
finite automata are known to only be NL-complete [10, 11]. It confirms the fact
from section 6 that neuromata can be stronger than regular expressions from the
descriptional complexity point of view. As a next consequence we will obtain that
the equivalence problem for neuromata is PSPACE-complete as well.
All previous results jointly point to the fact that neuromata present quite an
efficient tool not only for the recognition of regular languages and of their subclasses
respectively, but also for their description. In addition, the above-mentioned constructions
can be generalized for the analog neural networks [18].
A preliminary version of this paper concerning general neuromata and Hopfield
languages, respectively, appeared in [16] and [17].
Regular Languages
We recall some basic notions from language theory [1]. We introduce the definition
of regular expressions which determine regular languages. The concept of a
(deterministic) finite automaton is defined and Kleene's theorem about the correspondence
between finite automata and regular languages is mentioned as well.
An alphabet is a finite set of symbols. A string over an alphabet \Sigma
is a finite-length sequence of symbols from \Sigma. The empty string, denoted by e, is
the string with no symbols. If x and y are strings, then the concatenation of x and
y is the string xy. The string xx
ntimes
is abbreviated to x n . The length of a string
x, denoted by jxj, is the total number of symbols in x.
language over an alphabet \Sigma is a set of strings over \Sigma. Let L 1 and
L 2 be two languages. The language L 1 \Delta L 2 , called the concatenation of L 1 and L 2 , is
g. Let L be a language. Then define
for n - 1. The iteration of L, denoted L ? , is the language L
n=0 L n . Similarly
the positive iteration
Definition 3 The set RE of regular expressions over an alphabet
defined as the minimal language over an alphabet f0;
the following conditions:
1. ;;
2. if ff; fi 2 RE then also (ff
In writing a regular expression we can omit many parentheses if we assume that
? has higher precedence than concatenation and the latter has higher precedence
than +. For example, ((0(1 ? may be written abbreviate the
ntimes
to ff n but jff n j remains n \Delta jffj.
Definition 4 The set is the set of regular languages [ff] which
are denoted by regular expressions ff as follows:
1.
2. if ff; fi 2 RE then [ff
We also use a regular expression ff corresponding to the positive iteration [ff]
Definition 5 A (deterministic) finite automaton is a 5-tuple
where is a finite set of automaton states, \Sigma is an input alphabet (in our
case \Gamma! Q is the transition function, q 0 2 Q is the initial
state of the automaton, and F ' Q is a set of accepting states.
Definition 6 The generalized transition function of the automaton
is defined in the following way:
1.
2.
Fg is the language recognized by the finite automaton
A.
Theorem 1 (Kleene). A language L is regular if f it is recognized by some finite
automaton A (i.e.,
Neuromata
In this section we formalize the concept of a neural acceptor - the so-called neu-
romaton which is a discrete recurrent neural network (or neural network, for short)
exploited for a language recognition in the following way: During the network com-
putation, an input string is presented bit after bit to the network by means of
a single predetermined input neuron. All neurons of the network work in paral-
lel. Following this, with a possible constant time delay the output neuron shows
whether the input string, that has been already read, is from the relevant language.
A similar definition appeared in [2] and [8].
Definition 7 A neural acceptor (briefly, a neuromaton) is a 7-tuple
out; is the set of n neurons including the input neuron inp 2
, and the output neuron out 2 V , is the set of edges,
is the set of integers) is the weight function (we use the abbreviation
Z is the threshold function (the abbreviation
is the initial state of the network.
The graph (V; E) is called the architecture of the neural network N and
is the size of the neuromaton. The number of bits that are needed for the whole
neuromaton representation (especially for the weight and threshold functions) is
called the descriptional complexity of neuromaton.
formally, due to the notational
consistency, by arbitrary xm+l 2 f0; 1g; l - 1, be the input for the neuroma-
ton Further, assume that all oriented paths from
inp to out in the architecture (V; E) have length at least k 1. The
state of the neural network at the discrete time t is a mapping s 1g. At
the beginning of a neural network computation the state s 0 is set to s 0
. Then at each time step
network computes its new state s t from the old state s t\Gamma1 as follows:
otherwise. For the neural acceptor N and
its input x 2 f0; 1g m we denote the state of the output neuron out 2 V in the time
1g is the
language recognized by the neuromaton N with the time delay k.
First, we show with respect to language recognition capabilities that a constant
time delay of the neuromaton output does not play a role, within a linear neuromata
size. More precisely, all languages recognized with the time delay k by a neuromaton
of size n can be recognized with the time delay 1 by a neuromaton of size O(2 k n).
be a neuromaton of size n such that
all oriented paths from inp to out in the architecture (V; E) have length at least
1. Then there exists a neuromaton N
init ) of the size 2 k (n
Proof: The idea of the proof is to construct N* in such a way that it checks ahead all possible computations of N for the next k steps and cancels all wrong computations that do not match the actual input being read with the time delay k.
For this purpose, besides the input and output neurons inp*, out*, the neuromaton N* consists of 2^k blocks N_x, x ∈ {0,1}^k, which foresee the computation of N, each of them for one of the possible next k bits x of input. Denote the neurons in these blocks in the same way as in N, but each of them indexed by the relevant k bits. Then the state of the neuron v_x of N* is equal to the state of the neuron v ∈ V of N after the computation over the first k − 1 bits of x ∈ {0,1}^k is performed. This is achieved as follows.
The block N_yb, where y ∈ {0,1}^{k−1} and b ∈ {0,1}, computes its new state from the old state of the block N_ay where a ∈ {0,1}. Therefore neurons in N_ay are connected to neurons of N_yb and the corresponding edges are labeled with the relevant weights with respect to the original weight function w of N. The block N_yb has the constant input b, which is taken into account by modifying all thresholds of neurons in this block, especially when b = 1.
The input bit c 2 f0; 1g of N ? indicates that the computations of N ay , where
a 6= c, are not valid. Therefore the input neuron inp ? cancels these computations
by setting all neuron states in N_ay to zero. Thus, all neurons in the 2^{k−1} blocks N_ay have zero states when the previous input bit does not match a.
This also enables the block N yb to execute the correct computation because the
influence of one invalid block from either N 0y or N 1y which are connected to N yb ,
is suppressed, due to zero states.
The neurons in the blocks N x which are connected to out x lead to the output
neuron out ? as well. They are labeled with the same weights. We know that half of
them do not have any influence on out ? . Moreover, among the remaining ones, the
corresponding neurons v_x (v ∈ V) from all blocks N_x, x ∈ {0,1}^k, have the same state because the distance between inp and out in N is at least k and the last k input bits cannot influence the output. It is sufficient to multiply the threshold of out* by 2^{k−1} in order to preserve the function computed by out. In this way, the correct recognition is accomplished with the required lookahead.
A formal definition of the neuromaton N
follows:
ay
ay
init (v ya
where s k\Gamma1 (v)(y) is the state of neuron v in the neuromaton N for the input y 2
1g k\Gamma1 at the time step k \Gamma 1.
The term bw(hinp; vi) in the threshold definition takes into account the weight
of the constant input in the block N y1 (y 2 f0; 1g corresponding to the original
weight associated with the edge leading from the input inp to the relevant neuron v
in the neuromaton N . The definition of w ensures setting the states of
all neurons in the block N 0y (y 2 f0; 1g k\Gamma1 ) to zero iff the input inp ? is 1. Similarly
the definition of weights w together with the term aw ? (hinp
in the threshold definition cause zero states of all neurons in the block N 1y (y 2
Clearly, the size of the neuromaton N* is of order O(2^k n). 2
Following Lemma 1, we can restrict ourselves to neuromata which respond only
one computational time step after the input string is read because their size is, up
to a constant multiplication factor, as large as the size of the equivalent neuromata
with a constant time delay. Therefore, in the rest of this paper we will assume
that the time delay in the neuromaton recognition is 1 and that any neuromaton
architecture does not contain an edge from the input to the output neuron. We will
also denote L 1 (N ) by L(N ) for any neuromaton N .
Next, we will prove that the neural acceptor can be viewed as a finite automaton
[12] and therefore, neuromata recognize exactly regular languages due to Theorem
1.
Theorem 2 Let L = L_1(N) be the language recognized by some neuromaton N. Then L is regular.
Proof: Let N = (V, E, w, h, inp, out, s^0) be a neural acceptor. We define a deterministic finite automaton A = (Q, Σ, δ, q_0, F) with Q = {s ; s : V → {0,1}}, Σ = {0,1}, and q_0 = s^0. The transition function is defined for s ∈ Q and x ∈ Σ as follows: δ(s, x) is the new network state that N computes in one parallel step from the old state s when the input neuron reads x. Finally, F = {s ∈ Q ; s(out) = 1}. Then the proposition follows from Theorem 1. 2
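The state-space argument of Theorem 2 can be phrased as a small program: a neuromaton with neuron set V has at most 2^|V| states, so collecting the reachable states together with the one-step transitions on input bits 0 and 1 yields a finite automaton. The sketch below (Python; the step convention and the tiny example network are the same invented ones as in the sketch after Definition 8, and the constant output delay is ignored) does exactly this:

    # A sketch of the proof of Theorem 2: enumerate reachable network states
    # and record the successor state for each input bit.
    def step(V, E, w, h, inp, state, bit):
        """One parallel update: neurons read the old state; inp then stores the new bit."""
        new = {v: int(sum(w[(u, v)] * state[u] for u in V if (u, v) in E) >= h[v])
               for v in V if v != inp}
        new[inp] = bit
        return tuple(sorted(new.items()))   # hashable automaton state

    def neuromaton_to_dfa(V, E, w, h, inp, out, s0):
        q0 = tuple(sorted(s0.items()))
        delta, visited, frontier = {}, {q0}, [q0]
        while frontier:
            q = frontier.pop()
            for bit in (0, 1):
                nxt = step(V, E, w, h, inp, dict(q), bit)
                delta[(q, bit)] = nxt
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(nxt)
        accepting = {q for q in visited if dict(q)[out] == 1}
        return visited, delta, q0, accepting

    if __name__ == "__main__":
        V = ["inp", "out"]
        E = {("inp", "out")}
        w = {("inp", "out"): 1}
        h = {"out": 1}
        Q, delta, q0, F = neuromaton_to_dfa(V, E, w, h, "inp", "out", {"inp": 0, "out": 0})
        print(len(Q), "states,", len(F), "accepting")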
4 Upper Bound
We show that any regular language, given by a regular expression of length n,
may be recognized by a neuromaton of size O(n). The idea of recognition by
neuromaton is to compare an input string with all possible strings generated by the
regular expression and to report via the output neuron whether some of these strings
match the input. Therefore, the constructed architecture of the neural network
corresponds to the structure of the regular expression. The neuromaton proceeds
through all oriented network paths that correspond to all strings generated by this
expression and that, at the same time, match the part of the input string that has
been read so far.
Theorem 3 For every regular language L ∈ RL denoted by a regular expression α (L = [α]) there exists a neuromaton N of the size O(|α|) such that L is recognized by N (i.e., L = L(N)).
Proof: Let L = [α] be a regular language denoted by a regular expression α. We construct a neuromaton N_α of size O(|α|) so that L(N_α) = [α].
We first build an architecture (V, E) of the neural network N_α recursively with respect to the structure of the regular expression α. For that purpose we define a sequence of graphs (V_1, E_1), ..., (V_p, E_p) corresponding to the whole expression α, which is recursively partitioned into shorter regular subexpressions, so that (V_p, E_p) corresponds only to the elementary subexpressions 0 or 1 of α (we say, vertices of the type 0 or 1). For the sake of notational simplicity we identify the subexpressions of α with the vertices of these graphs.
1.
2. Assume that have already been constructed and
a subexpression of ff different from 0 or 1. Hence, besides the empty language
and the empty string, the regular expression fi can denote union, concatena-
tion, or iteration of subexpressions of fi. With respect to the relevant regular
operation the vertex fi is fractioned and possibly new vertices corresponding
to the subexpressions of fi arise in the graph (V To be really
rigorous we should first remove the vertex fi and then add the new vertices.
However, due to the notational simplicity we do not insist on such rigor and
therefore, we can identify one of the new vertices with the old fi. That is why
we write rather inexactly, for example, 'fi has the form fi fl'. Moreover, we
ffl fi is ;: V g.
ffl fi is e: V
fiigg.
ffl fi has the form fi
g.
ffl fi has the form fi
g.
ffl fi has the form fiig.
This construction is finished after contains only subexpressions
0 or 1. Then we define the network architecture in the following way:
For Now we can define the weight
function w and the threshold function
is the neuron of type 1:
is the neuron of type 0:
The initial state is defined as s^0(start) = 1, and s^0(v) = 0 for all other neurons v ∈ V.
The set V contains three special neurons inp, out, start, as well as other neurons of the type 0 or 1 - one for each subexpression 0 or 1 in α; hence, |V| = O(|α|). An example of the neuromaton for the regular language [(1(0 · · ·] is given in figure 1 (the types of neurons are depicted inside the circles representing neurons; thresholds are depicted as weights of edges with constant inputs −1).
Figure 1: Neuromaton for the example regular language [(1(0 · · ·].
We prove that L(N_α) = [α]. From the construction of (V, E) above, it is easy to observe that this graph corresponds to the structure of the regular expression α. This means that for every string x = x_1 · · · x_m ∈ L there is an oriented path start, v_1, ..., v_m, out leading from start ∈ V to out ∈ V and containing vertices of the relevant types (v_i is of the type x_i, i.e. 0 or 1). On the other hand, for any such path there is a corresponding string in L.
The neural acceptor N_α passes through all possible paths that match the network input. In the beginning the only active non-input neuron is start ∈ V (its state is 1). It sends a signal to all connected neurons and subsequently becomes passive (its state is 0) due to the dominant threshold. The connected neurons of the type 0 or 1 compare their types with the network input and become active only when they match; otherwise they remain passive. Due to the weight and threshold values, it follows that any neuron i of the type 1 becomes active iff inp ∈ V is active and at least one of the neurons j ≠ inp with ⟨j, i⟩ ∈ E is active, and any neuron i of the type 0 becomes active iff inp ∈ V is passive and at least one of the neurons j ≠ inp with ⟨j, i⟩ ∈ E is active. This way all relevant paths are being traversed, and their traversal ends in the neuron out ∈ V, which realizes the logical disjunction and is active iff the prefix of the input string that has been read so far belongs to L. This completes the proof that L(N_α) = [α]. 2
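The recognition process just described, keeping the set of active type-0/type-1 neurons and, after each input bit, activating exactly those successors whose type matches the bit, is in effect a parallel simulation of all paths from start towards out. The following sketch (Python) mimics this behaviour on an explicitly given graph of typed vertices; the small graph in the example, standing for the expression 0(1)*, is invented for illustration and is not the construction of the proof:

    # A sketch of how the neuromaton of Theorem 3 processes its input: a vertex
    # of type b becomes active iff the current input bit is b and one of its
    # predecessors was active; acceptance means some active vertex reaches out.
    def accepts(edges, vertex_type, start, out, word):
        active = {start}
        for bit in word:
            active = {v for (u, v) in edges
                      if u in active and vertex_type.get(v) == bit}
            if not active:
                return False
        return any((v, out) in edges for v in active)

    # Invented example graph for [0(1)*]:
    # start -> a(type 0) -> out, a -> b(type 1) -> out, b -> b (the iteration).
    edges = {("start", "a"), ("a", "out"), ("a", "b"), ("b", "out"), ("b", "b")}
    vertex_type = {"a": 0, "b": 1}

    if __name__ == "__main__":
        for w in ([0], [0, 1, 1], [1], [0, 1, 0]):
            print(w, accepts(edges, vertex_type, "start", "out", w))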
5 Lower Bound
In this section we show the lower bound Ω(n) for the number of neurons that, in the worst case, are necessary for the recognition of regular languages which are described by regular expressions of the length n. As a consequence, it follows that the construction of the neuromaton from section 4 is size-optimal.
The standard technique is employed for this purpose. For a given length of
regular expression we define a regular language and the corresponding set with an
exponential number of prefixes for this language. We prove that these prefixes
must bring any neuromaton to an exponential number of different states in order
to provide a correct recognition. This will imply the desired lower bound.
Definition 9 For we denote by Ln , \Pi k , and Pn , respectively, the
following regular languages:
hi
It is clear that P_n, n ≥ 1, is the set of prefixes of the language L_n. We prove several lemmas concerning properties of these regular languages. The regular expression which defines the language L_n in Definition 9 is in fact of more than linear length, because the abbreviation for a repeated concatenation is not included when determining its length. Therefore, we first show that there is a regular expression α_n, of linear length only, denoting the same language L_n. The number of prefixes in P_n is shown to be exponential with respect to n.
Proof:
(i) In the regular expression which denotes the language L_n from Definition 9, we can subsequently factor out (n − 1) times the subexpression 1(e + 0) to obtain the desired regular expression α_n of the linear length |α_n| = O(n) which defines the same language L_n = [α_n].
(ii) It follows from Definition 9 that j\Pi k
The following lemma shows how the prefixes in Pn can be completed to strings
from Ln .
Lemma 3
Proof:
(i), (ii) follow from Definition 9.
(iii) Assume . The language Ln is defined via iteration in Definition 9.
Henceforth, we can write
\Pi
Now we prove that any two different prefixes from Pn can be completed by the same
suffix, so that one of the resulting strings is in Ln while the other one is not.
Lemma
Proof: Assume x . Then there exist
We will distinguish two cases:
Without loss of generality, suppose n ?
From (ii) Lemma 3 we obtain x 1
due to (ii) Lemma 3.
2.
We can write x
fe; 0g, for g. Without loss of generality
a
ffl Denote z
\Pi From (i) Lemma 3 z 1 0 2 Ln and z 2 1 2 Ln from (ii) Lemma 3.
This implies that x 1 because Ln is closed under concatenation
. To the contrary suppose that x 2
exist. Hence, z
from (ii) Lemma 3 it follows z
which is a contradiction.
Thus, x 2 y 62 Ln . 2
Now we are ready to prove the following theorem concerning the lower bound.
Theorem 4 Any neuromaton N that recognizes the language L_n = [α_n] = L(N) has at least Ω(n) neurons.
Proof: The neuromaton N that recognizes the language L_n = L(N) must reach exponentially many (in n) different states when taking input prefixes from P_n, because any two different x_1, x_2 ∈ P_n can be completed by a suffix y from Lemma 4 so that x_1 y ∈ L_n and x_2 y ∉ L_n. This implies that N needs Ω(n) binary neurons. 2
6 Neuromata Are Stronger
Although from the results of sections 4, 5 it seems that from the descriptional complexity
point of view, neuromata and regular expressions are polynomially equiva-
lent, in this section, we will show that there exists a neuromaton for which every
equivalent regular expression is of an exponential length. This means that the neu-
romata construction from regular expressions that has been described in section 4
is not efficiently reversible.
Theorem 5 For every n ≥ 1 there exists a regular language L_n recognized by a neuromaton N_n of size O(n) and of descriptional complexity O(n^2) such that any regular expression α_n which defines L_n = [α_n] is of length 2^{Ω(n)}.
Proof: We define the finite language L_n = {0,1}^m to be the set of all binary strings of a length m that is exponential in n. The neuromaton N_n that recognizes L_n is a binary n-bit counter which accepts the input string iff its length is m. The neuron
v_i corresponding to the i-th bit of the counter should change its state iff all the lower-order bits are equal to 1. However, the corresponding Boolean function cannot be computed by only one neuron and, therefore, a small subnetwork of three neurons a_i, b_i, v_i is introduced for this purpose. Then v_i is a disjunction of a_i and b_i, and this subnetwork updates the i-th bit as required, but it takes two computational steps. This means that the counter is two times slower, which is taken into account when counting till m. Moreover, the neuron v_0 generates the binary sequence (0011)* and a special neuron rst is introduced to suppress the firing of the output neuron v_{n−2} after m bits are read.
A formal definition of the counter follows:
Clearly, the size of N_n is of order O(n) and its descriptional complexity is O(n^2).
Let α_n be a minimal-length regular expression which defines the language L_n = [α_n]. We prove that |α_n| = Ω(m) = 2^{Ω(n)}. The expression α_n does not contain iterations because it defines a finite language (an iterated subexpression could generate only the empty string and, thus, could be omitted). To generate strings from [α_n], the expression α_n is to be read from the left to the right side without any returning, since no iteration is allowed. But any string of L_n is of length m and that is why α_n must contain at least m symbols from {0,1}. Hence |α_n| ≥ m. 2
Theorem 5 shows that, in some sense, there is a big gap between the descriptive
power of regular expressions and that of neuromata. We will discuss this issue later
in sections 10, 11.
7 Neural String Acceptors
The previous results showed that neuromata have the descriptive capabilities of
regular expressions. In this section we will study how powerful they are when we
confine ourselves to a certain subclass of regular expressions. Here, we will deal with
the simplest case only, considering fixed binary strings from {0,1}*. We present two constructions of neural acceptors for single string recognition. For n-bit strings they both require O(√n) neurons and either O(n) connections with constant weights or O(√n) connections with weights of the size O(2^{√n}). The number of bits required for the entire string acceptor description is in both cases proportional to the length of the string. This means that these constructions are optimal from the descriptional complexity point of view.
Studying this elementary case is useful because single string recognition is very
often a part of more complicated tasks. The techniques developed for the architectural
neural network design can, for example, sometimes improve the construction
of neuromata from section 4, when a regular expression consists of long binary sub-
strings. The other example of application is a constructive neural learning, where
strings are viewed as training patterns, and the resulting network is composed of
neural acceptors for these strings that work in parallel.
Theorem 6 For any string a = a_1 · · · a_n ∈ {0,1}^n there exists a neural acceptor of the size O(√n) neurons with O(n) connections of constant weights. Thus, the descriptional complexity of the neuromaton is Θ(n).
Proof: For the sake of simplicity suppose first that n = p^2 for a positive integer p. The idea of the neural acceptor construction is to split the string a ∈ {0,1}^n into p pieces of the length p and to encode these p substrings using p^2 binary weights (p for each piece) of edges leading to p comparator neurons c_1, ..., c_p. The input string x is being gradually stored, p bits at a time, into a buffer of p neurons. When the buffer is full, the relevant comparator neuron c_i compares the assigned part a_{(i−1)p+1} · · · a_{ip} of the string a with the corresponding part of the input string x that is stored in the buffer and sends the result of this comparison to the next comparator neuron. The synchronization of the comparison is performed by 2p clock neurons, where neurons s_1, ..., s_p tick at each time step of the network computation, and neurons m_1, ..., m_p tick once in a period of p such steps. The last comparator neuron c_p represents the output neuron and reports at the end whether the input string x matches a.
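The behaviour of this acceptor, buffering p input bits and ANDing the block comparisons along the comparator chain, can be summarised by the following sketch (Python). It reproduces only the logic of the construction, not the actual neurons, weights and clock; the helper name block_acceptor is ours:

    import math

    # A sketch of the logic behind the Theorem 6 acceptor: split a into p = sqrt(n)
    # blocks of p bits, fill a p-bit buffer from the input, and let the i-th
    # comparator check the i-th block; the final comparator plays the output role.
    def block_acceptor(a, x):
        n = len(a)
        p = int(math.isqrt(n))
        assert p * p == n, "for simplicity assume n is a perfect square"
        if len(x) != n:
            return False
        ok = True                                  # carried along the comparator chain
        for i in range(p):
            buffer = x[i * p:(i + 1) * p]          # p bits stored in the buffer
            block = a[i * p:(i + 1) * p]           # block encoded in the weights
            ok = ok and (buffer == block)          # comparator c_i fires iff equal
        return ok

    if __name__ == "__main__":
        a = "110100101"                            # n = 9, p = 3
        print(block_acceptor(a, "110100101"), block_acceptor(a, "110100111"))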
A formal definition of the neuromaton for the
recognition of the string a follows (see also figure 2). Define
Denote by d
fi fi . Then put
ae
All remaining weights and thresholds are set to 1. Finally, the initial state is
Figure 2: Architecture for a neural string acceptor with O(n) edges.
Note that the feedback weights of the comparator neurons are not constant as required. This is due to the neuron c_i, which should keep its state 1 after it becomes active in order to transfer the possible positive result of the comparison to the next comparator neuron. Therefore, the relevant feedback must exceed the sum of all other inputs to achieve the threshold value. This can be avoided by inserting auxiliary neurons, which remember the results of preceding comparisons, between neighbouring comparator neurons. All of these neurons have a constant feedback because they only have one input, from the previous comparator neuron, to be exceeded.
The construction of the neural acceptor can also be easily adapted for n that is not a perfect square, n = q + r where q = p^2 is the largest square not exceeding n. The technique from the proof of Theorem 3 is employed for the recognition of the last r bits of the string a. The resulting architecture of size O(r) is then connected to the neural string acceptor by identifying the neuron start with the above-mentioned last comparator neuron. 2
Figure 3: Architecture for a neural string acceptor with O(√n) edges.
Theorem 7 For any string a ∈ {0,1}^n there exists a neural acceptor of the size O(√n) neurons with O(√n) weights of the size O(2^{√n}). Thus, the descriptional complexity of the neuromaton is Θ(n).
Proof: The design of the desired neural acceptor is very similar to the construction from the proof of Theorem 6, except that the number of comparator neurons is reduced to two neurons c_≤, c_≥. In this case the substrings a_{(i−1)p+1} · · · a_{ip}, i = 1, ..., p, are encoded by O(p) weights (two for each substring) of the size O(2^p) corresponding to the connections leading from the clock neurons m_i to these comparator neurons. The contents of the input buffer, viewed as a binary number, are first converted into an integer. This integer is then compared with the relevant encoded part of the string a by the comparator neuron c_≤ to see whether it is smaller or equal. At the same time, it is compared by the comparator neuron c_≥ to see whether it is greater or equal. A neuron which realizes the logical conjunction of the comparator outputs is added to indicate whether the part of the input string matches the corresponding part of the string a. However, this leads to one more computational step of the neural acceptor.
The correct synchronization can be achieved by exploiting the above-mentioned
additional architecture for the recognition of the last r bits a q+1 ; a
of the string a. Details of the synchronization are omitted, as well as, a
complete formal definition of the neural string acceptor. We only give the definition
of the weights that are relevant for the comparisons:
2 p\Gammaj a (i\Gamma2)p+j for
The weights leading from the clock neurons to the comparators are defined as differences because all clock neurons m_1, ..., m_i are active just when the i-th part of the input is in the buffer. The architecture of the neural string acceptor is depicted in figure 3 (instead of the above-mentioned conjunction, the neuron reset is added to realize the negation of the comparator conjunction to possibly terminate the clock). 2
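The arithmetic trick of Theorem 7, treating each p-bit block as an integer and testing it with one "smaller or equal" and one "greater or equal" comparison, can be spelled out as follows (a sketch in Python of the comparisons only; the two boolean tests stand for the roles of c_≤ and c_≥, not for their actual weights):

    import math

    # A sketch of the comparisons in the Theorem 7 acceptor: each p-bit block of
    # the input, read as a binary number, is compared against the corresponding
    # encoded block of a; "le and ge" together replace the bitwise comparison.
    def encoded_blocks(a, p):
        return [int(a[i * p:(i + 1) * p], 2) for i in range(len(a) // p)]

    def integer_acceptor(a, x):
        n = len(a)
        p = int(math.isqrt(n))
        assert p * p == n and len(x) == n
        targets = encoded_blocks(a, p)            # the O(2^p)-sized encoded values
        for i, target in enumerate(targets):
            value = int(x[i * p:(i + 1) * p], 2)  # buffer contents as an integer
            le = value <= target                  # role of comparator c_<=
            ge = value >= target                  # role of comparator c_>=
            if not (le and ge):                   # conjunction fails: block mismatch
                return False
        return True

    if __name__ == "__main__":
        a = "101101110"                           # n = 9, p = 3
        print(integer_acceptor(a, a), integer_acceptor(a, "101101111"))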
Piotr Indyk [9] pointed out that the latter string acceptor construction from Theorem 7 can also be exploited for building a cyclic neural network, with O(2^{n/2}) neurons and edges, which computes any boolean function of n variables. In this case the binary vector of all function values is encoded into the string acceptor. The position of the relevant 2^{n/2}-bit part of this vector which includes the desired output is given by the first n/2 bits of the input. All possible corresponding 2^{n/2}-bit substrings are generated and presented to the acceptor to obtain the relevant part of the function value vector, from which the relevant bit is extracted. Its position is determined from the last n/2 bits of the input.
8 Hopfield Languages
In section 7 we have restricted ourselves to a special subclass of regular expres-
sions. In this section we will concentrate on a special type of neural acceptors,
the so-called Hopfield neuromata which are based on symmetric neural networks
(Hopfield networks). In these networks the weights are symmetric and therefore,
the architecture of such neuromata can be seen as an undirected graph.
Hopfield networks have been traditionally studied [4, 21] and used due to their favorable convergence properties. These networks are also of particular interest,
because their natural physical realizations exist (e.g. Ising spin glasses, 'optical
computers'). Using the concept of Hopfield neuromata, we will define a class of
Hopfield languages that are recognized by these particular acceptors.
Definition 10 The neuromaton N = (V, E, w, h, inp, out, s^0) that is based on a symmetric neural network (Hopfield network), i.e. where ⟨i, j⟩ ∈ E implies ⟨j, i⟩ ∈ E and w(i, j) = w(j, i), is called a Hopfield acceptor (Hopfield neuromaton). The language L(N) recognized by a Hopfield neuromaton N is called a Hopfield language.
First, we will show that the class of Hopfield languages is strictly contained within the class of regular languages. For this purpose we will formulate a necessary condition - the so-called Hopfield condition - for a regular language to be a Hopfield language. Intuitively, Hopfield languages cannot include words with those potentially infinite substrings which allow the Hopfield neuromaton to converge and thus to forget relevant information about the previous part of the input string being recognized. The idea of the proof is to find a necessary condition that prevents a Hopfield neuromaton from converging.
Definition 11 A regular language L is said to satisfy the Hopfield condition iff
for every
Theorem 8 Every Hopfield language satisfies the Hopfield condition.
Proof: Let be a Hopfield language recognized by the Hopfield neuro-
ng and define
be the integer
vectors of size n \Theta 1 and let \Gammafinpg be the integer matrix of size
n \Theta n. Note that the matrix W is symmetric, since N is the Hopfield acceptor.
Let us present an input string v 1 x
to the acceptor N . For a sufficiently large m 0 , the network's computation must
start cycling over this input because the network has only 2 n+1 of possible states.
be the different states in this cycle and x
be the corresponding input bits, so that the state s i is followed by the state s -(i) ,
is the index shifting permutation:
be the inverse permutation of -, and let - r
be the composed permutations, for any r - 1. Further, let ~c
be the binary vectors of size n \Theta 1. Then ( ~
follows from Definition 8.
For each state of the cycle we define an integer
and the symbol T denotes the transposition of a vector. Obviously,
p. Using the fact that the matrix W is symmetric
we obtain
Moreover, x 2. So we can write
~
We know that ( ~
Therefore, (~c T
which implies
p. But from E - p
Then we cannot have both c - 2 n, at the same
time because in this case the inequality of (~c T
is strict. The complementary case of c - 2 simultaneously is
impossible as well. Since then, the number of 1's in ~c - 2 (i) would be greater than
the number of 1's in ~c i for
Therefore, we can conclude that ~c - 2 consequently
This implies that the cycle length p - 2. Hence, for every v 2 2 f0; 1g ? either
This completes the proof
that L satisfies the Hopfield condition. 2
For example, it follows from Theorem 8 that regular languages such as [(000)*] are not Hopfield languages because they do not satisfy the Hopfield condition.
9 The Hopfield Condition Is Sufficient
In this section we will prove that the necessary Hopfield condition from Definition 11, stating when a regular language is a Hopfield language, is sufficient as well. A construction of a Hopfield neuromaton is shown for any regular language satisfying the Hopfield condition.
Theorem 9 For every regular language L = [α] satisfying the Hopfield condition there exists a Hopfield neuromaton N of the size O(|α|) such that L is recognized by N. Hence, L is a Hopfield language.
Proof: The architecture of the Hopfield neuromaton for the regular language
[ff] satisfying the Hopfield condition is given by the general construction from the
proof of Theorem 3. As a result we obtain an oriented network N 0 that corresponds
to the structure of the regular expression ff where each neuron n s of N 0 (besides
the special neurons inp, out, start) is associated with one symbol s 2 f0; 1g from
ff (i.e., is of the type s). The task of n s is to check whether s agrees with the input
bit.
We will transform N 0 into an equivalent Hopfield network N . Supposing that
ff contains iterations of binary substrings with at most two bits, the standard technique
[15, 21] of the transformation of acyclic neural networks to Hopfield networks
can be employed. The idea consists in adjusting weights to prevent propagating a
signal backwards while preserving the original function of neurons. The transformation
starts in the neuron out, it is carried out in the opposite direction to oriented
edges and ends up in the neuron start. For a neuron whose outgoing weights have
been already adjusted, its threshold and incoming weights are multiplied by a sufficiently
large integer which exceeds the sum of absolute values of outgoing weights.
This is sufficient to suppress the influence of outgoing edges on the neuron. After
the transformation is accomplished all oriented paths leading from start to out are
labeled by decreasing sequences in weights.
The problem lies in realizing general iterations using only the symmetric weights.
Consider a subnetwork I of N 0 corresponding to the iteration of some substring of
ff. Let the subnetwork I have arisen from a subexpression fi + of ff in the proof
of theorem 3. After the above-mentioned transformation is performed, any path
leading from the incoming edges of I to the outgoing ones is labeled by a decreasing
sequence of weights in order to avoid the backward signal from spreading. But the
signal should be propagated from any output of the subnetwork I back to each sub-network
input, as the iteration requires. On one hand, an integer weight associated
with such a connection should be small enough in order to suppress the backward
signal propagation. On the other hand, this weight should be sufficiently large
enough to influence the subnetwork input neuron. Clearly, these two requirements
are contradictory.
Consider a simple cycle C in the subnetwork I consisting of an oriented path
passing through I and of one backward edge leading from the end of this path (i.e.,
the output of I) to its beginning (i.e., the input of I). Let the types of neurons in the
cycle C establish an iteration a + where a 2 f0; 1g ? and jaj ? 2. Moreover, suppose
that x 2 f0; 1g 2 and a = x k for some k - 2. In the Hopfield condition set v 2 to
be a postfix of L associated with a path leading from C to out. Similarly set v 1 to
be a prefix of L associated with a path leading from start to C such that for every
which contradicts the Hopfield
condition. Such prefix v 1 exists because, otherwise, the cycle C could be realized
as an iteration of two bits. Therefore a 6= x k . This implies that strings a i ,
contain a substring of the form by - b where b; b. Hence, the string
a has the form either
For notational simplicity we confine ourselves to the former case while the latter
remains similar. Furthermore, we consider a = a 1 by - ba 2 with a minimal ja 2 j.
We shift the decreasing sequence of weights in C to start and to end in the neuron
n y while relevant weights in N 0 are modified by the above-mentioned procedure to
ensure a consistent linkage of C within N 0 . For example, this means that all edges
leading from the output neuron of I in C to the input neurons of I are evaluated by
sufficiently large weights to realize the corresponding iterations. Now the problem
lies in the signal propagation from the neuron n_b to the neuron n_y. Assume first that b = 1.
To support the small weight in the connection between n_b and n_y, a new neuron
id that copies the state of the input neuron inp is connected to the neuron n y via
a sufficiently large weight which strengthens the small weight in the connection
from n b . Obviously, both the neuron n b (b = 1) and the new neuron id are active
at the same time and both enable the required signal propagation from n b to n y
together. On the other hand, when the neuron n- b ( - active, the neuron id
is passive due to the fact that it is copying the input. This prevents the neuron n y
from becoming active at that time.
However, for some symbols b in ff there can be neurons n b 0 outside
the cycle C (but within the subnetwork I) to which the edges from n y lead. This
situation corresponds to y being concatenated with a union operation within fi. In
this case the active neurons n b 0 , id would cause the neuron n y to fire. To avoid
this, we add another new neuron n y 0 that behaves identically as the neuron n y for
the symbol y 1g. Thus, the same neurons that are connected to n y are
linked to n y 0 and the edges originally outgoing from n y to n b 0 for all corresponding
b 0 are reconnected to lead only from n y 0 .
A similar approach is used in the opposite case when b = 0. The above-described procedure is applied for each simple cycle C in the subnetwork I corresponding to
the iteration fi + . These cycles are not necessarily disjoint, but the decomposition
of a = a 1 by - ba 2 with minimal ja 2 j ensures their consistent synthesis. Similarly, the
whole transformation process is performed for each iteration in ff. In this case
some iteration can be a part of another iteration and the magnitude of weights in
the inner iteration will need to be accommodated to embody this into the outer
iteration. It is also possible that the neuron id has to support both iterations in the
same point. Finally, the number of simple cycles in ff is O (jffj). Hence, the size of
the resulting Hopfield neuromaton remains of order O (jffj).
In figure 4 the preceding construction is illustrated by an example of the Hopfield
neuromaton for the regular language [(1(0
A simple cycle consisting of neurons clarified here in details. Notice
the decreasing sequence of weights (7,5,1) in this cycle, starting and ending in the
neuron n y , as well as, the neuron id which enables the signal propagation from n b
to n y . The neuron n y 0 , identical with n y , has also been created because the neuron
originally connected to n y (see figure 1). 2
Figure 4: Hopfield neuromaton for the regular language from Figure 1.
Corollary 1 Let L be a regular language. Then L is a Hopfield language iff L satisfies the Hopfield condition.
Finally, we will briefly investigate the closure properties of the class of Hopfield languages.
Theorem 10 The class of Hopfield languages is closed under union, concatenation, intersection, and complement. It is not closed under iteration.
Proof: The closedness of Hopfield languages under union and concatenation follows from Corollary 1. To obtain the Hopfield neuromaton for the complement, negate the function of the output neuron out by multiplying the associated weights (and the threshold) by −2 and by adding 1 to the threshold. Hence, the Hopfield languages are closed under intersection as well. Finally, due to Theorem 9, [1010] is a Hopfield language, whereas [(1010)*] is not a Hopfield language because it does not satisfy the Hopfield condition from Theorem 8. 2
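The negation of the output neuron used in this proof is a purely local operation on the weights and threshold of out. The sketch below (Python) applies it to an arbitrary integer threshold unit and checks, by brute force over all input patterns, that the new unit computes the negated function; the example unit is invented for illustration:

    from itertools import product

    # A sketch of the complement step in Theorem 10: multiply the weights (and
    # the threshold) of the output unit by -2 and add 1 to the threshold.  For
    # integer weights: sum(w*s) >= h  iff  not(-2*sum(w*s) >= -2*h + 1).
    def fires(weights, threshold, inputs):
        return int(sum(w * s for w, s in zip(weights, inputs)) >= threshold)

    def negate_unit(weights, threshold):
        return [-2 * w for w in weights], -2 * threshold + 1

    if __name__ == "__main__":
        weights, threshold = [2, -1, 3], 2          # an invented output unit
        neg_w, neg_h = negate_unit(weights, threshold)
        for inputs in product((0, 1), repeat=len(weights)):
            assert fires(neg_w, neg_h, inputs) == 1 - fires(weights, threshold, inputs)
        print("negation verified on all input patterns")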
10 Emptiness Problem
In order to further illustrate the descriptional power of neuromata we investigate
the complexity of the emptiness problem for regular languages given by neuromata
or by Hopfield acceptors. We will prove that both problems are PSPACE-complete.
Definition 12 Given a (Hopfield) neuromaton N , the (Hopfield) neuromaton emptiness
problem, which is denoted NEP (HNEP), is the issue of deciding whether the
language recognized by the (Hopfield) neuromaton N is nonempty.
Theorem 11 NEP, HNEP are PSPACE-complete.
Proof: To show that both NEP, HNEP ∈ PSPACE, an input string for the (Hopfield) neuromaton N is guessed bit by bit and its acceptance is checked by simulating the network computation in polynomial space to witness the non-emptiness of L(N).
Next, we show that NEP is PSPACE-hard. Let A be an arbitrary language
in PSPACE. For each x 2 f0; 1g ? we will, in a polynomial time, construct the
corresponding neuromaton N such that x 2 A iff N 2 NEP . Further, let M be a
polynomial space bounded Turing machine which recognizes A. First, a cyclic neural
network N 0 which simulates M can be constructed in a polynomial time using the
standard technique [4]. The idea of this construction is that for each tape cell of
M there is a subnetwork which simulates a tape head when it is in this position
during the computation (i.e., the local transition rule). The neighbor subnetworks
are connected to enable the head moves. The input x for M is encoded in the initial
state of N 0 and at the end of the neural network computation one neuron of N 0 ,
called result, signals whether x 2 A. The neural network N 0 is embodied into the
neuromaton N as follows. The input neuron inp of N is not connected to any other
neuron and the output neuron out of N is identified with the neuron result of N 0 .
Hence, M accepts x iff the neuron out is active at the end of the simulation, iff L(N) contains all words of the length which is equal to the length of the computation of M on x, iff N ∈ NEP. Thus, x ∈ A iff N ∈ NEP. This completes the proof that NEP is PSPACE-complete.
For the Hopfield neuromata a similar simulation of M can be achieved using
the symmetric neural network N 0 . It is because any convergent computation of an
arbitrary asymmetric neural network can be simulated by a symmetric network of
a polynomial size [4] and we can assume, without loss of generality, that M stops
for every input. Hence, HNEP is PSPACE-complete as well. 2
Theorem 11 is somewhat surprising because the identical problems for regular
expressions, deterministic and non-deterministic finite automata are known to only
be NL-complete [10, 11]. In some sense this shows that there is a big gap between
the descriptive power of regular expressions and that of neuromata and confirms
the result from section 6. We will discuss this issue later in section 11.
The difference between the descriptive power of regular expressions and of neu-
romata can also be illustrated by the complement operation. While the emptiness
problem for the complement of the regular expression becomes PSPACE-complete
[1] the emptiness problem complexity of the complement of a neuromaton
does not change because the output neuron can be easily negated.
As a next consequence we will show that the neuromaton equivalence problem
is PSPACE-complete as well.
Definition 13 Given two (Hopfield) neuromata N_1, N_2, the (Hopfield) neuromaton equivalence problem, which is denoted NEQP (HNEQP), is the issue of deciding whether the languages recognized by these (Hopfield) neuromata are the same, i.e. whether L(N_1) = L(N_2).
Corollary 2 NEQP, HNEQP are PSPACE-complete.
Proof: To prove that NEQP ∈ PSPACE, it is sufficient to show that its complement is in PSPACE. For this purpose an input string for the neuromata N_1, N_2 is guessed, and its acceptance by one of these two neuromata together with its rejection by the other one are checked by simulating both network computations in polynomial space to witness the non-equivalence L(N_1) ≠ L(N_2).
To show that NEQP is PSPACE-hard, the complement of NEP denoted co-NEP,
which is PSPACE-complete as well, is in a polynomial time reduced to NEQP.
Let the neuromaton N be an instance of co-NEP. We will in a polynomial time
construct the corresponding instance N_1, N_2 for NEQP so that L(N) is empty iff L(N_1) = L(N_2). The neuromaton N_1 is identified with N. Let N_2 be an arbitrary
small neuromaton which recognizes the empty language. It is easy to see that
this is a correct reduction. Hence, NEQP is PSPACE-complete. The PSPACE-completeness
of HNEQP can be achieved similarly. 2
11 Conclusions
In this paper we have explored an alternative formalism for the regular language
representation, based on neural networks. Comparing the so-called neuromata with
the classical regular expressions we have obtained the result that within a polynomial
descriptive complexity, the non-determinism, which is captured in the regular
expressions, can be simulated in a parallel mode by neuromata which have a deterministic
nature.
In our opinion, the descriptive power of the neuromata consists in an efficient
encoding of the transition function. While the transition function of the deterministic
(nondeterministic) finite automata is usually specified by the list of its values
(old state, input symbol)/new state, the same function in the neuromata is given
by the vector of formulae - each for one neuron which evaluates it. This encoding
can be interpreted more like a general program. From this point of view it
is easy to observe that the table of transition rules in the deterministic finite automata
is a special case of such program. Moreover, the neuromata can encode the
nondeterministic transition function efficiently using parallelism. In this way, the
behavior of the neuromata in the exponential number of all possible states can be
described in a polynomial size. However, this process is not reversible. The neural
transition program cannot be generally rewritten as a polynomial size table or a
polynomial length regular expression. Moreover, the number of neuromaton states
(although exponential) is limited and that is why the matching lower bound on
the neuromaton size can be achieved using the standard technique: there exists a
regular language which requires the exponential number of neuromaton states to be
recognized.
Also, a more complex neural transition rule specification makes the emptiness
problem for the neuromata harder than for the classical formalism (finite automata,
regular expressions). On the other hand, while the complement operation in the
nondeterministic case of this formalism can cause an exponential growth of the
descriptional complexity, the complement of the neuromata can be easily achieved.
We have also investigated the Hopfield neuromata which are studied widely due
to their convergence properties. We have shown that Hopfield neuromata determine
the proper subclass of regular languages - the so-called Hopfield languages. Via the
so-called Hopfield condition, we have completely characterized the class of Hopfield
languages.
We can conclude that the neuromata present quite an efficient tool not only for
the recognition of regular languages and of their subclasses respectively, but also
for their description.
Acknowledgement
We are grateful to Markus Holzer, Piotr Indyk, and Petr Savický for the stimulating discussions related to the topics of this paper. Further, we thank Tereza Bedanová, who realized the pictures in the LaTeX environment.
--R
The Design and Analysis of Computer Algorithms.
Efficient Simulation of Finite Automata by Neural Nets.
Learning of Regular Expressions by Pattern Matching.
Complexity Issues in Discrete Hopfield Networks
Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks
Bounds on the Complexity of Recurrent Neural Network Implementations of Finite State Machines.
Optimal Simulation of Automata by Neural Nets.
Personal Communication.
A Note on the Space Complexity of Some Decision Problems for Finite Automata.
Representation of Events in Nerve Nets and Finite Automata
Computational Complexity of Neural Networks: A Survey.
Circuit Complexity and Neural Networks.
Discrete Neural Computation A Theoretical Foundation.
Complexity Issues in Discrete Neurocomputing.
Learning Finite State Machines with Self-Clustering Recurrent Networks
--TR
Efficient simulation of finite automata by neural nets
Neurocomputing
A note on the space complexity of some decision problems for finite automata
Learning and extracting finite state automata with second-order recurrent neural networks
Circuit complexity and neural networks
Learning finite machines with self-clustering recurrent networks
Discrete neural computation
Learning and extracting initial mealy automata with a modular neural network model
Bounds on the complexity of recurrent neural network implementations of finite state machines
The Design and Analysis of Computer Algorithms
Computational complexity of neural networks
Hopfield Languages
Learning of regular expressions by pattern matching | hopfield networks;finite neural networks;regular expressions;descriptional complexity;emptiness problem;string acceptors |
274068 | Online Learning versus Offline Learning. | We present an off-line variant of the mistake-bound model of learning. This is an intermediate model between the on-line learning model (Littlestone, 1988, Littlestone, 1989) and the self-directed learning model (Goldman, Rivest Schapire, 1993, Goldman & Sloan, 1994). Just like in the other two models, a learner in the off-line model has to learn an unknown concept from a sequence of elements of the instance space on which it makes guess and test trials. In all models, the aim of the learner is to make as few mistakes as possible. The difference between the models is that, while in the on-line model only the set of possible elements is known, in the off-line model the sequence of elements (i.e., the identity of the elements as well as the order in which they are to be presented) is known to the learner in advance. On the other hand, the learner is weaker than the self-directed learner, which is allowed to choose adaptively the sequence of elements presented to him.We study some of the fundamental properties of the off-line model. In particular, we compare the number of mistakes made by the off-line learner on certain concept classes to those made by the on-line and self-directed learners. We give bounds on the possible gaps between the various models and show examples that prove that our bounds are tight.Another contribution of this paper is the extension of the combinatorial tool of labeled trees to a unified approach that captures the various mistake bound measures of all the models discussed. We believe that this tool will prove to be useful for further study of models of incremental learning. | Introduction
The mistake-bound model of learning, introduced by Littlestone [L88, L89], has attracted a considerable
amount of attention (e.g., [L88, L89, LW89, B90a, B90b, M91, CM92, HLL92, GRS93, GS94])
and is recognized as one of the central models of computational learning theory. Basically it models
a process of incremental learning, where the learner discovers the 'labels' of instances one by one.
At any given stage of the learning process, the learner has to predict the label of the next instance
based on his current knowledge, i.e. the labels of the previous instances that it has already seen.
The quantity that the learner would like to minimize is the number of mistakes it makes along
this process. Two variants of this model were considered, allowing the learner different degrees of
freedom in choosing the instances presented to him:
ffl The on-line model [L88, L89], in which the sequence of instances is chosen by an adversary
and the instances are presented to the learner one-by-one.
ffl The self-directed model [GRS93, GS94], in which the learner is the one who chooses the
sequence of instances; moreover, it may make his choices adaptively; i.e., each instance is
chosen only after seeing the labels of all previous instances.
In the on-line model, the learner is faced with two kinds of uncertainties. The first is which function
is the target function, out of all functions in the concept class which are consistent with the data.
The second is what are the instances that it would be challenged on in the future. While the
first uncertainty is common to almost any learning model, the second is particular to the on-line
learning model.
A central aim of this research is to focus on the uncertainty regarding the target function by
trying to "neutralize" the uncertainty that is involved in not knowing the future elements, and to
understand the effect that this uncertainty has on the mistake-bound learning model. One way
of doing this, is to allow the learner a full control of the sequence of instances, as is done in the
self-directed model. Our approach is different: we define the off-line learning model as a one in
which the learner knows the sequence of elements in advance. Since the difference between the
on-line learner and the off-line learner is the uncertainty regarding the order of the instances, this
comparison gives insight into the "information" that is in knowing the sequence.
Once we define the off-line cost of a sequence of elements, we can also define a best sequence
(a sequence in which the optimal learner, knowing the sequence, makes the fewest mistakes) and
a worst sequence (a sequence in which the optimal learner, knowing the sequence, makes the most
mistakes). These are best compared to the on-line and self-directed models if we think of them in
the following way:
ffl The worst sequence (off-line) model is a one in which the sequence of instances is chosen by
an adversary but the whole sequence (without the labels) is presented to the learner before
the prediction process starts.
ffl The best sequence (off-line) model is a one in which the whole sequence of instances is chosen
by the learner before the prediction process starts.
Denote by M on\Gammaline (C); Mworst (C); M best (C) and M sd (C) the number of mistakes made by the best
learning algorithm in the online, worst sequence, best sequence and self-directed model (respec-
tively) on the worst target concept in a concept class C. Obviously, for all C,
Mworst best
The main issue we consider is to what degree these measures can differ from one another. We
emphasize that the only complexity measure is the number of mistakes and that other complexity
measures, such as the computational complexity of the learning algorithms, are ignored. It is known
that in certain cases M sd can be strictly smaller than M on\Gammaline . For example, consider the class
of monotone monomials over n variables. It can be seen that this class has M_sd = 1, while M_on-line = Θ(n).
In addition, we give examples that show, for certain concept classes,
that M sd may be smaller than M best by a multiplicative factor of O(log n); hence, showing the
power of adaptiveness. The following example shows that there are also gaps between M best and
Mworst . Given n points in the interval [0; 1], consider the class of functions which are a suffix of
this interval (i.e., the functions f a (x) that are 1 for x - a and 0 otherwise). As there are n points,
there are only n+1 possible concepts, and therefore the Halving algorithm [L88, L89] is guaranteed
to make at most O(log n) mistakes, i.e. M on\Gammaline = O(log n). For this class, a best sequence would
be receiving the points in increasing order, in which case the learner makes at most one mistake
(i.e., M best = 1). On the other hand, we show that the worst sequence forces Mworst = \Theta(log n)
mistakes. An interesting question is what is the optimal strategy for a given sequence. We show
a simple optimal strategy and prove that the number of mistakes is exactly the rank of a search
tree (into which we insert the points in the order they appear in the sequence). We generalize
this example, and show that for any concept class, the exact number of mistakes is the rank of a
certain tree corresponding to the concept class and the particular sequence (this tree is based on the
consistent extensions). This formalization of the number of mistakes, in the spirit of Littlestone's
formalization for the on-line case [L88, L89], provides a powerful combinatorial characterization.
All the above examples raise the question of how large can these gaps be. We prove that the
gaps demonstrated by the above mentioned examples are essentially the largest possible; more
precisely, we show that M on\Gammaline can be greater than M sd by at most a factor of log jX j, where X
is the instance space (e.g., in the monomials example X is the set of 2 n boolean assignments).
The above result implies, in particular, that the ratio between the number of mistakes for
the best sequence and the worst sequence is O(log n), where n is the length of the sequence. 1
We also show that Mworst = \Omega\Gamma
log M on\Gammaline ), which implies that either both are constant or
both are non-constant. Finally we show examples in which M on\Gammaline = 3
Mworst , showing that
Mworst 6= M on\Gammaline . In a few cases we are able to derive better bounds: for the cases that Mworst
and Mworst = 2 we show simple on-line algorithms that have at most 1 and 3 mistakes, respectively.
One way to view the relationships among the above models is through the model of "experts"
[CFHHSW93, FMG92, MF93]. For each sequence oe there is an expert, E oe , that makes its predictions
under the assumption that the sequence is oe. Let oe be the sequence chosen by the adversary,
related result by [B90a] implies that if efficiency constraints are imposed on the model, then there are cases in
which some orders are "easy" and others are computationally "hard".
then the expert E oe makes at most Mworst mistakes. The on-line learner does not know the sequence
oe in advance, so the question is how close can it get to the best expert, E oe . The problem
is that the number of experts is overwhelming; initially there are n! experts (although the number
of experts consistent with the elements of the sequence oe seen so far decreases with each element
presented). Therefore, previous results about experts do not apply here.
The rest of this paper is organized as follows: In Section 2, we give formal definitions of the
model and the measures of performance that are discussed in this paper, followed by some simple
properties of these definitions. In Section 3, we give the definition of the rank of a tree (as well as
some other related definitions) and prove some properties of it. In Section 4, we present the various
gaps. Then, we characterize the off-line complexity (for completeness, we present in Section 4.2.1
a characterization of the same nature, based on [L88, L89], for the on-line complexity and for the
self-directed complexity [GS94]) and we use these characterizations to obtain some basic results.
Finally, in Section 5, we use these characterizations to study the gap between the on-line complexity
and off-line complexity.
2 The Model
2.1 Basic Definitions
In this section we formally present our versions of the mistake bound learning model which is the
subject of this work. The general framework is similar to the on-line learning model defined by
Littlestone [L88, L89].
Let X be any set, and let C be a collection of boolean functions defined over the set X (i.e., each c ∈ C is a function c : X → {0, 1}). We refer to X as the instance space and to C as the concept class.
Let S be a finite subset of X . An on-line learning algorithm with respect to S (and a concept
class C) is an algorithm A that is given (in advance) S as an input; Then, it works in steps as
follows: In the i-th step the algorithm is presented with a new element s i 2 S. It then outputs
its prediction p i 2 f0; 1g and in response it gets the true value c t (s i ), where c t 2 C denotes the
target function. The prediction p i may depend on the set S, the values it has seen so far (and of
course the concept class C). The process continues until all the elements of S have been presented.
Let σ = (s_1, ..., s_n) denote the order according to which the elements of S are presented to the learning algorithm. Denote by M(A[S], σ, c_t) the number of mistakes made by the algorithm on a sequence σ as above and target function c_t ∈ C, when the algorithm is given S in advance (i.e., the number of elements for which p_i ≠ c_t(s_i)). Define the mistake bound of the algorithm, for a fixed S, as M(A[S]) = max_{σ, c_t} M(A[S], σ, c_t). Finally, let M_on-line(S, C) = min_A M(A[S]).
The original definitions of [L88, L89] are obtained (at least for finite X) by considering S = X.
An off-line learning algorithm is an algorithm A that is given (in advance) not only the set S,
but also the actual sequence oe as an input. The learning process remains unchanged (except that
each prediction p i can now depend on the actual sequence, oe, and not only on the set of elements,
S). Denote by M(A[σ], c_t) the number of mistakes made by an off-line algorithm, A, on a sequence σ and a target c_t. Define M(A[σ]) = max_{c_t ∈ C} M(A[σ], c_t). For every sequence σ, define M(σ, C) = min_A M(A[σ]).
For a given S, we are interested in the best and worst sequences. Denote by M_best(S, C) the smallest value of M(σ, C) over all σ, an ordering of S, and let σ_best be a sequence that achieves this minimum (if there are several such sequences pick one of them arbitrarily). Similarly, M_worst(S, C) is the maximal value of M(σ, C) and σ_worst is a sequence such that M(σ_worst, C) = M_worst(S, C).
A self-directed learning algorithm A is one that chooses its sequence adaptively; hence the sequence may depend on the classifications of previous instances (i.e., on the target function). Denote by M_sd(A[S], c_t) the number of mistakes made by a self-directed algorithm A on a target function c_t ∈ C, when the algorithm is given in advance S (the set from which it is allowed to pick its queries). Define M_sd(A[S]) = max_{c_t ∈ C} M_sd(A[S], c_t) and M_sd(S, C) = min_A M_sd(A[S]).
The following is a simple consequence of the definitions:
Lemma 1: For any X, C, and a finite S ⊆ X,
M_sd(S, C) ≤ M_best(S, C) ≤ M_worst(S, C) ≤ M_on-line(S, C).
2.2 Relations to Equivalence Query Models
It is well known that the on-line learning model is, basically, equivalent to the Equivalence Query
model [L89]. It is not hard to realize that our versions of the on-line scenario give rise to
corresponding variants of the EQ model. For this we need the following definitions:
ffl An equivalence-query learning algorithm with respect to S (and a concept class C) is an algorithm
A that is given in advance S as an input; Then, it works in steps as follows: In the i-th
step the algorithm outputs its hypothesis, h i ' S, and in response it gets a counterexample;
i.e., an element x_i ∈ S such that h_i(x_i) ≠ c_t(x_i), where c_t ∈ C denotes the target function. The process goes on until the hypothesis agrees with the target function on all of S.
ffl Let F denote the function that chooses the counterexamples x i . We denote by EQ(A[S]; F; c t )
the number of counterexamples, x i , presented by F to the algorithm, A, in the course of a
learning process on the target, c t , when A knows S in advance (but does not know F ).
Finally, let EQ(S, C) = min_A max_{F, c_t} EQ(A[S], F, c_t).
Note that the original definitions of Angluin [A89] are obtained by considering S = X. The following is a well known (and easy to prove) fact:
Fact 1: For every X, C, and finite S ⊆ X, EQ(S, C) = M_on-line(S, C).
One aspect of the last definition above is that it considers the worst case performance of the
learner over all possible choices, F , of counterexamples to its hypotheses. It turns out that by
relaxing the definition so that the learner is only confronted with F 's of a certain type, one gets
EQ models that match the various offline learning measures presented in the previous subsection.
• Let σ denote an ordering of the set S. Let F_σ be the following strategy for choosing counterexamples. Given a hypothesis h, the counterexample, F_σ(h), is the minimal element of {x ∈ S : h(x) ≠ c_t(x)} according to the ordering σ.
• Let EQ(σ, C) = min_A max_{c_t} EQ(A[S], F_σ, c_t).
• Let EQ_best(S, C) = min_σ EQ(σ, C) and EQ_worst(S, C) = max_σ EQ(σ, C).
This variant of the equivalence query model in which the minimal counterexample is provided to
the algorithm is studied, e.g., in [PF88].
Fact 2: For every X, C and S ⊆ X as above, for every ordering σ of S, EQ(σ, C) = M(σ, C).
Proof: Given an EQ algorithm for (S; C; oe) construct an off-line algorithm by predicting, on each
element s i , the value that the current hypothesis of the EQ algorithm, h k i
, assigns to s i . Whenever
the teacher's response indicates a prediction was wrong, present that element as a counterexample
to the EQ algorithm (and replace the current hypothesis by its revised hypothesis).
For the other direction, given an offline algorithm, define at each stage of its learning process a
hypothesis by assigning to each unseen element, s 2 S, the value the algorithm would have guessed
for s if it got responses indicating it made no mistakes along the sequence, oe, from the current
stage up to that element. Whenever a counterexample is being presented, pass on to the offline
algorithm the fact that it has erred on that element (and update the hypothesis according to its
new guesses).
Corollary 1: For every
1. EQ best (S; best (S; C).
2. EQ worst (S; worst (S; C).
3 Labeled Trees
A central tool that we employ in the quantitative analysis in this work is the notion of ranks of trees.
We shall consider certain classes of labeled trees, depending upon the classes to be learned and the
type of learning we wish to analyze. The following section introduces these technical notions and
their basic combinatorial properties.
3.1 Rank of Trees
In this subsection we define the notion of the rank of a binary tree (see, e.g., [CLR90, EH89, B92]),
which plays a central role in this paper. We then prove some simple properties of this definition.
For a tree T , if T is empty then rank(T its left subtree and TR be
its right subtree. Then,
For example, the rank of a leaf is 0.
Let
-r
. The following lemma is a standard fact about the rank:
Lemma 2: A depth d rank r tree has at most
-r
leaves.
Proof: By induction on d and r. If there is exactly one leaf (if there were two or
more, then their least common ancestor is of rank 1). If there is one leaf (which is a
special case of r = 0) or two leaves, in which case r must be 1. In all these cases the claim holds.
For the induction step, let T be a depth d rank r tree. Each of and TR are of depth at most
by the definition of rank, in the worst case one of them is of rank r and the other of
Hence, by the induction hypothesis, the number of leaves is bounded by
r
d!
r
r
d
which completes the proof.
If r is small relative to d then it may be convenient to use the weaker d r (-
-r
on the
number of leaves.
A subtree of a tree T is a subset of the nodes of T ordered by the order induced by T .
Lemma 3: The rank of a binary tree T is at least k iff it has a subtree T 0 which is a complete
binary tree of depth k.
Proof: Mark in T the nodes where the rank increases. Those are the nodes of T 0 . For a marked
node with rank i, each of its children in T has rank hence it has a marked descendant with
binary tree. For the other direction, note that the rank of a
tree is at least the rank of any of its subtrees, and that a complete binary tree of depth k has rank
k.
Lemma 4: Let T be a complete binary tree of depth k. Let L be a partition of the leaves
of T into t disjoint subsets. For to be the subtree induced by the leaves in L i
(that is, T i is the tree of all the nodes of T that have a member of L i below them). Then, there
exists such that
Proof: The proof is by induction on t. For hence the claim is obvious.
For consider the nodes in depth bk=tc in T . There are two cases: (a) if all these nodes
belong to all trees T i then each of these trees contains a complete subtree of depth bk=tc and by
Lemma 3 each of them has rank of at least bk=tc. (b) if there exists a node v in depth bk=tc which
does not belong to all the trees T i then we consider the subtree T 0 whose root is v and consists of
all the nodes below v. By the definition of v, the leaves of the tree T 0 belong to at most t \Gamma 1 of
the sets L i . In addition the depth of T 0 is at least (t\Gamma1)k
. Hence, by induction hypothesis, one of
the subtrees T 0
of T 0 is of rank at least
Finally note that T 0
i is a subtree of T i hence T i has the desired rank.
Let us just mention that the above lower bound, on the rank of the induced subtrees, is essentially
the best possible. For example, take T to be a complete binary tree. Each leaf corresponds
to a path from the root to this leaf. Call an edge of such a path a left (right) edge if it goes from
a node to its left (right) son. Let L 0 (L 1 ) be the set of leaves with more left (right) edges. Then,
it can be verified that rank(T 0
3.2 Labeled Trees
Let X denote some domain set, S ' X and C ' f0; 1g X as above.
-labeled tree is a pair, (T ; F ), where T is a binary tree and F a function mapping the
internal nodes of T into X . Furthermore, we impose the following restriction on F :
is an ancestor of t then F
ffl A branch in a tree is a path from the root to a leaf. It follows that the above mapping F is
one to one on branches of T .
ffl A branch realizes a function
if for all 1 son of t i if and only if h(F (t i 1. Note that a branch
can realize more than one function. On the other hand, if then the
branch realizes a single function.
-labeled tree is an (S; C)-tree if the set of functions realized by its branches is exactly
S denote the set of all (S; C)-trees.
ffl For a sequence of elements of X , let T C
oe denote the maximal tree in T C
S for
which every node v in the k-th level is labeled F
Note, that using this notation, a class C shatters the set of elements of a sequence, oe, if and
only if T C
oe is a complete binary tree (recall that a class C shatters a set fs if for every
there exists a function f 2 C that for all
can therefore conclude that, for any class C,
oe is a complete binary treeg:
(We shall usually omit the superscript C when it is clear from the context.)
4 Gaps between the Complexity Measures
Lemma 1 provides the basic inequalities concerning the learning complexity of the different models.
In this section we turn to a quantitative analysis of the sizes of possible gaps between these measures.
We begin by presenting, in Section 4.1, some examples of concept classes for which there exist large
gaps between the learning complexity in different models. In Section 4.3, we prove upper bounds
on the possible sizes of these gaps, bounds that show that the examples of Section 4.1 obtain
the maximal possible gap sizes. A useful tool in our analysis is a characterization of the various
complexity measures as the ranks of certain trees. This characterization is given in section 4.2.
Let us begin our quantitative analysis by stating a basic upper bound on the learning complexity
in the most demanding model (from the student's point of view), namely, M on-line (S; C). Given an
instance space X , a concept class C and a set S we define C S to be the projection of the functions
in C on the set S (note that several functions in C may collide into a single function in C S ). Using
the Halving algorithm [L88, L89] we get,
Theorem 2: For all X ; C and S as above M on-line (S; C) - log jC S j.
4.1 Some Examples
The first example demonstrates that M best (S; C) may be much smaller than M worst (S; C) and
on-line (S; C). This example was already mentioned in the introduction and appears here in more
details.
Example 1: Let X be the unit interval [0; 1]. For, 0 - a - 1 define the function f a (x) to be 0 if
x - a and 1 if x ? a. Let C 4
1g. In other words, the concept class C consists of all
intervals of the form [a; 1] for 0 - a - 1. Let
g. By Theorem 2, it is
easy to see that in this example M on-line (S; C) - log(n 1). We would like to understand how an
off-line algorithm performs in this case.
Clearly, for every sequence oe, an adversary can always force a mistake on the first element of
the sequence. Hence, M best (S; C) - 1. To see that M best (S;
this sequence the following strategy makes at most 1 mistake: predict "0" until a mistake is made.
Then, all the other elements are "1"s of the function. For a worst sequence consider
Figure
1: The concept class of Example 2
It may be seen that the adversary can force the learning algorithm to make one mistake on each of
the sets
, and hence a total of mistakes.
This is the worst possible, as this performance is already granted for an on-line algorithm as
discussed above (and, by Lemma 1 an off-line algorithm can always match the on-line performance).
The next example shows that for all n there exist sets class C,
such that M sd (S; C) - 2 and M best (S; C)
n).
loss of generality, assume that for some value d,
dg. The concept class C (see Figure 1) consists of 2 \Delta 2 d functions
d. Each function f i is defined as follows: f i
all there is a single x i which is assigned 1 and hence can be viewed as an indicator for
the corresponding function f i ). The elements are partitioned into 2 d =d "blocks" each of
size d. In each of these blocks the 2 d functions f get all the 2 d possible combinations of
values (as in Figure 1). The functions are defined similarly by switching the roles of x's
and y's. More precisely, g i serves as an
indicator for the corresponding function g i ). Again, the elements are partitioned into
blocks each of size d. In each of these blocks the 2 d functions get all the 2 d possible
combinations of values.
To see that M sd (S; C) - 2 we describe a self-directed learner for this concept class. The learner
first asks about z and predicts in an arbitrary way. Whether it is right or wrong the answer
indicates whether the target functions is one of the f i 's or one of the g i 's. In each case the learner
looks for the corresponding indicator. That is, if c t asks the x's one by one, predicting 0
on each. A mistake on some x i (i.e., c t immediately implies that the target function is f i
and no mistakes are made anymore. Symmetrically, if c t the learner asks the y's one by one,
predicting 0 on each. A mistake on some y i (i.e., c t (y implies that the target function is g i
and again no mistakes are made anymore. In any case, the total number of mistakes is at most 2.
We now prove that M best (S; C)
n). 2 The idea is that a learner must choose its sequence
in advance, but does not know whether it looks for one of the f i 's or one of the g i 's. Formally, let oe
be the (best) sequence chosen by the learner. We describe a strategy for the adversary to choose a
target function in a way that forces the learner at least d=4
=\Omega\Gamma363 n) mistakes. Let oe 0 be a prefix
of oe of length 2 d . The adversary considers the number of x i 's queried in oe 0 versus the number of
's. Assume, without loss of generality, that the number of x i 's in oe 0 is smaller than the number
of y i 's in oe 0 . The adversary then restricts itself to choosing one of the f i 's. Moreover, it eliminates
all those functions f i whose corresponding element x i appears in oe 0 . Still, there are at least 2 d =2
possible functions to choose from. Now, consider the y's queried in oe 0 . By the construction of C
we can partition the elements "groups" of size 2 d =d such that every function f j
gives the same value for all elements in each group. There are at least 2 d =2 elements y's that are
queried in oe 0 and they belong to ' groups. By simple counting, d- d. We estimate the number
of possible behaviors on these ' groups as follows: originally all 2 d behaviors on the d groups were
possible. Hence, to eliminate one of the behaviors on the ' elements one needs to eliminate 2 d\Gamma'
functions. As we eliminated at most 2 d =2 functions, the number of eliminated behaviors is at most2
In other words, there are at least 12 ' behaviors on these ' elements. On the other
hand, if we are guaranteed to make at most r mistakes it follows from Theorem 6 and Lemma 2
that the number of functions is at most
-r
must be at least '=2 - d=4
n).
4.2 Characterizing M(oe; C) Using the Rank
The main goal of this section is to characterize the measure M(oe; C). As a by-product, we present
an optimal offline prediction algorithm. I.e., an algorithm, A, such that for every sequence oe,
The next theorem provides a characterization of M(oe; C) in terms of the rank of the tree T C
oe (for
any concept class C and any sequence oe). A similar characterization was proved by Littlestone [L88,
L89] for the on-line case (see section 4.2.1 below).
Theorem 3: For all
Proof: To show that M(oe; C) - rank(T oe ) we present an appropriate algorithm. For predicting
on s 1 , the algorithm considers the tree T oe , defined above, whose root is s 1 . Denote by its
left subtree and by TR its right subtree. If rank(T predicts "0", if
predicts "1", otherwise (rank(T L
can predict arbitrarily. Again, recall that in the case that rank(T L by the definition of
rank, both rank(T L ) and rank(T R ) are smaller than rank(T oe ). Therefore, at each step the algorithm
uses for the prediction a subtree of T oe which is consistent with all the values it has seen so far. To
conclude, at each step where the algorithm made a mistake, the rank decreased by (at least) 1, so
no more than rank(T oe ) mistakes are made.
To show that no algorithm can do better, we present a strategy for the adversary for choosing
a target in C so as to guarantee that a given algorithm A makes at least rank(T oe ) mistakes. The
adversary constructs for itself the tree T oe . At step i, it holds a subtree T whose root is a node
Again, by the Halving algorithm, M on-line (S; C) and therefore also M best (S; C) are O(log n).
marked s i which is consistent with the values it already gave to A as the classification of s
After getting A's prediction on s i the adversary decides about the true values as follows: If one
of the subtrees, either TR , has the same rank as the rank of T then it chooses the value
according to this subtree. Note that, by definition of rank, at most one of the subtrees may have
this property, so this is well defined. In this case, it is possible that A guessed the correct value (for
example, the algorithm we described above does this) but the rank of the subtree that will be used
by the adversary in the i 1-th step is not decreased. The second possible case, by the definition
of rank, is that the rank of both and TR is smaller by 1 than the rank of T . In this case, the
adversary chooses the negation of A's prediction; hence, in such a step A makes a mistake and the
rank is decreased by 1. Therefore, the adversary can force a total of rank(T oe ) mistakes.
The above theorem immediately implies:
Corollary 4: For all
worst (S;
oe is an ordering of S
best (S;
oe is an ordering of S
Remark 1: It is worth noting that, by Sauer's Lemma [S72], if the concept class C has V C
dimension d then the size of T C
oe is bounded by n d (where n, as usual, is the length of oe). It follows
that, for C with small V C, the tree is small and therefore, if consistency can be checked efficiently
then the construction of the tree is efficient. This, in turn, implies the efficiency of the generic
(optimal) off-line algorithm of the above proof, for classes with "small" VC dimension.
Example 3: Consider again the concept class of Example 1. Note that in this case, the tree
T oe is exactly the binary search tree corresponding to the sequence Namely, T oe is
the tree constructed by starting with an empty tree and performing the sequence of operations
e.g. [CLR90]). Hence, M(oe; C) is exactly the
rank of this search tree.
4.2.1 Characterizing the On-line and Self-Directed Learning
To complete the picture one would certainly like to have a combinatorial characterization of
on-line (S; C) as well. Such a characterization was given by Littlestone [L88, L89]. We reformulate
this characterization in terms of ranks of trees. The proof remains similar to the one given
by Littlestone [L88, L89] and we provide it here for completeness.
Theorem 5: For all
Proof: To show that M on-line (S; C) - max
we use an adversary argument
similar to the one used in the proof of Theorem 3. The adversary uses the tree that gives the
maximum in the above expression to choose both the sequence and the classification of its elements,
so that at each time that the rank is decreased by 1 the prediction algorithm makes a mistake.
To show that M on-line (S; C) is at most
we present an appropriate
algorithm, which is again similar to the one presented in the proof of Theorem 3. For predicting
on s 2 S, we first define C 0
S to be all the functions in C S consistent with
S to be all
the functions in C S consistent with 1. The algorithm compares max
and
and predicts according to the larger one. The crucial point is that at
least one of these two values must be strictly smaller than m otherwise there is a tree in T C
whose
rank is more than m. The prediction continues in this way, so that the maximal rank is decreased
with each mistake.
Finally, the following characterization is implicit in [GS94]:
Theorem
Proof: Consider the tree T whose rank is the minimal one in T C
S . We will show that M sd (S; C)
is at most the rank of T . For this, we present an appropriate algorithm that makes use of this tree.
At each point, the learner asks for the instance which is the current node in the tree. In addition,
it predicts according to the subtree of the current node whose rank is higher (arbitrarily, if the
ranks of the two subtrees are equal). The true classification determines the child of the current
node from which the learner needs to proceed. It follows from the definition of rank that whenever
the algorithm makes a mistake the remaining subtree has rank which is strictly smaller than the
previous one.
For the other direction, given a strategy for the learner that makes at most M sd (S; C) mistakes
we can construct a tree T that describes this strategy. Namely, at each point the instances that the
learner will ask at the next stage, given the possible classifications of the current instance, determine
the two children of the current node. Now, if the rank of T was more than M sd (S; C) then this gives
the adversary a strategy to fool the learner: at each node classify the current instance according
to the subtree with higher rank. If the ranks of both subtrees are equal then on any answer by the
algorithm the adversary says the opposite. By the definition of rank, this gives rank(T ) mistakes.
Hence, rank(T ) is at most M sd (S; C) and certainly the minimum over all trees can only be smaller.
4.3 A Bound on the Size of the Gap
A natural question is how large can be the gaps between the various complexity measures. For
example, what is the maximum ratio between M worst (S; C) and M best (S; C). In Example 1 the
best is 1 and the worst is log n, which can be easily generalized to k versus \Theta(k log n). The following
theorem shows that the gap between the smallest measure, M sd (S; C), and the largest measure,
the on-line cost, cannot exceed O(log n). This, in particular, implies a similar bound for the gap
between oe best and oe worst . By Example 1, the bound is tight; i.e., there are cases which achieve
this gap. Similarly, the gap between M sd (S; C) and M best (S; C) exhibited by Example 2 is also
optimal.
Theorem 7: For of size n as above,
on-line (S; C) - M sd (S; C) \Delta log n:
We shall present two quite simple but very different proofs for this theorem. The first proof
employs the tool of labeled trees (but gives a slightly weaker result) while the second is by an
information - theoretic argument.
Proof: [using labeled trees] Consider the tree T that gives the minimum in Theorem 6. Its depth
is n and its rank, by Theorem 6, is C). By Lemma 2, this tree contains at most
-m
leaves. That is, jC
-m
. By Theorem 2, M on-line (S; C) - log
-m
Proof: [information theoretic argument] Let C S be the projection of the functions in C on the set
S (note that several functions in C may collide into a single function in C S ). Consider the number
of bits required to specify a function in C S . On one hand, at least log jC S j bits are required. On
the other hand, any self-directed learning algorithm that learns this class yields a natural coding
scheme: answer the queries asked by the algorithm according to the function c 2 C S ; the coding
consists of the list of names of elements of S on which the prediction of the algorithm is wrong.
This information is enough to uniquely identify c. It follows that M sd (S; C) \Delta log n bits are enough.
Hence,
log
Finally, by the Halving algorithm [L88, L89], it is known that
on-line (S; C) - log jC S j:
The theorem follows.
Corollary 8: For worst (S; best (S; C) \Delta log n).
Proof: Combine Theorem 7 with Lemma 1.
worst (S; C) vs. M on-line (S; C)
In this section we further discuss the question of how much can a learner benefit from knowing
the learning sequence in advance. In the terminology of our model this is the issue of determining
the possible values of the gap between M on-line (S; C) and M worst (S; C). We show (in Section 5.2)
that if one of these two measures is non-constant then so is the other. Quantitatively, if the on-line
algorithm makes k mistakes, then any off-line algorithm makes \Omega\Gamma
log mistakes on oe. For the
special cases where M worst (S; C) is either 1 or 2, we prove (in Section 5.1) that M on-line (S; C) is
at most 1 or 3 (respectively).
5.1 Simple Algorithms
In this section we present two simple on-line algorithms, E1 and E2, for the case that the off-line
algorithm is bounded by one and two mistakes (respectively) for any sequence.
Let S be a set of elements of X . If for every sequence oe, which is a permutation of S, the off-line
learning algorithm makes at most one mistake, then we show that there is an on-line algorithm E1
that makes at most one mistake on S, without knowing the actual order in advance. The algorithm
E1 uses the guaranteed off-line algorithm A and works as follows:
ffl Given an element x 2 S, choose any sequence oe that starts with x, and predict according
to A's prediction on oe, i.e. A[oe]. If a mistake is made on x, then A[oe] made a mistake and
it will not make any more mistakes on this sequence oe. Hence, we can use A[oe](c t (x)) to
get the true values for all the elements of the sequence (where by A[oe](c t (x)) we denote the
predictions that A makes on the sequence oe after getting the value c t (x)). In other words,
for any y 2 S there is a unique value that is consistent with the value c t (x) 6= A[oe] (otherwise
A[oe] can make another mistake). Therefore, E1 will make at most one mistake.
In the case that for any sequence the off-line learning algorithm makes at most two mistakes,
we present an on-line algorithm E2 that makes at most three mistakes (which is optimal due to
Call an element x bivalent with respect to y if there exist sequences oe 0 and oe 1 that both start
with xy and for oe 0 the on-line algorithm predicts "c t the on-line algorithm
predicts "c t 1). Otherwise x is univalent with respect to y
(we say that x is 1-univalent with respect to y if the prediction is always 1 and 0-univalent if the
prediction is always 0). Our on-line procedure E2, on input x, works as follows.
ffl So far we made no mistakes:
If there is no y such that x is 1-univalent with respect to y, predict "c t predict
ffl So far we made one mistake on a positive w:
If we made such a mistake then we predicted "c t which implies that there is no y such
that w is 1-univalent with respect to y. In particular, with respect to x, w is either 0-univalent
or bivalent. In both cases there is a sequence oe = wxoe 0 such that A[oe] predicts "c t
and makes a mistake. Use as the prediction on x (where again, A[oe](1) denotes
the prediction that A makes on sequence oe after getting the value c t In case of
another mistake, we have a sequence on which we already made two mistakes so it will not
make any more mistakes. Namely, we can use A[oe](1; - b) to get the value for all elements in
S.
ffl So far we made one mistake on a negative w:
If w is either 1-univalent with respect to x or bivalent with respect to x then this is similar to
the previous case. The difficulty is that this time there is also a possibility that w is 0-univalent
with respect to x. However, in this case, if we made a mistake this means that we predicted
which implies that there exists a y such that w is 1-univalent with respect to
y. Consider a sequence . By the definition of y, A[oe] predicts "c t
therefore makes its first mistake on w. Denote by the prediction on y. If this is
wrong again then all the other elements of the sequence are uniquely determined. Namely,
there is a unique function f 1 that is consistent with c t b. If, on the other hand,
b is indeed the true value of y, we denote by its prediction on x. Again, if this
is wrong, we have a unique function f 2 which is consistent with c t
Therefore, we predict c on x. In case we made a mistake (this is our second mistake) we know
for sure that the only possible functions are f 1 and f 2 (in fact, if we are lucky then f 1
and we are done). To know which of the two functions is the target we will need to make (at
one more mistake (3 in total).
5.2 A General Bound
In this section we further discuss the gap between the measures M on-line (S; C) and M worst (S; C).
We show that if the on-line makes k mistakes, then any off-line algorithm makes \Omega\Gamma
log
on oe. The proof makes use of the properties proved in Section 3.1 and the characterizations of both
the on-line and the off-line mistake bounds as ranks of trees, proved in Section 4. More precisely,
we will take a tree in T C
S with maximum rank (this rank by Theorem 5 exactly characterizes the
number of mistakes made by the on-line algorithm) and use it to construct a tree with rank which
is "not too small" and such that the nodes at each level are labeled by the same element of S. Such
a tree is of the form T oe , for some sequence oe.
Lemma 5: Given a complete labeled binary tree of depth k, (T ; F
S , there is a sequence, oe,
of elements of S, such that the tree T C
oe has
log k).
Proof: We will construct an appropriate sequence oe in phases. At phase i (starting with
we add (at most 2 i ) new elements to oe, so that the rank of T C
oe is increased by at least one (hence,
at the beginning of the ith phase the rank is at least i). At the beginning of the ith phase, we
have a collection of 2 i subtrees of T , each is of rank at least k=2 O(i 2 (in particular, at the
beginning of phase 0 there is a single subtree, T itself, whose rank is k). Each of these subtrees is
consistent with one of the leaves of the tree T C
oe we already built in previous phases (i.e., the subtree
is consistent with some assignment to the elements included in oe in all previous phases). Moreover,
the corresponding 2 i leaves induce a complete binary subtree of depth i.
In the ith phase, we consider each of the 2 i subtrees in our collection. From each such subtree T 0
we add to the sequence oe, an element r such that the rank of the subtree of T 0 rooted at r is rank(T 0 )
and the rank in each of the subtrees TR corresponding to the sons of r is rank(T
remark that the order in which we treat the subtrees within the ith phase is arbitrary and that if
some of the elements r already appear in oe we do not need to add them again. After adding all
the new elements of oe we examine again the trees TR corresponding to each subtree
. For each of them the other partitions the leaves of the tree into 2
according to the possible values for the other Hence, by Lemma 4, there exists
subtrees
R of respectively which have rank at least
and each of them is consistent with one of the leaves of the extended T C
oe . The 2 i+1 subtrees that
we get in this way form the collection of trees for phase i + 1. Finally note that by the choice of
elements r added to oe, we now get in T C
oe a complete binary subtree of depth i + 1.
If before the ith phase the rank of the subtrees in our collection is at least k i then after the ith
phase the rank of the subtrees in our collection is at least k i =2 i . Hence, a simple induction implies
that
Therefore, we can repeat this process
log phases hence obtaining a tree T C
oe of rank
log k).
Theorem 9: Let C be a concept class, X an instance space and S ' X the set of elements. Then
worst (S;
log M on-line (S; C)).
Proof: Assume that M on-line (S; k. By Theorem 5, there is a rank k tree in T C
S , and by
Lemma 3 it contains a complete binary subtree T of depth k. By Lemma 5, there is a sequence oe
for which the tree T C
oe has
log k). Hence, by Theorem 3, M(oe; C) -
log k.
A major open problem is what is the exact relationship between the on-line and the off-line
mistake bounds. The largest gap we could show is a multiplicative factor of 3=2.
worst (S;
Proof: We first give an example for the case be a space of 4 elements,
C 1 be the following 8 functions on the 4 elements: f0000; 0011; 0010; 0111; 1000; 1010; 1100; 1111g.
It can be verified (by inspection) that M on-line (S; worst (S; 2.
For a general k, we just take k independent copies of X 1 and C 1 . That is, let X k be a space of
4k elements partitioned into k sets of 4 elements. Let C k be the 8 k functions obtained by applying
one of the 8 functions in C 1 to each of the k sets of elements. Let . Due to the independence
of the k functions, it follows that M on-line (S; worst (S;
6 Discussion
In this work we analyze the effect of having various degrees of knowledge on the order of elements
in the mistake bound model of learning on the performance (i.e., the number of mistakes) of the
learner. We remark that in our setting the learner is deterministic. The corresponding questions
in the case of randomized learners remain for future research.
We can also analyze quantitatively the advantage that an online algorithm may gain from
knowing just the set of elements, S, in advance (without knowing the order, oe, of their presentation).
That is, we wish to compare the situation where the online algorithm knows nothing a-priori about
the sequence (other than that it consists of elements of X ) and the case that the algorithm knows
the set S from which the elements of the sequence are taken (but has no additional information as
for their order). The following example shows that the knowledge of S gives an advantage to the
learning algorithm:
Consider the intervals concept class of Example 1, with the instance space X
restricted to f 1
g. As proven, M on-line (X 1). On the other hand, for every
set S of size ', we showed that M on-line (S; 1). Therefore, if S is small compared to
(i.e., ' is small compared to n) the number of mistakes can be significantly improved by the
knowledge of S.
Acknowledgment
We wish to thank Moti Frances and Nati Linial for helpful discussions.
--R
"Equivalence Queries and Approximate Fingerprints"
"Separating Distribution-Free and Mistake-Bound Learning Models over the Boolean Domain"
"Learning Boolean Functions in an Infinite Attribute Space"
"Rank-r Decision Trees are a Subclass of r-Decision Lists"
"How to Use Expert Advice"
"On-line Learning of Rectangles"
"Learning Decision Trees from Random Examples"
"Universal Prediction of Individual Sequences"
"Learning Binary Relations and Total Orders"
"The Power of Self-Directed Learning"
"Apple Tasting and Nearly One-Sided Learning"
"Learning when Irrelevant Attributes Abound: A New Linear-Threshold Algo- rithm"
"Mistake Bounds and Logarithmic Linear-Threshold Learning Algorithms"
"The Weighted Majority Algorithm"
"On-line Learning with an Oblivious Environment and the Power of Randomization"
"Universal Sequential Decision Schemes from Individual Sequences"
"Learning Automata from Ordered Examples"
"On the Density of Families of Sets"
--TR
Learning decision trees from random examples needed for learning
Mistake bounds and logarithmic linear-threshold learning algorithms
Introduction to algorithms
Equivalence queries and approximate fingerprints
Learning boolean functions in an infinite attribute space
On-line learning with an oblivious environment and the power of randomization
Learning Automata from Ordered Examples
On-line learning of rectangles
Rank-<italic>r</italic> decision trees are a subclass of <italic>r</italic>-decision lists
Learning binary relations and total orders
How to use expert advice
The weighted majority algorithm
The Power of Self-Directed Learning
Learning Quickly When Irrelevant Attributes Abound
--CTR
Peter Damaschke, Adaptive Versus Nonadaptive Attribute-Efficient Learning, Machine Learning, v.41 n.2, p.197-215, November 2000
Paul Burke , Sue Nguyen , Pen-Fan Sun , Shelley Evenson , Jeong Kim , Laura Wright , Nabeel Ahmed , Arjun Patel, Writing the BoK: designing for the networked learning environment of college students, Proceedings of the 2005 conference on Designing for User eXperience, November 03-05, 2005, San Francisco, California | mistake-bound;On-Line Learning;rank of trees |
274165 | Factorial Hidden Markov Models. | Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variablethe hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forwardbackward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bachs chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot. | Introduction
Due to its flexibility and to the simplicity and efficiency of its parameter estimation
algorithm, the hidden Markov model (HMM) has emerged as one of the basic statistical
tools for modeling discrete time series, finding widespread application in the
areas of speech recognition (Rabiner & Juang, 1986) and computational molecular
biology (Krogh, Brown, Mian, Sj-olander, & Haussler, 1994). An HMM is essentially
a mixture model, encoding information about the history of a time series in
the value of a single multinomial variable-the hidden state-which can take on
one of K discrete values. This multinomial assumption supports an efficient parameter
estimation algorithm-the Baum-Welch algorithm-which considers each
of the K settings of the hidden state at each time step. However, the multinomial
assumption also severely limits the representational capacity of HMMs. For exam-
ple, to represent bits of information about the history of a time sequence, an
HMM would need distinct states. On the other hand, an HMM with a
distributed state representation could achieve the same task with binary state
Z. GHAHRAMANI AND M.I. JORDAN
variables (Williams & Hinton, 1991). This paper addresses the problem of constructing
efficient learning algorithms for hidden Markov models with distributed
state representations.
The need for distributed state representations in HMMs can be motivated in two
ways. First, such representations let the model automatically decompose the state
space into features that decouple the dynamics of the process that generated the
data. Second, distributed state representations simplify the task of modeling time
series that are known a priori to be generated from an interaction of multiple,
loosely-coupled processes. For example, a speech signal generated by the superposition
of multiple simultaneous speakers can be potentially modeled with such an
architecture.
Williams and Hinton (1991) first formulated the problem of learning in HMMs
with distributed state representations and proposed a solution based on deterministic
learning. 1 The approach presented in this paper is similar to Williams
and Hinton's in that it can also be viewed from the framework of statistical mechanics
and mean field theory. However, our learning algorithm is quite different
in that it makes use of the special structure of HMMs with a distributed state
representation, resulting in a significantly more efficient learning procedure. Anticipating
the results in Section 3, this learning algorithm obviates the need for
the two-phase procedure of Boltzmann machines, has an exact M step, and makes
use of the forward-backward algorithm from classical HMMs as a subroutine. A
different approach comes from Saul and Jordan (1995), who derived a set of rules
for computing the gradients required for learning in HMMs with distributed state
spaces. However, their methods can only be applied to a limited class of architectures
Hidden Markov models with distributed state representations are a particular
class of probabilistic graphical model (Pearl, 1988; Lauritzen & Spiegelhalter, 1988),
which represent probability distributions as graphs in which the nodes correspond
to random variables and the links represent conditional independence relations.
The relation between hidden Markov models and graphical models has recently
been reviewed in Smyth, Heckerman and Jordan (1997). Although exact probability
propagation algorithms exist for general graphical models (Jensen, Lauritzen, &
Olesen, 1990), these algorithms are intractable for densely-connected models such
as the ones we consider in this paper. One approach to dealing with this issue is
to utilize stochastic sampling methods (Kanazawa et al., 1995). Another approach,
which provides the basis for algorithms described in the current paper, is to make
use of variational methods (cf. Saul, Jaakkola, & Jordan, 1996).
In the following section we define the probabilistic model for factorial HMMs
and in Section 3 we present algorithms for inference and learning. In Section 4 we
describe empirical results comparing exact and approximate algorithms for learning
on the basis of time complexity and model quality. We also apply factorial HMMs
to a real time series data set consisting of the melody lines from a collection of
chorales by J. S. Bach. We discuss several generalizations of the probabilistic model
FACTORIAL HIDDEN MARKOV MODELS 247
Y t+1
Y t-1
Y
S (2)
Y
S (2)
Y t+1
S (2)
Y t-1
(a) (b)
Figure
1. (a) A directed acyclic graph (DAG) specifying conditional independence relations for
a hidden Markov model. Each node is conditionally independent from its non-descendants given
its parents. (b) A DAG representing the conditional independence relations in a factorial HMM
with underlying Markov chains.
in Section 5, and we conclude in Section 6. Where necessary, details of derivations
are provided in the appendixes.
2. The probabilistic model
We begin by describing the hidden Markov model, in which a sequence of observations
modeled by specifying a probabilistic relation
between the observations and a sequence of hidden states fS t g, and a Markov
transition structure linking the hidden states. The model assumes two sets of conditional
independence relations: that Y t is independent of all other observations and
states given S t , and that S t is independent of
Markov property). Using these independence relations, the joint probability for the
sequence of states and observations can be factored as
Y
The conditional independencies specified by equation (1) can be expressed graphically
in the form of Figure 1 (a). The state is a single multinomial random variable
that can take one of K discrete values, S t Kg. The state transition
probabilities, are specified by a K \Theta K transition matrix. If the observations
are discrete symbols taking on one of D values, the observation probabilities
can be fully specified as a K \Theta D observation matrix. For a continuous
observation vector, P (Y t jS t ) can be modeled in many different forms, such as a
Gaussian, a mixture of Gaussians, or even a neural network. 2
In the present paper, we generalize the HMM state representation by letting the
state be represented by a collection of state variables
Z. GHAHRAMANI AND M.I. JORDAN
each of which can take on K (m) values. We refer to these models as factorial
hidden Markov models, as the state space consists of the cross product of these state
variables. For simplicity, we will assume that K although all the
results we present can be trivially generalized to the case of differing K (m) . Given
that the state space of this factorial HMM consists of all K M combinations of the
t variables, placing no constraints on the state transition structure would result
in a K M \Theta K M transition matrix. Such an unconstrained system is uninteresting
for several reasons: it is equivalent to an HMM with K M states; it is unlikely
to discover any interesting structure in the K state variables, as all variables are
allowed to interact arbitrarily; and both the time complexity and sample complexity
of the estimation algorithm are exponential in M .
We therefore focus on factorial HMMs in which the underlying state transitions
are constrained. A natural structure to consider is one in which each state variable
evolves according to its own dynamics, and is a priori uncoupled from the other
state variables:
Y
A graphical representation for this model is presented in Figure 1 (b). The transition
structure for this system can be represented as M distinct K \Theta K matrices.
Generalizations that allow coupling between the state variables are briefly discussed
in Section 5.
As shown in Figure 1 (b), in a factorial HMM the observation at time step t can
depend on all the state variables at that time step. For continuous observations,
one simple form for this dependence is linear Gaussian; that is, the observation Y t
is a Gaussian random vector whose mean is a linear function of the state variables.
We represent the state variables as K \Theta 1 vectors, where each of the K discrete
values corresponds to a 1 in one position and 0 elsewhere. The resulting probability
density for a D \Theta 1 observation vector Y t is
ae
oe
where
Each W (m) matrix is a D \Theta K matrix whose columns are the contributions to the
means for each of the settings of S (m)
, C is the D \Theta D covariance matrix, 0 denotes
matrix transpose, and j \Delta j is the matrix determinant operator.
One way to understand the observation model in equations (4a) and (4b) is to
consider the marginal distribution for Y t , obtained by summing over the possible
states. There are K settings for each of the M state variables, and thus there
FACTORIAL HIDDEN MARKOV MODELS 249
are K M possible mean vectors obtained by forming sums of M columns where one
column is chosen from each of the W (m) matrices. The resulting marginal density
of Y t is thus a Gaussian mixture model, with K M Gaussian mixture components
each having a constant covariance matrix C. This static mixture model, without
inclusion of the time index and the Markov dynamics, is a factorial parameterization
of the standard mixture of Gaussians model that has interest in its own right (Zemel,
1993; Hinton & Zemel, 1994; Ghahramani, 1995). The model we are considering in
the current paper extends this model by allowing Markov dynamics in the discrete
state variables underlying the mixture. Unless otherwise stated, we will assume the
Gaussian observation model throughout the paper.
The hidden state variables at one time step, although marginally independent,
become conditionally dependent given the observation sequence. This can be determined
by applying the semantics of directed graphs, in particular the d-separation
criterion (Pearl, 1988), to the graphical model in Figure 1 (b). Consider the Gaussian
model in equations (4a)-(4b). Given an observation vector Y t , the posterior
probability of each of the settings of the hidden state variables is proportional to the
probability of Y t under a Gaussian with mean - t . Since - t is a function of all the
state variables, the probability of a setting of one of the state variables will depend
on the setting of the other state variables. 3 This dependency effectively couples all
of the hidden state variables for the purposes of calculating posterior probabilities
and makes exact inference intractable for the factorial HMM.
3. Inference and learning
The inference problem in a probabilistic graphical model consists of computing
the probabilities of the hidden variables given the observations. In the context
of speech recognition, for example, the observations may be acoustic vectors and
the goal of inference may be to compute the probability for a particular word or
sequence of phonemes (the hidden state). This problem can be solved efficiently
via the forward-backward algorithm (Rabiner & Juang, 1986), which can be shown
to be a special case of the Jensen, Lauritzen, and Olesen (1990) algorithm for
probability propagation in more general graphical models (Smyth et al., 1997). In
some cases, rather than a probability distribution over hidden states it is desirable
to infer the single most probable hidden state sequence. This can be achieved via
the Viterbi (1967) algorithm, a form of dynamic programming that is very closely
related to the forward-backward algorithm and also has analogues in the graphical
model literature (Dawid, 1992).
The learning problem for probabilistic models consists of two components: learning
the structure of the model and learning its parameters. Structure learning is a
topic of current research in both the graphical model and machine learning communities
(e.g. Heckerman, 1995; Stolcke & Omohundro, 1993). In the current paper we
deal exclusively with the problem of learning the parameters for a given structure.
Z. GHAHRAMANI AND M.I. JORDAN
3.1. The EM algorithm
The parameters of a factorial HMM can be estimated via the expectation maximization
(EM) algorithm (Dempster, Laird, & Rubin, 1977), which in the case of
classical HMMs is known as the Baum-Welch algorithm (Baum, Petrie, Soules, &
Weiss, 1970). This procedure iterates between a step that fixes the current parameters
and computes posterior probabilities over the hidden states (the E step) and
a step that uses these probabilities to maximize the expected log likelihood of the
observations as a function of the parameters (the M step). Since the E step of EM
is exactly the inference problem as described above, we subsume the discussion of
both inference and learning problems into our description of the EM algorithm for
factorial HMMs.
The EM algorithm follows from the definition of the expected log likelihood of
the complete (observed and hidden) data:
Q(OE new
log
where Q is a function of the parameters OE new , given the current parameter estimate
OE and the observation sequence fY t g. For the factorial HMM the parameters
of the model are
consists of computing Q. By expanding (5)
using equations (1)-(4b), we find that Q can be expressed as a function of three
types of expectations over the hidden state variables: hS (m)
t i, and
t i, where h\Deltai has been used to abbreviate E f\DeltajOE; fY t gg. In the HMM
notation of Rabiner and Juang (1986), hS (m)
corresponds to fl t , the vector of
state occupation probabilities, hS (m)
corresponds to - t , the K \Theta K matrix of
state occupation probabilities at two consecutive time steps, and hS (m)
t i has
no analogue when there is only a single underlying Markov model. The M step uses
these expectations to maximize Q as a function of OE new . Using Jensen's inequality,
Baum, Petrie, Soules & Weiss (1970) showed that each iteration of the E and M
steps increases the likelihood, P (fY t gjOE), until convergence to a (local) optimum.
As in hidden Markov models, the exact M step for factorial HMMs is simple
and tractable. In particular, the M step for the parameters of the output model
described in equations (4a)-(4b) can be found by solving a weighted linear regression
problem. Similarly, the M steps for the priors, - (m) , and state transition matrices,
are identical to the ones used in the Baum-Welch algorithm. The details
of the M step are given in Appendix A. We now turn to the substantially more
difficult problem of computing the expectations required for the E step.
3.2. Exact inference
Unfortunately, the exact E step for factorial HMMs is computationally intractable.
This fact can best be shown by making reference to standard algorithms for prob-
FACTORIAL HIDDEN MARKOV MODELS 251
abilistic inference in graphical models (Lauritzen & Spiegelhalter, 1988), although
it can also be derived readily from direct application of Bayes rule. Consider the
computations that are required for calculating posterior probabilities for the factorial
HMM shown in Figure 1 (b) within the framework of the Lauritzen and
Spiegelhalter algorithm. Moralizing and triangulating the graphical structure for
the factorial HMM results in a junction tree (in fact a chain) with
cliques of size M+1. The resulting probability propagation algorithm has time complexity
O(TMK M+1 ) for a single observation sequence of length T . We present a
forward-backward type recursion that implements the exact E step in Appendix B.
The naive exact algorithm which consists of translating the factorial HMM into an
equivalent HMM with K M states and using the forward-backward algorithm, has
time complexity O(TK 2M ). Like other models with multiple densely-connected
hidden variables, this exponential time complexity makes exact learning and inference
intractable.
Thus, although the Markov property can be used to obtain forward-backward-
like factorizations of the expectations across time steps, the sum over all possible
configurations of the other hidden state variables within each time step is unavoid-
able. This intractability is due inherently to the cooperative nature of the model:
for the Gaussian output model, for example, the settings of all the state variables
at one time step cooperate in determining the mean of the observation vector.
3.3. Inference using Gibbs sampling
Rather than computing the exact posterior probabilities, one can approximate them
using a Monte Carlo sampling procedure, and thereby avoid the sum over exponentially
many state patterns at some cost in accuracy. Although there are many
possible sampling schemes (for a review see Neal, 1993), here we present one of the
simplest-Gibbs sampling (Geman & Geman, 1984). For a given observation sequence
fY t g, this procedure starts with a random setting of the hidden states fS t g.
At each step of the sampling process, each state vector is updated stochastically
according to its probability distribution conditioned on the setting of all the other
state vectors. The graphical model is again useful here, as each node is conditionally
independent of all other nodes given its Markov blanket, defined as the set of
children, parents, and parents of the children of a node. To sample from a typical
state variable S (m)
t we only need to examine the states of a few neighboring nodes:
t sampled from P (S (m)
Sampling once from each of the TM hidden variables in the model results in a
new sample of the hidden state of the model and requires O(TMK) operations.
The sequence of overall states resulting from each pass of Gibbs sampling defines
a Markov chain over the state space of the model. Assuming that all probabilities
are bounded away from zero, this Markov chain is guaranteed to converge to the
Z. GHAHRAMANI AND M.I. JORDAN
posterior probabilities of the states given the observations (Geman & Geman, 1984).
Thus, after some suitable time, samples from the Markov chain can be taken as
approximate samples from the posterior probabilities. The first and second-order
statistics needed to estimate hS (m)
are collected using
the states visited and the probabilities estimated during this sampling process are
used in the approximate E step of EM. 4
3.4. Completely factorized variational inference
There also exists a second approximation of the posterior probability of the hidden
states that is both tractable and deterministic. The basic idea is to approximate the
posterior distribution over the hidden variables P (fS t gjfY t g) by a tractable distribution
Q(fS t g). This approximation provides a lower bound on the log likelihood
that can be used to obtain an efficient learning algorithm.
The argument can be formalized following the reasoning of Saul, Jaakkola, and
Jordan (1996). Any distribution over the hidden variables Q(fS t g) can be used to
define a lower bound on the log likelihood
log
log
Q(fS t g) log
where we have made use of Jensen's inequality in the last step. The difference
between the left-hand side and the right-hand side of this inequality is given by the
Kullback-Leibler divergence (Cover & Thomas, 1991):
Q(fS t g) log
The complexity of exact inference in the approximation given by Q is determined
by its conditional independence relations, not by its parameters. Thus, we can chose
Q to have a tractable structure-a graphical representation that eliminates some
of the dependencies in P . Given this structure, we are free to vary the parameters
of Q so as to obtain the tightest possible bound by minimizing (6).
We will refer to the general strategy of using a parameterized approximating distribution
as a variational approximation and refer to the free parameters of the
distribution as variational parameters. To illustrate, consider the simplest variational
approximation, in which the state variables are assumed independent given
the observations This distribution can be written as
FACTORIAL HIDDEN MARKOV MODELS 253
S (2)
S (2)
S (2)
S (2)
S (2)
S (2)
(a) (b)
Figure
2. (a) The completely factorized variational approximation assumes that all the state variables
are independent (conditional on the observation sequence). (b) The structured variational
approximation assumes that the state variables retain their Markov structure within each chain,
but are independent across chains.
Y
Y
Q(S (m)
The variational parameters,
t g, are the means of the state variables, where,
as before, a state variable S (m)
t is represented as a K-dimensional vector with a 1
in the k th position and 0 elsewhere, if the m th Markov chain is in state k at time t.
The elements of the vector ' (m)
therefore define the state occupation probabilities
for the multinomial variable S (m)
t under the distribution Q:
Q(S (m)
Y
t;k
t;k
This completely factorized approximation is often used in statistical physics, where
it provides the basis for simple yet powerful mean field approximations to statistical
mechanical systems (Parisi, 1988).
To make the bound as tight as possible we vary ' separately for each observation
sequence so as to minimize the KL divergence. Taking the derivatives of (6) with
respect to ' (m)
t and setting them to zero, we obtain the set of fixed point equations
(see
Appendix
C) defined by
new
ae
Y (m)
oe
where ~
Y (m)
t is the residual error in Y t given the predictions from all the state
variables not including m:
~
Y (m)
'6=m
Z. GHAHRAMANI AND M.I. JORDAN
\Delta (m) is the vector of diagonal elements of W (m) 0
C 'f\Deltag is the softmax
operator, which maps a vector A into a vector B of the same size, with elements
and log P (m) denotes the elementwise logarithm of the transition matrix P (m) .
The first term of (9a) is the projection of the error in reconstructing the observation
onto the weights of state vector m-the more a particular setting of a
state vector can reduce this error, the larger its associated variational parameter.
The second term arises from the fact that the second order correlation hS (m)
evaluated under the variational distribution is a diagonal matrix composed of the
elements of ' (m)
t . The last two terms introduce dependencies forward and backward
in time. 5 Therefore, although the posterior distribution over the hidden variables is
approximated with a completely factorized distribution, the fixed point equations
couple the parameters associated with each node with the parameters of its Markov
blanket. In this sense, the fixed point equations propagate information along the
same pathways as those defining the exact algorithms for probability propagation.
The following may provide an intuitive interpretation of the approximation being
made by this distribution. Given a particular observation sequence, the hidden
state variables for the M Markov chains at time step t are stochastically coupled.
This stochastic coupling is approximated by a system in which the hidden variables
are uncorrelated but have coupled means. The variational or "mean-field" equations
solve for the deterministic coupling of the means that best approximates the
stochastically coupled system.
Each hidden state vector is updated in turn using (9a), with a time complexity
of $O(TMK^2)$ per iteration. Convergence is determined by monitoring the KL
divergence in the variational distribution between successive time steps; in practice
convergence is very rapid (about 2 to 10 iterations of (9a)). Once the fixed point
equations have converged, the expectations required for the E step can be obtained
as a simple function of the parameters (equations (C.6)-(C.8) in Appendix C).
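To make the update concrete, the following is a minimal NumPy sketch of the fixed point iteration (9a)-(9b). It is our own illustration rather than the authors' code: the array shapes, the column-stochastic convention for $P^{(m)}$, and the fixed iteration count are assumptions.

```python
import numpy as np

def softmax(a):
    # Softmax along the last axis, with the usual max-shift for stability.
    a = a - a.max(axis=-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

def mean_field_estep(Y, W, C, logpi, logP, n_iter=10):
    """Completely factorized (mean-field) E step, following (9a)-(9b).

    Y: (T, D) observations; W: list of M arrays (D, K); C: (D, D) covariance;
    logpi: list of M arrays (K,); logP: list of M arrays (K, K) with
    logP[m][i, j] = log P(S_t = i | S_{t-1} = j).  Returns theta, a list of
    M arrays (T, K) of state occupation probabilities.
    """
    T, D = Y.shape
    M, K = len(W), W[0].shape[1]
    Cinv = np.linalg.inv(C)
    theta = [np.full((T, K), 1.0 / K) for _ in range(M)]
    delta = [np.diag(W[m].T @ Cinv @ W[m]) for m in range(M)]   # Delta^(m)
    for _ in range(n_iter):
        for m in range(M):
            # Residual observation with the other chains' predictions removed (9b).
            Ytil = Y - sum(theta[l] @ W[l].T for l in range(M) if l != m)
            field = Ytil @ Cinv @ W[m] - 0.5 * delta[m]
            # Temporal terms: backward uses theta_{t-1} (prior at t=1),
            # forward uses theta_{t+1} (absent at t=T).
            back = np.vstack([logpi[m][None, :], theta[m][:-1] @ logP[m].T])
            fwd = np.vstack([theta[m][1:] @ logP[m], np.zeros((1, K))])
            theta[m] = softmax(field + back + fwd)
    return theta
```

Each pass over the M chains costs $O(TMK^2)$, matching the complexity quoted above.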
3.5. Structured variational inference
The approximation presented in the previous section factors the posterior probability
such that all the state variables are statistically independent. In contrast to
this rather extreme factorization, there exists a third approximation that is both
tractable and preserves much of the probabilistic structure of the original system. In
this scheme, the factorial HMM is approximated by M uncoupled HMMs, as shown
in Figure 2(b). Within each HMM, efficient and exact inference is implemented
via the forward-backward algorithm. The approach of exploiting such tractable
substructures was first suggested in the machine learning literature by Saul and
Jordan (1996).
Note that the arguments presented in the previous section did not hinge on
the form of the approximating distribution. Therefore, more structured variational
approximations can be obtained by using more structured variational distributions
Q. Each such Q provides a lower bound on the log likelihood and can be used to
obtain a learning algorithm.
We write the structured variational approximation as

$$Q(\{S_t\} \mid \theta) \;=\; \frac{1}{Z_Q} \prod_{m=1}^{M} Q(S_1^{(m)} \mid \theta) \prod_{t=2}^{T} Q(S_t^{(m)} \mid S_{t-1}^{(m)}, \theta) \qquad (11a)$$

where $Z_Q$ is a normalization constant ensuring that Q integrates to one and

$$Q(S_1^{(m)} \mid \theta) \;=\; \prod_{k=1}^{K} \left( h_{1,k}^{(m)} \, \pi_k^{(m)} \right)^{S_{1,k}^{(m)}} \qquad (11b)$$

$$Q(S_t^{(m)} \mid S_{t-1}^{(m)}, \theta) \;=\; \prod_{k=1}^{K} \left( h_{t,k}^{(m)} \prod_{j=1}^{K} \left(P_{k,j}^{(m)}\right)^{S_{t-1,j}^{(m)}} \right)^{S_{t,k}^{(m)}} \;=\; \prod_{k=1}^{K} \left( h_{t,k}^{(m)} \, \big[P^{(m)} S_{t-1}^{(m)}\big]_k \right)^{S_{t,k}^{(m)}} \qquad (11c)$$

where the last equality follows from the fact that $S_{t-1}^{(m)}$ is a vector with a 1 in one position
and 0 elsewhere. The parameters of this distribution are $\theta = \{\pi^{(m)}, P^{(m)}, h_t^{(m)}\}$: the
original priors and state transition matrices of the factorial HMM and a time-varying
bias for each state variable. Comparing equations (11a)-(11c) to equation
(1), we can see that the $K \times 1$ vector $h_t^{(m)}$ plays the role of the probability of
an observation ($P(Y_t \mid S_t)$ in (1)) for each of the K settings of $S_t^{(m)}$. For example,
$Q(S_{1,j}^{(m)} = 1 \mid \theta)$ corresponds to having an observation at time $t = 1$ that has
probability $h_{1,j}^{(m)}$ under state $S_{1,j}^{(m)} = 1$.
Intuitively, this approximation uncouples the M Markov chains and attaches to
each state variable a distinct fictitious observation. The probability of this fictitious
observation can be varied so as to minimize the KL divergence between Q and P .
Applying the same arguments as before, we obtain a set of fixed point equations
for $h_t^{(m)}$ that minimize $\mathrm{KL}(Q \,\|\, P)$, as detailed in Appendix D:

$$h_t^{(m)\,\mathrm{new}} \;=\; \exp\left\{ W^{(m)\prime} C^{-1} \tilde{Y}_t^{(m)} \;-\; \tfrac{1}{2}\Delta^{(m)} \right\} \qquad (12a)$$

where $\Delta^{(m)}$ is defined as before, and where we redefine the residual error to be

$$\tilde{Y}_t^{(m)} \;\equiv\; Y_t \;-\; \sum_{\ell \neq m} W^{(\ell)} \langle S_t^{(\ell)} \rangle. \qquad (12b)$$
The parameter $h_t^{(m)}$ obtained from these fixed point equations is the observation
probability associated with state variable $S_t^{(m)}$ in hidden Markov model m. Using
these probabilities, the forward-backward algorithm is used to compute a new set
of expectations for $\langle S_t^{(m)} \rangle$, which are fed back into (12a) and (12b). This algorithm
is therefore used as a subroutine in the minimization of the KL divergence.
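As an illustration of how forward-backward serves as a subroutine here, the following NumPy sketch alternates (12a)-(12b) with per-chain forward-backward passes. It is a simplified reconstruction, not the paper's implementation; the forward_backward helper, the shapes, and the fixed number of outer iterations are our own choices.

```python
import numpy as np

def forward_backward(pi, P, h):
    """Posterior state marginals for one chain, with per-time 'observation'
    weights h of shape (T, K) and P[i, j] = P(S_t = i | S_{t-1} = j)."""
    T, K = h.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = pi * h[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = h[t] * (P @ alpha[t - 1])
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = P.T @ (h[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def structured_estep(Y, W, C, pi, P, n_iter=10):
    """Structured variational E step: alternate (12a)-(12b) with exact
    forward-backward passes in each uncoupled chain."""
    T, D = Y.shape
    M, K = len(W), W[0].shape[1]
    Cinv = np.linalg.inv(C)
    delta = [np.diag(W[m].T @ Cinv @ W[m]) for m in range(M)]
    ES = [np.full((T, K), 1.0 / K) for _ in range(M)]   # <S_t^(m)>
    for _ in range(n_iter):
        for m in range(M):
            Ytil = Y - sum(ES[l] @ W[l].T for l in range(M) if l != m)  # (12b)
            h = np.exp(Ytil @ Cinv @ W[m] - 0.5 * delta[m])             # (12a)
            ES[m] = forward_backward(pi[m], P[m], h)
    return ES
```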
Note the similarity between equations (12a)-(12b) and equations (9a)-(9b) for the
completely factorized system. In the completely factorized system, since $\langle S_t^{(m)} \rangle = \theta_t^{(m)}$,
the fixed point equations can be written explicitly in terms of the variational
parameters. In the structured approximation, the dependence of $\langle S_t^{(m)} \rangle$ on $h_t^{(m)}$
is computed via the forward-backward algorithm. Note also that (12a) does not
contain terms involving the prior, $\pi^{(m)}$, or transition matrix, $P^{(m)}$. These terms
have cancelled by our choice of approximation.
3.6. Choice of approximation
The theory of the EM algorithm as presented in Dempster et al. (1977) assumes
the use of an exact E step. For models in which the exact E step is intractable,
one must instead use an approximation like those we have just described. The
choice among these approximations must take into account several theoretical and
practical issues.
Monte Carlo approximations based on Markov chains, such as Gibbs sampling,
offer the theoretical assurance that the sampling procedure will converge to the
correct posterior distribution in the limit. Although this means that one can come
arbitrarily close to the exact E step, in practice convergence can be slow (especially
for multimodal distributions) and it is often very difficult to determine how close
one is to convergence. However, when sampling is used for the E step of EM, there
is a time tradeoff between the number of samples used and the number of EM
iterations. It seems wasteful to wait until convergence early on in learning, when
the posterior distribution from which samples are drawn is far from the posterior
given the optimal parameters. In practice we have found that even approximate
steps using very few Gibbs samples (e.g. around ten samples of each hidden
variable) tend to increase the true likelihood.
Variational approximations offer the theoretical assurance that a lower bound on
the likelihood is being maximized. Both the minimization of the KL divergence in
the E step and the parameter update in the M step are guaranteed not to decrease
this lower bound, and therefore convergence can be defined in terms of the bound.
An alternative view given by Neal and Hinton (1993) describes EM in terms of the
negative free energy, $F$, which is a function of the parameters, $\phi$, the observations,
$Y$, and a posterior probability distribution over the hidden variables, $Q(S)$:

$$F(Q, \phi) \;=\; E_Q\big[\log P(S, Y \mid \phi)\big] \;-\; E_Q\big[\log Q(S)\big],$$

where $E_Q$ denotes expectation over $S$ using the distribution $Q(S)$. The exact E
step in EM maximizes $F$ with respect to $Q$ given $\phi$. The variational E steps used
here maximize $F$ with respect to $Q$ given $\phi$, subject to the constraint that $Q$ is
of a particular tractable form. Given this view, it seems clear that the structured
approximation is preferable to the completely factorized approximation since it
places fewer constraints on Q, at no cost in tractability.
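A compact way to see why an unconstrained maximization over $Q$ recovers the exact E step is the standard decomposition of the free energy (our addition, stated in the notation above):

$$F(Q, \phi) \;=\; \log P(Y \mid \phi) \;-\; \mathrm{KL}\big(Q(S) \,\|\, P(S \mid Y, \phi)\big),$$

so for fixed $\phi$, maximizing $F$ over an unrestricted $Q$ sets $Q(S) = P(S \mid Y, \phi)$, while maximizing over a constrained family minimizes the KL divergence within that family.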
4. Experimental results
To investigate learning and inference in factorial HMMs we conducted two experi-
ments. The first experiment compared the different approximate and exact methods
of inference on the basis of computation time and the likelihood of the model obtained
from synthetic data. The second experiment sought to determine whether
the decomposition of the state space in factorial HMMs presents any advantage in
modeling a real time series data set that we might assume to have complex internal
structure-Bach's chorale melodies.
4.1. Experiment 1: Performance and timing benchmarks
Using data generated from a factorial HMM structure with M underlying Markov
models with K states each, we compared the time per EM iteration and the training
and test set likelihoods of five models:
ffl HMM trained using the Baum-Welch algorithm;
ffl Factorial HMM trained with exact inference for the E step, using a straight-forward
application of the forward-backward algorithm, rather than the more
efficient algorithm outlined in Appendix B;
ffl Factorial HMM trained using Gibbs sampling for the E step with the number
of samples fixed at 10 samples per variable; 6
ffl Factorial HMM trained using the completely factorized variational approxima-
tion; and
ffl Factorial HMM trained using the structured variational approximation.
All factorial HMMs consisted of M underlying Markov models with K states each,
whereas the HMM had $K^M$ states. The data were generated from a factorial HMM
structure with M state variables, each of which could take on K discrete values.
All of the parameters of this model, except for the output covariance matrix, were
sampled from a uniform [0; 1] distribution and appropriately normalized to satisfy
the sum-to-one constraints of the transition matrices and priors. The covariance
matrix was set to a multiple of the identity matrix.
The training and test sets consisted of 20 sequences of length 20, where the observable
was a four-dimensional vector. For each randomly sampled set of parameters, a
separate training set and test set were generated and each algorithm was run once.
Z. GHAHRAMANI AND M.I. JORDAN
Fifteen sets of parameters were generated for each of the four problem sizes. Algorithms
were run for a maximum of 100 iterations of EM or until convergence, defined
as the iteration k at which the log likelihood L(k), or approximate log likelihood if
an approximate algorithm was used, satisfied the convergence criterion.
At the end of learning, the log likelihoods on the training and test set were computed
for all models using the exact algorithm. Also included in the comparison
was the log likelihood of the training and test sets under the true model that generated
the data. The test set log likelihood for N observation sequences is defined
as $\sum_{n=1}^{N} \log P(Y^{(n)} \mid \hat{\phi})$, where $\hat{\phi}$ is the parameter vector obtained by maximizing the log likelihood
over a training set that is disjoint from the test set. This provides a measure
of how well the model generalizes to a novel observation sequence from the same
distribution as the training data.
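For concreteness, the "bits per observation relative to the true model" measure reported in Table 1 (see note 7) can be computed as in the following small sketch; the function and argument names are ours, not the paper's.

```python
import numpy as np

def bits_per_observation(loglik_model, loglik_true, N, T):
    """Negative log likelihood in bits per observation, relative to the true
    generative model (the measure reported in Table 1)."""
    return (loglik_true - loglik_model) / (N * T * np.log(2.0))
```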
Results averaged over 15 runs for each algorithm on each of the four problem sizes
(a total of 300 runs) are presented in Table 1. Even for the smallest problem size
the standard HMM with $K^M$ states suffers from overfitting:
the test set log likelihood is significantly worse than the training set log likelihood.
As expected, this overfitting problem becomes worse as the size of the state space
increases; it is particularly serious for
For the factorial HMMs, the log likelihoods for each of the three approximate
EM algorithms were compared to the exact algorithm. Gibbs sampling appeared
to have the poorest performance: for each of the three smaller size problems its
log likelihood was significantly worse than that of the exact algorithm on both the
training sets and test sets (p < 0.05). This may be due to insufficient sampling.
However, we will soon see that running the Gibbs sampler for more than 10 samples,
while potentially improving performance, makes it substantially slower than the
variational methods. Surprisingly, the Gibbs sampler appears to do quite well on
the largest size problem, although the differences to the other methods are not
statistically significant.
The performance of the completely factorized variational approximation was not
statistically significantly different from that of the exact algorithm on either the
training set or the test set for any of the problem sizes. The performance of the
structured variational approximation was not statistically different from that of the
exact method on three of the four problem sizes, and appeared to be better on one of
the problem sizes (p < 0.05). Although this result may be a fluke
arising from random variability, there is another more interesting (and speculative)
explanation. The exact EM algorithm implements unconstrained maximization of
F , as defined in section 3.6, while the variational methods maximize F subject to
a constrained distribution. These constraints could presumably act as regularizers,
reducing overfitting.
There was a large amount of variability in the final log likelihoods for the models
learned by all the algorithms. We subtracted the log likelihood of the true generative
model from that of each trained model to eliminate the main effect of the randomly
sampled generative model and to reduce the variability due to training and test
sets. One important remaining source of variance was the random seed used in
Table
1. Comparison of the factorial HMM on four problems of varying size. The negative log
likelihood for the training and test set, plus or minus one standard deviation, is shown for each
problem size and algorithm, measured in bits per observation (log likelihood in bits divided by
NT ) relative to the log likelihood under the true generative model for that data set. 7 True is
the true generative model (the log likelihood per symbol is defined to be zero for this model by
our measure); HMM is the hidden Markov model with $K^M$ states; Exact is the factorial HMM
trained using an exact E step; Gibbs is the factorial HMM trained using Gibbs sampling; CFVA
is the factorial HMM trained using the completely factorized variational approximation; SVA is
the factorial HMM trained using the structured variational approximation.
M K Algorithm Training Test

HMM 1.19 ± 0.67 2.29 ± 1.02
Exact 0.88 ± 0.80 1.05 ± 0.72
Gibbs 1.67 ± 1.23 1.78 ± 1.22
CFVA 1.06 ± 1.20 1.20 ± 1.11
SVA 0.91 ± 1.02 1.04 ± 1.01

HMM 0.76 ± 0.67 9.81 ± 2.55
Exact 1.02 ± 1.04 1.26 ± 0.99
Gibbs 2.21 ± 0.91 2.50 ± 0.87
CFVA 1.24 ± 1.50 1.50 ± 1.53

Exact 2.29 ± 1.19 2.51 ± 1.21
Gibbs 3.25 ± 1.17 3.35 ± 1.14
CFVA 1.73 ± 1.34 2.07 ± 1.74

Exact 4.23 ± 2.28 4.49 ± 2.24
Gibbs 3.63 ± 1.13 3.95 ± 1.14
CFVA 4.85 ± 0.68 5.14 ± 0.69
Figure
3. Learning curves for five runs of each of the four learning algorithms for factorial HMMs:
(a) exact; (b) completely factorized variational approximation; (c) structured variational approx-
imation; and (d) Gibbs sampling. A single training set sampled from one of the problem
sizes was used for all these runs. The solid lines show the negative log likelihood per observation
(in bits) relative to the true model that generated the data, calculated using the exact algorithm.
The circles denote the point at which the convergence criterion was met and the run ended. For
the three approximate algorithms, the dashed lines show an approximate negative log likelihood. 8
each training run, which determined the initial parameters and the samples used in
the Gibbs algorithm. All algorithms appeared to be very sensitive to this random
seed, suggesting that different runs on each training set found different local maxima
or plateaus of the likelihood (Figure 3). Some of this variability could be eliminated
by explicitly adding a regularization term, which can be viewed as a prior on the
parameters in maximum a posteriori parameter estimation. Alternatively, Bayesian
(or ensemble) methods could be used to average out this variability by integrating
over the parameter space.
The timing comparisons confirm the fact that both the standard HMM and the exact
E step for the factorial HMM are extremely slow for models with large state spaces (Fig-
Figure
4. Time per iteration of EM on a Silicon Graphics R4400 processor running Matlab.
ure 4). Gibbs sampling was slower than the variational methods even when limited
to ten samples of each hidden variable per iteration of EM. Since one pass of the
variational fixed point equations has the same time complexity as one pass of Gibbs
sampling, and since the variational fixed point equations were found to converge
very quickly, these experiments suggest that Gibbs sampling is not as competitive
time-wise as the variational methods. The time per iteration for the variational
methods scaled well to large state spaces.
4.2. Experiment 2: Bach chorales
Musical pieces naturally exhibit complex structure at many different time scales.
Furthermore, one can imagine that to represent the "state" of the musical piece
at any given time it would be necessary to specify a conjunction of many different
features. For these reasons, we chose to test whether a factorial HMM would provide
an advantage over a regular HMM in modeling a collection of musical pieces.
The data set consisted of discrete event sequences encoding the melody lines of
J. S. Bach's Chorales, obtained from the UCI Repository for Machine Learning and
originally discussed in Conklin and Witten
(1995). Each event in the sequence was represented by six attributes, described
in Table 2. Sixty-six chorales, with 40 or more events each, were divided into a
training set (30 chorales) and a test set (36 chorales). Using the first set, hidden
Markov models with state space ranging from 2 to 100 states were trained until
convergence (30 ± 12 steps of EM). Factorial HMMs of varying sizes (K ranging
from 2 to 6; M ranging upward from 2) were also trained on the same data. To
Table
2. Attributes in the Bach chorale data set. The key
signature and time signature attributes were constant over the
duration of the chorale. All attributes were treated as real
numbers and modeled as linear-Gaussian observations (4a).
Attribute Description Representation
pitch pitch of the event int [0; 127]
fermata event under fermata? binary
st start time of event int (1/16 notes)
dur duration of event int (1/16 notes)
approximate the E step for factorial HMMs we used the structured variational ap-
proximation. This choice was motivated by three considerations. First, for the size
of state space we wished to explore, the exact algorithms were prohibitively slow.
Second, the Gibbs sampling algorithm did not appear to present any advantages
in speed or performance and required some heuristic method for determining the
number of samples. Third, theoretical arguments suggest that the structured approximation
should in general be superior to the completely factorized variational
approximation, since more of the dependencies of the original model are preserved.
The test set log likelihoods for the HMMs, shown in Figure 5 (a), exhibited the
typical U-shaped curve demonstrating a trade-off between bias and variance (Ge-
man, Bienenstock, & Doursat, 1992). HMMs with fewer than 10 states did not
predict well, while HMMs with more than 40 states overfit the training data and
therefore provided a poor model of the test data. Out of the 75 runs, the highest
test set log likelihood per observation was −9.0 bits, obtained by an HMM with
hidden states. 9
The factorial HMM provides a more satisfactory model of the chorales from three
points of view. First, the time complexity is such that it is possible to consider
models with significantly larger state spaces; in particular, we fit models with up to
1000 states. Second, given the componential parametrization of the factorial HMM,
these large state spaces do not require excessively large numbers of parameters relative
to the number of data points. In particular, we saw no evidence of overfitting
even for the largest factorial HMM as seen in Figures 5 (c) & (d). Finally, this
approach resulted in significantly better predictors; the test set likelihood for the
best factorial HMM was an order of magnitude larger than the test set likelihood
for the best HMM, as Figure 5 (d) reveals.
While the factorial HMM is clearly a better predictor than a single HMM, it
should be acknowledged that neither approach produces models that are easily
interpretable from a musicological point of view. The situation is reminiscent of
that in speech recognition, where HMMs have proved their value as predictive
models of the speech signal without necessarily being viewed as causal generative
models of speech. A factorial HMM is clearly an impoverished representation of
Figure
5. Test set log likelihood per event of the Bach chorale data set as a function of number of
states for (a) HMMs and (b)-(d) factorial HMMs of varying sizes. Each symbol represents a
single run; the lines indicate the
mean performances. The thin dashed line in (b)-(d) indicates the log likelihood per observation
of the best run in (a). The factorial HMMs were trained using the structured approximation. For
both methods the true likelihood was computed using the exact algorithm.
musical structure, but its promising performance as a predictor provides hope that
it could serve as a step on the way toward increasingly structured statistical models
for music and other complex multivariate time series.
5. Generalizations of the model
In this section, we describe four variations and generalizations of the factorial HMM.
5.1. Discrete observables
The probabilistic model presented in this paper has assumed real-valued Gaussian
observations. One of the advantages arising from this assumption is that the
conditional density of a D-dimensional observation, P (Y t jS (1)
t ), can be
compactly specified through M mean matrices of dimension D \Theta K, and one D \Theta D
covariance matrix. Furthermore, the M step for such a model reduces to a set of
weighted least squares equations.
The model can be generalized to handle discrete observations in several ways.
Consider a single D-valued discrete observation Y t . In analogy to traditional HMMs,
the output probabilities could be modeled using a matrix. However, in the case of a
factorial HMM, this matrix would have $D \times K^M$ entries for each combination of the
state variables and observation. Thus the compactness of the representation would
be entirely lost. Standard methods from graphical models suggest approximating
such large matrices with "noisy-OR" (Pearl, 1988) or "sigmoid" (Neal, 1992) models
of interaction. For example, in the softmax model, which generalizes the sigmoid
model to $D > 2$, $P(Y_t \mid S_t^{(1)}, \ldots, S_t^{(M)})$ is multinomial with mean proportional to
$\exp\{\sum_{m=1}^{M} W^{(m)} S_t^{(m)}\}$. Like the Gaussian model, this specification is again
compact, using $M$ matrices of size $D \times K$. (As in the linear-Gaussian model, the $W^{(m)}$
are overparametrized since they can each model the overall mean of Y t , as shown in
Appendix A.) While the nonlinearity induced by the softmax function makes both
the E step and M step of the algorithm more difficult, iterative numerical methods
can be used in the M step whereas Gibbs sampling and variational methods can
again be used in the E step (see Neal, 1992; Hinton et al., 1995; and Saul et al.,
1996, for discussions of different approaches to learning in sigmoid networks).
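A small sketch of the softmax observation model just described; this is our illustration, and the weight shapes are assumptions.

```python
import numpy as np

def discrete_obs_probs(S, W):
    """P(Y_t | S_t) under the softmax observation model of Section 5.1.
    S: list of M one-hot vectors (K,); W: list of M weight matrices (D, K).
    Returns a length-D probability vector over the discrete observation."""
    a = sum(Wm @ Sm for Wm, Sm in zip(W, S))   # sum_m W^(m) S_t^(m)
    a -= a.max()                               # numerical stability
    e = np.exp(a)
    return e / e.sum()
```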
5.2. Introducing couplings
The architecture for factorial HMMs presented in Section 2 assumes that the underlying
Markov chains interact only through the observations. This constraint can
be relaxed by introducing couplings between the hidden state variables (cf. Saul &
Jordan, 1997). For example, if $S_t^{(m)}$ depends on $S_{t-1}^{(m)}$ and $S_{t-1}^{(m-1)}$, equation (3) is
replaced by the following factorization:

$$P(S_t \mid S_{t-1}) \;=\; \prod_{m=1}^{M} P\big(S_t^{(m)} \mid S_{t-1}^{(m)}, S_{t-1}^{(m-1)}\big).$$
Similar exact, variational, and Gibbs sampling procedures can be defined for this
architecture. However, note that these couplings must be introduced with caution,
as they may result in an exponential growth in parameters. For example, the above
factorization requires transition matrices of size $K^2 \times K$. Rather than specifying
these higher-order couplings through probability transition matrices, one can introduce
second-order interaction terms in the energy (log probability) function. Such
terms effectively couple the chains without the number of parameters incurred by
a full probability transition matrix. In the graphical model formalism these correspond
to symmetric undirected links, making the model a chain graph. While
the Jensen, Lauritzen and Olesen (1990) algorithm can still be used to propagate
information exactly in chain graphs, such undirected links cause the normalization
constant of the probability distribution-the partition function-to depend on the
coupling parameters. As in Boltzmann machines (Hinton & Sejnowski, 1986), both
a clamped and an unclamped phase are therefore required for learning, where the
goal of the unclamped phase is to compute the derivative of the partition function
with respect to the parameters (Neal, 1992).
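For illustration, a coupled transition of this kind can be stored as a $K^2 \times K$ table, here laid out as a $K \times K \times K$ array; this layout is a hypothetical choice of ours, not prescribed by the paper.

```python
import numpy as np

def coupled_transition_probs(P2, s_prev_m, s_prev_m1):
    """Next-state distribution for chain m when it depends on its own previous
    state and on chain m-1's previous state.  P2 has shape (K, K, K) with
    P2[i, j, l] = P(S_t^(m)=i | S_{t-1}^(m)=j, S_{t-1}^(m-1)=l); the one-hot
    vectors select the (j, l) column."""
    j = int(np.argmax(s_prev_m))
    l = int(np.argmax(s_prev_m1))
    return P2[:, j, l]
```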
5.3. Conditioning on inputs
Like the hidden Markov model, the factorial HMM provides a model of the unconditional
density of the observation sequences. In certain problem domains, some of
the observations can be better thought of as inputs or explanatory variables, and
the others as outputs or response variables. The goal, in these cases, is to model
the conditional density of the output sequence given the input sequence. In machine
learning terminology, unconditional density estimation is unsupervised while
conditional density estimation is supervised.
Several algorithms for learning in hidden Markov models that are conditioned on
inputs have been recently presented in the literature (Cacciatore & Nowlan, 1994;
Bengio & Frasconi, 1995; Meila & Jordan, 1996). Given a sequence of input vectors
$\{X_1, \ldots, X_T\}$, the probabilistic model for an input-conditioned factorial HMM is

$$P(\{S_t, Y_t\} \mid \{X_t\}) \;=\; \prod_{m=1}^{M} \left[ P(S_1^{(m)} \mid X_1) \prod_{t=2}^{T} P(S_t^{(m)} \mid S_{t-1}^{(m)}, X_t) \right] \;\times\; \prod_{t=1}^{T} P(Y_t \mid S_t^{(1)}, \ldots, S_t^{(M)}, X_t).$$

The model depends on the specification of $P(Y_t \mid S_t^{(1)}, \ldots, S_t^{(M)}, X_t)$ and $P(S_t^{(m)} \mid S_{t-1}^{(m)}, X_t)$,
which are conditioned both on a discrete state variable and on a (possibly continuous)
input vector. The approach used in Bengio and Frasconi's Input Output
HMMs (IOHMMs) suggests modeling $P(S_t^{(m)} \mid S_{t-1}^{(m)}, X_t)$ using $K$ separate neural
networks, one for each setting of $S_{t-1}^{(m)}$. This decomposition ensures that a valid
probability transition matrix is defined at each point in input space if a sum-to-one
constraint (e.g., softmax nonlinearity) is used in the output of these networks.
Using the decomposition of each conditional probability into K networks, inference
in input-conditioned factorial HMMs is a straightforward generalization of the
algorithms we have presented for factorial HMMs. The exact forward-backward
algorithm in Appendix B can be adapted by using the appropriate conditional
probabilities. Similarly, the Gibbs sampling procedure is no more complex when
conditioned on inputs. Finally, the completely factorized and structured approximations
can also be generalized readily if the approximating distribution has a
dependence on the input similar to the model's. If the probability transition structure
not decomposed as above, but has a complex dependence
on the previous state variable and input, inference may become considerably more
complex.
Depending on the form of the input conditioning, the Maximization step of learning
may also change considerably. In general, if the output and transition probabilities
are modeled as neural networks, the M step can no longer be solved exactly
and a gradient-based generalized EM algorithm must be used. For log-linear
models, the M step can be solved using an inner loop of iteratively reweighted
least-squares (McCullagh & Nelder, 1989).
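A minimal sketch of the IOHMM-style decomposition, using one linear-softmax map per previous state in place of a full neural network; this is a simplification of the approach described above, and the parameter shapes are ours.

```python
import numpy as np

def input_conditioned_transition(X_t, A, b):
    """P(S_t = i | S_{t-1} = k, X_t) via one softmax "network" per previous
    state k.  A: (K, K, d), b: (K, K); row k of the result is the next-state
    distribution given S_{t-1} = k, so each row sums to one."""
    logits = A @ X_t + b          # (K, K): logits[k, i]
    logits -= logits.max(axis=1, keepdims=True)
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)
```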
5.4. Hidden Markov decision trees
An interesting generalization of factorial HMMs results if one conditions on an
input $X_t$ and orders the $M$ state variables such that $S_t^{(m)}$ depends on $S_t^{(n)}$ for $n < m$ (see Figure 6).
Figure 6. The hidden Markov decision tree.
The resulting architecture can be seen as a probabilistic
decision tree with Markovian dynamics linking the decision variables. Consider how
this probabilistic model would generate data at the first time step, $t = 1$. Given
the input $X_1$, the top node $S_1^{(1)}$ can take on $K$ values. This stochastically partitions $X$
space into $K$ decision regions. The next node down the hierarchy, $S_1^{(2)}$, subdivides
each of these regions into K subregions, and so on. The output Y 1 is generated from
the input X 1 and the K-way decisions at each of the M hidden nodes. At the next
time step, a similar procedure is used to generate data from the model, except that
now each decision in the tree is dependent on the decision taken at that node in the
previous time step. Thus, the "hierarchical mixture of experts" architecture (Jordan
& Jacobs, 1994) is generalized to include Markovian dynamics for the decisions.
Hidden Markov decision trees provide a useful starting point for modeling time
series with both temporal and spatial structure at multiple resolutions. We explore
this generalization of factorial HMMs in Jordan, Ghahramani, and Saul (1997).
6. Conclusion
In this paper we have examined the problem of learning for a class of generalized
hidden Markov models with distributed state representations. This generalization
provides both a richer modeling tool and a method for incorporating prior structural
information about the state variables underlying the dynamics of the system
generating the data. Although exact inference in this class of models is generally
intractable, we provided a structured variational approximation that can be computed
tractably. This approximation forms the basis of the Expectation step in an
EM algorithm for learning the parameters of the model. Empirical comparisons
to several other approximations and to the exact algorithm show that this approximation
is both efficient to compute and accurate. Finally, we have shown that
the factorial HMM representation provides an advantage over traditional HMMs in
predictive modeling of the complex temporal patterns in Bach's chorales.
Appendix A
The M step
The M step equations for each parameter are obtained by setting the derivatives
of Q with respect to that parameter to zero. We start by expanding Q using
equations (1)-(4b):

$$Q \;=\; -\frac{1}{2} \sum_{t=1}^{T} \Big[ Y_t^{\prime} C^{-1} Y_t - 2 \sum_{m=1}^{M} Y_t^{\prime} C^{-1} W^{(m)} \langle S_t^{(m)} \rangle + \sum_{m=1}^{M} \sum_{n=1}^{M} \mathrm{tr}\big\{ W^{(m)\prime} C^{-1} W^{(n)} \langle S_t^{(n)} S_t^{(m)\prime} \rangle \big\} \Big] \;+\; \sum_{m=1}^{M} \langle S_1^{(m)} \rangle^{\prime} \log \pi^{(m)} \;+\; \sum_{m=1}^{M} \sum_{t=2}^{T} \mathrm{tr}\big\{ (\log P^{(m)})\, \langle S_{t-1}^{(m)} S_t^{(m)\prime} \rangle \big\} \;-\; \log Z, \qquad \text{(A.1)}$$

where tr is the trace operator for square matrices and Z is a normalization term
independent of the states and observations ensuring that the probabilities sum to
one.
Setting the derivatives of Q with respect to the output weights to zero, we obtain
a linear system of equations for the $W^{(m)}$:

$$\sum_{t=1}^{T} Y_t \langle S_t^{(m)\prime} \rangle \;-\; \sum_{t=1}^{T} \sum_{n=1}^{M} W^{(n)} \langle S_t^{(n)} S_t^{(m)\prime} \rangle \;=\; 0. \qquad \text{(A.2)}$$

Assuming $Y_t$ is a $D \times 1$ vector, let $S_t$ be the $MK \times 1$ vector obtained by concatenating
the $S_t^{(m)}$ vectors, and $W$ be the $D \times MK$ matrix obtained by concatenating the
$W^{(m)}$ matrices (of size $D \times K$). Then solving (A.2) results in

$$W^{\mathrm{new}} \;=\; \left( \sum_{t=1}^{T} Y_t \langle S_t^{\prime} \rangle \right) \left( \sum_{t=1}^{T} \langle S_t S_t^{\prime} \rangle \right)^{\dagger}, \qquad \text{(A.3)}$$

where $\dagger$ is the Moore-Penrose pseudo-inverse. Note that the model is overparameterized
since the $D \times 1$ means of each of the $W^{(m)}$ matrices add up to a single mean.
Using the pseudo-inverse removes the need to explicitly subtract this overall mean
from each W (m) and estimate it separately as another parameter.
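A compact sketch of this weight update; the concatenated shapes follow the definitions above, and the function name is ours.

```python
import numpy as np

def m_step_W(Y, ES, ESS):
    """Output-weight update (A.3).  Y: (T, D); ES: (T, MK) concatenated <S_t>;
    ESS: (MK, MK) summed <S_t S_t'> over t.  Returns W of shape (D, MK);
    slicing columns m*K:(m+1)*K recovers each W^(m)."""
    YS = Y.T @ ES                       # sum_t Y_t <S_t'>
    return YS @ np.linalg.pinv(ESS)     # Moore-Penrose pseudo-inverse
```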
To estimate the priors, we solve $\partial Q / \partial \pi^{(m)} = 0$ subject to the constraint that
they sum to one, obtaining

$$\pi^{(m)\,\mathrm{new}} \;=\; \langle S_1^{(m)} \rangle.$$

Similarly, to estimate the transition matrices we solve $\partial Q / \partial P^{(m)} = 0$ subject to
the constraint that the columns of $P^{(m)}$ sum to one. For element $(i, j)$ of $P^{(m)}$ we obtain

$$P_{i,j}^{(m)\,\mathrm{new}} \;=\; \frac{\sum_{t=2}^{T} \langle S_{t,i}^{(m)} S_{t-1,j}^{(m)} \rangle}{\sum_{t=2}^{T} \langle S_{t-1,j}^{(m)} \rangle}.$$
Finally, the re-estimation equations for the covariance matrix can be derived by
taking derivatives with respect to $C^{-1}$ and setting them to zero. In these derivatives,
the term arising from the normalization of the Gaussian density function uses the
facts that $Z$ is proportional to $|C|^{T/2}$ and $\partial |C| / \partial C = |C|\, C^{-1}$. Substituting (A.2)
and re-organizing, we get

$$C^{\mathrm{new}} \;=\; \frac{1}{T} \sum_{t=1}^{T} Y_t Y_t^{\prime} \;-\; \frac{1}{T} \sum_{t=1}^{T} W^{\mathrm{new}} \langle S_t \rangle Y_t^{\prime}.$$

For $M = 1$, these equations reduce to the Baum-Welch re-estimation equations
for HMMs with Gaussian observables. The above M step has been presented for
the case of a single observation sequence. The extension to multiple sequences is
straightforward.
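Similarly, a sketch of the prior and transition updates for a single chain, assuming the pairwise expectations are available from the E step (shapes and names are ours).

```python
import numpy as np

def m_step_prior_trans(ES1, ES_pair):
    """Prior and transition updates for one chain.  ES1: (K,) is <S_1^(m)>;
    ES_pair: (T-1, K, K) with ES_pair[t, i, j] = <S_{t+1,i}^(m) S_{t,j}^(m)>.
    Columns of the returned P sum to one."""
    pi_new = ES1
    num = ES_pair.sum(axis=0)               # (K, K): sum_t <S_{t,i} S_{t-1,j}>
    den = num.sum(axis=0, keepdims=True)    # (1, K): sum_t <S_{t-1,j}>
    P_new = num / den
    return pi_new, P_new
```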
Appendix B
Exact forward-backward algorithm
Here we specify an exact forward-backward recursion for computing the posterior
probabilities of the hidden states in a factorial HMM. It differs from a straightforward
application of the forward-backward algorithm on the equivalent $K^M$ state
HMM, in that it does not depend on a $K^M \times K^M$ transition matrix. Rather, it
makes use of the independence of the underlying Markov chains to sum over M
transition matrices of size $K \times K$.
Using the notation $\{Y_\tau\}_t^r$ to mean the observation sequence $Y_t, \ldots, Y_r$, we define
forward quantities $\alpha_t^{(1)}, \ldots, \alpha_t^{(M)}$, each obtained from the previous one by summing
over a single chain's state variable at time $t-1$. The resulting forward recursions update
$\alpha_t$ from $\alpha_{t-1}$ by applying the $M$ transition matrices of size $K \times K$ in turn, followed by
multiplication by the observation probability $P(Y_t \mid S_t)$.
At the end of the forward recursions, the likelihood of the observation sequence is
the sum of the $K^M$ elements in $\alpha_T$.
Similarly, the backward recursions define quantities $\beta_t$ that are propagated backwards in
time, again one chain at a time. The posterior probability of the state at time $t$ is obtained
by multiplying $\alpha_t$ and $\beta_t$ and normalizing.
This algorithm can be shown to be equivalent to the Jensen, Lauritzen and Olesen
algorithm for probability propagation in graphical models. The probabilities
are defined over collections of state variables corresponding to the cliques in the
equivalent junction tree. Information is passed forwards and backwards by summing
over the sets separating each neighboring clique in the tree. This results in
forward-backward-type recursions of order $O(TMK^{M+1})$.
Using the $\alpha_t$, $\beta_t$, and $\gamma_t$ quantities, the statistics required for the E step, namely $\langle S_t^{(m)} \rangle$, $\langle S_t^{(m)} S_t^{(n)\prime} \rangle$, and $\langle S_{t-1}^{(m)} S_t^{(m)\prime} \rangle$, can be computed.
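For illustration, the following sketch implements an exact forward pass that sums over one chain at a time, which is the source of the $O(TMK^{M+1})$ cost. It is our reconstruction of the idea rather than the recursions as printed in the paper; it keeps the Gaussian normalization constant but stores the full $K^M$ table of joint-state means, which is exponential in $M$.

```python
import numpy as np
from itertools import product

def exact_forward(Y, W, C, pi, P):
    """Exact forward pass over the joint state space of M chains, applying one
    chain's K x K transition matrix at a time (O(T M K^{M+1}) work)."""
    T, D = Y.shape
    M, K = len(W), W[0].shape[1]
    Cinv = np.linalg.inv(C)
    const = (2 * np.pi) ** (-D / 2) * np.linalg.det(C) ** (-0.5)
    # Mean of Y_t for every joint state, shape (K,)*M + (D,).
    means = np.zeros((K,) * M + (D,))
    for s in product(range(K), repeat=M):
        means[s] = sum(W[m][:, s[m]] for m in range(M))

    def obs_lik(y):
        d = y - means
        return const * np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, Cinv, d))

    alpha = obs_lik(Y[0])
    for m in range(M):
        shape = [1] * M
        shape[m] = K
        alpha = alpha * pi[m].reshape(shape)
    z = alpha.sum()
    loglik = np.log(z)
    alpha = alpha / z
    for t in range(1, T):
        for m in range(M):
            # Sum over chain m's previous state; P[m][i, j] = P(i | j).
            alpha = np.moveaxis(np.tensordot(P[m], alpha, axes=([1], [m])), 0, m)
        alpha = alpha * obs_lik(Y[t])
        z = alpha.sum()
        loglik += np.log(z)
        alpha = alpha / z
    return alpha, loglik
```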
Appendix C
Completely factorized variational approximation
Using the definition of the probabilistic model given by equations (1)-(4b), the
posterior probability of the states given an observation sequence can be written as

$$P(\{S_t\} \mid \{Y_t\}, \phi) \;=\; \frac{1}{Z} \exp\{-H(\{S_t, Y_t\})\},$$

where $Z$ is a normalization constant ensuring that the probabilities sum to one and
$H$ collects the quadratic observation terms, the prior terms $-S_1^{(m)\prime} \log \pi^{(m)}$, and the
transition terms $-S_t^{(m)\prime} (\log P^{(m)})\, S_{t-1}^{(m)}$ of the negative log probability.
Similarly, the probability distribution given by the variational approximation (7)-(8)
can be written as

$$Q(\{S_t\} \mid \theta) \;=\; \frac{1}{Z_Q} \exp\{-H_Q(\{S_t\})\}, \qquad H_Q \;=\; -\sum_{t=1}^{T} \sum_{m=1}^{M} S_t^{(m)\prime} \log \theta_t^{(m)}.$$

Using this notation, and denoting expectation with respect to the variational distribution
using angular brackets $\langle \cdot \rangle$, the KL divergence is

$$\mathrm{KL}(Q \,\|\, P) \;=\; \langle H \rangle \;-\; \langle H_Q \rangle \;+\; \log Z \;-\; \log Z_Q.$$
Three facts can be verified from the definition of the variational approximation:

$$\langle S_t^{(m)} \rangle = \theta_t^{(m)}, \qquad \langle S_t^{(m)} S_t^{(m)\prime} \rangle = \mathrm{diag}\{\theta_t^{(m)}\}, \qquad \langle S_t^{(m)} S_t^{(n)\prime} \rangle = \theta_t^{(m)} \theta_t^{(n)\prime} \;\; (m \neq n),$$

where diag is an operator that takes a vector and returns a square matrix with
the elements of the vector along its diagonal, and zeros everywhere else. The KL
divergence can therefore be expanded to
divergence can therefore be expanded to
log ' (m)
C
tr
C
trf' (m)
log P (m)
Taking derivatives with respect to ' (m)
t , we obtain
log ' (m)
C
\Gamma(log
where \Delta (m) is the vector of diagonal elements of W (m) 0
C c is a term
arising from log ZQ , ensuring that the ' (m)
t sum to one. Setting this derivative
equal to 0 and solving for ' (m)
t gives equation (9a).
Appendix D
Structured approximation
For the structured approximation, $H_Q$ is defined as

$$H_Q \;=\; -\sum_{m=1}^{M} \left[ S_1^{(m)\prime} \left( \log h_1^{(m)} + \log \pi^{(m)} \right) \;+\; \sum_{t=2}^{T} \left( S_t^{(m)\prime} \log h_t^{(m)} \;+\; S_t^{(m)\prime} (\log P^{(m)})\, S_{t-1}^{(m)} \right) \right].$$

Using this definition and the notation of Appendix C, the KL divergence can again be
expanded into terms involving $\langle S_t^{(m)} \rangle$, the traces $\mathrm{tr}\{W^{(m)\prime} C^{-1} W^{(n)} \langle S_t^{(n)} S_t^{(m)\prime} \rangle\}$,
the prior and transition terms, and $\log Z - \log Z_Q$; we refer to this expansion as (D.2).
Since KL is independent of $\pi^{(m)}$ and $P^{(m)}$, the first thing to note is that these
parameters of the structured approximation remain equal to the equivalent parameters
of the true system. Now, taking derivatives with respect to $\log h_\tau^{(n)}$, we get an
expression, (D.3), whose first term is $\langle S_\tau^{(n)} \rangle$, whose middle terms involve the
derivatives $\partial \langle S_t^{(m)} \rangle / \partial \log h_\tau^{(n)}$ multiplied by bracketed terms containing
$\log h_t^{(m)}$, $W^{(m)\prime} C^{-1} \tilde{Y}_t^{(m)}$, and $\tfrac{1}{2} \Delta^{(m)}$, and whose last term arises from
$\log Z_Q$. The last term, which we obtained by making use of the fact that

$$\frac{\partial \log Z_Q}{\partial \log h_\tau^{(n)}} \;=\; \langle S_\tau^{(n)} \rangle,$$

cancels out the first term. Setting the terms inside the brackets in (D.3) equal to
zero yields equation (12a).
Acknowledgments
We thank Lawrence Saul for helpful discussions and Geoffrey Hinton for support.
This project was supported in part by a grant from the McDonnell-Pew Foundation,
by a grant from ATR Human Information Processing Research Laboratories, by a
gift from Siemens Corporation, and by grant N00014-94-1-0777 from the Office of
Naval Research. Zoubin Ghahramani was supported by a grant from the Ontario
Information Technology Research Centre.
Notes
1. For related work on inference in distributed state HMMs, see Dean and Kanazawa (1989).
2. In speech, neural networks are generally used to model P (S t jY t ); this probability is converted
to the observation probabilities needed in the HMM via Bayes rule.
3. If the columns of W (m) and W (n) are orthogonal for every pair of state variables, m and n, and
C is a diagonal covariance matrix, then the state variables will no longer be dependent given
the observation. In this case there is no "explaining away": each state variable is modeling the
variability in the observation along a different subspace.
4. A more Bayesian treatment of the learning problem, in which the parameters are also considered
hidden random variables, can be handled by Gibbs sampling by replacing the "M step"
with sampling from the conditional distribution of the parameters given the other hidden
variables (for example, see Tanner and Wong, 1987).
5. The first term is replaced by $\log \pi^{(m)}$ for $t = 1$; the second term does not appear for $t = T$.
6. All samples were used for learning; that is, no samples were discarded at the beginning of the
run. Although ten samples is too few to even approach convergence, it provides a run-time
roughly comparable to the variational methods. The goal was to see whether this "impatient"
Gibbs sampler would be able to compete with the other approximate methods.
FACTORIAL HIDDEN MARKOV MODELS 273
7. Lower values suggest a better probabilistic model: a value of one, for example, means that
it would take one bit more than the true generative model to code each observation vector.
Standard deviations reflect the variation due to training set, test set, and the random seed of
the algorithm. Standard errors on the mean are a factor of 3.8 smaller.
8. For the variational methods these dashed lines are equal to minus the lower bound on the
log likelihood, except for a normalization term which is intractable to compute and can vary
during learning, resulting in the apparent occasional increases in the bound.
9. Since the attributes were modeled as real numbers, the log likelihoods are only a measure of
relative coding cost. Comparisons between these likelihoods are meaningful, whereas to obtain
the absolute cost of coding a sequence, it is necessary to specify a discretization level.
10. This is analogous to the fully-connected Boltzmann machine with N units (Hinton & Sejnowski,
1986), in which every binary unit is coupled to every other unit using O(N 2 ) parameters, rather
than the O(2 N ) parameters required to specify the complete probability table.
--R
A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains.
An input-output HMM architecture.
Mixtures of controllers for jump linear and non-linear plants
Multiple viewpoint systems for music prediction.
Elements of information theory.
Applications of a general propagation algorithm for probabilistic expert systems.
A model for reasoning about persistence and causation.
Maximum likelihood from incomplete data via the EM algorithm.
Neural networks and the bias/variance dilemma.
Stochastic relaxation
Factorial learning and the EM algorithm.
Learning and relearning in Boltzmann machines.
Bayesian updating in recursive graphical models by local computations.
Hierarchical mixtures of experts and the EM algorithm.
Neural Computation
Stochastic simulation algorithms for dynamic probabilistic networks.
Hidden Markov models in computational biology: Applications to protein modeling
Local computations with probabilities on graphical structures and their application to expert systems.
Generalized linear models.
Learning fine motion by Markov mixtures of experts.
UCI Repository of machine learning databases
Connectionist learning of belief networks.
Probabilistic inference using Markov chain Monte Carlo methods (Technical Report CRG-TR-93-1)
A new view of the EM algorithm that justifies incremental and other variants.
Statistical field theory.
Probabilistic reasoning in intelligent systems: Networks of plausible inference.
CA: Morgan Kaufmann.
An Introduction to hidden Markov models.
Mixed memory Markov models.
Mean Field Theory for Sigmoid Belief Networks.
Journal of Artificial Intelligence Research
Boltzmann chains and hidden Markov models.
Exploiting tractable substructures in Intractable networks.
Probabilistic independence networks for hidden Markov probability models.
Hidden Markov model induction by Bayesian model merging.
The calculation of posterior distributions by data augmentation (with discussion).
Error bounds for convolutional codes and an asymptotically optimal decoding algorithm.
Mean field networks that learn to discriminate temporally distorted strings.
A minimum description length framework for unsupervised learning.
--CTR
P. Xing , Michael I. Jordan , Stuart Russell, Graph partition strategies for generalized mean field inference, Proceedings of the 20th conference on Uncertainty in artificial intelligence, p.602-610, July 07-11, 2004, Banff, Canada
Ricardo Silva , Jiji Zhang , James G. Shanahan, Probabilistic workflow mining, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Andrew Howard , Tony Jebara, Dynamical systems trees, Proceedings of the 20th conference on Uncertainty in artificial intelligence, p.260-267, July 07-11, 2004, Banff, Canada
Raul Fernandez , Rosalind W. Picard, Modeling drivers' speech under stress, Speech Communication, v.40 n.1-2, p.145-159, April
Robert A. Jacobs , Wenxin Jiang , Martin A. Tanner, Factorial hidden Markov models and the generalized backfitting algorithm, Neural Computation, v.14 n.10, p.2415-2437, October 2002
Terry Caelli , Andrew McCabe , Garry Briscoe, Shape tracking and production using hidden Markov models, Hidden Markov models: applications in computer vision, World Scientific Publishing Co., Inc., River Edge, NJ, 2001
Agnieszka Betkowska , Koichi Shinoda , Sadaoki Furui, Robust speech recognition using factorial HMMs for home environments, EURASIP Journal on Applied Signal Processing, v.2007 n.1, p.10-10, 1 January 2007
Yunhua Hu , Hang Li , Yunbo Cao , Dmitriy Meyerzon , Qinghua Zheng, Automatic extraction of titles from general documents using machine learning, Proceedings of the 5th ACM/IEEE-CS joint conference on Digital libraries, June 07-11, 2005, Denver, CO, USA
Yunhua Hu , Hang Li , Yunbo Cao , Li Teng , Dmitriy Meyerzon , Qinghua Zheng, Automatic extraction of titles from general documents using machine learning, Information Processing and Management: an International Journal, v.42 n.5, p.1276-1293, September 2006
Tony Jebara , Risi Kondor , Andrew Howard, Probability Product Kernels, The Journal of Machine Learning Research, 5, p.819-844, 12/1/2004
Fine , Yoram Singer , Naftali Tishby, The Hierarchical Hidden Markov Model: Analysis and Applications, Machine Learning, v.32 n.1, p.41-62, July 1998
Charles Sutton , Khashayar Rohanimanesh , Andrew McCallum, Dynamic conditional random fields: factorized probabilistic models for labeling and segmenting sequence data, Proceedings of the twenty-first international conference on Machine learning, p.99, July 04-08, 2004, Banff, Alberta, Canada
Wang , Nan-Ning Zheng , Yan Li , Ying-Qing Xu , Heung-Yung Shum, Learning kernel-based HMMs for dynamic sequence synthesis, Graphical Models, v.65 n.4, p.206-221, July
Jie Tang , Hang Li , Yunbo Cao , Zhaohui Tang, Email data cleaning, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Cen Li , Gautam Biswas, A Bayesian approach for structural learning with hidden Markov models, Scientific Programming, v.10 n.3, p.201-219, August 2002
Sophie Deneve, Bayesian spiking neurons i: Inference, Neural Computation, v.20 n.1, p.91-117, January 2008
Lawrence K. Saul , Michael I. Jordan, Mixed Memory Markov Models: Decomposing Complex Stochastic Processes as Mixtures of Simpler Ones, Machine Learning, v.37 n.1, p.75-87, Oct. 1999
Yong Cao , Petros Faloutsos , Frdric Pighin, Unsupervised learning for speech motion editing, Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer animation, July 26-27, 2003, San Diego, California
Hung H. Bui , Svetha Venkatesh , Geoff West, Tracking and surveillance in wide-area spatial environments using the abstract hidden Markov model, Hidden Markov models: applications in computer vision, World Scientific Publishing Co., Inc., River Edge, NJ, 2001
R. Anderson , Pedro Domingos , Daniel S. Weld, Relational Markov models and their application to adaptive web navigation, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Tom ingliar , Milo Hauskrecht, Noisy-OR Component Analysis and its Application to Link Analysis, The Journal of Machine Learning Research, 7, p.2189-2213, 12/1/2006
Martin V. Butz, Kernel-based, ellipsoidal conditions in the real-valued XCS classifier system, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Ying Wu , Thomas S. Huang, Robust Visual Tracking by Integrating Multiple Cues Based on Co-Inference Learning, International Journal of Computer Vision, v.58 n.1, p.55-71, June 2004
Michael I. Jordan , Zoubin Ghahramani , Tommi S. Jaakkola , Lawrence K. Saul, An Introduction to Variational Methods for Graphical Models, Machine Learning, v.37 n.2, p.183-233, Nov.1.1999
Andrea Torsello , Antonio Robles-Kelly , Edwin R. Hancock, Discovering Shape Classes using Tree Edit-Distance and Pairwise Clustering, International Journal of Computer Vision, v.72 n.3, p.259-285, May 2007
Charles Sutton , Andrew McCallum , Khashayar Rohanimanesh, Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data, The Journal of Machine Learning Research, 8, p.693-723, 5/1/2007
H. Attias, Independent factor analysis, Neural Computation, v.11 n.4, p.803-851, May 15, 1999
Jinhai Cai , Zhi-Qiang Liu, Hidden Markov Models with Spectral Features for 2D Shape Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.12, p.1454-1458, December 2001
John Binder , Daphne Koller , Stuart Russell , Keiji Kanazawa, Adaptive Probabilistic Networks with Hidden Variables, Machine Learning, v.29 n.2-3, p.213-244, Nov./Dec. 1997
Xiangdong An , Dawn Jutla , Nick Cercone, Privacy intrusion detection using dynamic Bayesian networks, Proceedings of the 8th international conference on Electronic commerce: The new e-commerce: innovations for conquering current barriers, obstacles and limitations to conducting successful business on the internet, August 13-16, 2006, Fredericton, New Brunswick, Canada
Akio Utsugi, Ensemble of Independent Factor Analyzers with Application to Natural Image Analysis, Neural Processing Letters, v.14 n.1, p.49-60, August 2001
Zoubin Ghahramani, An introduction to hidden Markov models and Bayesian networks, Hidden Markov models: applications in computer vision, World Scientific Publishing Co., Inc., River Edge, NJ, 2001
Cristian Sminchisescu , Atul Kanaujia , Dimitris Metaxas, Conditional models for contextual human motion recognition, Computer Vision and Image Understanding, v.104 n.2, p.210-220, November 2006
Hichem Snoussi , Ali Mohammad-Djafari, Bayesian Unsupervised Learning for Source Separation with Mixture of Gaussians Prior, Journal of VLSI Signal Processing Systems, v.37 n.2-3, p.263-279, June-July 2004
Inna Stainvas , David Lowe, A Generative Probabilistic Oriented Wavelet Model for Texture Segmentation, Neural Processing Letters, v.17 n.3, p.217-238, June
Russell Greiner , Christian Darken , N. Iwan Santoso, Efficient reasoning, ACM Computing Surveys (CSUR), v.33 n.1, p.1-30, March 2001 | mean field theory;bayesian networks;hidden markov models;graphical models;time series;EM algorithm |
274791 | Approximate graph coloring by semidefinite programming. | We consider the problem of coloring k-colorable graphs with the fewest possible colors. We present a randomized polynomial time algorithm that colors a 3-colorable graph on n vertices with min{O(&Dgr;1/3 log1/2 &Dgr; log n), O(n1/4 log1/2 n)} colors where &Dgr; is the maximum degree of any vertex. Besides giving the best known approximation ratio in terms of n, this marks the first nontrivial approximation result as a function of the maximum degree &Dgr;. This result can be generalized to k-colorable graphs to obtain a coloring using min{O(&Dgr;1-2/k log1/2 &Dgr; log n), O(n13/(k+1) log1/2 n)} colors. Our results are inspired by the recent work of Goemans and Williamson who used an algorithm for semidefinite optimization problems, which generalize linear programs, to obtain improved approximations for the MAX CUT and MAX 2-SAT problems. An intriguing outcome of our work is a duality relationship established between the value of the optimum solution to our semidefinite program and the Lovsz &thgr;-function. We show lower bounds on the gap between the optimum solution of our semidefinite program and the actual chromatic by duality this also demonstrates interesting new facts about the &thgr;-function. | Introduction
A legal vertex coloring of a graph G(V; E) is an assignment of colors to its vertices such that no
two adjacent vertices receive the same color. Equivalently, a legal coloring of G by k colors is a
partition of its vertices into k independent sets. The minimum number of colors needed for such
a coloring is called the chromatic number of G, and is usually denoted by -(G). Determining the
chromatic number of a graph is known to be NP-hard (cf. [20]).
Besides its theoretical significance as a canonical NP-hard problem, graph coloring arises naturally
in a variety of applications such as register allocation [11, 12, 13] and timetable/examination
scheduling [8, 43]. In many applications which can be formulated as graph coloring problems, it
suffices to find an approximately optimum graph coloring-a coloring of the graph with a small
though non-optimum number of colors. This along with the apparent impossibility of an exact
solution has led to some interest in the problem of approximate graph coloring. The analysis of
approximation algorithms for graph coloring started with the work of Johnson [27] who shows
that a version of the greedy algorithm gives an O(n= log n)-approximation algorithm for k-coloring.
improved this bound by giving an elegant algorithm which uses O(n 1\Gamma1=(k\Gamma1) ) colors
to legally color a k-colorable graph. Subsequently, other polynomial time algorithms were provided
by Blum [9] which use O(n 3=8 log 8=5 n) colors to legally color an n-vertex 3-colorable graph. This
result generalizes to coloring a k-colorable graph with O(n 1\Gamma1=(k\Gamma4=3) log 8=5 n) colors. The best
known performance guarantee for general graphs is due to Halld'orsson [25] who provided a polynomial
time algorithm using a number of colors which is within a factor of O(n(log log n)
of the optimum.
Recent results in the hardness of approximations indicate that it may be not possible to substantially
improve the results described above. Lund and Yannakakis [34] used the results of Arora,
Lund, Motwani, Sudan, and Szegedy [6] and Feige, Goldwasser, Lov'asz, Safra, and Szegedy [17]
to show that there exists a (small) constant ffl ? 0 such that no polynomial time algorithm can
approximate the chromatic number of a graph to within a ratio of n ffl unless NP. The current
hardness result for the approximation of the chromatic number is due to Feige and Kilian [18] and
H-astad [26], who show that approximating it to within n 1\Gammaffi , for any ffi ? 0, would imply NP=RP
(RP is the class of probabilistic polynomial time algorithms making one-sided error). However,
none of these hardness results apply to the special case of the problem where the input graph is
guaranteed to be k-colorable for some small k. The best hardness result in this direction is due to
Khanna, Linial, and Safra [28] who show that it is not possible to color a 3-colorable graph with 4
colors in polynomial time unless
In this paper we present improvements on the result of Blum. In particular, we provide a
randomized polynomial time algorithm which colors a 3-colorable graph of maximum degree \Delta
with log n); O(n 1=4 log 1=2 n)g colors; moreover, this can be generalized to k-
colorable graphs to obtain a coloring using O(\Delta 1\Gamma2=k log 1=2 \Delta log n) or O(n 1\Gamma3=(k+1) log 1=2 n) colors.
Besides giving the best known approximations in terms of n, our results are the first non-trivial
approximations given in terms of \Delta. Our results are based on the recent work of Goemans and
used an algorithm for semidefinite optimization problems (cf. [23, 2]) to obtain
improved approximations for the MAX CUT and MAX 2-SAT problems. We follow their basic
paradigm of using algorithms for semidefinite programming to obtain an optimum solution to a
relaxed version of the problem, and a randomized strategy for "rounding" this solution to a feasible
but approximate solution to the original problem. Motwani and Naor [37] have shown that the
approximate graph coloring problem is closely related to the problem of finding a CUT COVER
of the edges of a graph. Our results can be viewed as generalizing the MAX CUT approximation
algorithm of Goemans and Williamson to the problem of finding an approximate CUT COVER. In
our techniques also lead to improved approximations for the MAX k-CUT problem [19]. We
also establish a duality relationship between the value of the optimum solution to our semidefinite
program and the Lov'asz #-function [23, 24, 33]. We show lower bounds on the gap between the
optimum solution of our semidefinite program and the actual chromatic by duality this
also demonstrates interesting new facts about the #-function.
Alon and Kahale [4] use related techniques to devise a polynomial time algorithm for 3-coloring
random graphs drawn from a "hard" distribution on the space of all 3-colorable graphs. Recently,
Frieze and Jerrum [19] have used a semidefinite programming formulation and randomized rounding
strategy essentially the same as ours to obtain improved approximations for the MAX k-CUT
problem with large values of k. Their results required a more sophisticated version of our analysis,
but for the coloring problem our results are tight up to poly-logarithmic factors and their analysis
does not help to improve our bounds.
Semidefinite programming relaxations are an extension of the linear programming relaxation
approach to approximately solving NP-complete problems. We thus present our work in the style
of the classical LP-relaxation approach. We begin in Section 2 by defining a relaxed version of
the coloring problem. Since we use a more complex relaxation than standard linear programming,
we must show that the relaxed problem can be solved; this is done in Section 3. We then show
relationships between the relaxation and the original problem. In Section 4, we show that (in a
sense to be defined later) the value of the relaxation bounds the value of the original problem.
Then, in Sections 5, 6, and 7, we show how a solution to the relaxation can be "rounded" to make
it a solution to the original problem. Combining the last two arguments shows that we can find
a good approximation. Section 3, Section 4, and Sections 5-7 are in fact independent and can be
read in any order after the definitions in Section 2. In Section 8, we investigate the relationship
between our fractional relaxations and the Lov'asz #-function, showing that they are in fact dual to
one another. We investigate the approximation error inherent in our formulation of the chromatic
number via semi-definite programming in Section 9.
Vector Relaxation of Coloring
In this section, we describe the relaxed coloring problem whose solution is in turn used to approximate
the solution to the coloring problem. Instead of assigning colors to the vertices of a graph,
we consider assigning (n-dimensional) unit vectors to the vertices. To capture the property of a
coloring, we aim for the vectors of adjacent vertices to be "different" in a natural way. The vector
k-coloring that we define plays the role that a hypothetical "fractional k-coloring" would play in a
classical linear-programming relaxation approach to the problem. Our relaxation is related to the
concept of an orthonormal representation of a graph [33, 23].
Definition 2.1 Given a graph E) on n vertices, and a real number k - 1, a vector k-coloring
of G is an assignment of unit vectors u i from the space ! n to each vertex , such that
for any two adjacent vertices i and j the dot product of their vectors satisfies the inequality
The definition of an orthonormal representation [33, 23] requires that the given dot products
be equal to zero, a weaker requirement than the one above.
3 Solving the Vector Coloring Problem
In this section we show how the vector coloring relaxation can be solved using semidefinite pro-
gramming. The methods in this section closely mimic those of Goemans and Williamson [21].
To solve the problem, we need the following auxiliary definition.
Definition 3.1 Given a graph E) on n vertices, a matrix k-coloring of the graph is an
n \Theta n symmetric positive semidefinite matrix M , with m
We now observe that matrix and vector k-colorings are in fact equivalent (cf. [21]). Thus, to
solve the vector coloring relaxation it will suffice to find a matrix k-coloring.
Fact 3.1 A graph has a vector k-coloring if and only if it has a matrix k-coloring. Moreover, a vector
(k + ε)-coloring can be constructed from a matrix k-coloring in time polynomial in n and log(1/ε).
Note that an exact solution cannot be found, as some of the values in it may be irrational.
Proof: Given a vector k-coloring {v_i}, the matrix k-coloring is defined by m_ij = ⟨v_i, v_j⟩. For the
other direction, it is well known that for every symmetric positive semidefinite matrix M there exists a
square matrix U such that UU^T = M (U^T is the transpose of U). The rows of U are vectors
that form a vector k-coloring of G.
A δ-close approximation to the matrix U can be found in time polynomial in n and log(1/δ)
using the Incomplete Cholesky Decomposition [21, 22]. (Here by δ-close we mean a
matrix U' such that U'(U')^T - M has L1 norm less than δ.) This in turn gives a vector (k + ε)-coloring
of the graph, provided δ is chosen appropriately.
Lemma 3.2 If a graph G has a vector k-coloring then a vector (k + ε)-coloring of the graph can
be constructed in time polynomial in k, n, and log(1/ε).
Proof: Our proof is similar to those of Lovász [33] and Goemans-Williamson [21]. We construct
a semidefinite optimization problem (SDP) whose optimum is -1/(k-1) when k is the smallest
real number such that a matrix k-coloring of G exists. The optimum solution also provides a matrix
k-coloring of G.
    minimize    α
    subject to  {m_ij} is positive semidefinite,
                m_ij <= α for all {i, j} in E, and m_ii = 1 for all i.
Consider a graph which has a vector (and matrix) k-coloring. This means there is a solution to the
above semidefinite program with α = -1/(k-1). The ellipsoid method or other interior point based
methods [23, 2] can be employed to find a feasible solution where the value of the objective is at
most -1/(k-1) + δ in time polynomial in n and log 1/δ. This implies that for all {i, j} in E, m_ij is
at most -1/(k-1) + δ, which is at most -1/(k + ε - 1) for an appropriately small choice of δ.
Thus a matrix (k + ε)-coloring can be found in time polynomial in k, n and log(1/ε). From the
matrix coloring, the vector coloring can be found in polynomial time, as was noted in the previous
lemma.
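To make the required accuracy concrete (a small calculation, not part of the original argument):
-1/(k-1) + δ <= -1/(k + ε - 1) exactly when δ <= 1/(k-1) - 1/(k + ε - 1) = ε/((k-1)(k + ε - 1)),
so it suffices to solve the semidefinite program to an accuracy δ that is inverse polynomial in k and 1/ε.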
4 Relating Original and Relaxed Solutions
In this section, we show that our vector coloring problem is a useful relaxation because the solution
to it is related to the solution of the original problem. In order to understand the quality of the
relaxed solution, we need the following geometric lemma:
Lemma 4.1 For all positive integers k and n such that k <= n + 1, there exist k unit vectors in R^n
such that the dot product of any distinct pair is -1/(k-1).
Proof: Clearly it suffices to prove the lemma for n = k - 1. (For other values of n, we make the
coordinates of the vectors 0 in all but the first k - 1 coordinates.) We begin by proving the claim for
n = k: we explicitly provide unit vectors v_1^(k), ..., v_k^(k) in R^k such that ⟨v_i^(k), v_j^(k)⟩ = -1/(k-1)
for i ≠ j. The vector v_i^(k) has value -√(1/(k(k-1))) in all coordinates except the ith
coordinate. In the ith coordinate v_i^(k) is √((k-1)/k). It is easy to verify that the vectors are unit length and that their dot
products are exactly -1/(k-1).
As given, the vectors are in a k-dimensional space. Note, however, that the dot product of
each vector with the all-1's vector is 0. This shows that all k of the vectors are actually in a
(k-1)-dimensional hyperplane of the k-dimensional space. This proves the lemma.
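For example, for k = 3 the construction above gives the three unit vectors (a worked instance of the formulas just stated):
v_1 = (√(2/3), -1/√6, -1/√6),  v_2 = (-1/√6, √(2/3), -1/√6),  v_3 = (-1/√6, -1/√6, √(2/3)),
each of unit length (2/3 + 1/6 + 1/6 = 1), each orthogonal to the all-1's vector, and with
⟨v_i, v_j⟩ = -2·(1/3) + 1/6 = -1/2 for i ≠ j, as required for k = 3.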
Corollary 4.2 Every k-colorable graph G has a vector k-coloring.
Proof: Bijectively map the k colors to the k vectors defined in the previous lemma.
Note that a graph is vector 2-colorable if and only if it is 2-colorable. Lemma 4.1 is tight in
that it provides the best possible value for minimizing the maximum dot-product among k unit
vectors. This can be seen from the following lemma.
Lemma 4.3 Let G be vector k-colorable and let i be a vertex in G. The induced subgraph on the
neighbors of i is vector (k-1)-colorable.
Proof: Let v_1, ..., v_n be a vector k-coloring of G and assume without loss of generality that
v_i = (1, 0, ..., 0). Associate with each neighbor j of i a vector v'_j obtained by projecting v_j onto
coordinates 2 through n and then scaling it up so that v'_j has unit length. It suffices to show that
for any two adjacent vertices j and j' in the neighborhood of i, ⟨v'_j, v'_{j'}⟩ <= -1/(k-2).
Observe first that the projection of v_j onto the first coordinate is negative and has magnitude
at least 1/(k-1). This implies that the scaling factor for v'_j is at least (k-1)/√(k(k-2)). The
desired bound follows from the calculation sketched below.
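The calculation can be carried out as follows (a sketch filling in the omitted step; write v_j = (x_j, w_j)
with x_j the first coordinate and v'_j = w_j/|w_j|):
⟨v'_j, v'_{j'}⟩ = (⟨v_j, v_{j'}⟩ - x_j x_{j'}) / (|w_j| |w_{j'}|)
              <= ( -1/(k-1) - 1/(k-1)^2 ) / ( 1 - 1/(k-1)^2 )
              = ( -k/(k-1)^2 ) / ( k(k-2)/(k-1)^2 ) = -1/(k-2),
using x_j, x_{j'} <= -1/(k-1) and |w_j|^2 = 1 - x_j^2 <= 1 - 1/(k-1)^2.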
A simple induction using the above lemma shows that any graph containing a (k+1)-clique is
not vector k-colorable. Thus the "vector chromatic number" lies between the clique number and the
chromatic number. This also shows that the analysis of Lemma 4.1 is tight in that -1/(k-1) is
the minimum possible value of the maximum of the dot-products of k vectors.
In the next few sections we prove the harder part, namely, if a graph has a vector k-coloring
then it has an Õ(Δ^{1-2/k})-coloring and an Õ(n^{1-3/(k+1)})-coloring.
5 Semicolorings
Given the solution to the relaxed problem, our next step is to show how to "round" the solution
to the relaxed problem in order to get a solution to the original problem. Both of the rounding
techniques we present in the following sections produce the coloring by working through an almost
legal semicoloring of the graph, as defined below.
Definition 5.1 A k-semicoloring of a graph G is an assignment of k colors to at least half the
vertices such that no two adjacent vertices are assigned the same color.
An algorithm for semicoloring leads naturally to a coloring algorithm as shown by the following
lemma. The algorithm uses up at most a logarithmic factor more colors than the semicoloring
algorithm. Furthermore, we do not even lose this logarithmic factor if the semicoloring algorithm
uses a polynomial number of colors (which is what we will show we use).
Lemma 5.1 If an algorithm A can k_i-semicolor any i-vertex subgraph of graph G in randomized
polynomial time, where k_i increases with i, then A can be used to O(k_n log n)-color G. Furthermore,
if there exists ε > 0 such that for all i, k_i = Ω(i^ε), then A can be used to color G with O(k_n) colors.
Proof: We show how to construct a coloring algorithm A' to color any subgraph H of G. A'
starts by using A to semicolor H. Let S be the subset of vertices which have not been assigned
a color by A. Observe that |S| <= |V(H)|/2. A' fixes the colors of vertices not in S, and then
recursively colors the induced subgraph on S using a new set of colors.
Let c_i be the maximum number of colors used by A' to color any i-vertex subgraph. Then c_i
satisfies the recurrence c_i <= c_{⌈i/2⌉} + k_i. It is easy to see that any c_i satisfying this recurrence
must satisfy c_i <= k_i log i. In particular, this implies that c_n = O(k_n log n). Furthermore, for the
case where k_i = Ω(i^ε), the above recurrence is satisfied only when c_i = O(k_i).
Using the above lemma, we devote the next two sections to algorithms for transforming vector
colorings into semicolorings.
6 Rounding via Hyperplane Partitions
We now focus our attention on vector 3-colorable graphs, leaving the extension to general k for later.
Let Δ be the maximum degree in a graph G. In this section, we outline a randomized rounding
scheme for transforming a vector 3-coloring of G into an O(Δ^{log_3 2})-semicoloring, and thus into an
O(Δ^{log_3 2} log n)-coloring of G. Combining this method with a technique of Wigderson [42] yields an
O(n^0.386)-coloring of G. The method is based on [21] and is weaker than the method we describe
in the following section; however, it introduces several of the ideas we will use in the more powerful
algorithm.
Assume we are given a vector 3-coloring {v_i}, i = 1, ..., n. Recall that the unit vectors v_i and v_j associated
with an adjacent pair of vertices i and j have a dot product of at most -1/2, implying that the
angle between the two vectors is at least 2π/3 radians (120 degrees).
Definition 6.1 Consider a hyperplane H. We say that H separates two vectors if they do not lie
on the same side of the hyperplane. For any edge {i, j} in E, we say that the hyperplane H cuts
the edge if it separates the vectors v_i and v_j.
In the sequel, we use the term random hyperplane to denote the unique hyperplane containing
the origin and having as its normal a random unit vector v uniformly distributed on the unit sphere
S^{n-1}. The following lemma is a restatement of Lemma 1.2 of Goemans-Williamson [21].
Lemma 6.1 (Goemans-Williamson [21]) Given two vectors at an angle of θ, the probability
that they are separated by a random hyperplane is exactly θ/π.
We conclude that, given a vector 3-coloring, for any edge {i, j} in E, the probability that a random
hyperplane cuts the edge is at least 2/3. It follows that the expected fraction of the edges in G
which are cut by a random hyperplane is at least 2/3. Suppose that we pick r random hyperplanes
independently. Then, the probability that an edge is not cut by one of these hyperplanes is at most (1/3)^r,
and the expected fraction of the edges not cut is at most (1/3)^r.
We claim that this gives us a good semicoloring algorithm for the graph G. Notice that r
hyperplanes can partition R^n into at most 2^r distinct regions. (For r <= n this is tight, since r
hyperplanes in general position create exactly 2^r regions.) An edge is cut by one of these r hyperplanes if and only if
the vectors associated with its end-points lie in distinct regions. Thus, we can associate a distinct
color with each of the 2^r regions and give each vertex the color of the region containing its vector.
The expected number of edges whose end-points have the same color is at most (1/3)^r m, where m is the
number of edges in E.
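To make the color count explicit (a small calculation used implicitly in the next proof): with
r = 2 + ⌈log_3 Δ⌉ hyperplanes, the number of regions, and hence of colors, is at most
2^r <= 8 · 2^{log_3 Δ} = 8 Δ^{log_3 2} = O(Δ^{log_3 2}), while the probability that a given edge is
uncut is at most (1/3)^r <= 1/(9Δ).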
Theorem 6.2 If a graph has a vector 3-coloring, then it has an O(Δ^{log_3 2})-semicoloring which can
be constructed from the vector 3-coloring in polynomial time with high probability.
Proof: We use the random hyperplane method just described. Fix r = 2 + ⌈log_3 Δ⌉, and note
that (1/3)^r <= 1/(9Δ). As noted above, r hyperplanes chosen independently
at random fail to cut a given edge with probability at most 1/(9Δ). Thus the expected number of edges which
are not cut is at most n/18, since the number of edges is at most nΔ/2. By Markov's
inequality (cf. [38], page 46), the probability that the number of uncut edges is more than twice
the expected value is at most 1/2. Thus, with probability at least 1/2 we get a coloring with at
most n/9 <= n/4 uncut edges. Deleting one endpoint of each such edge leaves a set of at least 3n/4 colored vertices
with no uncut edges, that is, a semicoloring.
Repeating the entire process t times means that we will find an O(Δ^{log_3 2})-semicoloring with
probability at least 1 - 1/2^t.
Noting that log_3 2 < 0.631 and that Δ <= n, this theorem and Lemma 5.1 imply a coloring
using O(n^0.631) colors.
By varying the number of hyperplanes, we can arrange for a tradeoff between the number of
colors used and the number of edges that violate the resulting coloring. This may be useful in some
applications where a nearly legal coloring is good enough.
6.1 Wigderson's Algorithm
Our coloring can be improved using the following idea due to Wigderson [42]. Fix a threshold
value δ. If there exists a vertex of degree greater than δ, pick any one such vertex and 2-color its
neighbors (its neighborhood is vector 2-colorable and hence 2-colorable). The colored vertices are
removed and their colors are not used again. Repeating this as often as possible (or until half the
vertices are colored) brings the maximum degree below δ at the cost of using at most 2n/δ colors.
Thus, we can obtain a semicoloring using O(n/δ + δ^{log_3 2}) colors. The optimum choice of δ is around
n^0.613, which implies a semicoloring using O(n^0.387) colors. This semicoloring can be used to legally
color G using O(n^0.387) colors by applying Lemma 5.1.
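The choice of δ comes from balancing the two terms (a short calculation): setting n/δ = δ^{log_3 2}
gives δ = n^{1/(1 + log_3 2)} ≈ n^0.613, and then n/δ ≈ n^0.387.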
Corollary 6.3 A 3-colorable graph with n vertices can be colored using O(n^0.387) colors by a polynomial
time randomized algorithm.
The bound just described is (marginally) weaker than the guarantee of an O(n^0.375) coloring due
to Blum [9]. We now improve this result by constructing a semicoloring with fewer colors.
7 Rounding via Vector Projections
In this section we start by proving the following more powerful version of Theorem 6.2. A simple
application of Wigderson's technique to this algorithm yields our final coloring algorithm.
Lemma 7.1 For every integer function k = k(n), a vector k-colorable graph with maximum degree
Δ can be semi-colored with at most O(Δ^{1-2/k} log^{1/2} Δ) colors in probabilistic polynomial time.
As in the previous section, this has immediate consequences for approximate coloring.
Given the solution to the relaxed problem, we show that it is possible to extract an independent set of size
Ω(n/(Δ^{1-2/k} log^{1/2} Δ)). If we assign one color to this set and recurse on the rest, we will end up
using Õ(Δ^{1-2/k}) colors in all to assign colors to half the vertices, and the result follows. To
find such a large independent set, we give a randomized procedure for selecting an induced subgraph
with n' vertices and m' edges such that E[n' - 2m'] = Ω(n/(Δ^{1-2/k} log^{1/2} Δ)). It follows that with
a polynomial number of repeated trials, we have a high probability of choosing a subgraph with
n' - 2m' = Ω(n/(Δ^{1-2/k} log^{1/2} Δ)). Given such a graph, we can delete one endpoint of each edge,
leaving an independent set of size at least n' - 2m' = Ω(n/(Δ^{1-2/k} log^{1/2} Δ)), as desired.
We now give the details of the construction. Suppose we have a vector k-coloring assigning unit
vectors v_i to the vertices. We fix a parameter c to be specified later. We choose a random
n-dimensional vector r according to a distribution to be specified soon. The subgraph consists of
all vertices i with ⟨v_i, r⟩ >= c. Intuitively, since endpoints of an edge have vectors pointing away from
each other, if the vector associated with a vertex has a large dot product with r, then the vector
corresponding to an adjacent vertex will not have such a large dot product with r and hence will
not be selected. Thus, only a few edges are likely to be in the induced subgraph on the selected set
of vertices.
To complete the specification of this algorithm and to analyze it, we need some basic facts about
some probability distributions in ! n .
7.1 Probability Distributions in R^n
Recall that the standard normal distribution has the density function φ(x) = (1/√(2π)) e^{-x^2/2},
distribution function Φ(x), mean 0, and variance 1. A random vector r = (r_1, ..., r_n) is said to
have the n-dimensional standard normal distribution if the components r_i are independent random
variables, each component having the standard normal distribution. It is easy to verify that this
distribution is spherically symmetric, in that the direction specified by the vector r is uniformly
distributed. (Refer to Feller [14, v. II], Knuth [31, v. 2], and Rényi [39] for further details about
the higher dimensional normal distribution.)
Subsequently, the phrase "random d-dimensional vector" will always denote a vector chosen
from the d-dimensional standard normal distribution. A crucial property of the normal distribution
which motivates its use in our algorithm is the following theorem, paraphrased from Rényi [39] (see
also Section III.4 of Feller [14, v. II]).
Theorem 7.2 (Theorem IV.16.3 [39]) Let r = (r_1, ..., r_n) be a random n-dimensional vector.
The projections of r onto two lines ℓ_1 and ℓ_2 are independent (and normally distributed) if and
only if ℓ_1 and ℓ_2 are orthogonal.
Alternatively, we can say that under any rotation of the coordinate axes, the projections of r
along these axes are independent standard normal variables. In fact, it is known that the only
distribution with this strong spherical symmetry property is the n-dimensional standard normal
distribution. The latter fact is precisely the reason behind this choice of distribution 1 in our
algorithm. In particular, we will make use of the following corollary to the preceding theorem.
Corollary 7.3 Let u be any unit vector in R^n. Let r = (r_1, ..., r_n) be a random vector (of
i.i.d. standard normal variables). The projection of r along u, given by the dot product ⟨u, r⟩, is distributed
according to the standard (1-dimensional) normal distribution.
It turns out that even if r is a random n-dimensional unit vector, the above corollary still holds
in the limit: as n grows, the projections of r on orthogonal lines approach (scaled) independent
normal distributions. Thus using a random unit vector for our projection turns out to be equivalent
to using random normal vectors in the limit, but is messier to analyze.
Let N(x) denote the tail of the standard normal distribution, i.e.,
N(x) = ∫_x^∞ φ(y) dy.
We will need the following well-known bounds on the tail of the standard normal distribution. (See,
for instance, Lemma VII.2 of Feller [14, v. I].)
Lemma 7.4 For every x > 0,
φ(x) (1/x - 1/x^3) < N(x) < φ(x)/x.
Proof: The proof is immediate from inspection of the following equations, which relate the three
quantities in the desired inequality to integrals involving φ(y), and the fact that φ(x)/x is finite for
every x > 0:
φ(x) (1/x - 1/x^3) = ∫_x^∞ (1 - 3 y^{-4}) φ(y) dy,
N(x) = ∫_x^∞ φ(y) dy,
φ(x)/x = ∫_x^∞ (1 + y^{-2}) φ(y) dy.
Readers familiar with physics will see the connection to Maxwell's law on the distribution of velocities of molecules
in R^3. Maxwell started with the assumption that in every Cartesian coordinate system in R^3, the three components
of the velocity vector are mutually independent and have expectation zero. Applying this assumption to rotations of
the axes, we conclude that the velocity components must be independent normal variables with identical variance.
This immediately implies Maxwell's distribution on the velocities.
7.2 The Analysis
We are now ready to complete the specification of the coloring algorithm. Recall that our goal is
to repeatedly identify, color and delete large independent sets from the graph. We actually set an
easier intermediate goal: find an induced subgraph with a large number n 0 of edges and a number
vertices. Since each edge only covers 2 vertices, the induced subgraph has
vertices with no incident edges. These vertices form an independent set that can be colored and
removed.
As discussed above, to find this sparse graph, we choose a random vector r and take all vertices
whose dot product with r exceeds a certain value c. Let the induced subgraph on these vertices
have edges. We show that for sufficiently larger we get an
independent set of size roughly n 0 . Intuitively, this is true for the following reason. Any particular
vertex has some particular probability landing near r and thus being "captured" into
our set. However, if two vertices are adjacent, the probability that they both land near r is quite
small because the vector coloring has placed them far apart.
For example, in the case of 3-coloring, when the probability that a vertex is chosen is p, the
probability that both endpoints of an edge are chosen is roughly p 4 . It follows that we end up
capturing (in expectation) a set of pn vertices that contains (in expectation) only
edges in a degree-\Delta graph. In such a set, at least pn \Gamma p 4 \Deltan of the vertices have no incident edges,
and thus form an independent set. We would like this independent set to be large. Clearly, we need
to make p small enough to ensure p 4 \Deltan - pn, meaning p - \Delta \Gamma1=3 . Taking p much smaller only
decreases the size of the independent set, so it turns out that our best choice is to take
yielding an indpendent set of
Repeating this capture process many times therefore
achieves an ~
We now formalize this intuitive argument. The vector r will be a random n-dimensional vector.
We precisely compute the expectation of n', the number of vertices captured, and the expectation
of m', the number of edges in the induced graph of the captured vertices. We first show that when
r is a random normal vector and our projection threshold is c, the expectation of n' - 2m' is at
least n(N(c) - Δ N(ac)) for a certain constant a depending on the vector chromatic number. We also
show that N(ac) grows roughly as N(c)^{a^2}. (For the case of 3-coloring we have a = 2, and thus, by
picking a sufficiently large c, we can find an independent set of size Ω(n N(c)).) (In the following
lemma, n' and m' are functions of c; we do not make this dependence explicit.)
Lemma 7.5 Let a = √(2(k-1)/(k-2)). Then for any c,
E[n' - 2m'] >= n (N(c) - Δ N(ac)).
Proof: We first bound E[n'] from below. Consider a particular vertex i with assigned vector
v_i. The probability that it is in the selected set is just P[⟨v_i, r⟩ >= c]. By Corollary 7.3, ⟨v_i, r⟩ is
normally distributed and thus this probability is N(c). By linearity of expectations, the expected
number of selected vertices satisfies E[n'] = n N(c).
Now we bound E[m'] from above. Consider an edge with endpoint vectors v_1 and v_2. The
probability that this edge is in the induced subgraph is the probability that both endpoints are
selected, which is at most P[⟨v_1 + v_2, r⟩ >= 2c] = N(2c/|v_1 + v_2|),
where the last expression follows from Corollary 7.3 applied to the preceding probability expression.
We now observe that |v_1 + v_2|^2 = 2 + 2⟨v_1, v_2⟩ <= 2(k-2)/(k-1) = (2/a)^2, so 2c/|v_1 + v_2| >= ac.
It follows that the probability that both endpoints of an edge are selected is at most N(ac). If the
graph has maximum degree Δ, then the total number of edges is at most nΔ/2. Thus the expected
number of selected edges, E[m'], is at most nΔ N(ac)/2.
Combining the previous arguments, we deduce that E[n' - 2m'] >= n N(c) - nΔ N(ac).
We now determine the value of c such that Δ N(ac) < N(c). This will give us an expectation of at
least n N(c)/2 in the above lemma. Using the bounds on N(x) in Lemma 7.4, we find that
N(c)/N(ac) >= (1 - 1/c^2) · a · e^{(a^2 - 1) c^2 / 2}.
(The last inequality holds since a >= √2.) Thus if we choose c so that 1 - 1/c^2 >= 1/√2 and e^{(a^2 - 1) c^2 / 2} >= Δ, then we get Δ N(ac) < N(c). Both conditions are satisfied, for sufficiently
large Δ, if we set c = √((2 ln Δ)/(a^2 - 1)). (For
smaller values of Δ we can use the greedy (Δ+1)-coloring algorithm to color the graph
with a bounded number of colors, where the bound is independent of n.)
For this choice of c, we find that the independent set that is found has size at least
n N(c)/2 >= (n/2) φ(c) (1/c - 1/c^3) = Ω(n e^{-c^2/2} / c) = Ω(n / (Δ^{1-2/k} log^{1/2} Δ)),
as desired. This concludes the proof of Lemma 7.1.
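For the 3-coloring case these quantities take a concrete form (a worked instance of the bounds above):
a = 2, c = √((2 ln Δ)/3), e^{-c^2/2} = Δ^{-1/3}, so the independent set has size Ω(n / (Δ^{1/3} log^{1/2} Δ))
and the resulting semicoloring uses Õ(Δ^{1/3}) colors.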
7.3 Adding Wigderson's Technique
To conclude, we now determine absolute approximation ratios independent of Δ. This involves
another application of Wigderson's technique. If the graph has any vertex of large degree, then
we use the fact that its neighborhood is large and is vector (k-1)-colorable, to find a large
independent set in its neighborhood. If no such vertex exists, then the graph has small maximum
degree, so we can use Lemma 7.1 to find a large independent set in the graph. After extracting
such an independent set, we recurse on the rest of the graph. The following lemma describes the
details, and the correct choice of the threshold degree.
Lemma 7.6 For every integer function k = k(n), a vector k-colorable graph on n vertices can
be semicolored with O(n^{1-3/(k+1)} log^{1/2} n) colors by a probabilistic polynomial time algorithm.
Proof: Given a vector k-colorable graph G, we show how to find an independent set of size
Ω(n^{3/(k+1)} / log^{1/2} n) in the graph. Assume, by induction on k, that there exists a constant c > 0
s.t. we can find an independent set of size c i^{3/(k'+1)} / (log^{1/2} i) in any vector k'-chromatic graph on
i vertices, for k' < k. We now prove the inductive assertion for k.
If G has a vertex of degree greater than Δ_k(n) = n^{k/(k+1)}, then we find
a large independent set in the neighborhood of that vertex. By Lemma 4.3, the neighborhood is vector
(k-1)-colorable. Hence we can find in this neighborhood an independent set of size at least
Ω(n^{3/(k+1)} / log^{1/2} n). If G does not have a vertex of degree greater than Δ_k(n),
then by Lemma 7.1, we can find an independent set of size at least Ω(n / (Δ_k(n)^{1-2/k} log^{1/2} n)) = Ω(n^{3/(k+1)} / log^{1/2} n)
in G. This completes the induction.
By now assigning a new color to each such independent set, we find that we can color at least
n/2 vertices, using up at most O(n^{1-3/(k+1)} log^{1/2} n) colors.
The semicolorings guaranteed by Lemmas 7.1 and 7.6 can be converted into colorings using
Lemma 5.1, yielding the following theorem.
Theorem 7.7 Any vector k-colorable graph on n nodes with maximum degree Δ can be colored, in
probabilistic polynomial time, using min{Õ(Δ^{1-2/k}), Õ(n^{1-3/(k+1)})} colors.
8 Duality Theory
The most intensively studied relaxation of a semidefinite programming formulation to date is the
Lovász θ-function [23, 24, 33]. This relaxation of the clique number of a graph led to the first
polynomial-time algorithm for finding the clique and chromatic numbers of perfect graphs. We
now investigate a connection between θ and a close variant of the vector chromatic number.
Intuitively, the clique and coloring problems have a certain "duality" since large cliques prevent
a graph from being colored with few colors. Indeed, it is the equality of the clique and chromatic
numbers in perfect graphs which lets us compute both in polynomial time. We proceed to formalize
this intuition. The duality theory of linear programming has an extension to semidefinite
programming. With the help of Eva Tardos and David Williamson, we have shown that in fact the
θ-function and a close variant of the vector chromatic number are semidefinite programming duals
to one another and are therefore equal.
We first define the variant.
Definition 8.1 Given a graph G = (V, E) on n vertices, a strict vector k-coloring of G is an
assignment of unit vectors u_i from the space R^n to each vertex i in V, such that for any two
adjacent vertices i and j the dot product of their vectors satisfies the equality ⟨u_i, u_j⟩ = -1/(k-1).
As usual we say that a graph is strictly vector k-colorable if it has a strict vector k-coloring.
The strict vector chromatic number of a graph is the smallest real number k for which it has a
strict vector k-coloring. It follows from the definition that the strict vector chromatic number of
any graph is lower bounded by the vector chromatic number.
Theorem 8.1 The strict vector chromatic number of G is equal to θ(G).
Proof: The dual of our strict vector coloring semidefinite program is as follows (cf. [2]):
is positive semidefinite
subject to
By duality, the value of this SDP is -1/(k-1), where k is the strict vector chromatic number. Our
goal is to prove k = θ(G). As before, the fact that {p_ij} is positive semidefinite means we can find
vectors v_i such that p_ij = ⟨v_i, v_j⟩. The last constraint says that the vectors v_i form an orthogonal
labeling [24], i.e. that ⟨v_i, v_j⟩ = 0 for {i, j} in E. We now claim that the above optimization problem
can be reformulated as a maximization of the same objective, normalized by the first constraint,
over all orthogonal labelings {v_i}. To see this, consider an orthogonal labeling and let λ denote
the value of the first constraint in the first formulation of the dual (that
is, the constraint is λ <= 1) and of the denominator in the second formulation. Then in an optimum
solution to the first formulation, we must have λ = 1, since otherwise we can rescale each v_i and
get a feasible solution with a larger objective value. Thus the optimum of the second
formulation is at least as large as that of the first. Similarly, given any optimum {v_i} for the second
formulation, rescaling gives a feasible solution to the first formulation with the same value. Thus the
optima are equal. We now manipulate the second formulation.
min
It follows from the last equation that the vector chromatic number is
However, by the same argument as used to reformulate the dual, this is equal to the problem of
maximizing
orthogonal labelings such that
1. This is simply Lovász's
formulation of the θ-function [24, page 287].
9 The Gap between Vector Colorings and Chromatic Numbers
The performance of our randomized rounding approach seems far from optimum. In this section
we ask why, and show that the problem is not in the randomized rounding but in the gap between
the original problem and its relaxation. We investigate the following question: given a vector k-
colorable graph G, how large can its chromatic number be in terms of k and n? We will show that
a graph with chromatic number
n^{Ω(1)} can have bounded vector chromatic number. This implies
that our technique is tight in that it is not possible to guarantee a coloring with n o(1) colors on all
vector 3-colorable graphs.
Definition 9.1 The Kneser graph K(m, r, t) is defined as follows: the vertices are all possible r-sets
from a universe of size m; and the vertices v_i and v_j are adjacent if and only if the corresponding
r-sets S_i and S_j satisfy |S_i ∩ S_j| <= t.
We will need the following theorem of Milner [36] regarding intersecting hypergraphs. Recall that
a collection of sets is called an antichain if no set in the collection contains another.
Theorem 9.1 (Milner) Let S_1, ..., S_α be an antichain of sets from a universe of size m such
that, for all i and j, |S_i ∩ S_j| >= t.
Then, it must be the case that α <= C(m, ⌈(m+t)/2⌉).
Notice that using all q-sets with q = ⌈(m+t)/2⌉, for example, gives a tight example for this theorem.
The following theorem establishes that the Kneser graphs have a large gap between their vector
chromatic number and chromatic numbers.
Theorem 9.2 Let n = C(m, r) denote the number of vertices of the graph K(m, r, t). For r = m/2 and
t = m/8, the graph K(m, r, t) is vector 3-colorable but has chromatic number at least n^0.0113.
Proof: We prove a lower bound on the Kneser graph's chromatic number χ by establishing an
upper bound on its independence number α. It is easy to verify that the α in Milner's theorem is
exactly the independence number of the Kneser graph. To bound χ, observe that
χ >= n/α >= C(m, m/2) / C(m, 9m/16) >= 2^{(1 - H(9/16)) m - o(m)} >= 2^{0.0113 m}
for large enough m.
The above sequence uses the approximation C(m, βm) = Θ(2^{H(β) m} / (c_β √m))
for every β in (0, 1), where H is the binary entropy function and c_β is a constant depending only on β.
Using the inequality C(m, r) <= 2^m,
we obtain m >= lg n and thus χ >= 2^{0.0113 m} >= n^0.0113.
Finally, it remains to show that the vector chromatic number of this graph is 3. This follows by
associating with each vertex v_i an m-dimensional vector obtained from the characteristic vector of
the set S_i. In the characteristic vector, +1 represents an element present in S_i and -1 represents
elements absent from S_i. The vector associated with a vertex is the characteristic vector of S_i
scaled down by a factor of √m to obtain a unit vector. Given vectors corresponding to sets S_i
and S_j, the dot product gets a contribution of -1/m for coordinates in S_i Δ S_j and +1/m for the
others. (Here A Δ B represents the symmetric difference of the two sets, i.e., the set of elements
which occur in exactly one of A or B.) Thus the dot product of two adjacent vertices, or sets with
intersection at most t, is at most -1/2, as the calculation below shows.
This implies that the vector chromatic number is 3.
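To spell out the bound (using the parameters r = m/2 and t = m/8 assumed in Theorem 9.2 above):
adjacent sets satisfy |S_i ∩ S_j| <= t, so |S_i Δ S_j| = 2(r - |S_i ∩ S_j|) >= 2(r - t) = 3m/4,
and the dot product is (m - 2|S_i Δ S_j|)/m <= (m - 3m/2)/m = -1/2.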
More refined calculations can be used to improve this bound somewhat.
Theorem 9.3 There exists a Kneser graph K(m, r, t) which is 3-vector colorable but has chromatic
number exceeding n^0.016101, where n = C(m, r) denotes the number of vertices in the graph. Further,
for large k, there exists a Kneser graph K(m, r, t) which is k-vector colorable but has chromatic
number exceeding n^0.0717845.
Proof: The basic idea is to improve the bound on the vector chromatic number of the Kneser
graph using an appropriately weighted version of the characteristic vectors. We use weights a and
b to represent presence and absence, respectively, of an element in the set corresponding to a
vertex in the Kneser graph, with appropriate scaling to obtain a unit vector. The value of a which
minimizes the vector chromatic number can be found by differentiation; it is a function of m, r, and t.
Setting the weights in this way yields an upper bound on the vector chromatic number, while
using Milner's Theorem yields a lower bound on the exponent of the chromatic number, again as a
function of m, r, and t.
By plotting these functions, we have shown that there is a choice of parameters with vector chromatic
number 3 and chromatic number at least n^0.016101. For large constant vector chromatic numbers,
the limiting value of the exponent of the chromatic number is roughly 0.0717845.
Conclusions
The Lovász number of a graph has been a subject of active study due to the close connections between
this parameter and the clique and chromatic numbers. In particular, the following "sandwich
theorem" was proved by Lovász [33] (see Knuth [32] for a survey): the θ-function of a graph always
lies between its clique number and its chromatic number.
This led to the hope that the following question may have an affirmative answer. Do there exist ε,
c > 0 such that for every graph G on n vertices, χ(G) <= c · θ(G) · n^{1-ε}?
Our work in this paper proves a weak but non-trivial upper bound on the chromatic number of
G in terms of θ(G). However, this is far from achieving the bound conjectured above, and subsequent
to our work, two results have ended up answering this question negatively. Feige [16] has shown that
for every ε > 0, there exist families of graphs for which χ(G) > θ(G) n^{1-ε}. Interestingly, families of
graphs exhibited in Feige's work use the construction of Section 9 as a starting point. Even more
conclusively, the results of Håstad [26] and Feige and Kilian [18] have shown that no polynomial
time computable function approximates the clique number or chromatic number to within factors
of n^{1-ε}, unless NP=RP. Thus no simple modification of the θ-function is likely to provide a much
better approximation guarantee.
In related results, Alon and Kahale [5] have also been able to use the semidefinite programming
technique in conjunction with our techniques to obtain algorithms for computing bounds on the
clique number of a graph with linear-sized cliques, improving upon some results due to Boppana and
Halldorsson [10]. Independent of our results, Szegedy [41] has also shown that a similar construction
yields graphs with vector chromatic number at most 3 but which are not colorable using n^0.05 colors.
Notice that the exponent obtained from his result is better than the one in Section 9. Alon [3] has
obtained a slight improvement over Szegedy's bound by using an interesting variant of the Kneser
graph construction. Finally, the main algorithm presented here has been derandomized in a recent
work of Mahajan and Ramesh [35].
Acknowledgments
Thanks to David Williamson for giving us a preview of the MAX-CUT result [21] during a visit to
Stanford. We are indebted to John Tukey and Jan Pedersen for their help in understanding multi-dimensional
probability distributions. Thanks to David Williamson and Eva Tardos for discussions
of the duality theory of SDP. We thank Noga Alon, Don Coppersmith, Jon Kleinberg, Laci Lovász
and Mario Szegedy for useful discussions, and the anonymous referees for the careful comments.
--R
Probability Approximations via the Poisson Clumping Heuristic.
Interior point methods in semidefinite programming with applications to combinatorial optimization.
Personal Communication
A spectral technique for coloring random 3-colorable graphs
Approximating the independence number via the Theta function.
Proof Verification and Hardness of Approximation Problems.
Improved Non-approximability Results
New approximation algorithms for graph coloring.
Approximating maximum independent sets by excluding subgraphs.
Coloring heuristics for register alloca- tion
Register allocation and spilling via graph coloring.
Register allocation via coloring.
An Introduction to Probability Theory and Its Applications.
Forbidden Intersections.
Randomized graph products
Interactive proofs and the hardness of approximating cliques.
Zero knowledge and chromatic number.
Improved approximation algorithms for MAX k-CUT and MAX BISECTION
Computers and Intractability: A Guide to the Theory of NP-Completeness
Improved approximation algorithms for maximum cut and satisfiability problems.
Matrix Computations.
The ellipsoid method and its consequences in combinatorial optimization.
Geometric Algorithms and Combinatorial Optimization.
A still better performance guarantee for approximate graph coloring.
Clique is hard to approximate within n^{1-ε}.
Worst case behavior of graph coloring algorithms.
On the Hardness of Approximating the Chromatic Number.
On Syntactic versus Computational Views of Approximability.
Aufgabe 300.
The Art of Computer Programming.
The Sandwich Theorem.
On the Shannon capacity of a graph.
On the hardness of approximating minimization problems.
Derandomizing semidefinite programming based approximation algorithms.
A combinatorial theorem on systems of sets.
On Exact and Approximate Cut Covers of Graphs.
Randomized Algorithms.
Probability Theory.
A note on the θ number of Lovász and the generalized Delsarte bound.
Personal Communication.
Improving the Performance Guarantee for Approximate Graph Coloring.
A Technique for Coloring a Graph Applicable to Large-Scale Optimization Prob- lems
--TR
Improving the performance guarantee for approximate graph coloring
Coloring heuristics for register allocation
A still better performance guarantee for approximate graph coloring
On the hardness of approximating minimization problems
Approximating maximum independent sets by excluding subgraphs
New approximation algorithms for graph coloring
Improved non-approximability results
A spectral technique for coloring random 3-colorable graphs (preliminary version)
Randomized algorithms
Randomized graph products, chromatic numbers, and the Lovász θ-function
Interactive proofs and the hardness of approximating cliques
Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming
An Õ(n^{3/14})-coloring algorithm for 3-colorable graphs
Computers and Intractability
The Art of Computer Programming, 2nd Ed. (Addison-Wesley Series in Computer Science and Information
Zero Knowledge and the Chromatic Number
Derandomizing semidefinite programming based approximation algorithms
Register allocation & spilling via graph coloring
Clique is hard to approximate within n^{1-ε}
--CTR
Yonatan Bilu, Tales of Hoffman: three extensions of Hoffman's bound on the graph chromatic number, Journal of Combinatorial Theory Series B, v.96 n.4, p.608-613, July 2006
Eran Halperin , Ram Nathaniel , Uri Zwick, Coloring k-colorable graphs using relatively small palettes, Journal of Algorithms, v.45 n.1, p.72-90, October 2002
Robert A. Stubbs , Sanjay Mehrotra, Generating Convex Polynomial Inequalities for Mixed 01 Programs, Journal of Global Optimization, v.24 n.3, p.311-332, November 2002
Eran Halperin , Ram Nathaniel , Uri Zwick, Coloring
Amin Coja-Oghlan , Lars Kuhtz, An improved algorithm for approximating the chromatic number of G
Michael Krivelevich , Ram Nathaniel , Benny Sudakov, Approximating coloring and maximum independent sets in 3-uniform hypergraphs, Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, p.327-328, January 07-09, 2001, Washington, D.C., United States
Uriel Feige , Michael Langberg, The RPR2 rounding technique for semidefinite programs, Journal of Algorithms, v.60 n.1, p.1-23, July 2006
Sanjeev Arora , Eden Chlamtac, New approximation guarantee for chromatic number, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Michel X. Goemans , David Williamson, Approximation algorithms for MAX-3-CUT and other problems via complex semidefinite programming, Proceedings of the thirty-third annual ACM symposium on Theory of computing, p.443-452, July 2001, Hersonissos, Greece
Irit Dinur , Elchanan Mossel , Oded Regev, Conditional hardness for approximate coloring, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Miroslav Chlebk , Janka Chlebkov, Complexity of approximating bounded variants of optimization problems, Theoretical Computer Science, v.354 n.3, p.320-338, 4 April 2006
Amin Coja-Oghlan, Solving NP-hard semirandom graph problems in polynomial expected time, Journal of Algorithms, v.62 n.1, p.19-46, January, 2007
Michel X. Goemans , David P. Williamson, Approximation algorithms for MAX-3-CUT and other problems via complex semidefinite programming, Journal of Computer and System Sciences, v.68 n.2, p.442-470, March 2004
Moses Charikar, On semidefinite programming relaxations for graph coloring and vertex cover, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.616-620, January 06-08, 2002, San Francisco, California
D. Sivakumar, Algorithmic derandomization via complexity theory, Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada
Alon , Konstantin Makarychev , Yury Makarychev , Assaf Naor, Quadratic forms on graphs, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
Bernard Chazelle , Carl Kingsford , Mona Singh, The side-chain positioning problem: a semidefinite programming formulation with new rounding schemes, Proceedings of the Paris C. Kanellakis memorial workshop on Principles of computing & knowledge: Paris C. Kanellakis memorial workshop on the occasion of his 50th birthday, p.86-94, June 08-08, 2003, San Diego, California, USA
Arash Behzad , Izhak Rubin, Multiple Access Protocol for Power-Controlled Wireless Access Nets, IEEE Transactions on Mobile Computing, v.3 n.4, p.307-316, October 2004
Amin Coja-Oghlan, Finding Large Independent Sets in Polynomial Expected Time, Combinatorics, Probability and Computing, v.15 n.5, p.731-751, September 2006
Luca Trevisan, Non-approximability results for optimization problems on bounded degree instances, Proceedings of the thirty-third annual ACM symposium on Theory of computing, p.453-461, July 2001, Hersonissos, Greece
Sanjeev Arora , Satish Rao , Umesh Vazirani, Expander flows, geometric embeddings and graph partitioning, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, p.222-231, June 13-16, 2004, Chicago, IL, USA
Per Austrin, Balanced max 2-sat might not be the hardest, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA
Ccile Murat , Vangelis Th. Paschos, On the probabilistic minimum coloring and minimum k-coloring, Discrete Applied Mathematics, v.154 n.3, p.564-586, 1 March 2006
Lars Engebretsen , Piotr Indyk , Ryan O'Donnell, Derandomized dimensionality reduction with applications, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.705-712, January 06-08, 2002, San Francisco, California
Martin Skutella, Convex quadratic and semidefinite programming relaxations in scheduling, Journal of the ACM (JACM), v.48 n.2, p.206-242, March 2001
Bernard Chazelle , Carl Kingsford , Mona Singh, A Semidefinite Programming Approach to Side Chain Positioning with New Rounding Strategies, INFORMS Journal on Computing, v.16 n.4, p.380-392, Fall 2004
Amin Coja-oghlan, The Lovsz Number of Random Graphs, Combinatorics, Probability and Computing, v.14 n.4, p.439-465, July 2005
V. Th. Paschos, Polynomial approximation and graph-coloring, Computing, v.70 n.1, p.41-86, March | approximation algorithms;randomized algorithms;NP-completeness;graph coloring;chromatic number |
275325 | Compiler blockability of dense matrix factorizations. | The goal of the LAPACK project is to provide efficient and portable software for dense numerical linear algebra computations. By recasting many of the fundamental dense matrix computations in terms of calls to an efficient implementation of the BLAS (Basic Linear Algebra Subprograms), the LAPACK project has, in large part, achieved its goal. Unfortunately, the efficient implementation of the BLAS often results in machine-specific code that is not portable across multiple architectures without a significant loss in performance or a significant effort to reoptimize them. This article examines whether most of the hand optimizations performed on matrix factorization codes are unnecessary because they can (and should) be performed by the compiler. We believe that it is better for the programmer to express algorithms in a machine-independent form and allow the compiler to handle the machine-dependent details. This gives the algorithms portability across architectures and removes the error-prone, expensive and tedious process of hand optimization. Although there currently exist no production compilers that can perform all the loop transformations discussed in this article, a description of current research in compiler technology is provided that will prove beneficial to the numerical linear algebra community. We show that the Cholesky and LU factorizations may be optimized automatically by a compiler to be as efficient as the same hand-optimized version found in LAPACK. We also show that the QR factorization may be optimized by the compiler to perform comparably with the hand-optimized LAPACK version on modest matrix sizes. Our approach allows us to conclude that with the advent of the compiler optimizations discussed in this article, matrix factorizations may be efficiently implemented in a BLAS-less form. | Introduction
The processing power of microprocessors and supercomputers has increased dramatically and continues
to do so. At the same time, the demands on the memory system of a computer continue to increase
dramatically in size. Due to cost restrictions, typical workstations cannot use memory chips that
have the latency and bandwidth required by today's processors. Instead, main memory is constructed
of cheaper and slower technology and the resulting delays may be up to hundreds of cycles
for a single memory access.
To alleviate the memory speed problem, machine architects construct a hierarchy of memory
where the highest level (registers) is the smallest and fastest and each lower level is larger but
Research supported by NSF Grant CCR-9120008 and by NSF grant CCR-9409341. The second author was also
supported by the U.S. Department of Energy Contracts DE-FG0f-91ER25103 and W-31-109-Eng-38.
y Department of Computer Science, Michigan Technological University, Houghton MI 49931, carr@cs.mtu.edu.
z Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439
lehoucq@mcs.anl.gov, http://www.mcs.anl.gov/home/lehoucq/index.html.
slower. The bottom of the hierarchy for our purposes is main memory. Typically, one or two levels
of cache memory fall between registers and main memory. The cache memory is faster than main
memory, but is often a fraction of the size. The cache memory serves as a buffer for the most
recently accessed data of a program (the working set). The cache becomes ineffective when the
working set of a program is larger than its size.
The three factorizations considered in this paper, the LU, Cholesky, and QR, are among the
most frequently used by numerical linear algebra and its applications. The first two are used for
solving linear systems of equations while the last is typically used in linear least squares problems.
For square matrices of order n, all three factorizations involve on the order of n 3 floating point
operations for data that needs n 2 memory locations. With the advent of vector and parallel
supercomputers, the efficiency of the factorizations were seen to depend dramatically upon the
algorithmic form chosen for the implementation [16, 18, 32]. These studies concluded that managing
the memory hierarchy is the single most important factor governing the efficiency of the software
implementation computing the factorization.
The motivation of the LAPACK [2] project was to recast the algorithms in the EISPACK [35]
and LINPACK [14] software libraries with block ones. A block form of an algorithm restructures
the algorithm in terms of matrix operations that attempt to minimize the amount of data moved
within the memory hierarchy while keeping the arithmetic units of the machine occupied. LAPACK
blocks many dense matrix algorithms by restructuring them to use the level 2 and 3 BLAS [11,
12]. The motivation for the Basic Linear Algebra Subprograms, BLAS [29], was to provide a set of
commonly used vector operations so that the programmer could invoke the subprograms instead of
writing the code directly. The level 2 and 3 BLAS followed with matrix-vector and matrix-matrix
operations, respectively, that are often necessary for high efficiency across a broad range of high
performance computers. The higher level BLAS better utilize the underlying memory hierarchy. As
with the level 1 BLAS, responsibility for optimizing the higher level BLAS was left to the machine
vendor or another interested party.
This study investigates whether a compiler has the ability to block matrix factorizations. Although
the compiler transformation techniques may be applied directly to the BLAS, it is interesting
to draw a comparison with applying them directly to the factorizations. The benefit is the possibility
of a BLAS-less linear algebra package that is nearly as efficient as LAPACK. For example, in [30],
it was demonstrated that on some computers, the best LU factorization was an inlined approach
even when a highly optimized set of BLAS were available.
We deem an algorithm blockable if a compiler can automatically derive the most efficient block
algorithm (for our study, the one found in LAPACK) from its corresponding machine-independent
point algorithm. In particular, we show that LU and Cholesky factorizations are blockable algo-
rithms. Unfortunately, QR factorization with Householder transformations is not blockable. How-
ever, we show an alternative block algorithm for QR that can be derived using the same compiler
methods as those used for LU and Cholesky factorizations.
This study has yielded two major results. The first, which is detailed in another paper [9],
reveals that the hand loop unrolling performed when optimizing the level 2 and 3 BLAS [11, 12]
is often unnecessary. While the BLAS are useful, the hand optimization that is required to obtain
good performance on a particular architecture may be left to the compiler. Experiments show
that, in most cases, the compiler can automatically unroll loops as effectively as hand optimization.
The second result, which we discuss in this paper, reveals that it is possible to block matrix
factorizations automatically. Our results show that the block algorithms derived by the compiler
are competitive with those of LAPACK [2]. For modest sized matrices (on the order of 200 or less),
the compiler-derived variants are often superior.
We begin our presentation with a review of background material related to compiler optimiza-
tion. Then, we describe a study of the application of compiler analysis to derive the three block
algorithms in LAPACK considered above from their corresponding point algorithms. We present an
experiment comparing the performance of hand-optimized LAPACK algorithms with the compiler-
derived algorithms attained using our techniques. We also briefly discuss other related approaches.
Finally, we summarize our results and draw some general conclusions.
Background
The transformations that we use to create the block versions of matrix factorizations from their
corresponding point versions are well known in the mathematical software community [15]. This
section introduces the fundamental tools that the compiler needs to perform the same transformations
automatically. The compiler optimizes point versions of matrix factorizations through
analysis of array access patterns rather than through linear algebra.
2.1 Dependence
As in vectorizing and parallelizing compilers, dependence is a critical compiler tool for performing
transformations to improve the memory performance of loops. Dependence is necessary for determining
the legality of compiler transformations to create blocked versions of matrix factorizations
by giving a partial order on the statements within a loop nest.
A dependence exists between two statements if there exists a control flow path from the first
statement to the second, and both statements reference the same memory location [26].
ffl If the first statement writes to the location and the second reads from it, there is a true
dependence, also called a flow dependence.
ffl If the first statement reads from the location and the second writes to it, there is an antide-
pendence.
ffl If both statements write to the location, there is an output dependence.
ffl If both statements read from the location, there is an input dependence.
A dependence is carried by a loop if the references at the source and sink (beginning and end) of
the dependence are on different iterations of the loop and the dependence is not carried by an outer
loop [1]. In the loop below, there is a true dependence from A(I,J) to A(I-1,J) carried by the
I-loop, a true dependence from A(I,J) to A(I,J-1) carried by the J-loop and an input dependence
from A(I,J-1) to A(I-1,J) carried by the I-loop.
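A loop nest consistent with the dependences just listed looks as follows (a sketch; the statement
body is illustrative, since any statement that writes A(I,J) while reading A(I-1,J) and A(I,J-1)
produces the same pattern):
      DO 10 I = 2, N
        DO 10 J = 2, N
   10     A(I,J) = A(I-1,J) + A(I,J-1)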
To enhance the dependence information, section analysis can be used to describe the portion
of an array that is accessed by a particular reference or set of references [5, 21]. Sections describe
common substructures of arrays such as elements, rows, columns and diagonals. As an example of
section analysis consider the following loop.
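For illustration, a loop of the kind intended here might be the following (a sketch; the bounds are
assumptions chosen only so that the references touch a partial, triangular section of A rather than
the whole array):
      DO 10 I = 1, 100
        DO 10 J = 1, I
   10     A(I,J) = 0.0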
If A were declared to be 100 × 100, the section of A accessed in the loop would be that shown in the
shaded portion of Figure 1.
Matrix factorization codes require us to enhance basic dependence information because only a
portion of the matrix is involved in the block update. The compiler uses section analysis to reveal
that portion of the matrix that can be block updated. Section 3.1.1 discusses this in detail.
Figure 1: Section of A
3 Automatic Blocking of Dense Matrix Factorizations
In this section, we show how to derive the block algorithms for the LU and the Cholesky factorizations
using current compiler technology and section analysis to enhance dependence information.
We also show that the QR factorization with Householder transformations is not blockable. How-
ever, we present a performance-competitive version of the QR factorization that is derivable by the
compiler.
3.1 LU Factorization
The LU decomposition factors a non-singular matrix A into the product of two matrices, L and U ,
such that A = LU [20]. L is a unit lower triangular matrix and U is an upper triangular matrix.
This factorization can be obtained by multiplying the matrix A by a series of elementary lower
triangular matrices and, when pivoting is used, permutation matrices applied to A. The pivot matrices are used to make the LU factorization a numerically stable
process.
We first examine the blockability of LU factorization. Since pivoting creates its own difficulties,
we first show how to block LU factorization without pivoting. We then show how to handle pivoting.
3.1.1 No Pivoting
Consider the following algorithm for LU factorization.
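The point algorithm has the following shape (a sketch, reconstructed so that the statement labels
10 and 20 used throughout the rest of this section have something concrete to point at; it is the
standard in-place right-looking elimination):
      DO 10 K = 1, N-1
        DO 20 I = K+1, N
C         scale column K by the pivot (statement 20)
   20     A(I,K) = A(I,K) / A(K,K)
        DO 10 J = K+1, N
          DO 10 I = K+1, N
C           rank-1 update of the trailing submatrix (statement 10)
   10       A(I,J) = A(I,J) - A(I,K)*A(K,J)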
This point algorithm is referred to as an unblocked right-looking [13] algorithm. It exhibits poor
cache performance on large matrices. To transform the point algorithm to the block algorithm, the
compiler must perform strip-mine-and-interchange on the K-loop [38, 36]. This transformation is
used to create the block update of A. To apply this transformation, we first strip the K-loop into
fixed size sections (this size is dependent upon the target architecture's cache characteristics and
is beyond the scope of this paper [28, 10]) as shown below.
Here KS is the machine-dependent strip size that is related to the cache size. To complete the
transformation, the KK-loop must be distributed around the loop that surrounds statement 20 and
around the loop nest that surrounds statement 10 before being interchanged to the innermost
position of the loop surrounding statement 10 [37]. This distribution yields:
Unfortunately, the loop is no longer correct. This loop scales a number of values before it updates
them. Dependence analysis allows the compiler to detect and avoid this change in semantics by
recognizing the dependence cycle between A(I,KK) in statement 20 and A(I,J) in statement 10
carried by the KK-loop.
Using basic dependence analysis only, it appears that the compiler would be prevented from
blocking LU factorization due to the cycle. However, enhancing dependence analysis with section
information reveals that the cycle only exists for a portion of the data accessed in both statements.
Figure
2 shows the sections of the array A accessed for the entire execution of the KK-loop. The
section accessed by A(I,KK) in statement 20 is a subset of the section accessed by A(I,J) in
statement 10.
Since the recurrence exists for only a portion of the iteration space of the loop surrounding
statement 10, we can split the J-loop into two loops - one loop iterating over the portion of A
where the dependence cycle exists, and one loop iterating over the portion of A where the cycle
does not exist - using a transformation called index-set splitting [38]. J can be split at the point
to create the two loops as shown below.
Figure 2: Sections of A in LU Factorization
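A sketch of the split loop promised above (a reconstruction; statement 30 is the new copy of the
update covering the columns K through MIN(K+KS-1,N), the portion involved in the dependence cycle):
      DO 10 K = 1, N-1, KS
        DO 10 KK = K, MIN(K+KS-1, N-1)
          DO 20 I = KK+1, N
   20       A(I,KK) = A(I,KK) / A(KK,KK)
          DO 30 J = KK+1, MIN(K+KS-1, N)
            DO 30 I = KK+1, N
   30         A(I,J) = A(I,J) - A(I,KK)*A(KK,J)
          DO 10 J = K+KS, N
            DO 10 I = KK+1, N
   10         A(I,J) = A(I,J) - A(I,KK)*A(KK,J)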
Now the dependence cycle exists between statements 20 and 30, and statement 10 is no longer in
the cycle. Strip-mine-and-interchange can be continued by distributing the KK-loop around the two
new loops as shown below.
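A sketch of the code after distributing the KK-loop around the two new loops (a reconstruction;
the cycle between statements 20 and 30 stays inside the first KK-loop):
      DO 40 K = 1, N-1, KS
        DO 30 KK = K, MIN(K+KS-1, N-1)
          DO 20 I = KK+1, N
   20       A(I,KK) = A(I,KK) / A(KK,KK)
          DO 30 J = KK+1, MIN(K+KS-1, N)
            DO 30 I = KK+1, N
   30         A(I,J) = A(I,J) - A(I,KK)*A(KK,J)
        DO 10 KK = K, MIN(K+KS-1, N-1)
          DO 10 J = K+KS, N
            DO 10 I = KK+1, N
   10         A(I,J) = A(I,J) - A(I,KK)*A(KK,J)
   40 CONTINUE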
To finish strip-mine-and-interchange, we need to move the KK-loop to the innermost position in the
nest surrounding statement 10. However, the lower bound of the I-loop contains a reference to KK.
This creates a triangular iteration space as shown in Figure 3. To interchange the KK and I loops,
the intersection of the line I=KK+1 with the iteration space at the point (K,K+1) must be handled.
Therefore, interchanging the loops requires the KK-loop to iterate over a trapezoidal region with
an upper bound of I-1 until I-1 ? K+KS-1 (see Wolfe, and Carr and Kennedy for more details on
transforming non-rectangular loop nests [38, 8]). This gives the following loop nest.
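A sketch of the resulting block algorithm (a reconstruction; the trapezoidal upper bound
MIN(I-1, K+KS-1) on KK is the one described above):
      DO 40 K = 1, N-1, KS
        DO 30 KK = K, MIN(K+KS-1, N-1)
          DO 20 I = KK+1, N
   20       A(I,KK) = A(I,KK) / A(KK,KK)
          DO 30 J = KK+1, MIN(K+KS-1, N)
            DO 30 I = KK+1, N
   30         A(I,J) = A(I,J) - A(I,KK)*A(KK,J)
        DO 10 J = K+KS, N
          DO 10 I = K+1, N
            DO 10 KK = K, MIN(I-1, K+KS-1)
   10         A(I,J) = A(I,J) - A(I,KK)*A(KK,J)
   40 CONTINUE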
Figure 3: Iteration space of LU factorization (axes I and KK)
At this point, a right-looking [13] block algorithm has been obtained. Therefore, LU factorization
is blockable. The loop nest surrounding statement 10 is a matrix-matrix multiply that
can be further optimized depending upon the architecture. For superscalar architectures whose
performance is bound by cache, outer loop unrolling on non-rectangular loops can be applied to
the J- and I-loops to further improve performance [8, 9]. For vector architectures, a different loop
optimization strategy may be more beneficial [1].
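As a purely illustrative sketch (not taken from the paper), unroll-and-jam by two on the J-loop of the block update looks as follows; the cleanup loop for an odd number of trailing columns is omitted, and unrolling the I-loop as well requires the handling of the triangular KK bound described in [8]:
      DO J = K+KS, N-1, 2
         DO I = K+1, N
            DO KK = K, MIN(I-1, K+KS-1)
*              two columns now share each load of A(I,KK)
               A(I,J)   = A(I,J)   - A(I,KK) * A(KK,J)
               A(I,J+1) = A(I,J+1) - A(I,KK) * A(KK,J+1)
            ENDDO
         ENDDO
      ENDDO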
Many of the transformations that we have used to obtain the block version of LU factorization
are well known in the compiler community and exist in many commercial compilers (e.g., HP,
DEC and SGI). One of the contributions of this study to compiler research is to show how the
addition of section analysis allows a compiler to block matrix factorizations. Note that none of the
aforementioned compilers uses section analysis for this purpose.
3.1.2 Adding Partial Pivoting
Although the compiler can discover the potential for blocking in LU decomposition without pivoting
using index-set splitting and section analysis, the same cannot be said when partial pivoting is added
(see Figure 4 for LU decomposition with partial pivoting). In the partial pivoting algorithm, a new
recurrence exists that does not fit the form handled by index-set splitting. Consider the following
sections of code after applying index-set splitting to the algorithm in Figure 4.
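That listing is not reproduced here; the fragment below is a reconstruction of the two statements the discussion turns on, with the labels 25 and 10 taken from the text. The pivot search, the interchange within the strip columns, the scaling, and the within-strip update are elided.
      DO KK = K, MIN(K+KS-1, N-1)
*        ... pivot search sets IMAX; interchange, scaling and update
*        within the strip columns elided ...
*        statement 25: interchange rows KK and IMAX in the columns
*        beyond the strip
         DO J = K+KS, N
            T = A(KK,J)
            A(KK,J) = A(IMAX,J)
   25       A(IMAX,J) = T
         ENDDO
*        statement 10: update the columns beyond the strip
         DO J = K+KS, N
            DO I = KK+1, N
   10          A(I,J) = A(I,J) - A(I,KK) * A(KK,J)
            ENDDO
         ENDDO
      ENDDO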
The reference to A(IMAX,J) in statement 25 and the reference to A(I,J) in statement 10 access the
same sections. Distributing the KK-loop around both J-loops would convert the true dependence
from A(I,J) to A(IMAX,J) into an antidependence in the reverse direction. The rules for the
preservation of data dependence prohibit the reversing of a dependence direction. This would
seem to preclude the existence of a block analogue similar to the non-pivoting case. However,
a block algorithm that ignores the preventing recurrence and distributes the KK-loop can still be
mathematically derived [15].
Consider the following. If
$$
M_1 = \begin{pmatrix} 1 & 0 \\ -m_1 & I \end{pmatrix}
\qquad\mbox{and}\qquad
P_2 = \begin{pmatrix} 1 & 0 \\ 0 & \tilde{P}_2 \end{pmatrix},
$$
then
$$
P_2 M_1 = \begin{pmatrix} 1 & 0 \\ -\tilde{P}_2 m_1 & I \end{pmatrix} P_2 = \tilde{M}_1 P_2 . \eqno(1)
$$
This result shows that we can postpone the application of the eliminator M_1 until after the application of the permutation matrix P_2 if we also permute the rows of the eliminator. Extending Equation 1 to the entire formulation we have
$$
M_{n-1} P_{n-1} \cdots M_1 P_1 A \;=\; \tilde{M}_{n-1} \cdots \tilde{M}_1 \, (P_{n-1} \cdots P_1) A \;=\; U ,
$$
where each $\tilde{M}_i$ is $M_i$ with its subdiagonal entries reordered by the interchanges applied after step i.
In the implementation of the block algorithm, P_i cannot be computed until step i of the point algorithm. P_i only depends upon the first i columns of A, allowing the computation of k of the P_i's and of the corresponding permuted eliminators, where k is the blocking factor, and then the block application of the permuted eliminators to the remainder of the matrix.
Figure 4 LU Decomposition with Partial Pivoting
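The listing of Figure 4 is not reproduced here; the following sketch, a reconstruction rather than the original code, shows a point LU decomposition with partial pivoting of the kind the figure presents.
      SUBROUTINE LUPP(A, LDA, N, IPVT)
      INTEGER LDA, N, IPVT(N), I, J, K, IMAX
      DOUBLE PRECISION A(LDA,N), T
      DO K = 1, N-1
*        pick pivot - IMAX is the row of largest magnitude in column K
         IMAX = K
         DO I = K+1, N
            IF (ABS(A(I,K)) .GT. ABS(A(IMAX,K))) IMAX = I
         ENDDO
         IPVT(K) = IMAX
*        interchange rows K and IMAX across all columns
         DO J = 1, N
            T = A(K,J)
            A(K,J) = A(IMAX,J)
            A(IMAX,J) = T
         ENDDO
*        scale and update as in the non-pivoting algorithm
         DO I = K+1, N
            A(I,K) = A(I,K) / A(K,K)
         ENDDO
         DO J = K+1, N
            DO I = K+1, N
               A(I,J) = A(I,J) - A(I,K) * A(K,J)
            ENDDO
         ENDDO
      ENDDO
      RETURN
      END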
To install the above result into the compiler, we examine its implications from a data dependence
viewpoint. In the point version, each row interchange is followed by a whole-column update in
which each row element is updated independently. In the block version, multiple row interchanges
may occur before a particular column is updated. The same computations (column updates) are
performed in both the point and block versions, but these computations may occur in different locations
(rows) of the array. The key concept for the compiler to understand is that row interchanges
and whole-column updates are commutative operations. Data dependence alone is not sufficient to
understand this. A data dependence relation maps values to memory locations. It reveals the sequence
of values that pass through a particular location. In the block version of LU decomposition,
the sequence of values that pass through a location is different from the point version, although
the final values are identical. Unless the compiler understands that row interchanges and column
updates commute, LU decomposition with partial pivoting is not blockable.
Fortunately, a compiler can be equipped to understand that operations on whole columns are
commutable with row permutations. To upgrade the compiler, one would have to install pattern
matching to recognize both the row permutations and whole-column updates to prove that the
recurrence involving statements 10 and 25 of the index-set split code could be ignored. Forms
of pattern matching are already done in commercially available compilers. Vectorizing compilers
pattern match for specialized computations such as searching vectors for particular conditions
[31]. Other preprocessors pattern match to recognize matrix multiplication and, in turn, output
a predetermined solution that is optimal for a particular machine. So, it is reasonable to believe
that pivoting can be recognized and implemented in commercial compilers if its importance is
emphasized.
3.2 Cholesky Factorization
When the matrix A is symmetric and positive definite, the LU factorization may be written as A = LDL^T, where L is unit lower triangular and D is the diagonal matrix consisting of the main diagonal of U. The decomposition of A into the product of a triangular matrix and its transpose, A = GG^T with G = LD^{1/2}, is called the Cholesky
factorization. Thus we need only work with the lower triangular half of A and essentially the same
dependence analysis that applies to the LU factorization without pivoting may be used. Note that
with respect to floating point computation, the Cholesky factorization only differs from LU in two
regards. The first is that there are n square roots for Cholesky and the second is that only the
lower half of the matrix needs to be updated.
The strip mined version of the Cholesky factorization is shown below.
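A plausible form of that listing (a reconstruction; only the lower triangle is referenced, and the statement labels match those cited in the next sentence):
      DO K = 1, N, KS
         DO KK = K, MIN(K+KS-1, N)
            A(KK,KK) = SQRT(A(KK,KK))
*           statement 20
            DO I = KK+1, N
   20          A(I,KK) = A(I,KK) / A(KK,KK)
            ENDDO
*           statement 10: update only the lower triangular part
            DO J = KK+1, N
               DO I = J, N
   10             A(I,J) = A(I,J) - A(I,KK) * A(J,KK)
               ENDDO
            ENDDO
         ENDDO
      ENDDO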
As is the case with LU factorization, there is a recurrence between A(I,J) in statement 10 and
A(I,KK) in statement 20 carried by the KK-loop. The data access patterns in Cholesky factorization
are identical to those in LU factorization (see Figure 2), so index-set splitting can be applied to the J-loop at K+KS-1 to allow the KK-loop to be distributed, achieving the LAPACK block algorithm.
3.3 QR Factorization
In this section, we examine the blockability of the QR factorization. First, we show that the
algorithm from LAPACK is not blockable. Then, we give an alternate algorithm that is blockable.
3.3.1 LAPACK Version
The LAPACK point algorithm for computing the QR factorization consists of forming the sequence $A_{k+1} = V_k A_k$, $k = 1, \ldots, n$, where $V_k = I - \tau_k v_k v_k^T$ is an elementary (Householder) reflector. The initial matrix $A_1 = A$ has m rows and n columns, where for this study we assume $m \geq n$. The elementary reflectors $V_k$ update $A_k$ in order that the first k columns of $A_{k+1}$ form an upper triangular matrix. The update is accomplished by performing the matrix-vector multiplication $w = A_k^T v_k$ followed by the rank-one update $A_{k+1} = A_k - \tau_k v_k w^T$.
Efficiency of the implementation of the level 2 BLAS subroutines determines the rate at which the
factorization is computed. For a more detailed discussion of the QR factorization see the book by
Golub and Van Loan [20].
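As a concrete illustration of this update (our sketch, not code from the paper), with the reflector for column K stored in a vector V(K:M) whose first entry is 1, its scalar in TAU, and W an assumed work array, the two operations are:
*     matrix-vector multiplication: w = A(K:M, K+1:N)^T * v
      DO J = K+1, N
         W(J) = 0.0D0
         DO I = K, M
            W(J) = W(J) + V(I) * A(I,J)
         ENDDO
      ENDDO
*     rank-one update: A(K:M, K+1:N) = A(K:M, K+1:N) - TAU * v * w^T
      DO J = K+1, N
         DO I = K, M
            A(I,J) = A(I,J) - TAU * V(I) * W(J)
         ENDDO
      ENDDO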
The LAPACK block QR factorization is an attempt to recast the algorithm in terms of calls to
level 3 BLAS [15]. If the level 3 BLAS are hand-tuned for a particular architecture, the block QR
algorithm may perform significantly better than the point version on large matrix sizes (those that
cause the working set to be much larger than the cache size).
Unfortunately, the block QR algorithm in LAPACK is not automatically derivable by a compiler.
The block application of a number of elementary reflectors involves both computation and storage
that does not exist in the original point algorithm [15]. To block a number of eliminators together,
the following is computed:
$$
V_1 V_2 \cdots V_j \;=\; I - V T V^T ,
$$
where $V = (v_1 \ v_2 \ \cdots \ v_j)$ and $T$ is a j-by-j upper triangular matrix. The compiler cannot derive I - VTV^T from the original point algorithm using dependence information. To illustrate, consider a block of two elementary reflectors:
$$
(I - \tau_1 v_1 v_1^T)(I - \tau_2 v_2 v_2^T)
\;=\; I - (v_1 \ v_2)
\begin{pmatrix} \tau_1 & -\tau_1 (v_1^T v_2) \tau_2 \\ 0 & \tau_2 \end{pmatrix}
(v_1 \ v_2)^T .
$$
The computation of the triangular matrix T above is not part of the original algorithm. Hence, the LAPACK version of block QR factorization is a
different algorithm from the point version, rather than just a reshaping of the point algorithm for
better performance. The compiler can reshape algorithms, but, it cannot derive new algorithms
with data dependence information. In this case, the compiler would need to understand linear
algebra to derive the block algorithm.
In the next section, a compiler-derivable block algorithm for QR factorization is presented. This
algorithm gives comparable performance to the LAPACK version on small matrices while retaining
machine independence.
3.3.2 Compiler-Derivable QR Factorization
Consider the application of j matrices $V_k, \ldots, V_{k+j-1}$ to $A_k$, that is, $A_{k+j} = V_{k+j-1} \cdots V_{k+1} V_k A_k$. The compiler-derivable algorithm, henceforth called cd-QR, only forms columns k through k+j-1 of $A_{k+j}$ and then updates the remainder of the matrix with the j elementary reflectors. The final update of the trailing columns is "rich" in floating point operations that the compiler
organizes to best suit the underlying hardware. Code optimization techniques such as strip-mine-
and-interchange and unroll-and-jam are left to the compiler. The derived algorithm depends upon
the compiler for efficiency in contrast to the LAPACK algorithm that depends on hand optimization
of the BLAS.
Cd-QR can be obtained from the point algorithm for QR decomposition using array section
analysis. For reference, segments of the code for the point algorithm after strip mining of the outer
loop are shown in Figure 5. To complete the transformation of the code in Figure 5 to obtain cd-QR,
the I-loop must be distributed around the loop that surrounds the computation of V i and around
the update before being interchanged with the J-loop. However, there is a recurrence between the
definition and use of A(K,J) within the update section and the definition and use of A(J,I) in
the computation of V_i. The recurrence is carried by the I-loop and appears to prevent distribution.
Figure 5 Strip-Mined Point QR Decomposition
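The body of the figure is not reproduced here; the following reconstruction (ours, not the paper's code) shows one plausible form of the strip-mined point algorithm, using the loop names I, J and K that the surrounding discussion refers to. IS is the strip size, TAU and W are assumed work arrays, the reflector for column I is stored below the diagonal with an implicit leading 1, and each column is assumed to be nonzero.
      DO II = 1, N, IS
         DO I = II, MIN(II+IS-1, N)
*           Generate elementary reflector V-i.
            XNRM = 0.0D0
            DO J = I, M
               XNRM = XNRM + A(J,I) * A(J,I)
            ENDDO
            XNRM = SQRT(XNRM)
            BETA = -SIGN(XNRM, A(I,I))
            TAU(I) = (BETA - A(I,I)) / BETA
            DO J = I+1, M
               A(J,I) = A(J,I) / (A(I,I) - BETA)
            ENDDO
            A(I,I) = BETA
*           Update A(i:m,i+1:n) with V-i.
            DO J = I+1, N
               W(J) = A(I,J)
               DO K = I+1, M
                  W(J) = W(J) + A(K,I) * A(K,J)
               ENDDO
               A(I,J) = A(I,J) - TAU(I) * W(J)
               DO K = I+1, M
                  A(K,J) = A(K,J) - TAU(I) * A(K,I) * W(J)
               ENDDO
            ENDDO
         ENDDO
      ENDDO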
Figure 6 Regions of A Accessed by QR Decomposition
Figure 6 shows the sections of the array A(:,:) accessed for the entire execution of the I-loop.
If the sections accessed by A(J,I) and A(K,J) are examined, a legal partial distribution of the
I-loop is revealed (note the similarity to LU and Cholesky factorization). The section accessed
by A(J,I) (the black region) is a subset of the section accessed by A(K,J) (both the black and
gray regions) and the index-set of J can be split at the last column of the current strip to create a new loop
that executes over the iteration space where the memory locations accessed by A(K,J) are disjoint
from those accessed by A(J,I). The new loop that iterates over the disjoint region can be further
optimized by the compiler depending upon the target architecture.
3.3.3 A Comparison of the Two QR Factorizations
The algorithm cd-QR does not exhibit as much cache reuse as the LAPACK version on large matrices.
The reason is that the LAPACK algorithm is able to take advantage of the level 3 BLAS routine
DGEMM, which can be highly optimized. Cd-QR uses operations that are closer to the level 2 BLAS
and that have worse cache reuse characteristics. Therefore, we would expect the LAPACK algorithm
to perform better on larger matrices as it could possibly take advantage of a highly tuned matrix-matrix
multiply kernel.
3.4 Summary of Transformations
In summary, Table 1 lists the analyses and transformations that must be used by a compiler
to block matrix factorizations. Items 1 and 2 are discussed in Section 2. Items 3 through 7 are
discussed in Section 3.1. Item 8 is discussed in the compiler literature [28, 10]. Item 9 is discussed
in Section 3.1.2. Many commercial compilers (e.g. IBM[34], HP, DEC, and SGI) contain items 1, 3,
4, 5, 6, 7 and 8. However, it should be noted that items 2 and 9 are not likely to be found in any
of today's commercial compilers.
Table 1 Summary of the compiler transformations necessary to block matrix factorizations.
1 Dependence Analysis (Section 2.1 [26, 19])
2 Array Section Analysis (Section 2.1 [5, 21])
3 Strip-Mine-and-Interchange (Section 3.1 [38, 36])
4 Unroll-and-Jam (Section 3.1 [9])
We measured the performance of each block factorization algorithm on four different architectures:
the IBM POWER2 model 590, the HP model 712/80, the DEC Alpha 21164 and the SGI model Indigo2
with a MIPS R4400. Table 2 summarizes the characteristics of each machine. These architectures
were chosen because they are representative of the typical high-performance workstation.
Table 2 Machine Characteristics
Machine Clock Speed Peak Mflops Cache Size Associativity Line Size
DEC Alpha 250MHz 500 8KB 1
On all the machines, we used the vendor's optimized BLAS. For example, on the IBM POWER2
and SGI Indigo2, we linked with the libraries -lessl (Engineering and Scientific Subroutine Library
[22]) and -lblas, respectively.
Our compiler-optimized versions were obtained by hand using the algorithms in the literature.
The reason that this process could not be fully automated is a current deficiency in the
dependence analyzer of our tool [4, 6]. Table 3 lists the FORTRAN compiler and the flags used to
compile our factorizations.
Table 3 FORTRAN compiler and switches.
Machine Compiler Flags
HP 712 f77 v9.16 -O
DEC Alpha f77 v3.8 -O5
SGI Indigo2 f77 v5.3 -O3 -mips2
In Tables 4-6, performance is reported in double precision megaflops. The number of floating point operations for the LU, QR and Cholesky factorizations are 2/3 n^3, 2n^2(m - n/3), and 1/3 n^3, respectively, where m and n are the number of rows and columns, respectively. We used the LAPACK
subroutines dgetrf,dgeqrf and dpotrf for the LU, QR and Cholesky factorizations, respectively.
Each factorization routine is run with block sizes of 1, 2, 4, 8, 16, 24, 32, 48, and 64. 1 In each
table, the columns should be interpreted as follows:
LABlk: The best blocking factor for the LAPACK algorithm.
LAMf: The best megaflop rate for the LAPACK algorithm (corresponding to LABlk).
CBlk: The best blocking factor for the compiler-derived algorithm.
CMf: The best megaflop rate for the compiler-derived algorithm (corresponding to CBlk).
In order to explicitly set the block size for the LAPACK factorizations, we have modified the LAPACK
integer function ILAENV to include a common block.
All the benchmarks were run when the computer systems were free of other computationally intensive jobs, and each was typically run two or more times. The differences in time were within 5%.
4.1 LU Factorization
Table 4 shows the performance of the compiler-derived version of LU factorization with pivoting
versus the LAPACK version.
Table 4 LU Performance on IBM, HP, DEC and SGI
Size LABlk LAMf CBlk CMf Speedup LABlk LAMf CBlk CMf Speedup
100x100
200x200
300x300
DEC Alpha SGI Indigo2
Size LABlk LAMf CBlk CMf Speedup LABlk LAMf CBlk CMf Speedup
100x100
200x200
300x300
500x500
1 Although the compiler can effectively choose blocking factors automatically, we do not have an implementation of
the available algorithms [28, 10].
The IBM POWER2 results show that as the size of the matrix increases to 100, the compiler-derived algorithm's edge over LAPACK diminishes; for the remaining matrix sizes, the compiler-derived algorithm stays within 7% of the LAPACK one. Clearly, the FORTRAN compiler on the
IBM POWER2 is able to nearly achieve the performance of the hand optimized BLAS available in
the ESSL library for the block matrix factorizations.
For the HP 712, Table 4 indicates an unexpected trend. The compiler-derived version performs
better on all matrix sizes except 50x50, with dramatic improvements as the matrix size increases.
This indicates that the hand-optimized DGEMM does not efficiently use the cache. We have optimized
for cache performance in our compiler derived algorithm. This is evident when the size of the
matrices exceeds the size of the cache.
The significant performance degradation for the 50x50 case is interesting. For a matrix this
small, cache performance is not a factor. We believe the performance difference comes from the
way code is generated. For superscalar architectures like the HP, a code generation scheme called
software pipelining is used to generate highly parallel code [27, 33]. However, software pipelining
requires a lot of registers to be successful. In our code, we performed unroll-and-jam to improve
cache performance. However, unroll-and-jam can significantly increase register pressure and cause
software pipelining to fail [7]. On our version of LU decomposition, the HP compiler diagnostics
reveal that software pipelining failed on the main computational loop due to high register pressure.
Given that the hand-optimized version is highly software pipelined, the result would be a highly
parallel hand-optimized loop and a not-as-parallel compiler-derived loop. At matrix size 25x25,
there are not enough loop iterations to expose the difference. At matrix size 50x50, the difference is
significant. At matrix sizes 75x75 and greater, cache performance becomes a factor. At this time,
there are no known compiler algorithms that deal with the trade-offs between unroll-and-jam and
software pipelining. This is an important area of future research.
For the DEC Alpha, Table 4 shows that our algorithm performs as well as or better than the
LAPACK version on matrices of order 100 or less. After size 100x100, the second-level cache on the
Alpha, which is 96K, begins to overflow. Our compiler-derived version is not blocked for multiple
levels of cache, while the LAPACK version is blocked for 2 levels of cache [25]. Thus, the compiler-
derived algorithm suffers many more cache misses in the level-2 cache than the LAPACK version. It
is possible for the compiler to perform the extra blocking for multiple levels of cache, but we know
of no compiler that currently does this. Additionally, the BLAS algorithm utilized the following
architectural features that we do not [25]:
ffl The use of temporary arrays to eliminate conflicts in the level-1 direct-mapped cache and the
translation lookaside buffer [28, 10].
ffl The use of the memory-prefetch feature on the Alpha to hide latency between cache and
memory.
Although each of these optimizations could be done in the DEC product compiler, they are not. Each
optimization would give additional performance to our algorithm. Using a temporary buffer may
provide a small improvement, but prefetching can provide a significant performance improvement
because the latency to main memory is on the order of 50 cycles. Prefetches cannot be issued in
the source code, so we were unable to try this optimization.
The results on the SGI are roughly similar to those for the DEC Alpha. It is difficult for us to
determine exactly why our performance is lower on smaller matrices because we have no diagnostic
tools. It could again be software pipelining or some architectural feature of which we are not aware.
We do note that the code generated by the SGI compiler is worse than expected. Additionally, the
2-level cache comes into play on the larger matrices.
Comparing the results on the IBM POWER2 and the multi-level cache hierarchy systems (DEC
and SGI), shows that our compiler-derived versions are very effective for a single-level cache. It is
evident that more work needs to be done in optimizing the update portion of the factorizations to
obtain the same relative performance as a single-level cache system on a multi-level cache system.
4.2 Cholesky Factorization
Table 5 shows the performance of the compiler-derived version of Cholesky factorization versus the LAPACK version.
The IBM POWER2 results show that as the size of the matrix increases to 200, the compiler-derived algorithm's edge over the LAPACK version diminishes; for the remaining matrix sizes, the compiler-derived algorithm stays within 8% of the LAPACK one. As was the case for the LU
factorization, the compiler version performs very well. Only for the large matrix sizes does the
highly tuned BLAS used by the LAPACK factorization cause LAPACK to be faster. Table 5 shows
a slightly irregular pattern for the block size used by the compiler derived algorithm. We remark
that for matrix sizes 50 through 200, the MFLOP rate for the two block sizes 8 and 16 were nearly
equivalent.
On the HP, we observe the same pattern as we did for LU factorization. When cache performance
is critical, we outperform the LAPACK version. When cache performance is not critical, the LAPACK
version gives better results, except when the matrix is small. Our algorithm performed much better
on the 25x25 matrix most likely due to the high overhead associated with software pipelining on
short loops. Since Cholesky factorization has fewer operations than LU factorization in the update
portion of the code, we would expect a high overhead associated with small matrices. Also, the
effects of cache are not seen until larger matrix sizes (compared to LU factorization). This is again
due to the smaller update portion of the factorization.
Table 5 Cholesky Performance on IBM, HP, DEC and SGI
Size LABlk LAMf CBlk CMf Speedup LABlk LAMf CBlk CMf Speedup
50x50
100x100
200x200
300x300
On the DEC, we outperform the LAPACK version up until the 500x500 matrix. This is the same
pattern as seen in LU factorization except that it takes longer to appear. This is due to the smaller
size of the update portion of the factorization.
The results on the SGI show that the compiler-derived version performs better than the LAPACK version for matrix sizes up to 100. As the matrix size increases from 150 to 500, the compiler-derived algorithm's performance decreases by 15% compared to that of the LAPACK factorization. We
believe that this has to do with the 2-level cache hierarchy.
We finally remark that although Table 5 shows a similar pattern to Table 4, there are differences. Recall that, as explained in Section 3.2, the Cholesky factorization only has approximately half of the floating point operations of LU since it neglects the strict (above the diagonal) upper triangular
portion of the matrix during the update phase. Moreover, there is the computation of the square
root of the diagonal element during each of the n iterations.
4.3 QR Factorization
Table 6 shows the performance of the compiler-derived version of QR factorization versus the LAPACK
version. Since the compiler-derived algorithm for block QR factorization has worse cache
performance than the LAPACK algorithm, but O(n^2) less computation, we expect worse performance
when the cache performance becomes critical. In plain words, the LAPACK algorithm uses
the level 3 BLAS matrix multiply kernel DGEMM but the compiler derived algorithm can only utilize
operations similar to the level 2 BLAS.
On the HP, we see the same pattern as before. However, since the cache performance of our
algorithm is not as good as the LAPACK version, we see a much smaller improvement when our
algorithm has superior performance. Again, we also see that when the matrix sizes stay within the
limits of the cache, LAPACK outperforms our algorithm.
Table 6 QR Performance on IBM and HP
Size LABlk LAMf CBlk CMf Speedup LABlk LAMf CBlk CMf Speedup
28 0.75
300x300
For the other three machines, we see the same pattern as on the previous factorizations except
that our degradations are much larger for large matrices. Again, this is due to the inferior cache
performance of cd-QR. An interesting trend revealed by Table 6 is that the IBM POWER2 has a
slightly irregular block size pattern as the matrix size increases. We remark that only for matrix
sizes less than or equal to 75, is there interesting behavior. For the first two matrix sizes, the
optimal block size is larger than the dimension of the matrix. This implies that no blocking was
performed; the level 3 BLAS was not used by the LAPACK algorithm. For the matrix size 75,
the rate achieved by the LAPACK algorithm with block size 8 was within 4-6 % of the unblocked
factorization.
5 Related Work
We briefly review and summarize other investigations parallel to ours. It is evident that there is an active body of work aimed at removing the substantial hand coding associated with efficient dense linear
algebra computations.
5.1 Blocking with a GEMM based Approach
Since LAPACK depends upon a set of highly tuned BLAS for efficiency, there remains the practical
question of how they should be optimized. As discussed in the introduction, an efficient set of BLAS
requires a non-trivial effort in software engineering. See [23] for a discussion on software efforts to
provide optimal implementations of the level 3 BLAS.
An approach that is both efficient and practical is the GEMM-based one proposed by Kågström,
Ling and Van Loan [23] in a recent study. Their approach advocates optimizing the general matrix
multiply and add kernel GEMM and then rewriting the remainder of the level 3 BLAS in terms of
calls to this kernel. The benefit of their approach is that only this kernel needs to be optimized-
whether by hand or the compiler. Their thorough analysis highlights the many issues that must
be considered when attempting to construct a set of highly tuned BLAS. Moreover, they provide
high quality implementations of the BLAS for general use as well as a performance evaluation
benchmark [24].
We emphasize that our study examines only whether the necessary optimizations may be left to the compiler, and also whether they should be applied directly to the matrix factorizations themselves. What is beyond the ability of the compiler is recasting the level 3 BLAS in terms of calls to GEMM.
5.2 PHiPAC
Another recent approach is the methodology expressed for developing Portable High-Performance matrix-vector libraries in ANSI C (PHiPAC) [3]. The project is motivated by many of the reasons outlined in our introduction. There is, however, a major difference in approach, so it is not a directly parallel study. As in the GEMM-based approach, they seek to support the BLAS and aim
to be more efficient than the vendor supplied BLAS. However, unlike our study or the GEMM
one, PHiPAC assumes that ANSI C is the programming language. Because of various C semantics
PHiPAC instead seeks to provide parameterized generators that produce the optimized code. See
the report [3] for a discussion on the inhibitors in C that prevent an optimizing compiler from
generating efficient code.
5.3 Auto-blocking Matrix Multiplication
Frens and Wise present an alternative algorithm for matrix-matrix multiply that is based upon a
quadtree representation of matrices [17]. Their solution is recursive and suffers from the lack of
interprocedural optimization in most commercial compilers. Their results show that when paging
becomes a problem on SGI multiprocessor systems, the quadtree algorithm has superior performance
to the BLAS 3. On smaller problems, the quadtree algorithm has inferior performance. In relation
to our work, we could not expect the compiler to replace the BLAS 3 with the quadtree approach
when appropriate as it is a change in algorithm rather than a reshaping. In addition, the specialized
storage layout used by Frens and Wise calls into question the effect on an entire program.
6 Summary
We have set out to determine whether a compiler can automatically restructure matrix factorizations
well enough to avoid the need for hand optimization. To that end, we have examined a
collection of implementations from LAPACK. For each of these programs, we determined whether a
plausible compiler technology could succeed in obtaining the block version from the point algorithm.
The results of this study are encouraging: we have demonstrated that there exist implementable
compiler methods that can automatically block matrix factorization codes to achieve algorithms
that are competitive with those of LAPACK. Our results show that for modest-sized matrices on
advanced microprocessors, the compiler-derived variants are often superior. These matrix sizes are
typical on workstations.
Given that future machine designs are certain to have increasingly complex memory hierarchies,
compilers will need to adopt increasingly sophisticated memory-management strategies so that
programmers can remain free to concentrate on program logic. Given the potential for performance
attainable with automatic techniques, we believe that it is possible for the user to express machine-independent
point matrix factorization algorithms without the BLAS and still get good performance
if compilers adopt our enhancements to already existing methods.
Acknowledgments
Ken Kennedy and Richard Hanson provided the original motivation for this work. Ken Kennedy,
Keith Cooper and Danny Sorensen provided financial support for this research when it was begun
at Rice University.
We also wish to thank Tomas Lofgren and John Pieper of DEC for their help with obtaining
the DXML libraries and diagnosing the compiler's performance, respectively. We also thank Per Ling of the University of Umeå and Ken Stanley of the University of California, Berkeley, for their help with the benchmarks and discussions.
--R
Automatic translation of Fortran programs to vector form.
A portable
A parallel programming environment.
Analysis of interprocedural side effects in a parallel programming environment.
Improving software pipelining with unroll-and-jam
Compiler blockability of numerical algorithms.
Improving the ratio of memory operations to floating-point operations in loops
Tile size selection using cache organization.
A set of level 3 Basic Linear Algebra Subprograms.
An extended set of Fortran Basic Linear Algebra Subprograms.
Solving Linear systems on Vector and shared memory computers.
Solving Linear Systems on Vector and Shared-Memory Computers
Implementing linear algebra algorithms for dense matrices on a vector pipeline machine.
Parallel algorithms for dense linear algebra computations.
Practical dependence testing.
Matrix Computations.
An implementation of interprocedural bounded regular section analysis.
The Structure of Computers and Computations Volume
Software pipelining: An effective scheduling technique for VLIW machines.
The cache performance and optimizations of blocked algorithms.
Basic linear algebra subprograms for fortran usage.
Implementing efficient and portable dense matrix factorizations.
A comparative study of automatic vectorizing compilers.
Introduction to Parallel and Vector Solutions of Linear Systems.
Register allocation for software pipelined loops.
Automatic Selection of High Order Transformations in the IBM XL Fortran Compilers.
A data locality optimizing algorithm.
Advanced loop interchange.
Iteration space tiling for memory hierarchies.
--TR
Automatic translation of FORTRAN programs to vector form
An extended set of FORTRAN basic linear algebra subprograms
Software pipelining: an effective scheduling technique for VLIW machines
Introduction to Parallel & Vector Solution of Linear Systems
Analysis of interprocedural side effects in a parallel programming environment
Parallel algorithms for dense linear algebra computations
A set of level 3 basic linear algebra subprograms
The cache performance and optimizations of blocked algorithms
Practical dependence testing
A data locality optimizing algorithm
Register allocation for software pipelined loops
Compiler blockability of numerical algorithms
Memory-hierarchy management
Improving the ratio of memory operations to floating-point operations in loops
Tile size selection using cache organization and data layout
Matrix computations (3rd ed.)
Auto-blocking matrix-multiplication or tracking BLAS3 performance from source code
Automatic selection of high-order transformations in the IBM XL FORTRAN compilers
Basic Linear Algebra Subprograms for Fortran Usage
Solving Linear Systems on Vector and Shared Memory Computers
Structure of Computers and Computations
An Implementation of Interprocedural Bounded Regular Section Analysis
Iteration Space Tiling for Memory Hierarchies
Implementing Efficient and Portable Dense Matrix Factorizations
Improving Software Pipelining With Unroll-and-Jam
--CTR
Mahmut Kandemir , J. Ramanujam , Alok Choudhary, Improving Cache Locality by a Combination of Loop and Data Transformations, IEEE Transactions on Computers, v.48 n.2, p.159-167, February 1999
Nikolay Mateev , Vijay Menon , Keshav Pingali, Fractal symbolic analysis, Proceedings of the 15th international conference on Supercomputing, p.38-49, June 2001, Sorrento, Italy
Kågström , Per Ling , Charles van Loan, GEMM-based level 3 BLAS: high-performance model implementations and performance evaluation benchmark, ACM Transactions on Mathematical Software (TOMS), v.24 n.3, p.268-302, Sept. 1998
Steve Carr , Soner Önder, A case for a working-set-based memory hierarchy, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Jeremy D. Frens , David S. Wise, Auto-blocking matrix-multiplication or tracking BLAS3 performance from source code, ACM SIGPLAN Notices, v.32 n.7, p.206-216, July 1997
Qing Yi , Vikram Adve , Ken Kennedy, Transforming loops to recursion for multi-level memory hierarchies, ACM SIGPLAN Notices, v.35 n.5, p.169-181, May 2000
Qing Yi , Ken Kennedy , Haihang You , Keith Seymour , Jack Dongarra, Automatic blocking of QR and LU factorizations for locality, Proceedings of the 2004 workshop on Memory system performance, June 08-08, 2004, Washington, D.C.
Vijay Menon , Keshav Pingali , Nikolay Mateev, Fractal symbolic analysis, ACM Transactions on Programming Languages and Systems (TOPLAS), v.25 n.6, p.776-813, November
Qing Yi , Ken Kennedy , Vikram Adve, Transforming Complex Loop Nests for Locality, The Journal of Supercomputing, v.27 n.3, p.219-264, March 2004 | LAPACK;LU decomposition;QR decomposition;BLAS;cholesky decomposition;cache optimization |
275339 | Component Based Design of Multitolerant Systems. | AbstractThe concept of multitolerance abstracts problems in system dependability and provides a basis for improved design of dependable systems. In the abstraction, each source of undependability in the system is represented as a class of faults, and the corresponding ability of the system to deal with that undependability source is represented as a type of tolerance. Multitolerance thus refers to the ability of the system to tolerate multiple fault-classes, each in a possibly different way. In this paper, we present a component based method for designing multitolerance. Two types of components are employed by the method, namely detectors and correctors. A theory of detectors, correctors, and their interference-free composition with intolerant programs is developed, that enables stepwise addition of components to provide tolerance to a new fault-class while preserving the tolerances to the previously added fault-classes. We illustrate the method by designing a fully distributed multitolerant program for a token ring. | Introduction
Dependability is an increasingly relevant system-level requirement that encompasses the ability of
a system to deliver its service in a desirable manner, in spite of the occurrence of faults, security
intrusions, safety hazards, configuration changes, load variations, etc. Achieving this ability is
difficult, essentially because engineering a system for the sake of one dependability property, say
availability in the presence of faults, often interferes with another desired dependability property,
say security in presence of intrusions.
In this paper, to effectively reason about multiple dependability properties, we introduce the concept
of multitolerance. Each source of undependability is treated as a class of "faults" and each
dependability property is treated as a type of "tolerance". Thus, multitolerance refers to the ability
of a system to tolerate multiple classes of faults, each in a possibly different way.
Although there are many examples of multitolerant systems in practice [1 \Gamma 3] and there exists a
growing body of research that presents instances of multitolerant systems [4 \Gamma 9], we are not aware
of previous work that has considered the systematic design of multitolerance. Towards redressing
this deficiency, we present in this paper a formal method for the design of multitolerant systems.
To deal with the difficulty of interference between multiple types of tolerances, our method is based
on the use of components. More specifically, a multitolerant system designed using the method
consists of an intolerant system and a set of components, one for each desired type of tolerance.
Thus, the method reduces the complexity of design to that of designing the components and that of
correctly adding them to the intolerant system. Moreover, it enables reasoning about each type of
tolerance and the interferences between different types of tolerance at the level of the components
themselves, as opposed to involving the whole system.
The method further reduces the complexity of adding multiple components to an intolerant system
by adding each component in a stepwise fashion. In other words, the method considers the fault-
classes that an intolerant system is subject to in some fixed total order, say F1 :: Fn. A component
is added to the intolerant system so that it tolerates F1 in a desirable manner. The resulting system
is then augmented with another component so that it tolerates F2 in a desirable manner and its
tolerance to F1 is preserved. This process of adding a new tolerance and preserving all old tolerances
is repeated until all n fault-classes are accounted for. It follows that the final system is multitolerant
with respect to F1 :: Fn.
Components used in our method are built out of two building blocks, namely detectors and correctors,
that occur -albeit implicitly- in fault-tolerant systems. Intuitively, a detector "detects" whether
some predicate is satisfied by the system state; and a corrector detects whether some predicate is
satisfied by the system state and also "corrects" the system state in order to satisfy that predicate
whenever the predicate is not satisfied. Detectors can be used to ensure that each step of the system
is "safe" with respect to its "problem specification", while correctors can be used to ensure that
the system eventually reaches a state from where its problem specification is (re)satisfied. Thus,
in this paper, we are also able to show that components built out of detectors are sufficient for
designing "fail-safe" tolerance in programs, that components built out of correctors are sufficient
for designing "nonmasking" tolerance in programs, and that components built out of both detectors
and correctors are sufficient for designing "masking" tolerance in programs. (We will formally define
each of these terms shortly.)
The rest of this paper is organized as follows. In Section 2, we give formal definitions of programs,
their problem specifications, their faults, and their fault-tolerances. In Sections 3 and 4, we define
detectors and correctors, discuss their role in the design of fault-tolerant systems, and illustrate
how they can be designed hierarchically and efficiently. In Section 5, we present a theory of non-interference
for composing detectors and correctors with intolerant programs. In Section 6, we
define multitolerance and present our formal method for designing multitolerance. In Section 7, we
illustrate our method by designing a multitolerant token ring program. Finally, we discuss some
issues raised by our methodology in Section 8 and make concluding remarks in Section 9.
Preliminaries
In this section, we give formal definitions of programs, problem specifications, faults, and fault-
tolerances. The formalization of programs is a standard one, that of specifications is adapted from
Alpern and Schneider [9], and that of faults and fault-tolerances is adapted from earlier work of
the first author with Mohamed Gouda [10] (with the exception of the portion on fail-safe tolerance
which is new).
Programs.
A program is a set of variables and a finite set of actions. Each variable has a predefined
nonempty domain. Each action has a unique name, and is of the form:
The guard of each action is a boolean expression over the program variables. The statement of
each action is such that its execution atomically and instantaneously updates zero or more program
variables.
Notation. To conveniently write an action as a restriction of another action, we use the notation
hname
to define an action hname 0 i whose guard is obtained by restricting the guard of action hnamei
with hguard 0 i, and whose statement is identical to the statement of action hnamei. Operationally
speaking, hname 0 i is executed only if the guard of hnamei and the guard hguard 0 i are both true.
Likewise, to conveniently write a program as a restriction of another program, we use the notation
to define a program consisting of the set of actions hguardi - ac for each action ac of hprogrami.
Let p be a program.
Definition (State). A state of p is defined by a value for each variable of p, chosen from the
predefined domain of the variable.
Definition (State predicate). A state predicate of p is a boolean expression over the variables of p.
Note that a state predicate may be characterized by the set of all states in which the boolean
expression (of the state predicate) is true.
Definition (Enabled). An action of p is enabled in a state iff its guard (which is a state predicate)
is true in that state.
Definition (Computation). A computation of p is a fair, maximal sequences of states s
that for each j, j ? 0, s j is obtained from state s j \Gamma1 by executing an action of p that is enabled in
the state s j \Gamma1 . Fairness of the sequence means that each action in p that is continuously enabled
along the states in the sequence is eventually chosen for execution. Maximality of the sequence
means that if the sequence is finite then the guard of each action in p is false in the final state, i.e.,
the sequence cannot be further extended by executing an enabled action in its final state.
Problem specification.
problem specification is a set of sequences of states that is suffix closed and fusion
closed. Suffix closure of the set means that if a state sequence oe is in that set then so are all the
suffixes of oe. Fusion closure of the set means that if state sequences hff; x; fli and hfi; x; ffii are in that
set then so are the state sequences hff; x; ffii and hfi; x; fli, where ff and fi are finite prefixes of state
sequences, fl and ffi are suffixes of state sequences, and x is a program state.
Note that the state sequences in a problem specification may be finite or infinite. Following Alpern
and Schneider [9], it can be shown that any problem specification is the intersection of some "safety"
specification that is suffix closed and fusion closed and some "liveness" specification, defined next.
Definition (Safety). A safety specification is a set of state sequences that meets the following
condition: for each state sequence oe not in that set, there exists a prefix ff of oe, such that for all
state sequences fi, fffi is not in that set (where fffi denotes the concatenation of ff and fi).
Definition (Liveness). A liveness specification is a set of state sequences that meets the following
condition: for each finite state sequence ff, there exists a state sequence fi such that fffi is in that
set.
Defined below are some examples of problem specifications, namely, generalized pairs, closures,
and converges to. Let S and R be state predicates.
Definition (Generalized Pairs). A generalized pair (fSg; fRg) is a set of all state sequences, s
such that for each j; j - 0, if S is true at s j then R is true at s j+1 .
Definition (Closure). The closure of S, S , is the set of all state sequences s
is true at s j then for all k, k - j, S is true at s k .
Definition (Converges to). S converges to R is the set of all state sequences s in the
intersection of S and R such that for all is true at s i then there exists k, k- i, where
R is true at s k .
Note that (fSg; converges to S.
Program correctness with respect to a problem specification.
Let SPEC be a problem specification.
Definition (Satisfies). p satisfies SPEC for S iff each computation of p that starts at a state where
S is true is in SPEC.
Definition (Violates). p violates SPEC for S iff it is not the case that p satisfies SPEC for
there exists a computation of p that starts at a state where S is true and is not in SPEC.
For convenience in reasoning about programs that satisfy special cases of problem specifications, we
introduce the following notational abbreviations.
Definition (Generalized Hoare-triples). fSg p fRg iff p satisfies the generalized pair (fSg; fRg) for
true.
Definition (Closed in p). S is closed in p iff p satisfies S for true.
Note that it is trivially true that the state predicates true and false are closed in p.
Definition (Converges to in p). S converges to R in p iff p satisfies S converges to R for true.
Informally speaking, proving the correctness of p with respect to SPEC involves showing that p
satisfies SPEC for some state predicate S. (Of course, to be useful, the predicate S should not
be false.) Now, since problem specifications are suffix closed, we may without loss of generality
restrict our attention to proving that p satisfies the problem specification for some closed state
predicate S. We call such a state predicate S an invariant of p. Invariants enable proofs of program
correctness that eschew operational arguments about long (sub)sequences of states, and are thus
methodologically advantageous.
Definition (Invariant). S is an invariant of p for SPEC iff S is closed in p and p satisfies SPEC for
S.
Notational remark. Henceforth, whenever the problem specification is clear from the context, we
will omit it; thus, "S is an invariant of p" abbreviates "S is an invariant of p for SPEC ".
One way to calculate an invariant of p is to characterize the set of states that are reachable under
execution of p starting from some designated "initial" states. Experience shows, however, that for
ease of proofs of program correctness one may prefer to use invariants of p that properly include
such a reachable set of states. This is a key reason why we have not included initial states in the
definition of programs.
Techniques for the design of invariants have been articulated by Dijkstra [11], using the notion of
auxiliary variables, and by Gries [12], using the heuristics of state predicate ballooning and shrinking.
Techniques for the mechanical calculation of invariants have been discussed by Alpern and Schneider
[13].
Faults.
The faults that a program is subject to are systematically represented by actions whose execution
perturbs the program state. We emphasize that such representation is possible notwithstanding the
type of the faults (be they stuck-at, crash, fail-stop, omission, timing, performance, or Byzantine),
the nature of the faults (be they permanent, transient, or intermittent), or the ability of the program
to observe the effects of the faults (be they detectable or undetectable).
Definition (Fault-class). A fault-class for p is a set of actions over the variables of p.
Let T be a state predicate, S an invariant of p, and F a fault-class for p.
Definition (Preserves). An action ac preserves a state predicate T iff execution of ac in any state
where T is true results in a state where T is true.
is an F -span of p for S iff S ) T , T is closed in p, and each action of F
preserves T .
Thus, at each state where an invariant S of p is true, an F -span T of p for S is also true. Also, like
S, T is also closed in p. Moreover, if any action in F is executed in a state where T is true, the
resulting state is also one where T is true. It follows that for all computations of p that start at
states where S is true, T is a boundary in the state space of p up to which (but not beyond which)
the state of p may be perturbed by the occurrence of the actions in F .
Notational remark. Henceforth, we will ambiguously abbreviate the phrase "each action in F
preserves T " by "T is closed in F ". And, whenever the program p is clear from the context, we
will omit it; thus, "S is an invariant" abbreviates "S is an invariant of p" and "F is a fault-class"
abbreviates "F is a fault-class for p".
Fault-tolerances.
We are now ready to define what it means for a program p with an invariant S to tolerate a fault-class
F .
Definition -tolerant for S). p is F -tolerant for S iff there exists a state predicate T that satisfies
the following three conditions:
ffl At any state where S is true, T is also true. (In other words, S ) T .)
ffl Starting from any state where T is true, if any action in p or F is executed, the resulting state
is also one where T is true. (In other words, T is closed in p and each action in F preserves
.)
ffl Starting from any state where T is true, every computation of p alone eventually reaches a
state where S is true. (In other words, since S and T are closed in p, T converges to S in
p.)
This definition may be understood as follows. The state predicate T is an F -span of p for S- a
boundary in the state space of p up to which (but not beyond which) the state of p may be perturbed
by the occurrence of faults in F . If faults in F continue to occur, the state of p remains within this
boundary. When faults in F stop occurring, p converges from this boundary to the stricter boundary
in the state space where the invariant S is true.
It is important to note that there may be multiple such state predicates T for which p meets the above
three requirements. Each of these multiple T state predicates captures a (potentially different) type
of fault-tolerance of p. We will exploit this multiplicity in Section 6 in order to define multitolerance.
Types of fault-tolerances. We now classify three types of fault-tolerances that a program
can exhibit, namely masking, nonmasking, and fail-safe tolerance, using the above definition of
F -tolerance.
Informally speaking, this classification is based upon the extent to which the program satisfies its
problem specification in the presence of faults. Of the three, masking is the strictest type of tolerance:
in the presence of faults, the program always satisfies its safety specification, and the execution
of p after execution of actions in F yields a computation that is in both the safety and liveness
specification of p, i.e., the computation is in the problem specification of p. Nonmasking is less strict
than masking: in the presence of faults, the program need not satisfy its safety specification but,
when faults stop occurring, the program eventually resumes satisfying both its safety and liveness
specification; i.e., the computation has a suffix that is in the problem specification. Fail-safe is also
less strict than masking: in the presence of faults, the program always satisfies its safety specification
but, when faults stop occurring, the program need not resume satisfying its liveness specification;
i.e., the computation is in the safety specification -but not necessarily in the liveness specification- of
p. Formally, these three types of tolerance may be expressed in terms of the definition of F -tolerance,
as follows:
Definition (masking tolerant). p is masking tolerant to F for S iff p is F -tolerant for S and S is
closed in F . (In other words, if a fault in F occurs in a state where S is true, p continues to be in a
state where S is true.)
Definition (nonmasking tolerant). p is nonmasking tolerant to F for S iff p is F -tolerant for S and
S is not closed in F . (In other words, if a fault in F occurs in a state where S is true, p may be
perturbed to a state where S is violated. However, p then recovers to a state where S is true.)
Definition (fail-safe tolerant). p is fail-safe tolerant to F for S iff there exists a state predicate R
such that p is F -tolerant for S - R, S - R is closed in p and in F , and p satisfies the safety
specification (of the problem specification) for S - R. (In other words, if a fault in F occurs in
a state where S is true, p may be perturbed to a state where S or R is true. In the latter case,
the subsequent execution of p yields a computation that is in the safety specification of p but not
necessarily in the liveness specification.)
Notation. In the sequel, whenever the fault-class F and invariant S are clear from the context, we
omit them; thus, "masking tolerant" abbreviates "masking tolerant to F for S", and so on.
Detectors
In this section, we define the first of two building blocks which are sufficient for the design of
fault-tolerant programs, namely detectors. We also present the properties of detectors, show how
to construct them in a hierarchical and efficient manner, and discuss their role in the design of
fault-tolerance.
As mentioned in the introduction, intuitively, a detector is a program that "detects" whether a given
state predicate is true in the current system state. Implementations of detectors abound in practice:
Wellknown examples include comparators, error detection codes, consistency checkers, watchdog
programs, snoopers, alarms, snapshot procedures, acceptance tests, and exception conditions.
Z be state predicates of a program d and U be a state predicate that is
closed in d. We say that "Z detects X in d for U " iff the following three conditions hold:
ffl (Safeness) At any state where U is true, if Z is true then X is also true. (In other words,
U
ffl (Progress) Starting from any state where U -X is true, every computation of d either reaches
a state where Z is true or a state where X is false.
ffl (Stability) Starting from any state where U - Z is true, d falsifies Z only if it also falsifies X .
(In other words, fU - Zg d fZ - :Xg.)
The Safeness condition implies that a detector d never lets the predicate Z "witness" the detection
predicate X incorrectly when executed in states where U is true. The Progress condition implies
that in any computation of d starting from a state where U is true, if X is true continuously then d
eventually detects X by truthifying Z. The Stability condition implies that once Z is truthified, it
continues to be true unless X is falsified.
Remark. If the detection predicate X is closed in d, our definition of the detects relation reduces
to one given by Chandy and Misra [14]. We have considered this more general definition to accommodate
the case -which occurs for instance in nonmasking tolerance- where X denotes that
"something bad has happened"; in this case, X is not supposed to be closed since it has to be
subsequently corrected. (End of Remark.)
In the sequel, we will implicitly assume that the specification of a detector d (dn) is "Z detects X
in d for U " (respectively, "Zn detects Xn in dn for Un"). Also, we will implicitly assume that U
(Un) is closed in d (respectively, dn).
Properties. The detects relation is reflexive, antisymmetric, and transitive in its first two arguments
Lemma 3.0 Let X , Y , and Z be state predicates of d and U be a state predicate that is closed
in d. The following statements hold.
detects X in d for U
ffl If Z detects X in d for U , and X detects Z in d for U
then U
ffl If Z detects Y in d for U , and Y detects X in d for U
then Z detects X in d for U
Lemma 3.1 Let V be a state predicate such that U - V is closed in d. The following statements
hold.
ffl If Z detects X in d for U
then Z detects X in d for U-V
ffl If Z detects X in d for U , and V ) X
then Z-V detects X in d for U
ffl If Z detects X in d for U , and Z
then Z detects X-V in d for U
Compositions. Regarding the construction of detectors, there are cases where a detection
predicate X cannot be witnessed atomically, i.e., by executing at most one action of a detector
program. To detect such predicates, we give compositions of "small" detectors that yield "large"
detectors in a hierarchical and efficient manner. In particular, given two detectors, d1 and d2, we
compose them in two ways: (i) in parallel and (ii) in sequence.
Parallel composition of detectors. In the parallel composition of d1 and d2, denoted by d1[]d2, both
d1 and d2 execute in an interleaved fashion. Formally, the parallel composition of d1 and d2 is the
union of the (variables and actions of) programs d1 and d2.
Observe that '[]' is commutative: d1[](d2[]d3), and that
'-' distributes over `[]': g -
Theorem 3.2 Let Z1 detect X1 in d1 for U and Z2 detect X2 in d2 for U .
If the variables of d1 and d2 are mutually exclusive
then Z1-Z2 detects X1-X2 in d1[]d2 for U
Proof. The Safeness condition of d1[]d2 follows trivially from the Safeness of d1 and of d2. For
the Progress condition, we consider two cases, (i) a computation of d1[]d2 falsifies X1 - X2 and (ii)
a computation of d1[]d2 never falsifies X1 - X2: In the first case, Progress is satisfied trivially. In
the second case, eventually Z1 is truthified, and by Stability of d1, Z1 continues to be true in the
execution of d1. moreover, since the variables of d1 and d2 are disjoint, Z1 continues to be true in d2.
Likewise, Z2 is eventually truthified and continues to be true. Thus, Progress is satisfied. Finally,
the Stability condition is satisfied since Z1-Z2 can be falsified only if X1 or X2 are violated.
In d1[]d2, since d1 and d2 perform their detection concurrently, the time required for detection of
X1-X2 is the maximum time taken to detect X1 and to detect X2. (We are assuming that the
unit for measuring time allows both d1 and d2 to attempt execution of an action each.) Also, the
space complexity of d1[]d2 is the sum of the space complexity of d1 and d2, since the state space of
d1[]d2 is the union of the state space of d1 and of d2.
Sequential composition of detectors. In the sequential composition of d1 and d2, denoted by d1; d2,
d2 executes only after d1 has completed its detection, i.e., after the witness predicate Z1 is true.
Formally, the sequential composition of d1 and d2 is the program whose set of variables is the union
of the variables of d1 and d2 and whose set of actions is the union of the actions of d1 and of Z1-d2.
We postulate the axiom that ';' is left-associative: d1; d2; d3 = (d1; d2); d3.
Observe that ';' is not commutative, that ';' distributes over '[]': d1; (d2[]d3) = (d1; d2)[](d1; d3),
and that '-' distributes over ';': g - (d1; d2) = (g - d1); (g - d2).
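To make the two operators concrete, the following sketch (our own Python encoding, not the paper's notation) models a program as a list of guarded actions over a state dictionary; '[]' is then the union of the action lists, ';' guards every action of the second program with the witness predicate of the first, and 'g - d' restricts every action of d by the predicate g.

# Illustrative encoding of programs as guarded actions; the representation and
# names are assumptions of this sketch, not definitions from the text.
def parallel(d1, d2):
    """d1 [] d2: the union of the actions of d1 and d2."""
    return d1 + d2

def sequential(d1, d2, z1):
    """d1 ; d2: the actions of d1 together with the actions of d2 guarded by Z1."""
    return d1 + [(lambda s, g=g: z1(s) and g(s), stmt) for (g, stmt) in d2]

def restrict(g, d):
    """g - d: every action of d restricted by the predicate g."""
    return [(lambda s, h=h: g(s) and h(s), stmt) for (h, stmt) in d]

def step(program, state):
    """Execute one enabled action, if any (nondeterminism resolved by list order)."""
    for guard, stmt in program:
        if guard(state):
            stmt(state)
            return True
    return False

With this encoding, the identities above (commutativity of '[]', left-associativity of ';', and distribution of restriction) hold up to reordering of the action lists.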
Suppose, again, that the variables of d1 and d2 are mutually exclusive. In this case, starting from
any state where X1-X2 is true continuously, d1 eventually truthifies Z1. Only after Z1 is truthified
are the actions of d2 executed; these actions eventually truthify Z2. Since Z2 is truthified only
when Z1 (and, hence, X1) and X2 are true, it also follows that U
assume U
Theorem 3.3 Let Z1 detect X1 in d1 for U and Z2 detect X2 in d2 for U-X1.
If the variables of d1 and d2 are mutually exclusive, and U
then Z2 detects X1-X2 in d1; d2 for U
In d1; d2, the time (respectively, space) taken to detect X1-X2 is the sum of the time (respectively,
space) taken to detect X1 and to detect X2. The extra time taken by d1; d2 as compared to d1[]d2 is
warranted in cases where the witness predicate Z2 can be witnessed atomically but Z1-Z2 cannot.
Example: Memory access. Let us consider a simple memory access program that obtains the
value stored at a given address (cf. Figure 1). The program is subject to two fault-classes: The first
consists of protection faults which cause the given address to be corrupted so that it falls outside the
valid address space, and the second consists of page faults which remove the address and its value
from the memory.
For tolerance to the first fault-class, there is a detector d1 that uses the page table TBL to detect
whether the address addr is valid (X1). For tolerance to the second fault-class, there is another
detector d2 that uses the memory MEM to detect whether the given address is in memory (X2).
Figure 1: Memory access (detector d1 consults the page table TBL and detector d2 consults the memory MEM for the given address addr).
Formally, these detectors are as follows (for simplicity, we assume that TBL is a set of valid addresses
and MEM is a set of objects of the form ⟨addr, val⟩):
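One plausible guarded-action rendering of d1 and d2 is sketched below; the Python encoding, the witness variables Z1 and Z2, and the example contents of TBL and MEM are ours, not a verbatim copy of the paper's program text.

# Hypothetical rendering of the two detectors for the memory access example.
state = {
    "TBL": {100, 104, 108},        # page table: set of valid addresses (made up)
    "MEM": {100: "a", 104: "b"},   # memory: objects of the form <addr, val> (made up)
    "addr": 104,
    "Z1": False,                   # witness of d1: "addr is valid" (X1)
    "Z2": False,                   # witness of d2: "addr is in memory" (X2)
}

def d1_action(s):
    # Truthify Z1 only when the detection predicate X1 (addr in TBL) holds.
    if s["addr"] in s["TBL"]:
        s["Z1"] = True

def d2_action(s):
    # In the sequential composition d1; d2, this action is guarded by Z1.
    if s["Z1"] and s["addr"] in s["MEM"]:
        s["Z2"] = True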
Thus, we may observe:
ffl Z1 detects X1 in d1 for U1 (1)
ffl Z2 detects X2 in d2 for U1 - X1 (2)
Note that an appropriate choice of initial state in U1 would be one where both Z1 and Z2 are false.
Note also that, in U1, Z1 is truthified only when X1 is true and that Z2 is truthified only when X1
and X2 are both true.
To detect X1-X2, we may compose d1 and d2 sequentially: d1 would first detect X1, and then d2
would detect X2. From Theorem 3.3, (1) and (2) we get:
ffl Z2 detects X1 - X2 in d1; d2 for U1 (3)
Application to design of fault-tolerance. Detectors suffice to ensure that a program satisfies
its safety specification. To see this, recall that a safety specification essentially rules out certain
finite prefixes of program computation. Now, consider any prefix of a computation that is not ruled
out by the safety specification. Execution of a program action starting from this prefix does not
violate the safety specification iff the elongated prefix is not ruled out by the safety specification.
In other words, for each program action ac there exists a set of computation prefixes from which
execution of ac does not violate the safety specification. It follows that there exists a "detection"
state predicate such that execution of ac in any state where that state predicate is true does not
violate the safety specification. (From the fusion closure of the safety specification, it suffices that
this state predicate characterize a set of states, each state st of which yields upon executing ac
a successor state st 0 such that there is some state sequence in the safety specification in which st
and st 0 occur consecutively in that order.) Now, if detectors can be added to the program so that
for each program action ac a detection predicate of ac is witnessed, and each program action can
be restricted to execute only if its corresponding witness predicate is true, the resulting program
satisfies the safety specification.
To design fail-safe tolerance to F for S, we need to ensure that upon starting from a state where
S is true, the execution of p in the presence of F always yields a computation that is in the safety
specification of p. It follows that detectors suffice for the design of fail-safe tolerance.
Likewise, to design masking tolerance to F for S, we need to ensure that upon starting from a state
where S is true, the execution of p in the presence of F never violates the safety specification and
the execution of p after execution of actions in F always yields a computation that is in both the
safety and the liveness specification of p, i.e., that computation is in the problem specification of
p. (From the fusion closure of the problem specification, it follows that a computation of p that is
in the safety specification and that has a suffix in the problem specification is itself in the problem
specification.) Now, regarding safety, it suffices that detectors be added to p. (Regarding liveness,
it suffices that correctors be added to p, as discussed in the next section.)
Detectors can also play a role in the design of nonmasking tolerance: They may be used to detect
whether the program is perturbed to a state where its invariant is false. As discussed in the next
section, such detectors can be systematically composed with correctors that restore the program to
a state where its invariant is true.
In this section, we discuss the second set of building blocks, namely correctors, in a manner analogous
to our discussion of detectors.
As mentioned in the introduction, intuitively, a corrector is a detector that also "corrects" the
program state whenever it detects that its detection predicate is false in the current system state.
Implementations of correctors also abound in practice: Well-known examples include voters, error
correction codes, reset procedures, rollback recovery, rollforward recovery, constraint (re)satisfaction,
exception handlers, and alternate procedures in recovery blocks.
Let X and Z be state predicates of a program c and U be closed in c. We say that "Z
corrects X in c for U " iff the following four conditions hold:
ffl (Safeness) At any state where U is true, if Z is true then X is also true. (In other words,
U - Z ) X .)
ffl (Convergence) Starting from any state where U is true, every computation of c eventually
reaches a state where X is true, and subsequently, X continues to be true thereafter. (In other
words, U converges to X in c.)
ffl (Progress) Starting from any state where U - X is true, every computation of c either
reaches a state where Z is true or a state where X is false.
ffl (Stability) Starting from any state where U -Z is true, c falsifies Z only if it also falsifies X .
(In other words, fU - Zg c fZ - :Xg.)
From the above definition, it follows that if Z corrects X in c for U , then Z detects X in c for U .
It also follows that U - X is closed in c (cf. Convergence). Consequently, starting from any state
where U - X is true, every computation of c reaches a state where Z is true (cf. Progress). Moreover,
U - Z is closed in c (cf. Stability).
Remark. If the witness predicate Z is identical to the correction predicate X , our definition of
the corrects relation reduces to one given by Arora and Gouda [10]. We have considered this more
general definition to accommodate the case, which occurs for instance in masking tolerance, where
the witness predicate Z can be checked atomically but the correction predicate X cannot.
(End of Remark.)
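As a toy illustration of the four conditions (our example, not the paper's), consider a corrector whose correction predicate is X = (0 <= cnt <= MAX) and whose witness predicate is X itself, the special case just discussed in the Remark:

MAX = 10

def X(s):
    """Correction predicate: the counter is within range."""
    return 0 <= s["cnt"] <= MAX

def corrector_action(s):
    """Guarded action: if X is false, clamp cnt back into range."""
    if not X(s):
        s["cnt"] = min(max(s["cnt"], 0), MAX)

# With the witness Z taken to be X itself, Safeness and Stability are immediate,
# Convergence holds because the clamp establishes X in one step and the action
# leaves X-states unchanged, and Progress follows because X, once true, is its
# own witness.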
Properties. The corrects relation is antisymmetric and transitive in its first two arguments:
Lemma 4.0 Let X , Y , and Z be state predicates of c and U be a state predicate that is closed in
c. The following statements hold.
ffl If Z corrects X in c for U , and X corrects Z in c for U
then U ) (X = Z)
ffl If Z corrects Y in c for U , and Y corrects X in c for U
then Z corrects X in c for U
Lemma 4.1 Let V be a state predicate such that U - V is closed in c. Then the following
statements hold.
ffl If Z corrects X in c for U
then Z corrects X in c for U - V
ffl If Z corrects X in c for U and V ) X
then Z-V corrects X in c for U
Compositions. Analogous to detection predicates that cannot be witnessed atomically, there exist
cases where a correction predicate X cannot be corrected atomically, i.e., by executing at most one
step (action) of a corrector. To correct such predicates, we construct "large" correctors from "small"
correctors just as we did for detectors.
Parallel composition of correctors. The parallel composition of two correctors c1 and c2, denoted
by c1[]c2, is the union of the (variables and actions of) programs c1 and c2.
Theorem 4.2 Let Z1 correct X1 in c1 for U and Z2 correct X2 in c2 for U .
If the variables of c1 and c2 are mutually exclusive
then Z1-Z2 corrects X1-X2 in c1[]c2 for U
The time taken by c1[]c2 to correct X1-X2 is the maximum of the time taken to correct X1 and to
correct X2. The space taken is the corresponding sum.
Sequential composition of correctors. The sequential composition of correctors c1 and c2, denoted
by c1; c2, is the program whose set of variables is the union of the variables of c1 and c2 and whose
set of actions is the union of the actions of c1 and of Z1 - c2.
Theorem 4.3 Let Z1 correct X1 in c1 for U and Z2 correct X2 in c2 for (U -X1).
If the variables of c1 and c2 are mutually exclusive, and U
then Z2 corrects X1-X2 in c1; c2 for U
The time (respectively, space) taken by c1; c2 to correct X1-X2 is the sum of the time (respectively,
space) taken to correct X1 and to correct X2.
As mentioned in the previous section, one way to design a corrector for X is by sequential composition
of a detector and a corrector: the detector first detects whether :X is true and, using this witness,
the corrector then establishes X .
Theorem 4.4 Let Z detect :X in d for U , Z
, and X correct X in c for U - Z 0
If X is closed in d, and fU - Zg c fZ - Xg
then X corrects X in (:Z - d); c for U
If c is atomic, i.e., c satisfies Progress and Convergence in at most one step, the following corollary
holds.
Corollary 4.5 Let Z detect :X in d for U , Z
, and X correct X in c for U - Z 0
If X is closed in d, and c is atomic
then X corrects X in d; c for U
Another way to design a corrector for X is by sequential composition of a corrector and a detector:
the corrector first satisfies its correction predicate X and then the detector satisfies the desired
witness predicate Z.
Theorem 4.6 Let X correct X in c for U , and Z detect X in d for U .
If X is closed in d
then Z corrects X in (:X - c); d for U
Again, if d is atomic, the following corollary holds.
Corollary 4.7 Let X correct X in c for U , and Z detect X in d for U .
If X is closed in d and c is atomic
then Z corrects X in c; d for U
Example: Memory access (continued). If the given address is valid but is not in memory,
an object of the form ⟨addr, −⟩ has to be added to the memory. (We omit the details of how this
object is obtained, e.g., from a disk, a remote memory, or a network.) Thus, there is a corrector, c,
which is formally specified as follows:
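A hypothetical guarded-action rendering of c (the fetch helper and its name are ours; the paper writes the fetched object as ⟨addr, −⟩):

def fetch_object(addr):
    """Placeholder for obtaining the object, e.g., from disk or a remote memory (hypothetical)."""
    return "-"

def c_action(s):
    # Correct X2: if the (valid) address is not resident, bring <addr, -> into MEM.
    if s["addr"] in s["TBL"] and s["addr"] not in s["MEM"]:
        s["MEM"][s["addr"]] = fetch_object(s["addr"])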
Figure 2: Memory access (the corrector c is added alongside the detectors d1 and d2 for the address addr).
Thus, we may observe:
ffl X2 corrects X2 in c for true (4)
ffl X2 corrects X2 in c for U1 (5)
ffl X2 corrects X2 in c for U1 - X1 (6)
Before detector d2 can witness that the value of the address is in memory, corrector c should execute.
Hence, we compose c and d2 sequentially. From Corollary 4.7, (2) and (6) we have:
ffl Z2 corrects X2 in c; d2 for U1 - X1 (7)
Moreover, detector d1 should witness that the address is valid, before either corrector c or detector
d2 execute. Hence, we compose d1 and c; d2 sequentially. Recall from Section 3 that Z1 detects X1
in d1 for U1. Also, X1 is closed in d1. Hence, if d1 is started in a state satisfying U1 - X1, it will
eventually satisfy Z1. It follows that Z1 corrects X1 in d1 for U1 - X1. Therefore, from Theorem
4.3 and (7), we have:
ffl Z2 corrects X1 - X2 in d1; (c; d2) for U1 - X1 (8)
Application to design of fault-tolerance. Correctors suffice to ensure that computations
of a program have a suffix in the problem specification. To see this, observe that if the correction
predicate of a corrector is chosen to be an invariant of the program, the corrector ensures the program
will eventually reach a state where that invariant is true and henceforth the program computation
is in the problem specification.
To design nonmasking tolerance to F for an invariant S, we need to ensure that upon starting from a
state where S is true, execution of p will, after execution of actions in F , always yield a computation
that has a suffix in the problem specification. It follows that correctors whose correction predicate
is the invariant S suffice for the design of nonmasking tolerance.
Likewise, to design masking tolerance to F for S, we need to ensure that upon starting from a
state where S is true, execution of p in the presence of F never violates its safety specification and
execution of p after execution of actions in F always yields a computation that is in the problem
specification of p. For the latter guarantee, it suffices that correctors be added to p (and, for the
former, it suffices that detectors be added to p, as discussed in the previous section).
5 Composition of Detector/Corrector Components and Programs
In this section, we discuss how a detector/corrector component is correctly added to a program
so that the resulting program satisfies the specification of the component. As far as possible, the
proof of preservation should be simpler than explicitly proving all over again that the specification
is satisfied in the resulting program. This is achieved by a compositional proof that shows that
the program does not "interfere" with the component, i.e., the program and the component when
executed concurrently do not violate the specification of the component.
Compositional proofs of interference-freedom have received substantial attention in the formal methods
community [15-19] in the last two decades. Drawing from these efforts, we identify several simple
sufficient conditions to ensure that when a program p is composed with a detector (respectively a
corrector) q, the safety specification of q, viz Safeness and Stability, and liveness specification, viz
Progress and Convergence, are not violated.
Sufficient conditions for satisfying the safety specification of a detector. To demonstrate that
p does not interfere with Safeness and Stability, a straightforward sufficient condition is that the
actions of p be a subset of the actions of q; this occurs, for instance, when the program itself acts as
a detector. Another straightforward condition is that the variables of p and q be disjoint. A more
general condition is that p only reads (but not writes) the variables of q; in this case, p is said to be
"superposed" on q.
Sufficient conditions for satisfying the liveness specification of a detector. The three conditions
given above also suffice to demonstrate that p does not interfere with Progress of q, provided that
the actions of p and q are executed fairly. Yet another condition for satisfying Progress of q is to
require that q be "atomic", i.e., that q achieves its Progress in at most one step. It follows that even
if p and q execute concurrently, Progress of q is satisfied.
Alternatively, require that p executes only after Progress of q is achieved. It follows that p cannot
interfere with Progress of q. Likewise, require that p terminates eventually. It follows that after p
has terminated, execution of q in isolation satisfies its Progress.
More generally, require that there exists a variant function f (whose range is over a well-founded
set) such that execution of any action in p or q reduces the value of f until Progress of q is achieved.
It follows that even if q is executed concurrently with p, Progress of q is satisfied.
The sufficient conditions outlined above are formally stated in Table 1. Sufficient conditions for the
case of a corrector are similar.
In the following theorems, let Z detect X in q for U , and let U be closed in p.
Theorem 5.0 (Superposition)
If q does not read or write any variable written by p, and
p only reads the variables written by q
then Z detects X in q[]p for U
Theorem 5.1 (Containment)
If the actions of p are a subset of the actions of q
then Z detects X in q[]p for U
Theorem 5.2 (Atomicity)
If fU - Zg p fZ - :Xg, and q is atomic
then Z detects X in q[]p for U
Theorem 5.3 (Order of execution)
If fU - Zg p fZ - :Xg
then Z detects X in q; p for U
Theorem 5.4 (Termination)
If fU - Zg p fZ - :Xg, and U converges to V in p[]q
then Z detects X in (:V - p)[]q for U
Theorem 5.5 (Variant function)
If fU - (0!f =K)g q f(0!f
then Z detects X in q[]p for U
Table 1: Sufficient conditions for interference-freedom
The discussion above has addressed how to prove that a program does not interfere with a component,
but not how a component does not interfere with a program. Standard compositional techniques
suffice for this purpose. In practice, detectors such as snapshot procedures, watchdog programs, and
snooper programs typically read but not write the state of the program to which they are added.
Thus, these detectors do not interfere with the program. Likewise, correctors such as reset, rollback
recovery, and forward recovery procedures are typically restricted to execute only in states where
the invariant at hand is false. Thus, these correctors do not interfere with the program.
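The syntactic conditions of Theorems 5.0 and 5.1 can be checked from read/write sets alone; the following sketch (the action representation and the example are ours) checks the superposition condition:

# Each action is summarized by the variables it reads and writes.
def reads(prog):
    return set().union(*(a["reads"] for a in prog)) if prog else set()

def writes(prog):
    return set().union(*(a["writes"] for a in prog)) if prog else set()

def superposition_ok(p, q):
    # Theorem 5.0: q neither reads nor writes any variable written by p,
    # and p only reads (never writes) the variables written by q.
    q_untouched_by_p = not ((reads(q) | writes(q)) & writes(p))
    p_only_reads_q = not (writes(p) & writes(q))
    return q_untouched_by_p and p_only_reads_q

# Example: p reads the witness variable z of q but writes only its own variable x.
p = [{"reads": {"z", "x"}, "writes": {"x"}}]
q = [{"reads": {"y"}, "writes": {"y", "z"}}]
assert superposition_ok(p, q)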
Example: Memory access (continued). Consider an intolerant program p for memory access
that assumes that the address is valid and is currently present in the memory. For ease of exposition,
we let p access only one memory location instead of multiple locations. Thus, p is as follows:
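p can be pictured as a single action that blindly dereferences the address; in the composed program it is guarded by the witness Z2 (the Python rendering and the result variable are ours):

def p_action(s):
    # Intolerant memory access: assumes addr is valid and resident in MEM.
    s["result"] = s["MEM"][s["addr"]]

def p_in_composition(s):
    # In d1; (c; d2); p, the action of p executes only after Z2 is witnessed.
    if s["Z2"]:
        p_action(s)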
For p to not interfere with the specification of the corrector d1; (c; d2), it suffices that p execute only
after Z2, the witness predicate of d1; (c; d2), is satisfied. Hence, d1; (c; d2) and p should be composed
in sequence. From the analogue of Theorem 5.3 for the case of correctors, we have that p does not
interfere with d1; (c; d2):
ffl Z2 corrects X1 - X2 in d1; (c; d2); p for U1 - X1
6 Designing Multitolerance
In this section, we first define multitolerance and then present our method for compositional, stepwise
design of multitolerant programs.
Let p be a program with invariant S, F1::Fn be n fault-classes, and l1, l2, ..., ln be
types of tolerance (i.e., masking, nonmasking, or fail-safe). We say that p is multitolerant to F1::Fn
for S iff for each fault-class Fj, 1 ≤ j ≤ n, p is lj-tolerant to Fj for S.
The definition may be understood as follows: In the presence of faults in class F j, p is perturbed
only to states where some F j-span predicate for S, T j, is true. (Note that there exists a potentially
different fault-span for each fault-class.) After faults in F j stop occurring, subsequent execution
of p always yields a computation that is in the problem specification as prescribed by the type of
tolerance lj. For example, if lj is fail-safe, each computation of p starting from a state where T j is
true is in the safety specification.
Example: Memory access (continued). Observe that the memory access program, d1; (c; d2); p,
discussed in Section 5, is multitolerant to the classes of protection faults and page faults: it is fail-safe
tolerant to the former and masking tolerant to the latter. In particular, in the presence of a page
fault, it always obtains the correct data from the memory. And in the presence of a protection fault,
it obtains no data value.
Compositional and stepwise design method. As outlined in the introduction, our method
starts with a fault-intolerant program and, in a stepwise manner, considers the fault-classes in some
fixed total order, say F1::Fn. In the first step, the intolerant program is augmented with detector
and/or corrector components so that it is l1-tolerant to F 1. The resulting program is then augmented
with other detector/corrector components, in the second step, so that it is l2-tolerant to F2 and its
l1-tolerance to F1 is preserved. And so on until, in the n-th step, the ln-tolerance to Fn is added
while preserving the l1::l(n−1)-tolerances to F1::F(n−1). The multitolerant program designed thus has
the structure shown in Figure 3.
Figure 3: Structure of a multitolerant program designed using our method (the fault-intolerant program is wrapped, in order, by detectors and/or correctors for F1, for F2, ..., and for Fn).
First step. Let p be the intolerant program with invariant S. By calculating an F 1-span of p for S,
detector and corrector components can be designed for satisfying l1-tolerance to F 1. As discussed
in Sections 3 and 4, it suffices to add detectors to design fail-safe tolerance to F 1, correctors to design
nonmasking tolerance to F 1, and both detectors and correctors to design masking tolerance to F 1.
Figure 4: Components that suffice for the design of various tolerances (detectors suffice for the fail-safe tolerant program, correctors for the nonmasking tolerant program, and detectors together with correctors for the masking tolerant program).
Note that the detectors and correctors added to p are also subject to F . Hence, they themselves have
to be tolerant to F . But it is not necessary that they be masking tolerant to F . More specifically, it
suffices that detectors added to design fail-safe tolerance be themselves fail-safe tolerant to F ; this is
because if the detectors do not satisfy their liveness specification in the presence of F , the resulting
program can be made to not satisfy the liveness specification of p in the presence of F . Likewise, it
suffices that correctors added to design nonmasking tolerance be themselves nonmasking tolerant to
F ; this is because as long as the computations of the correctors have suffixes that are in their safety
and liveness specification, the computations of the resulting program can be made to have suffixes
in the safety and liveness specification of p. Lastly, as can be expected, it suffices that the detectors
and correctors added to design masking tolerance be themselves masking tolerant to F . (See Figure
5.)
In practice, the detectors and correctors added to p often possess the desired tolerance to F trivially.
But if they do not, one way to design them to be tolerant to F is by the analogous addition of more
detectors and correctors. Another way is to design them to be self tolerant, without using any more
detector and corrector components, as is exemplified by self-checking, self-stabilizing, and inherently
fault-tolerant designs.
Figure 5: Tolerance requirements for the components (fail-safe components suffice for the fail-safe tolerant program, nonmasking components for the nonmasking tolerant program, and masking components for the masking tolerant program).
With the addition of detector and/or corrector components to p, it remains to show that, in the
resulting program p1, the components do not interfere with p and that p does not interfere with
the components. Note that p1 may contain variables and actions that were not in p and, hence,
invariants and fault-spans of p1 may differ from those of p. Therefore, letting S1 be an invariant of
p1 and T1 be an F 1-span of p1 for S1, we show the following.
1. In the absence of F 1, i.e., in states where S1 is true, the components do not interfere with p,
i.e., each computation of p is in the problem specification even if it executes concurrently with
the new components.
2. In the presence of F 1, i.e., in states where T1 is true, p does not interfere with the components,
i.e., each computation of the components is in the components' specification (in the sense
prescribed by its type of tolerance) even if they execute concurrently with p.
The addition of the detectors and correctors may itself be simplified by using a stepwise approach.
For instance, to design masking tolerance, we may first augment the program with detectors, and
then augment the resulting fail-safe tolerant program with correctors. Alternatively, we may first
augment the program with correctors, and then augment the resulting nonmasking tolerant program
with detectors. (See Figure 6.) For reasons of space, we refer the interested reader to [20] for the
formal details of this two-stage approach for designing masking tolerance.
Figure 6: Two approaches for stepwise design of masking tolerance (from the intolerant program, either add detectors to obtain a fail-safe tolerant program and then correctors to obtain the masking tolerant program, or add correctors to obtain a nonmasking tolerant program and then detectors to obtain the masking tolerant program).
Second step. This step adds l2-tolerance to F2 and preserves the l1-tolerance to F 1. To add l2-
tolerance to F 2, just as in the first step, we add new detector and corrector components to p1. Then,
we account for the possible interference between the executions of these added components and of
p1. More specifically, letting S2 be an invariant of the resulting program p2, T21 be an F 1-span of
p2 for S2, and T22 denote an F 2-span of p2 for S2, we show the following.
1. In the absence of F1 and F 2, i.e., in states where S2 is true, the newly added components
do not interfere with p1, i.e., each computation of p1 is in the problem specification even if it
executes concurrently with the new components.
2. In the presence of F 2, i.e., in states where T22 is true, p1 does not interfere with the new com-
ponents, i.e., each computation of the new components is in the new components' specification
(in the sense prescribed by its type of tolerance) even if they execute concurrently with p1.
3. In the presence of F 1, i.e., in states where T21 is true, the newly added components do not
interfere with the l1-tolerance of p1 for F 1, i.e., each computation of p1 is in the specification,
l1-tolerant to F 1, even if p1 executes concurrently with the new components.
Remaining steps. For the remaining steps of the design, where we add tolerance to F3::Fn, the
procedure of the second step is generalized accordingly.
7 Case Study in Multitolerance Design : Token Ring
Recall the mutual exclusion problem: Multiple processes may each access their critical section
provided that at any time at most one process is accessing its critical section. moreover, no process
should wait forever to access its critical section, assuming that each process leaves its critical section
in finite time.
Mutual exclusion is readily achieved by circulating a token among processes and letting each process
enter its critical section only if it has the token. In a token ring program, in particular, the processes
are organized in a ring and the token is circulated along the ring in a fixed direction.
In this case study, we design a multitolerant token ring program. The program is masking tolerant
to any number, K, of faults that each corrupt the state of some process detectably. Its tolerance
is continuous in the sense that if K state corruptions occur, it corrects its state within Θ(K) time.
Thus, a quantitatively unique measure of tolerance is provided to each FK, where FK is the fault-
class that causes at most K state corruptions of processes.
By detectable corruption of the state of a process, we mean that the corrupted state is detected
by that process before any action inadvertently accesses that state. The state immediately before
the corruption may, however, be lost. (For our purposes, it is irrelevant as to what caused the
corruption; i.e., whether it was due to the loss of a message, the duplication of a message, timing
faults, the crash and subsequent restart of a process, etc.)
We proceed as follows: First, we describe a simple token ring program that is intolerant to detectable
state corruptions. Then, we add detectors and correctors so as to achieve masking tolerance to the
fault that corrupts the state of one process. Progressively, we add more detectors and correctors so
as to achieve masking tolerance to the fault-class that corrupts process states at most K, K > 1,
times.
7.1 Fault-Intolerant Binary Token Ring
Processes 0::N are organized in a ring. The token is circulated along the ring such that process j,
0 ≤ j ≤ N, passes the token to its successor j+1. (In this section, + and − are in modulo N+1
arithmetic.) Each process j maintains a binary variable x:j. Process j, j ≠ N, has the token iff x:j
differs from its successor's value x:(j+1), and process N has the token iff x:N is the same as its
successor's value x:0.
The program, TR, consists of two actions for each process j. Formally, these actions are as follows
(where the arithmetic on process indices is modulo N+1):
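A simulation-style sketch of one natural reading of TR (Python; the exact packaging into two guarded actions per process is our assumption): process j ≠ 0 copies x:(j−1) when the two values differ, and process 0 complements its value when x:N = x:0.

import random

N = 4                         # processes 0..N
x = [0] * (N + 1)             # binary x values; initially the token is at process N

def has_token(j):
    return (x[j] != x[j + 1]) if j != N else (x[N] == x[0])

def tr_step(j):
    """One TR action at process j (our reading of the program)."""
    if j != 0 and x[j - 1] != x[j]:
        x[j] = x[j - 1]            # the token passes from j-1 to j
    elif j == 0 and x[N] == x[0]:
        x[0] = 1 - x[0]            # the token passes from N to 0

# Exactly one process holds the token in every reachable state.
for _ in range(25):
    assert sum(has_token(j) for j in range(N + 1)) == 1
    tr_step(random.randrange(N + 1))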
Invariant. Consider a state where process j has the token. In this state, since no other process
has a token, the x value of all processes 0::j is identical and the x value of all processes (j+1)::N is
identical. Letting X denote the string of binary values x:0, x:1, ..., x:N, we have that X satisfies the
regular expression (0^l 1^(N+1−l) ∪ 1^l 0^(N+1−l)), which denotes a sequence of length N+1 consisting of
zeros followed by ones or ones followed by zeros. Thus, an invariant of the program TR is S TR , the
predicate that X satisfies this regular expression.
7.2 Adding Tolerance to 1 State Corruption
Based on our assumption that state corruption is detectable, we introduce a special value ?, such
that when any process j detects that its state (i.e., the value of x:j) is corrupted, it resets x:j to ?.
We can now readily design masking tolerance to a single corruption of state at any process j by
ensuring that (i) the value of x:j is eventually corrected so that it is no longer ? and (ii) in the
interim, no process (in particular, j+1) inadvertently gets the token as a result of the corruption of
x:j.
For (i), we add a corrector at each process j: it corrects x:j from ? to a value that is either 0 or
1. The corrector at j, j ≠ 0, copies x:(j−1); the corrector at 0 uses the statement of the TR action at 0. Note that
the corrector action at j has the same statement as the action of TR at j, and we can merge the
corrector and TR actions.
For (ii), we add a detector at each process j. Its detection predicate is x:(j−1) ≠ ? and it has no
actions. The witness predicate of this detector (which, in this case, is the detection predicate itself)
is used to restrict the actions of program TR at j. Hence, the actions of TR at j execute only when
x:(j−1) ≠ ?. As a result, the execution of actions of TR is always safe (i.e., these actions
cannot inadvertently generate a token).
The augmented program, PTR, thus consists of the merged TR and corrector actions at each process j, restricted by the detectors to execute only when x:(j−1) ≠ ?.
Fault Actions. When the state of x:j is corrupted, x:j is set to ?. Hence, the fault action at process j simply assigns ? to x:j.
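Continuing the sketch of TR, the following self-contained fragment models ? by the constant BOT and shows the merged, detector-restricted action of PTR together with the fault action (the statement used at process 0 is our assumption):

N, BOT = 4, None              # BOT stands for the special corrupted value ?
x = [0] * (N + 1)

def fault(j):
    """Detectable corruption of the state of process j."""
    x[j] = BOT

def ptr_step(j):
    """Merged TR/corrector action at j, restricted by the detector's witness."""
    pred = j - 1 if j != 0 else N
    if x[pred] == BOT:                       # witness x:(j-1) != BOT must hold
        return
    if j != 0:
        if x[j - 1] != x[j]:                 # covers both token passing and x:j = BOT
            x[j] = x[j - 1]
    else:
        if x[N] == x[0] or x[0] == BOT:
            x[0] = 1 - x[N]                  # our reading of the statement at process 0

# Demo: corrupt one process; deterministic sweeps re-establish S_TR.
fault(2)
for _ in range(N + 1):
    for j in range(N + 1):
        ptr_step(j)
assert BOT not in x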
Proof of interference-freedom. Starting from a state where S TR is true, in the presence of faults
that set the x value of a process to ?, string X always satisfies the regular expression
(0 ∪ ?)^l (1 ∪ ?)^(N+1−l) or (1 ∪ ?)^l (0 ∪ ?)^(N+1−l). Thus, an invariant of PTR is SPTR , the predicate that X satisfies one of these regular expressions.
Consider the detector at j: Both its detection and witness predicates are x:(j \Gamma 1) 6= ?. Since the
detects relation is trivially reflexive in its first two arguments, it follows that x:(j−1) ≠ ? detects x:(j−1) ≠ ?
in PTR for SPTR . In other words, the detector is not interfered by any other actions.
Consider the corrector at j: Both its correction and witness predicates are x:j 6= ?. Since the
program actions are identical to the corrector actions, by Theorem 5.1, the corrector actions are not
interfered by the actions of TR. Also, since the detectors have no actions, the detectors at processes
other than j do not interfere with the corrector at j; moreover, since at most one x value is set to
?, when x:j =? and thus the corrector at j is enabled, the witness predicate of the detector at j is
true and hence the corrector at j is not interfered by the detector at j.
Consider the program actions of TR: Their safety follows from the safety of the detectors, described
above. And, their progress follows from the progress of the correctors, which ensure that starting
from a state where SPTR is true and a process state is corrupted every computation of PTR reaches
a state where S TR is true, and the progress of the detectors, which ensures that no action of TR is
indefinitely blocked from executing.
Observe that our proof of mutual interference-freedom illustrates that we do not have to re-prove
the correctness of TR for the new invariant. Observe, also, that if the state of process j is corrupted
then within Θ(1) time the corrector at j corrects the state of j.
7.3 Adding Tolerance to 2::N State Corruptions
The proof of non-interference of program PTR can be generalized to show that PTR is also masking
tolerant to the fault-class that twice corrupts process state.
The generalization is self-evident for the case where the state corruptions are separated in time so
that the first one is corrected before the second one occurs. For the case where both state corruptions
occur concurrently, say at processes j and k, we need to show that the correctors at j and k truthify
x:j ≠ ? and x:k ≠ ? without interference by each other and the other actions of the program. Let
us consider two subcases: (i) j and k are non-neighboring, and (ii) j and k are neighboring.
For the first subcase, j and k correct x:j and x:k from their predecessors j−1 and k−1, respectively.
This execution is equivalent to the parallel composition of the correctors at j and k. By Theorem
4.2, PTR reaches a state where x:j and x:k are not ?.
For the second subcase (letting j be the predecessor of k), j corrects x:j from its predecessor j−1; this
truthifies x:j ≠ ? and then the corrector at j terminates. The corrector at j does not read any variables written by
the corrector at k. Thus, from the analogue of Theorem 5.0 for the case of correctors, the corrector
at j is not interfered by the corrector at k. After x:j ≠ ? is truthified, the corrector at k corrects
x:k from its predecessor j. By Theorem 4.4, the corrector at k is not interfered by the corrector
at j. Since the correctors at j and k do not interfere with each other, it follows that the program
reaches a state where x:j and x:k are not ?.
In fact, as long as the number of faults is at most N , there exists at least one process j with x:j ≠ ?.
PTR ensures that the state of such a j eventually causes j+1 to correct its state to x:(j+1) = x:j.
Such corrections will continue until no process has its x value set to ?. Hence, PTR tolerates up to
N faults and the time required to converge to S TR is Θ(K), where K is the number of faults.
7.4 Adding Tolerance to More Than N State Corruptions
Unfortunately, if more than N faults occur, program PTR deadlocks iff it reaches a state where the
x value of all processes is ?. To be masking tolerant to the fault-classes that corrupt the state of
processes more than N times, a corrector is needed that detects whether the state of all processes is
? and, if so, corrects the program to a state where the x value of some process (say 0) is equal
to 0 or 1.
Since the x values of all processes cannot be accessed simultaneously, the corrector detects in a
sequential manner whether the x values of all processes are ?. Let the detector added for this
purpose at process j be denoted as dj and the (sequentially composed) detector that detects whether
the x values of all processes are corrupted be dN ; d(N−1); ...; d0.
To design dj, we add a value ? to the domain of x:j. When dN detects that x:N is equal to ?, it
sets x:N to ?. Likewise, when dj detects that x:j is equal to ?, it sets x:j to ?. Note that
since dj is part of the sequential composition, it is restricted to execute only after j+1 has completed
its detection, i.e., when x:(j+1) is equal to ?. It follows that when j completes its detection, the x
values of processes j::N are corrupted. In particular, when d0 completes its detection, the x values
of all processes are corrupted. Hence, when x:0 is set to ?, it suffices for the corrector to reset x:0
to 0.
To ensure that while the corrector is executing, no process inadvertently gets the token as a result
of the corruption of x:j, we add detectors that restrict the actions of PTR at j+1 to execute only
in states where x:j 6=? is true.
Actions. Program FTR consists of five actions at each process j. Like PTR, the first two actions,
FTR1 and FTR2, pass the token from j to j+1 and are restricted by the trivial detectors to execute
only when x:(j−1) is neither ? nor ?. Action FTR3 is dN ; it lets process N change x:N from ? to
?. Action FTR4 is dj for j < N . Action FTR5 is the corrector action at process 0: it lets process
0 correct x:0 from ? to 0. Formally, these actions are as follows:
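A self-contained Python sketch of the five actions; BOT and RESET are names we introduce for the two special values, and the exact guards are our reading of the prose description above. The demo starts from the worst case in which every x value is corrupted.

N = 4
BOT, RESET = "bot", "reset"          # the two special values (names are ours)
x = [BOT] * (N + 1)                  # worst case: the state of every process is corrupted

def ftr_step(j):
    pred = j - 1 if j != 0 else N
    # FTR1/FTR2: token passing as in PTR, only if the predecessor is uncorrupted.
    if x[pred] not in (BOT, RESET):
        if j != 0 and x[j - 1] != x[j]:
            x[j] = x[j - 1]
        elif j == 0 and (x[N] == x[0] or x[0] == BOT):
            x[0] = 1 - x[N]
    # FTR3: dN starts the detection wave at process N.
    if j == N and x[N] == BOT:
        x[N] = RESET
    # FTR4: dj extends the wave once its successor has completed its detection.
    if j != N and x[j] == BOT and x[j + 1] == RESET:
        x[j] = RESET
    # FTR5: c0 concludes that all x values were corrupted and resets x:0.
    if j == 0 and x[0] == RESET:
        x[0] = 0

# Sweeps of FTR eventually re-establish S_TR (all values binary again).
for _ in range(3 * (N + 1)):
    for j in range(N + 1):
        ftr_step(j)
assert all(v in (0, 1) for v in x)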
Invariant. Starting from a state where SPTR is true, the detector can change the trailing ? values
in X to ?. Thus, FTR may reach a state where X satisfies the regular expression
(1 ∪ ?)^l (0 ∪ ?)^m ?^(N+1−l−m) ∪ (0 ∪ ?)^l (1 ∪ ?)^m ?^(N+1−l−m). Subsequent state corruptions may perturb X to
the form (1 ∪ ?)^l (0 ∪ ?)^m (? ∪ ?)^(N+1−l−m) ∪ (0 ∪ ?)^l (1 ∪ ?)^m (? ∪ ?)^(N+1−l−m). Since all actions
preserve this last predicate, an invariant of FTR is S FTR , the predicate that X has this last form.
Proof of interference-freedom. To design FTR, we have added a corrector (actions FTR3−FTR5) to
program PTR to ensure that for some j, x:j is not corrupted, i.e., the correction predicate of this
corrector is V , where V holds iff the x value of some process is not corrupted. This corrector is of
the form dN ; d(N−1); ...; d0; c0, where each dj is an atomic detector at process j and c0 is an atomic
corrector at process 0.
The detection predicate of dN is :V and its witness predicate is x:0 =?. To show
that this detector in isolation satisfies its specification, observe that
1. x:N =? detects in dN for SFTR .
2.
for (SFTR
From (1) and (2), by Theorem 3.3, x:(N \Gamma1) =? detects
. Using the same argument, x:0=? detects
in dN
Now, observe that SFTR converges to V in dN ; d(N−1); ...; d0; c0: if V is violated, execution of
dN ; d(N−1); ...; d0 will eventually truthify x:0 = ?, and execution of c0 will truthify V . Thus, V
corrects V in dN ; d(N−1); ...; d0; c0 for SFTR .
The corrector is not interfered by the actions FTR1 and FTR2. This follows from the fact that
FTR1 and FTR2 do not interfere with each dj and c0 (by using Theorem 5.2).
In program FTR, we have also added a detector at process j that detects x:(j−1) ≠ ?. As described
above (for the 1 fault case), this detector does not interfere with other actions, and it is not interfered
by other actions.
Finally, consider actions of program PTR: their safety follows from the safety of the detector
described above. Also, starting from any state in SFTR , the program reaches a state where x value
of some process is not corrupted. Starting from such a state, as in program PTR, eventually the
program reaches a state where S TR is truthified, i.e., no action of PTR is permanently blocked.
Thus, the progress of these actions follows.
Theorem 7.0 Program FTR is masking tolerant for invariant SFTR to the fault-classes FK,
K ≥ 1, where FK detectably corrupts process states at most K times. Moreover, SFTR converges
to S TR in FTR within Θ(K) time.
Remark. We emphasize that the program FTR is masking tolerant to the fault-classes FK for the
invariant SFTR and not for S TR . Thus, in the presence of faults in FK, SFTR continues to be true
although S TR may be violated. Process j, j < N , has a token iff x:j differs from x:(j+1) and neither
x:j nor x:(j+1) is corrupted, and process N has a token iff x:N is the same as x:0 and neither x:N
nor x:0 is corrupted. Thus, in a state where SFTR is true at most one process has a token. Also
starting from such a state eventually the program reaches a state where S TR is true. Starting from
such a state, each process can get the token. Thus, starting from any state in SFTR , computations
of FTR are in the problem specification of the token ring.
8 Discussion
In this section, we address some of the issues that our method for design of multitolerance has raised.
We also discuss the motivation for the design decisions made in this work.
Our formalization of the concept of multitolerance uses the abstractions of closure and convergence.
Can other abstractions be used to formalize multitolerance? What are the advantages of using closure
and convergence?
In principle, one can formulate the concept of multitolerance using abstractions other than closure
and convergence. As pointed out by John Rushby [21], the approaches to formulate fault-tolerance
can be classified into two: specification approaches and calculational approaches.
In specification approaches, a system is regarded as a composition of several subsystems, each with a
standard specification and one or more failure specifications. A system is fault-tolerant if it satisfies
its standard specification when all components do, and one of its failure specifications if some of its
components depart from their standard specification. One example of this approach is due to Herlihy
and Wing [22] who thus formulate graceful degradation, which is a special case of multitolerance.
In calculational approaches, the set of computations permissible in the presence of faults is calculated.
A system is said to be fault-tolerant if this set satisfies the specification of the system (or an
acceptably degraded version of it). Our approach is calculational since we compute the set of states
that are potentially reachable in the presence of faults (fault-span).
While other approaches may be used to formulate the design of multitolerance, we are not aware
of any formal methods for design of multitolerance using them. moreover, in our experience, the
structure imposed by abstractions of closure and convergence has proven to be beneficial in several
ways: (1) it has enabled us to discover the role of detectors and correctors in the design of all tolerance
properties (cf. Sections 3 and 4); (2) it has yielded simple theorems for composing tolerance actions
and underlying actions in an interference-free manner (cf. Sections 5 and 6); (3) it has facilitated
our design of novel and complex distributed programs whose tolerances exceed those of comparable
programs designed otherwise [5, 10, 20, 23, 24, 25].
We have represented faults as state perturbations. This representation readily handles transient
faults, but does it also handle permanent faults? intermittent faults? detectable faults? undetectable faults?
All these faults can indeed be represented as state perturbations. The token ring case study illustrates
the use of state perturbations for various classes of transient faults. In an extended version
of this paper [24], we present a case study of tree-based mutual exclusion which illustrates the
analogous representation for permanent faults and for detectable or undetectable faults.
It is worth pointing out that representing permanent and intermittent faults, such as Byzantine
faults and fail-stop and repair faults, may require the introduction of auxiliary variables [5, 10].
For example, to represent Byzantine faults that affect a process j, we may introduce an auxiliary
boolean variable byz:j that is true iff j is Byzantine. If j is not Byzantine, it executes its "normal"
actions. Otherwise, it executes some "abnormal" actions. When the Byzantine fault occurs, byz:j is
truthified, thus permitting j to execute its abnormal actions. Similarly, to represent fail-stop and
repair faults that affect a process j, we may introduce an auxiliary boolean variable down:j that is
true iff j has fail-stopped. All actions of j are restricted to be executed only when down:j is false.
When a fail-stop fault occurs, down:j is truthified, thus preventing j from executing its actions.
When a repair occurs, down:j is falsified.
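For instance, such fault-classes can themselves be written as guarded actions over the auxiliary variables; the following is a hypothetical Python rendering, not the paper's program text.

# Hypothetical fault actions using the auxiliary variables byz.j and down.j.
def byzantine_fault(s, j):
    s[f"byz.{j}"] = True            # j may henceforth execute its "abnormal" actions

def fail_stop(s, j):
    s[f"down.{j}"] = True           # all normal actions of j are now disabled

def repair(s, j):
    s[f"down.{j}"] = False

def normal_action_enabled(s, j):
    # Every normal action of process j carries this extra conjunct in its guard.
    return not s.get(f"down.{j}", False)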
We have assumed that problem specifications are suffix closed and fusion closed. Where are these
assumptions exploited in the design method? Do these assumptions restrict the applicability of the
method?
We have used these assumptions in three places: (1) Suffix closure of problem specifications implies
the existence of invariant state predicates. (2) Fusion closure of problem specifications implies the
existence of correction state predicates. (3) Suffix closure and fusion closure of problem specifications
imply that the corresponding safety specifications are fusion closed, which, in turn, implies the
existence of detection state predicates.
These assumptions are not restrictive in the following sense: Let L be a set of state sequences that
is not suffix closed and/or not fusion closed and let p be a program. Then, it can be shown that
by adding history variables to the variables of p, there exists a problem specification L 0 such that
the following condition holds: all computations of p that start at states where some "initial" state
predicate is true are in L iff p satisfies L 0 for some state predicate. Thus, the language of problem
specifications is not restrictive.
How would our method of considering the fault-classes one-at-a-time compare with a method that
considers them altogether?
There is a sense in which the one-at-a-time and the altogether methods are equivalent: programs
designed by the one method can also be designed by the other method. To justify this informally,
let us consider a program p designed by using the altogether method to tolerate fault-classes F 1,
F 2, . , Fn. Program p can also be designed using the one-at-a-time method as follows: Let p1
be a subprogram of p that tolerates F 1. This is the program designed in the first stage of the
one-at-a-time method. Likewise, let p2 be a subprogram of p that tolerates F1 and F 2. This is the
program designed in the second stage of the one-at-a-time method. And so on, until p is designed.
To complete the argument of equivalence, it remains to observe that a program designed by the
one-at-a-time n-stage method can trivially be designed by the altogether method.
In terms of software engineering practice, however, the two methods would exhibit differences.
Towards identifying these differences, we address three issues: (i) the structure of the programs
designed using the two methods, (ii) the complexity of using them, and (iii) the complexity of the
programs designed using them.
On the first issue, the stepwise method may yield programs that are better structured. This is
exemplified by our hierarchical token ring program which consists of three layers: the basic program
that transmits the token, a corrector for the case when at least one process is not corrupted, and a
corrector for the case when all processes are corrupted.
On the second issue, since we consider one fault-class at a time, the complexity of each step is less
than the complexity of the altogether program. For example, in the token ring program, we first
handled the case where the state of some process is not corrupted. Then, we handled the only case
where the state of all processes is corrupted. Thus, each step was simpler than the case where we
would need to consider both these cases simultaneously.
On the third issue, it is possible that considering all fault-classes at a time may yield a program
whose complexity is (in some sense) optimal with respect to each fault-class, whereas the one-at-
a-time approach may yield a program that is optimal for some, but not all, fault-classes. This
suggests two considerations for the use of our method. One, the order in which the fault-classes
are considered should be chosen with care. (Again, in principle, programs designed with one order
can be designed by any other order. But, in practice, different orders may yield different programs,
and the complexity of these programs may be different.) And, two, in choosing how to design the
tolerance for a particular fault-class, a "lookahead" may be warranted into the impact of this design
choice on the design of the tolerances to the remaining fault-classes.
How does our compositional method affect the trade-offs between dependability properties?
Our method makes it possible to reason about the trade-offs locally, i.e., focusing attention only
on the components corresponding to those dependability properties, as opposed to globally, i.e., by
considering the entire program. Thus, our method facilitates reasoning about trade-offs between
dependability properties.
Moreover, as can be expected, if the desired dependability properties are impossible to co-satisfy,
it will follow that there do not exist components that can be added to the program while complying
with the interference-freedom requirements of our method.
How does our compositional design method compare with the existing methods for designing fault-tolerant
programs?
Our compositional design method is rich in the sense that it subsumes various existing fault-tolerance
design methods such as replication, checkpointing and recovery, Schneider's state machine approach,
exception handling, and Randell's recovery blocks. (The interested reader is referred to [20, 24] for
a detailed discussion of how properties such as replication, agreement, and order are designed by
interference-free composition within our method.)
How are fault-classes derived? Can our method be used if it is difficult to characterize the faults the
system is subject to?
Derivation of fault-classes is application specific. It begins with the identification of the faults that
the program may be subject to. Each of these faults is then formally characterized using state
perturbations. (As mentioned above, auxiliary variables may be introduced in this formalization.)
The desired type of tolerance for each fault is then specified. Finally, the faults are grouped into
(possibly overlapping) fault-classes, based on the characteristics of the faults or their corresponding
types of tolerance.
If it is difficult to characterize the faults in an application, a user of our method is obliged to guess
some large enough fault-class that would accommodate all possible faults. It is often for this reason
that designers choose weak models such as self-stabilization (where the state may be perturbed
arbitrarily) or Byzantine failure (where the program may behave arbitrarily).
9 Concluding Remarks and Future Work
In this paper, we formalized the notion of multitolerance to abstract a variety of problems in de-
pendability. It is worthwhile to point out that multitolerance has other related applications as well.
One is to reason about graceful degradation with respect to progressively increasing fault-classes.
Another is to guarantee different qualities of service (QoS) with respect to different user requirements
and traffic classes. A third one is to reason about adaptivity of systems with respect to different
modes of environment behavior.
We also presented a simple, compositional method for designing multitolerant programs, that added
detector and corrector components for providing each desired type of tolerance. The addition of
multiple components to an intolerant program was made tractable by adding tolerances to fault-
classes one at a time. To avoid re-proving the correctness of the program in every step, we provided
a theory for ensuring mutual interference-freedom in compositions of detectors and correctors with
the intolerant program.
To our knowledge, this is the first formal method for the design of multitolerant programs. Our
method is effective for the design of quantitative as well as qualitative tolerances. As an example
of quantitative tolerance, we presented a token ring protocol that recovers from up to K faults in
Θ(K) time. For examples of qualitative tolerances, we refer the interested reader to our designs of
multitolerant programs for barrier computations, repetitive Byzantine agreement, mutual exclusion,
tree maintenance, leader election, bounded-space distributed reset, and termination detection [23,
To apply our design method in practice, we are currently developing SIEFAST, a simulation and
implementation environment that enables stepwise implementation and validation of multitolerant
distributed programs. We are also studying the mechanical synthesis of multitolerant concurrent
programs.
Acknowledgments
. We are indebted to the anonymous referees for their detailed and constructive
comments on earlier versions of this paper, which significantly improved the presentation. Thanks
also to Laurie Dillon for all her help during the review process.
--R
Reliable Computer Systems: Design and Evaluation.
The AT&T Case
The Galileo Case
A foundation of fault-tolerant computing
Superstabilizing protocols for dynamic distributed systems.
Maximal flow routing.
A highly safe self-stabilizing mutual exclusion algorithm
Defining liveness.
Closure and convergence: A foundation of fault-tolerant computing
A Discipline of Programming.
The Science of Programming.
Proving boolean combinations of deterministic properties.
Parallel Program Design: A Foundation.
The existence of refinement mappings.
A proof technique for communicating sequential processes.
Stepwise refinement of parallel programs.
Proofs of networks of processes.
An axiomatic proof technique for parallel programs.
Designing masking fault-tolerance via nonmasking fault-tolerance
Critical system properties: Survey and taxonomy.
Specifying graceful degradation.
Multitolerance in distributed reset.
Multitolerance and its design.
Constraint satisfaction as a basis for designing nonmasking fault-tolerance
Multitolerant barrier synchronization.
Compositional design of multitolerant repetitive byzantine agreement.
--TR
--CTR
Anil Hanumantharaya , Purnendu Sinha , Anjali Agarwal, A component-based design and compositional verification of a fault-tolerant multimedia communication protocol, Real-Time Imaging, v.9 n.6, p.401-422, December
Orna Raz , Mary Shaw, An Approach to Preserving Sufficient Correctness in Open Resource Coalitions, Proceedings of the 10th International Workshop on Software Specification and Design, p.159, November 05-07, 2000
Anish Arora , Sandeep Kulkarni , Murat Demirbas, Resettable vector clocks, Proceedings of the nineteenth annual ACM symposium on Principles of distributed computing, p.269-278, July 16-19, 2000, Portland, Oregon, United States
Anish Arora , Sandeep S. Kulkarni , Murat Demirbas, Resettable vector clocks, Journal of Parallel and Distributed Computing, v.66 n.2, p.221-237, February 2006
Paul C. Attie , Anish Arora , E. Allen Emerson, Synthesis of fault-tolerant concurrent programs, ACM Transactions on Programming Languages and Systems (TOPLAS), v.26 n.1, p.125-185, January 2004
Robyn R. Lutz, Software engineering for safety: a roadmap, Proceedings of the Conference on The Future of Software Engineering, p.213-226, June 04-11, 2000, Limerick, Ireland
Anish Arora , Marvin Theimer, On modeling and tolerating incorrect software, Journal of High Speed Networks, v.14 n.2, p.109-134, April 2005
I-Ling Yen , Farokh B. Bastani , David J. Taylor, Design of Multi-Invariant Data Structures for Robust Shared Accesses in Multiprocessor Systems, IEEE Transactions on Software Engineering, v.27 n.3, p.193-207, March 2001
Anish Arora , Paul C. Attie , E. Allen Emerson, Synthesis of fault-tolerant concurrent programs, Proceedings of the seventeenth annual ACM symposium on Principles of distributed computing, p.173-182, June 28-July 02, 1998, Puerto Vallarta, Mexico
Axel van Lamsweerde , Emmanuel Letier, Handling Obstacles in Goal-Oriented Requirements Engineering, IEEE Transactions on Software Engineering, v.26 n.10, p.978-1005, October 2000
Vina Ermagan , Jun-ichi Mizutani , Kentaro Oguchi , David Weir, Towards Model-Based Failure-Management for Automotive Software, Proceedings of the 4th International Workshop on Software Engineering for Automotive Systems, p.8, May 20-26, 2007
Felix C. Gärtner, Fundamentals of fault-tolerant distributed computing in asynchronous environments, ACM Computing Surveys (CSUR), v.31 n.1, p.1-26, March 1999
Anish Arora , Sandeep S. Kulkarni, Designing Masking Fault-Tolerance via Nonmasking Fault-Tolerance, IEEE Transactions on Software Engineering, v.24 n.6, p.435-450, June 1998 | formal methods;stepwise design;correctors;graceful degradation;dependability;compositional design;fault-tolerance;detectors;interference-freedom |
275347 | Generalized Queries on Probabilistic Context-Free Grammars. | AbstractProbabilistic context-free grammars (PCFGs) provide a simple way to represent a particular class of distributions over sentences in a context-free language. Efficient parsing algorithms for answering particular queries about a PCFG (i.e., calculating the probability of a given sentence, or finding the most likely parse) have been developed and applied to a variety of pattern-recognition problems. We extend the class of queries that can be answered in several ways: (1) allowing missing tokens in a sentence or sentence fragment, (2) supporting queries about intermediate structure, such as the presence of particular nonterminals, and (3) flexible conditioning on a variety of types of evidence. Our method works by constructing a Bayesian network to represent the distribution of parse trees induced by a given PCFG. The network structure mirrors that of the chart in a standard parser, and is generated using a similar dynamic-programming approach. We present an algorithm for constructing Bayesian networks from PCFGs, and show how queries or patterns of queries on the network correspond to interesting queries on PCFGs. The network formalism also supports extensions to encode various context sensitivities within the probabilistic dependency structure. | Introduction
Many pattern-recognition problems start from observations generated by some structured
stochastic process. Probabilistic context-free grammars (PCFGs) [1], [2]
have provided a useful method for modeling uncertainty in a wide range of structures,
including natural languages [2], programming languages [3], images [4], speech signals [5],
and RNA sequences [6]. Domains like plan recognition, where non-probabilistic grammars
have provided useful models [7], may also benefit from an explicit stochastic model.
Once we have created a PCFG model of a process, we can apply existing PCFG
parsing algorithms to answer a variety of queries. For instance, standard techniques can
efficiently compute the probability of a particular observation sequence or find the most
probable parse tree for that sequence. Section II provides a brief description of PCFGs
and their associated algorithms.
However, these techniques are limited in the types of evidence they can exploit and the
types of queries they can answer. In particular, the existing PCFG techniques generally
require specification of a complete observation sequence. In many contexts, we may
have only a partial sequence available. It is also possible that we may have evidence
beyond simple observations. For example, in natural language processing, we may be
able to exploit contextual information about a sentence in determining our beliefs about
certain unobservable variables in its parse tree. In addition, we may be interested in
computing the probabilities of alternate types of events (e.g., future observations or
abstract features of the parse) that the extant techniques do not directly support.
The restricted query classes addressed by the existing algorithms limit the applicability
of the PCFG model in domains where we may require the answers to more complex
queries. A flexible and expressive representation for the distribution of structures
generated by the grammar would support broader forms of evidence and queries than
supported by the more specialized algorithms that currently exist. We adopt Bayesian
networks for this purpose, and define an algorithm to generate a network representing
the distribution of possible parse trees (up to a specified string length) generated from a
[Figure: surviving production fragments include noun ! ants (0.5), noun ! flies (0.45), verb ! flies (0.4), verb ! like (0.4), and vp ! verb pp (0.2).]
Fig. 1. A probabilistic context-free grammar (from Charniak [2]).
given PCFG. Section III describes this algorithm, as well as our algorithms for extending
the class of queries to include the conditional probability of a symbol appearing anywhere
within any region of the parse tree, conditioned on any evidence about symbols
appearing in the parse tree.
The restrictive independence assumptions of the PCFG model also limit its applica-
bility, especially in domains like plan recognition and natural language with complex
dependency structures. The flexible framework of our Bayesian-network representation
supports further extensions to context-sensitive probabilities, as in the probabilistic
parse tables of Briscoe & Carroll [8]. Section IV explores several possible ways to relax
the independence assumptions of the PCFG model within our approach. Modified versions
of our PCFG algorithms can support the same class of queries supported in the
context-free case.
II. Probabilistic Context-Free Grammars
A probabilistic context-free grammar is a tuple (Σ, N, S, P), where the disjoint sets Σ
and N specify the terminal and nonterminal symbols, respectively, with S ∈ N being the
start symbol. P is the set of productions, which take the form E → π (p), where E ∈ N,
π is a nonempty string over Σ ∪ N, and p is the probability that E will be expanded into the
string π. The sum of probabilities p over all expansions of a given nonterminal E must
be one. The examples in this paper will use the sample grammar (from Charniak [2])
shown in Fig. 1.
This definition of the PCFG model prohibits rules of the form E → ε, where ε denotes
the empty string. However, we can rewrite any PCFG to eliminate such rules and
still represent the original distribution [2], as long as we note the probability Pr(S → ε).
For clarity, the algorithm descriptions in this paper assume Pr(S → ε) = 0; a
negligible amount of additional bookkeeping can correct for any nonzero probability.
The probability of applying a particular production to an intermediate string is
conditionally independent of what productions generated this string, or what productions
will be applied to the other symbols in the string, given the presence of E. Therefore,
the probability of a given derivation is simply the product of the probabilities of the
individual productions involved. We define the parse tree representation of each such
derivation as for non-probabilistic context-free grammars [9]. The probability of a string
in the language is the sum taken over all its possible derivations.
[Chart entries include vp!verb np pp: 0.0014, vp!verb np: 0.00216, np!noun pp: 0.036, np!noun, verb!flies: 0.4, prep!like: 1.0, noun!ants: 0.5, noun!swat, noun!flies: 0.45, and verb!like: 0.4.]
Fig. 2. Chart for Swat flies like ants.
A. Standard PCFG Algorithms
Since the number of possible derivations grows exponentially with the string's length,
direct enumeration would not be computationally viable. Instead, the standard dynamic
programming approach used for both probabilistic and non-probabilistic CFGs [10] exploits
the common production sequences shared across derivations. The central structure
is a table, or chart, storing previous results for each subsequence in the input sentence.
Each entry (i, j) in the chart corresponds to a subsequence x_i, ..., x_{i+j-1} of the observation
string x_1, ..., x_L. For each symbol E, an entry contains the probability that the corresponding
subsequence is derived from that symbol, Pr(x_i, ..., x_{i+j-1} | E). The index i
refers to the position of the subsequence within the entire terminal string, with i = 1
indicating the start of the sequence. The index j refers to the length of the subsequence.
The bottom row of the table holds the results for subsequences of length one, and the
top entry (1, L) holds the overall result, which is the probability of the observed
string. We can compute these probabilities bottom-up, since we know that Pr(x_i | E) = 1
if E is the observed symbol x_i. We can define all other probabilities recursively as the
sum, over all productions expanding E, of the product of the production probability p and
the probabilities of the subsequences derived by the right-hand-side symbols. Altering this
procedure to take the maximum rather than the sum yields the most probable parse
tree for the observed string. Both algorithms require time O(L^3) for a string of length
L, ignoring the dependency on the size of the grammar.
To compute the probability of the sentence Swat flies like ants, we would use the algorithm
to generate the table shown in Fig. 2, after eliminating any unused intermediate
entries. There are also separate entries for each production, though this is not necessary
if we are interested only in the final sentence probability. In the top entry, there are
two listings for the production S !np vp, with different subsequence lengths for the
right-hand side symbols. The sum of all probabilities for productions with S on the
left-hand side in this entry yields the total sentence probability of 0.001011.
This algorithm is capable of computing any inside probability, the probability of a
particular string appearing inside the subtree rooted by a particular symbol. We can
work top-down in an analogous manner to compute any outside probability [2], the probability
of a subtree rooted by a particular symbol appearing amid a particular string.
Given these probabilities we can compute the probability of any particular nonterminal
appearing in the parse tree as the root of a subtree covering some subsequence.
For example, in the sentence Swat flies like ants, we can compute the probability that like
ants is a prepositional phrase, using a combination of inside and outside probabilities.
[Tree node labels: swat (1,1,1), verb (1,1,2), flies (2,1,1), noun (2,1,2), like (3,1,1), prep (3,1,2), ants (4,1,1), noun (4,1,2).]
Fig. 3. Parse tree for Swat flies like ants, with (i, j, k) indices labeled.
The Left-to-Right Inside (LRI) algorithm [10] specifies how we can use inside probabilities
to obtain the probability of a given initial subsequence, such as the probability of a
sentence (of any length) beginning with the words Swat flies. Furthermore, we can use
such initial subsequence probabilities to compute the conditional probability of the next
terminal symbol given a prefix string.
B. Indexing Parse Trees
Yet other conceivable queries are not covered by existing algorithms, or answerable via
straightforward manipulations of inside and outside probabilities. For example, given
observations of arbitrary partial strings, it is unclear how to exploit the standard chart
directly. Similarly, we are unaware of methods to handle observation of nonterminals
only (e.g., that the last two words form a prepositional phrase). We seek, therefore,
a mechanism that would admit observational evidence of any form as part of a query
about a PCFG, without requiring us to enumerate all consistent parse trees.
We first require a scheme to specify such events as the appearance of symbols at
designated points in the parse tree. We can use the indices i and j to delimit the leaf
nodes of the subtree, as in the standard chart parsing algorithms. For example, the pp
node in the parse tree of Fig. 3 is the root of the subtree whose leaf nodes are like and
ants, so i = 3 and j = 2.
However, we cannot always uniquely specify a node with these two indices alone. In
the branch of the parse tree passing through np, noun, and flies, all three nodes have
i = 2 and j = 1. To differentiate them, we introduce the k index, defined recursively. If a
node has no child with the same i and j indices, then it has k = 1; otherwise, its k
index is one more than the k index of its child. Thus, the flies node has k = 1, the noun
node above it has k = 2, and its parent np has k = 3. We have labeled each node in the
parse tree of Fig. 3 with its (i, j, k) indices.
We can think of the k index of a node as its level of abstraction, with higher values
indicating more abstract symbols. For instance, the flies symbol is a specialization of
the noun concept, which, in turn, is a specialization of the np concept. Each possible
specialization corresponds to an abstraction production of the form
only one symbol on the right-hand side. In a parse tree involving such a production, the
nodes for E and E 0 have identical i and j values, but the k value for E is one more than
that of E 0 . We denote the set of abstraction productions as PA ' P .
All other productions are decomposition productions, in the set
two or more symbols on the right-hand side. If a node E is expanded by a decomposition
production, the sum of the j values for its children will equal its own j value, since the
length of the original subsequence derived from E must equal the total lengths of the
subsequences of its children. In addition, since each child must derive a string of nonzero
length, no child has the same j index as E, which must then have Therefore,
abstraction productions connect nodes whose indices match in the i and j components,
while decomposition productions connect nodes whose indices differ.
III. Bayesian Networks for PCFGs
A Bayesian network [11], [12], [13] is a directed acyclic graph where nodes represent
random variables, and associated with each node is a specification of the distribution
of its variable conditioned on its predecessors in the graph. Such a network defines a
joint probability distribution-the probability of an assignment to the random variables
is given by the product of the probabilities of each node conditioned on the values of
its predecessors according to the assignment. Edges not included in the graph indicate
conditional independence; specifically, each node is conditionally independent of its
nondescendants given its immediate predecessors. Algorithms for inference in Bayesian
networks exploit this independence to simplify the calculation of arbitrary conditional
probability expressions involving the random variables.
By expressing a PCFG in terms of suitable random variables structured as a Bayesian
network, we could in principle support a broader class of inferences than the standard
PCFG algorithms. As we demonstrate below, by expressing the distribution of parse
trees for a given probabilistic grammar, we can incorporate partial observations of a
sentence as well as other forms of evidence, and determine the resulting probabilities of
various features of the parse trees.
A. PCFG Random Variables
We base our Bayesian-network encoding of PCFGs on the scheme for indexing parse
trees presented in Section II-B. The random variable N_ijk denotes the symbol in the parse
tree at the position indicated by the (i, j, k) indices. Looking back at the example parse
tree of Fig. 3, a symbol E labeled (i, j, k) indicates that N_ijk = E; index combinations
not appearing in the tree correspond to N variables taking on the null value nil.
Assignments to the variables N ijk are sufficient to describe a parse tree. However, if
we construct a Bayesian network using only these variables, the dependency structure
would be quite complicated. For example, in the example PCFG, the fact that N_213
has the value np would influence whether N_321 takes on the value pp, even given that
N_141 (their parent in the parse tree) is vp. Thus, we would need an additional link
between N 213 and N 321 , and, in fact, between all possible sibling nodes whose parents
have multiple expansions.
To simplify the dependency structure, we introduce random variables P ijk to represent
the productions that expand the corresponding symbols N ijk . For instance, we add the
node P 141 , which would take on the value vp!verb np pp in the example. N 213 and N 321
are conditionally independent given P 141 , so no link between siblings is necessary in this
case.
However, even if we know the production P ijk , the corresponding children in the
parse tree may not be conditionally independent. For instance, in the chart of Fig. 2,
entry (1,4) has two separate probability values for the production S !np vp, each
corresponding to different subsequence lengths for the symbols on the right-hand side.
Given only the production used, there are again multiple possibilities for the connected
N variables: N_113, N_231, N_121, and N_321, one np/vp pair for each breakdown. These four
sibling nodes are conditionally dependent, since knowing any one determines the values
of the other three. Therefore, we dictate that each variable P_ijk take on different values
for each breakdown of the right-hand symbols' subsequence lengths.
The domain of each P ijk variable therefore consists of productions, augmented with
the j and k indices of each of the symbols on the right-hand side. In the previous
example, the domain of P_141 would require two possible values, S → np[1,3] vp[3,1] and
S → np[2,1] vp[2,1], where the numbers in brackets correspond to the j and k values,
respectively, of the associated symbol. If we know that P_141 is the former, then N_113 = np
and N_231 = vp with probability one. This deterministic relationship renders the child
N variables conditionally independent of each other given P_ijk.
nature of this relationship in Section III-C.2.
Having identified the random variables and their domains, we complete the definition
of the Bayesian network by specifying the conditional probability tables representing
their interdependencies. The tables for the N variables represent their deterministic
relationship with the parent P variables. However, we also need the conditional probability
of each P variable given the value of the corresponding N variable, that is,
Pr(P_ijk | N_ijk = E). The PCFG specifies the relative
probabilities of different productions for each nonterminal, but we must also compute the
probability, fi(E, j, k) (analogous to the inside probability [2]), that each symbol E_t on
the right-hand side is the root node of a subtree, at abstraction level k_t, with a terminal
subsequence length j_t.
B. Calculating fi
B.1 Algorithm
We can calculate the values for fi with a modified version of the dynamic programming
algorithm sketched in Section II-A. As in the standard chart-based PCFG algorithms,
we can define this function recursively and use dynamic programming to compute its
values. Since terminal symbols always appear as leaves of the parse tree, we have, for
any terminal symbol x ∈ Σ, fi(x, 1, 1) = 1, and fi(x, j, k) = 0 for all other values of j and k.
For any nonterminal symbol E, fi(E, 1, 1) = 0, since nonterminals can never be leaf
nodes. For all other values of j and k, fi(E, j, k) is the sum, over all productions expanding E, of
the probability of that production expanding E and producing a subtree constrained by
the parameters j and k.
For k > 1, only abstraction productions are possible. For an abstraction production
E → E' (p), we need the probabilities that E is expanded into E' and that E' derives a
string of length j from the abstraction level immediately below E. The former is given by
the probability p associated with the production, while the latter is simply fi(E', j, k − 1).
According to the independence assumptions of the PCFG model, the expansion of E'
is independent of its derivation, so the joint probability is simply the product. We can
compute these probabilities for every abstraction production expanding E. Since the
different expansions are mutually exclusive events, the value for fi(E, j, k) is merely the
sum of all the separate probabilities.
We assume that there are no abstraction cycles in the grammar. That is, there is no
sequence of abstraction productions E_1 → E_2, E_2 → E_3, ..., E_m → E_1, since if such a cycle existed,
the above recursive calculation would never halt. The same assumption is necessary for
termination of the standard parsing algorithm. The assumption does restrict the classes
of grammars for which such algorithms are applicable, but it will not be restrictive in
domains where we interpret productions as specializations, since cycles would render an
abstraction hierarchy impossible.
For k = 1, only decomposition productions are possible. For a decomposition production
E → E_1 E_2 ... E_m (p), we need the probability that E is thus expanded and
that each E_t derives a subsequence of appropriate length. Again, the former is given
by p, and the latter can be computed from values of the fi function. We must consider
every possible subsequence length j_t for each E_t such that j_1 + j_2 + ... + j_m = j.
In addition, each E_t
could appear at any level of abstraction k_t, so we must consider all possible values
for a given subsequence length. We can obtain the joint probability of any combination
of {(j_t, k_t)} values by computing the product of the fi(E_t, j_t, k_t) values,
since the derivation from each E_t
is independent of the others. The sum of these joint probabilities over all possible
combinations yields the probability of the expansion specified by the production's right-hand
side. The product of the resulting probability and p yields the probability of that
particular expansion, since the two events are independent. Again, we can sum over all
relevant decomposition productions to find the value of fi(E, j, 1).
The algorithm in Fig. 4 takes advantage of the division between abstraction and decomposition
productions to compute the values fi(E; j; strings bounded by length.
The array kmax keeps track of the depth of the abstraction hierarchy for each subsequence
length.
B.2 Example Calculations
To illustrate the computation of fi values, consider the result of using Charniak's
grammar from Fig. 1 as its input. We initialize the entries for j = 1, k = 1 to
have probability one for each terminal symbol, as in Fig. 5. To fill in the entries for
j = 1, k = 2, we look at all of the abstraction productions. The symbols noun, verb,
and prep can all be expanded into one or more terminal symbols, which have nonzero fi
values at k = 1. We enter these three nonterminals at k = 2, with values equal to the
sum, over all relevant abstraction productions, of the product of the probability of the
given production and the value for the right-hand symbol at k = 1. For instance, we
compute the value for noun by adding the product of the probability of noun!swat and
the value for swat, that of noun!flies and flies, and that of noun!ants and ants. This
yields the value one, since a noun will always derive a string of length one, at a single
level of abstraction above the terminal string, given this grammar. The abstraction phase
continues until we find S at k = 4, for which there are no further abstractions, so we go
Compute-Beta(grammar, length)
  for each symbol x ∈ Terminals(grammar)
    fi[x, 1, 1] ← 1
  for each symbol E ∈ Nonterminals(grammar)
    fi[E, 1, 1] ← 0
  for j ← 1 to length
    kmax[j] ← 1
    if j > 1
      then /* Decomposition phase */
           for each production E → E_1 ... E_m (p) ∈ P_D
             for each sequence {j_t} such that j_1 + ... + j_m = j
               for each sequence {k_t} such that 1 ≤ k_t ≤ kmax[j_t]
                 result ← p
                 for t ← 1 to m
                   result ← result · fi[E_t, j_t, k_t]
                 fi[E, j, 1] ← fi[E, j, 1] + result
    /* Abstraction phase */
    while fi[E', j, kmax[j]] > 0 for some abstraction production E → E' (p) ∈ P_A
      for each production E → E' (p) ∈ P_A
        if fi[E', j, kmax[j]] > 0
          then fi[E, j, kmax[j] + 1] ← fi[E, j, kmax[j] + 1] + p · fi[E', j, kmax[j]]
      kmax[j] ← kmax[j] + 1
  return fi, kmax
Fig. 4. Algorithm for computing fi values.
on to begin the decomposition phase.
To illustrate the decomposition phase, consider the value for fi(S; 3; 1). There is
only one possible decomposition production, s!np vp. However, we must consider two
separate cases: when the noun phrase covers two symbols and the verb phrase one,
and when the noun phrase covers one and the verb phrase two. At a subsequence
length of two, both np and vp have nonzero probability only at the bottom level of
abstraction, while at a length of one, only at the third. So to compute the probability of
the first subsequence length combination, we multiply the probability of the production
by fi(np; 2; 1) and fi(vp; 1; 3). The probability of the second combination is a similar
product, and the sum of the two values provides the value to enter for S.
The other abstractions and decompositions proceed along similar lines, with additional
summation required when multiple productions or multiple levels of abstraction are
possible. The final table is shown in Fig. 5, which lists only the nonzero values.
np 0.0672  np 0.176  np 0.08  vp 0.3
vp 0.1008  vp 0.104  vp 0.12  prep 1.0
pp 0.176  pp 0.08  pp 0.4  verb 1.0
noun 1.0
like 1.0
swat 1.0
flies 1.0
ants 1.0
Fig. 5. Final table for sample grammar.
B.3 Complexity
For analysis of the complexity of computing the fi values for a given PCFG, it is useful
to define d to be the maximum length of possible chains of abstraction productions (i.e.,
the maximum k value), and m to be the maximum production length (number of symbols
on the right-hand side). A single run through the abstraction phase requires time
O(|P_A|), and for each subsequence length, there are O(d) runs. For a specific value of
j, the decomposition phase requires time O(|P_D| j^(m−1) d^m), since for each decomposition
production, we must consider all possible combinations of subsequence lengths and levels
of abstractions for each symbol on the right-hand side. Therefore, the whole algorithm
would take time O(n[d |P_A| + |P_D| n^(m−1) d^m]).
C. Network Generation Phase
We can use the fi function calculated as described above to compute the domains of
random variables N ijk and P ijk and the required conditional probabilities.
C.1 Specification of Random Variables
The procedure Create-Network, described in Fig. 6, begins at the top of the
abstraction hierarchy for strings of length n starting at position 1. The root symbol
variable, N 1n(kmax[n]) , can be either the start symbol, indicating the parse tree begins
here, or nil , indicating that the parse tree begins below. We must allow the parse tree
to start at any j and k where fi(S, j, k) > 0, because these can all possibly derive strings
(of any length bounded by n) within the language.
Create-Network then proceeds downward through the N ijk random variables and
specifies the domain of their corresponding production variables, P ijk . Each such production
variable takes on values from the set of possible expansions for the possible
nonterminal symbols in the domain of N ijk . If k ? 1, only abstraction productions are
possible, so the procedure Abstraction-Phase, described in Fig. 7, inserts all possible
expansions and draws links from P ijk to the random variable N ij(k\Gamma1) , which takes on the
value of the right-hand side symbol. If the procedure Decomposition-Phase,
described in Fig. 8, performs the analogous task for decomposition productions, except
that it must also consider all possible length breakdowns and abstraction levels for the
symbols on the right-hand side.
if fi[S; length;
then Insert-State(N 1(length)kmax[length] ,S)
if Start-Prob(fi,kmax,length,kmax[length]\Gamma1)? 0:0
then Insert-State(N 1(length)kmax[length] ,nil )
for k / kmax[j] down-to 1
for
for each symbol E 2Domain(N ijk )
then
then
else
else
Fig. 6. Procedure for generating the network.
for each production
Insert-State(N
Fig. 7. Procedure for finding all possible abstraction productions.
for each production
for each sequence fj t g m
t=1 such that
for each sequence fk t g m
t=1 such that 1 - k t -kmax[j t ]
then Insert-State(P ijk
for t / 1 to m
Fig. 8. Procedure for finding all possible decomposition productions.
then if fi[S;
then Insert-State(P ijk ,nil ! S[j;
Insert-State(N ij(k\Gamma1) ,S)
else if
then Insert-State(P ijk ,nil
Insert-State(N i(j \Gamma1)kmax[j \Gamma1] ,S)
if Start-Prob(fi,kmax,j,k)? 0:0
then Insert-State(P ijk ,nil !nil )
then Add-Parent(N ij(k\Gamma1) ,P ijk )
else Add-Parent(N i(j \Gamma1)kmax[j \Gamma1] ,P ijk )
Insert-State(N i(j \Gamma1)kmax[j \Gamma1] ,nil )
Fig. 9. Procedure for handling start of parse tree at next level.
if j=0
then return 0.0
else if k=0
then return
else return fi[S;
Fig. 10. Procedure for computing the probability of the start of the tree occurring for a particular
string length and abstraction level.
Create-Network calls the procedure Start-Tree, described in Fig. 9, to handle
the possible expansions of nil : either nil ! S, indicating that the tree starts immediately
below, or nil ! nil , indicating that the tree starts further below. Start-Tree
uses the procedure Start-Prob, described in Fig. 10, to determine the probability of
the parse tree starting anywhere below the current point of expansion.
When we insert a possible value into the domain of a production node, we add it as a
parent of each of the nodes corresponding to a symbol on the right-hand side. We also
insert each symbol from the right-hand side into the domain of the corresponding symbol
variable. The algorithm descriptions assume the existence of procedures Insert-State
and Add-Parent. The procedure Insert-State(node,label) inserts a new state with
name label into the domain of variable node. The procedure Add-Parent(child,parent)
draws a link from node parent to node child.
C.2 Specification of Conditional Probability Tables
After Create-Network has specified the domains of all of the random variables,
we can specify the conditional probability tables. We introduce the lexicographic order
≺ over the set {(j, k) | 1 ≤ j ≤ n, 1 ≤ k ≤ kmax[j]}, where (j, k) ≺ (j', k') if and only if
j < j', or j = j' and k < k'. For the purposes of simplicity, we do not
specify an exact value for each probability; instead, we specify a weight.
We compute the exact probabilities through normalization, where
we divide each weight by the sum of the weights over all states of the variable.
The prior probability table for the top node,
which has no parents, can be defined as follows:
Pr(N_{1,n,kmax[n]} = S) ∝ fi(S, n, kmax[n])
Pr(N_{1,n,kmax[n]} = nil) ∝ Σ_{(j,k) ≺ (n,kmax[n])} fi(S, j, k)
For a given state ρ in the domain of any P_ijk node, where ρ represents a production
and corresponding assignment of j and k values to the symbols on the right-hand side,
of the form E → E_1[j_1,k_1] ... E_m[j_m,k_m] (p), we can define the conditional probability
of that state as:
Pr(P_ijk = ρ | N_ijk = E) ∝ p · Π_{t=1..m} fi(E_t, j_t, k_t)
For any other symbol E' in the domain of N_ijk, Pr(P_ijk = ρ | N_ijk = E') = 0. For the
productions for starting or delaying the tree, the weights are defined analogously from
the fi values of the start symbol at the positions where the tree could still begin.
The probability tables for the N ijk nodes are much simpler, since once the productions
are specified, the symbols are completely determined. Therefore, the entries
are either one or zero. For example, consider a node N_i'j'k' with the parent
node P_ijk (among others). For a rule ρ whose right-hand side assigns the symbol E'
to position (i', j', k'), Pr(N_i'j'k' = E' | P_ijk = ρ, ...) = 1. For all symbols other
than E' in the domain of N_i'j'k', this conditional probability is zero. We can fill in this
entry for all configurations of the other parent nodes (represented by the ellipsis in the
condition part of the probability), though we know that any conflicting configurations
(i.e., two productions both trying to specify the symbol N_i'j'k') are impossible. Any
configuration of the parent nodes that does not specify a certain symbol indicates that
the node takes on the value nil with probability one.
C.3 Network Generation Example
As an illustration, consider the execution of this algorithm using the fi values from
Fig. 5. We start with the root variable N 142 . The start symbol S has a fi value greater
than zero here, as well as at points below, so the domain must include both S and nil .
To obtain Pr(N_142 = S), we simply divide fi(S, 4, 2) by the sum of all fi values for S,
yielding 0.055728.
The domain of P 142 is partially specified by the abstraction phase for the symbol S in
the domain of N 142 . There is only one relevant production, S !vp, which is a possible
expansion since fi(vp; 4; 1) ? 0. Therefore, we insert the production into the domain
of P_142, with conditional probability one given that N_142 = S, since there are no other
possible expansions. We also draw a link from P_142 to N_141, whose domain now includes
vp with conditional probability one given that P_142 = S → vp.
To complete the specification of P_142, we must consider the possible start of the tree,
since the domain of N_142 includes nil. The conditional probability of P_142 = nil → S, given that N_142 = nil,
is 0.24356, the ratio of fi(S, 4, 1) and the sum of fi(S, j, 1). The link
from P_142 to N_141 has already been made during the abstraction phase, but we must also
insert S and nil into the domain of N_141, each with conditional probability one given
the appropriate value of P_142.
We then proceed to N 141 , which is at the bottom level of abstraction, so we must
perform a decomposition phase. For the production S ! np vp, there are three possible
combinations of subsequence lengths which add to the total length of four. If np derives
a string of length one and vp a string of length three, then the only possible levels
of abstraction for each are three and one, respectively, since all others will have zero
values. Therefore, we insert the production s!np[1,3] vp[3,1] into the domain of
where the numbers in brackets correspond to the subsequence length and level of
abstraction, respectively. The conditional probability of this value, given that N
is the product of the probability of the production, fi(np; 1; 3), and fi(vp; 3; 1), normalized
over the probabilities of all possible expansions.
We then draw links from P 141 to N 113 and N 231 , into whose domains we insert np and
vp, respectively. The i values are obtained by noting that the subsequence for np begins
at the same point as the original string while that for vp begins at a point shifted by the
length of the subsequence for np. Each occurs with probability one, given that the value
of P 141 is the appropriate production. Similar actions are taken for the other possible
subsequence length combinations. The operations for the other random variables are
performed in a similar fashion, leading to the network structure shown in Fig. 11.
C.4 Complexity of Network Generation
The resulting network has O(n^2 d) nodes. The domain of each N_i11 variable has O(|Σ|)
states to represent the possible terminal symbols, while all other N_ijk variables have
O(|N|) possible states. There are n variables of the former, and O(n^2 d) of the latter.
For k > 1, the P_ijk variables (of which there are O(n^2 d)) have a domain of O(|P_A|) states.
For P_ij1 variables, there are states for each possible decomposition production, for each
possible combination of subsequence lengths, and for each possible level of abstraction
of the symbols on the right-hand side. Therefore, the P_ij1 variables (of which there are
O(n^2)) have a domain of O(|P_D| n^(m−1) d^m) states, where we have again defined d to be
the maximum value of k, and m to be the maximum production length.
Unfortunately, even though each particular P variable has only the corresponding N
variable as its parent, a given N variable could have potentially O(n) P variables as
Fig. 11. Network from example grammar at maximum length 4.
parents. The size of the conditional probability table for a node is exponential in the
number of parents, although given that each N can be determined by at most one P (i.e.,
no interactions are possible), we can specify the table in a linear number of parameters.
If we define T to be the maximum number of entries of any conditional probability
table in the network, then the abstraction phase of the algorithm requires time
O(|P_A| T), while the decomposition phase requires time O(|P_D| n^(m−1) d^m T). Handling
the start of the parse tree and the potential space holders requires time O(T). The total
time complexity of the algorithm is then O(n^2 [d |P_A| + |P_D| n^(m−1) d^m] T) =
O(|P| n^(m+1) d^m T), which dwarfs the time complexity of the dynamic programming algorithm
for the fi function. However, this network is created only once for a particular
grammar and length bound.
D. PCFG Queries
We can use the Bayesian network to compute any joint probability that we can express
in terms of the N and P random variables included in the network. The standard
Bayesian network algorithms [11], [12], [14] can return joint probabilities of the form
Pr(X_1 = x_1, ..., X_q = x_q), or conditional probabilities of the form
Pr(X_1 = x_1, ..., X_q = x_q | X_{q+1} = x_{q+1}, ..., X_r = x_r), where each X is either N or P. Obviously, if we are
interested only in whether a symbol E appeared at a particular location in the parse
tree, we need only examine the marginal probability distribution of the corresponding
variable. Thus, a single network query will yield the probability Pr(N_ijk = E).
The results of the network query are implicitly conditional on the event that the
length of the terminal string does not exceed n. We can obtain the joint probability
by multiplying the result by the probability that a string in the language has a length
not exceeding n. For any j, the probability that we expand the start symbol S into a
terminal string of length j is the sum of fi(S, j, k) over all abstraction levels k,
which we can then sum for j = 1, ..., n to obtain the probability of a string of bounded length.
To obtain the appropriate unconditional probability for any query, all network queries
reported in this section must be multiplied by this quantity.
D.1 Probability of Conjunctive Events
The Bayesian network also supports the computation of joint probabilities analogous
to those computed by the standard PCFG algorithms. For instance, the probability of
a particular terminal string such as Swat flies like ants corresponds to the probability
Pr(N_111 = swat, N_211 = flies, N_311 = like, N_411 = ants). The probability of an initial
subsequence like Swat flies ..., as computed by the LRI algorithm [10], corresponds to
the probability Pr(N_111 = swat, N_211 = flies). Since the Bayesian network represents the
distribution over strings of bounded length, we can find initial subsequence probabilities
only over completions that keep the total string length within the bound n.
However, although in this case our Bayesian network approach requires some modification
to answer the same query as the standard PCFG algorithm, it needs no modification
to handle more complex types of evidence. The chart parsing and LRI algorithms require
complete sequences as input, so any gaps or other uncertainty about particular
symbols would require direct modification of the dynamic programming algorithms to
compute the desired probabilities. The Bayesian network, on the other hand, supports
the computation of the probability of any evidence, regardless of its structure. For in-
stance, if we have a sentence Swat flies ... ants where we do not know the third word,
a single network query will provide the conditional probability of the possible completions,
Pr(N_311 | N_111 = swat, N_211 = flies, N_411 = ants), as well as the probability of the specified
evidence, Pr(N_111 = swat, N_211 = flies, N_411 = ants).
This approach can handle multiple gaps, as well as partial information. For example,
if we again do not know the exact identity of the third word in the sentence Swat flies
... ants, but we do know that it is either swat or like, we can use the Bayesian network to
fully exploit this partial information by augmenting our query to specify that any domain
values for N 311 other than swat or like have zero probability. Although these types of
queries are rare in natural language, domains like speech recognition often require this
ability to reason when presented with noisy observations.
We can answer queries about nonterminal symbols as well. For instance, if we have
the sentence Swat flies like ants, we can query the network to obtain the conditional
probability that like ants is a prepositional phrase, Pr(N_321 = pp | N_111 = swat, N_211 = flies,
N_311 = like, N_411 = ants). We can answer queries where we specify evidence about
nonterminals within the parse tree. For instance, if we know that like ants is a prepositional
phrase, the input to the network query will specify that N_321 = pp as well as
specifying the terminal symbols.
Alternate network algorithms can compute the most probable state of the random
variables given the evidence, instead of a conditional probability [11], [15], [14]. For
example, consider the case of possible four-word sentences beginning with the phrase
Swat flies. The probability maximization network algorithms can determine that the
most probable state of terminal symbol variables N 311 and N 411 is like flies, given that
N 111 =swat, N 211 =flies, and N 511 =nil.
D.2 Probability of Disjunctive Events
We can also compute the probability of disjunctive events through multiple network
queries. If we can express an event as the union of mutually exclusive events, each
of the conjunctive form described above (an assignment of values to some set of N and
P variables), then we can query the network to
compute the probability of each, and sum the results to obtain the probability of the
union. For instance, if we want to compute the probability that the sentence Swat flies
like ants contains any prepositions, we would query the network for the probabilities
In a
domain like plan recognition, such a query could correspond to the probability that an
agent performed some complex action within a specified time span.
In this example, the individual events are already mutually exclusive, so we can sum
the results to produce the overall probability. In general, we ensure mutual exclusivity
of the individual events by computing the conditional probability of the conjunction of
the original query event and the negation of those events summed previously. For our
example, the overall probability would be Pr(N
prepjE)+Pr(N
prep; N 312 6= prepjE), where E corresponds to the event that the sentence is Swat flies like
ants.
The Bayesian network provides a unified framework that supports the computation
of all of the probabilities described here. We can compute the probability of any event
is a set of mutually exclusive events fX i t1 j t1 k t1
t=1 with each X being either N or P . We can also compute probabilities of events
where we specify relative likelihoods instead of strict subset restrictions. In addition,
given any such event, we can determine the most probable configuration of the uninstantiated
random variables. Instead of designing a new algorithm for each such query,
we have only to express the query in terms of the network's random variables, and use
any Bayesian network algorithm to compute the desired result.
D.3 Complexity of Network Queries
Unfortunately, the time required by the standard network algorithms in answering
these queries is potentially exponential in the maximum string length n, though the
exact complexity will depend on the connectedness of the network and the particular
network algorithm chosen. The algorithm in our current implementation uses a great
deal of preprocessing in compiling the networks, in the hope of reducing the complexity
of answering queries. Such an algorithm can exploit the regularities of our networks
(e.g., the conditional probability tables of each N ijk consist of only zeroes and ones) to
provide reasonable response time in answering queries. Unfortunately, such compilation
can itself be prohibitive and will often produce networks of exponential size. There
exist Bayesian network algorithms [16], [17] that offer greater flexibility in compilation,
possibly allowing us to limit the size of the resulting networks, while still providing
acceptable query response times.
Determining the optimal tradeoff will require future research, as will determining the
class of domains where our Bayesian network approach is preferable to existing PCFG
algorithms. It is clear that the standard dynamic programming algorithms are more
efficient for the PCFG queries they address. For domains requiring more general queries
of the types described here, the flexibility of the Bayesian network approach may justify
the greater complexity.
IV. Context Sensitivity
For many domains, the independence assumptions of the PCFG model are overly
restrictive. By definition, the probability of applying a particular PCFG production
to expand a given nonterminal is independent of what symbols have come before and
of what expansions are to occur after. Even this paper's simplified example illustrates
some of the weaknesses of this assumption. Consider the intermediate string Swat ants
like noun. It is implausible that the probability that we expand noun into flies instead of
ants is independent of the choice of swat as the verb or the choice of ants as the object.
Of course, we may be able to correct the model by expanding the set of nonterminals
to encode contextual information, adding productions for each such expansion, and thus
preserving the structure of the PCFG model. However, this can obviously lead to an
unsatisfactory increase in complexity for both the design and use of the model. Instead,
we could use an alternate model which relaxes the PCFG independence assumptions.
Such a model would need a more complex production and/or probability structure to
allow complete specification of the distribution, as well as modified inference algorithms
for manipulating this distribution.
A. Direct Extensions to Network Structure
The Bayesian network representation of the probability distribution provides a basis
for exploring such context sensitivities. The networks generated by the algorithms of
this paper implicitly encode the PCFG assumptions through assignment of a single
nonterminal node as the parent of each production node. This single link indicates that
the expansion is conditionally independent of all other nondescendant nodes, once we
know the value of this nonterminal. We could extend the context-sensitivity of these
expansions within our network formalism by altering the links associated with these
production nodes.
We can introduce some context sensitivity even without adding any links. Since each
production node has its own conditional probability table, we can define the production
probabilities to be a function of the (i; j; values. For instance, the number
of words in a group strongly influences the likelihood of that group forming a noun
phrase. We could model such a belief by varying the probability of a np appearing over
different string lengths, as encoded by the j index. In such cases, we can modify the
standard PCFG representation so that the probability information associated with each
production is a function of i, j, and k, instead of a constant. The dynamic programming
algorithm of Fig. 4 can be easily modified to handle production probabilities that depend
on j and k. However, a dependency on the i index as well would require adding it as
a parameter of fi and introducing an additional loop over its possible values. Then, we
would have to replace any reference to the production probability, in either the dynamic
programming or network generation algorithm, with the appropriate function of i, j,
and k.
Alternatively, we may introduce additional dependencies on other nodes in the net-
work. A PCFG extension that conditions the production probabilities on the parent of
the left-hand side symbol has already proved useful in modeling natural language [18].
In this case, each production has a set of associated probabilities, one for each non-terminal
symbol that is a possible parent of the symbol on the left-hand side. This
new probability structure requires modifications to both the dynamic programming and
the network generation algorithms. We must first extend the probability information
of the fi function to include the parent nonterminal as an additional parameter. It is
then straightforward to alter the dynamic programming algorithm of Fig. 4 to correctly
compute the probabilities in a bottom-up fashion.
The modifications for the network generation algorithm are more complicated. Whenever
we add P_ijk as a parent for some symbol node N_i'j'k', we also have to add N_ijk as a
parent of P_i'j'k'. For example, the dotted arrow in the subnetwork of Fig. 12 represents the
additional dependency of P 112 on N 113 . We must add this link because N 112 is a possible
child nonterminal, as indicated by the link from P 113 . The conditional probability tables
for each P node must now specify probabilities given the current nonterminal and the
parent nonterminal symbols. We can compute these by combining the modified fi values
with the conditional production probabilities.
Returning to the example from the beginning of this section, we may want to condition
the production probabilities on the terminal string expanded so far. As a first
approximation to such context sensitivity, we can imagine a model where each production
has an associated set of probabilities, one for each terminal symbol in the language.
Each represents the conditional probability of the particular expansion given that the
corresponding terminal symbol occurs immediately previous to the subsequence derived
from the nonterminal symbol on the left-hand side. Again, our fi function requires an
additional parameter, and we need a modified version of the dynamic programming algorithm
to compute its values. However, the network generation algorithm needs to
introduce only one additional link, from N i11 for each P (i+1)jk node. The dashed arrows
Fig. 12. Subnetwork incorporating parent symbol dependency.
Fig. 13. Subnetwork capturing dependency on previous terminal symbol.
in the subnetwork of Fig. 13 reflect the additional dependencies introduced by this context
sensitivity, using the network example from Fig. 11. The P 1jk nodes are a special
case, with no preceding terminal, so the steps from the original algorithm are sufficient.
We can extend this conditioning to cover preceding terminal sequences rather than
individual symbols. Each production could have an associated set of probabilities, one
for each possible terminal sequence of length bounded by some parameter h. The fi
function now requires an additional parameter specifying the preceding sequence. The
network generation algorithms must then add links to P_ijk from nodes N_(i−h)11, ...,
N_(i−1)11 if i > h, or from N_111, ..., N_(i−1)11 if i ≤ h. The conditional probability tables
then specify the probability of a particular expansion given the symbol on the left-hand
side and the preceding terminal sequence.
In many cases, we may wish to account for external influences, such as explicit context
representation in natural language problems or influences of the current world state in
planning, as required by many plan recognition problems [19]. For instance, if we are
processing multiple sentences, we may want to draw links from the symbol nodes of one
sentence to the production nodes of another, to reflect thematic connections. As long
as our network can include random variables to represent the external context, then
we can represent the dependency by adding links from the corresponding nodes to the
appropriate production nodes and altering the conditional probability tables to reflect
the effect of the context.
In general, the Bayesian networks currently generated contain a set of random variables
sufficient for expressing arbitrary parse tree events, so we can introduce context
sensitivity by adding the appropriate links to the production nodes from the events on
which we wish to condition expansion probabilities. Once we have the correct network,
we can use any of the query algorithms from Section III-D to produce the corresponding
conditional probability.
B. Extensions to the Grammar Model
Context sensitivities expressed as incremental changes to the network dependency
structure represent only a minor relaxation of the conditional independence assumptions
of the PCFG model. More global models of context sensitivity will likely require a
radically different grammatical form and probabilistic interpretation framework. The
History-Based Grammar (HBG) [20] provides a rich model of context sensitivity by
conditioning the production probabilities on (potentially) the entire parse tree available
at the current expansion point. Since our Bayesian networks represent all positions of
the parse tree, it is theoretically possible to represent these conditional probabilities by
introducing the appropriate links. However, since the HBG model uses decision tree
methods to identify equivalence classes of the partial trees and thus produce simple
event structures to condition on, it is unclear exactly how to replicate this behavior in
a systematic generation algorithm.
If we restrict the types of context sensitivity, then we are more likely to find such a
network generation algorithm. In the non-stochastic case, context-sensitive grammars [9]
provide a more structured model than the general unrestricted grammar by allowing
only productions of the form α_1 A α_2 → α_1 β α_2, where A is a nonterminal and the αs and β are arbitrary sequences
of terminal and/or nonterminal symbols. This restriction eliminates productions where
the right-hand side is shorter than the left-hand side. Such a production indicates
that A can be expanded into β only when it appears in the surrounding context of α_1
immediately precedent and α_2 immediately subsequent.
to a probabilistic context-sensitive grammar (PCSG), similar to that for PCFGs, could
provide an even richer model for the types of conditional probabilities briefly explored
here.
The intuitive extension involves associating a likelihood weighting with each context-sensitive
production and computing the probability of a particular derivation based on
these weights. These weights cannot correspond to probabilities, because we do not
know, a priori, which expansions may be applicable at a given point in the parse (due
to the different possible contexts). Therefore, a set of fixed production values may not
produce weights that sum to one in a particular context. We can instead use these
weights to determine probabilities after we know which productions are applicable. The
probability of a particular derivation sequence is then uniquely determined, though it
could be sensitive to the order in which we apply the productions. We could then
define a probability distribution over all strings in the context-sensitive language so that
the probability of a particular string is the sum of the probabilities over all possible
derivation sequences for that string.
This definition appears theoretically sound, though it is unclear whether any real-world
domains exist for which such a model would be useful. If we create such a model, we
should be able to generate a Bayesian network with the proper conditional dependency
structure to represent the distribution. We would have to draw links to each production
node from its potential context nodes, and the conditional probability tables would reflect
the production weights in each particular context possibility. It is an open question
whether we could create a systematic generation algorithm similar to that defined for
PCFGs.
Although the proposed PCSG model cannot account for dependence on position or
parent symbol, described earlier in this section, we could make similar extensions to
account for these types of dependencies. The result would be similar to the context-sensitive
probabilities of Pearl [21]. However, Pearl conditions the probabilities on
a part-of-speech trigram, as well as on the sibling and parent nonterminal symbols. If
we allow our model to specify conjunctions of contexts, then it may be able to represent
these same types of probabilities, as well as more general contexts beyond siblings and
trigrams.
It is clearly difficult to select a model powerful enough to encompass a significant set
of useful dependencies, but restricted enough to allow easy specification of the productions
and probabilities for a particular language. Once we have chosen a grammatical
formalism capable of representing the context sensitivities we wish to model, we must
define a network generation algorithm to correctly specify the conditional probabilities
for each production node. However, once we have the network, we can again use any
of the query algorithms from Section III-D. Thus, we have a unified framework for
performing inference, regardless of the form of the language model used to generate the
networks.
Probabilistic parse tables [8] and stochastic programs [22] provide alternate frameworks
for introducing context sensitivity. The former approach uses the finite-state
machine of the chart parser as the underlying structure and introduces context sensitivity
into the transition probabilities. Stochastic programs can represent very general
stochastic processes, including PCFGs, and their ability to maintain arbitrary state information
could support general context sensitivity as well. It is unclear whether any of
these approaches have advantages of generality or efficiency over the others.
V. Conclusion
The algorithms presented here automatically generate a Bayesian network representing
the distribution over all parses of strings (bounded in length by some parameter) in the
language of a PCFG. The first stage uses a dynamic programming approach similar to
that of standard parsing algorithms, while the second stage generates the network, using
the results of the first stage to specify the probabilities. This network is generated only
once for a particular PCFG and length bound. Once created, we can use this network
to answer a variety of queries about possible strings and parse trees. Using the standard
Bayesian network inference algorithms, we can compute the conditional probability or
most probable configuration of any collection of our basic random variables, given any
other event which can be expressed in terms of these variables.
These algorithms have been implemented and tested on several grammars, with the results
verified against those of existing dynamic programming algorithms when applicable,
and against enumeration algorithms when given nonstandard queries. When answering
standard queries, the time requirements for network inference were comparable to those
for the dynamic programming techniques. Our network inference methods achieved similar
response times for some other types of queries, providing a vast improvement over
the much slower brute force algorithms. However, in our current implementation, the
memory requirements of network compilation limit the complexity of the grammars and
queries, so it is unclear whether these results will hold for larger grammars and string
lengths.
Preliminary investigation has also demonstrated the usefulness of the network formalism
in exploring various forms of context-sensitive extensions to the PCFG model.
Relatively minor modifications to the PCFG algorithms can generate networks capable
of representing the more general dependency structures required for certain context sen-
sitivities, without sacrificing the class of queries that we can answer. Future research
will need to provide a more general model of context sensitivity with sufficient structure
to support a corresponding network generation algorithm.
Although answering queries in Bayesian networks is exponential in the worst case, our
method incurs this cost in the service of greatly increased generality. Our hope is that the
enhanced scope will make PCFGs a useful model for plan recognition and other domains
that require more flexibility in query forms and in probabilistic structure. In addition,
these algorithms may extend the usefulness of PCFGs in natural language processing
and other pattern recognition domains where they have already been successful.
Acknowledgments
We are grateful to the anonymous reviewers for careful reading and helpful suggestions.
This work was supported in part by Grant F49620-94-1-0027 from the Air Force Office
of Scientific Research.
--R
An Introduction
"Probabilistic languages: A review and some open questions,"
"Recognition of equations using a two-dimensional stochastic context-free grammar,"
"Stochastic grammars and pattern recognition,"
"Stochastic context-free grammars for modeling RNA,"
"Getting serious about parsing plans: A grammatical analysis of plan recognition,"
"Generalized probabilistic LR parsing of natural language (corpora) with unification-based grammars,"
Introduction to Automata Theory
"Basic methods of probabilistic context free grammars,"
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
Probabilistic Reasoning in Expert Systems: Theory and Algorithms
An Introduction to Bayesian Networks
"Bucket elimination: A unifying framework for probabilistic inference,"
"Cost-based abduction and MAP explanation,"
"Topological parameters for time-space tradeoff,"
"Query DAGs: A practical paradigm for implementing belief-network infer- ence,"
"Context-sensitive statistics for improved grammatical language models,"
"Accounting for context in plan recognition, with application to traffic monitoring,"
"Towards history-based grammars: Using richer models for probabilistic parsing,"
"Pearl: A probabilistic chart parser,"
"Effective Bayesian inference for stochastic programs,"
--TR
--CTR
Jorge R. Ramos , Vernon Rego, Feature-based generators for time series data, Proceedings of the 37th conference on Winter simulation, December 04-07, 2005, Orlando, Florida | probabilistic context-free grammars;bayesian networks |
275840 | Efficient Sparse LU Factorization with Partial Pivoting on Distributed Memory Architectures. | AbstractA sparse LU factorization based on Gaussian elimination with partial pivoting (GEPP) is important to many scientific applications, but it is still an open problem to develop a high performance GEPP code on distributed memory machines. The main difficulty is that partial pivoting operations dynamically change computation and nonzero fill-in structures during the elimination process. This paper presents an approach called S* for parallelizing this problem on distributed memory machines. The S* approach adopts static symbolic factorization to avoid run-time control overhead, incorporates 2D L/U supernode partitioning and amalgamation strategies to improve caching performance, and exploits irregular task parallelism embedded in sparse LU using asynchronous computation scheduling. The paper discusses and compares the algorithms using 1D and 2D data mapping schemes, and presents experimental studies on Cray-T3D and T3E. The performance results for a set of nonsymmetric benchmark matrices are very encouraging, and S* has achieved up to 6.878 GFLOPS on 128 T3E nodes. To the best of our knowledge, this is the highest performance ever achieved for this challenging problem and the previous record was 2.583 GFLOPS on shared memory machines [8]. | Currently with the Computer Science Department, University of Illinois at Urbana-Champaign.
pivoting operations interchange rows based on the numerical values of matrix elements during the
elimination process, it is impossible to predict the precise structures of L and U factors without
actually performing the numerical factorization. The adaptive and irregular nature of sparse LU
data structures makes an efficient implementation of this algorithm very hard even on a modern
sequential machine with memory hierarchies.
There are several approaches that can be used for solving nonsymmetric systems. One approach
is the unsymmetric-pattern multi-frontal method [5, 25] that uses elimination graphs to model
irregular parallelism and guide the parallel computation. Another approach [19] is to restructure
a sparse matrix into a bordered block upper triangular form and use a special pivoting technique
which preserves the structure and maintains numerical stability at acceptable levels. This method
has been implemented on Illinois Cedar multi-processors based on Aliant shared memory clusters.
This paper focuses on parallelization issues for a given column ordering with row interchanges to
maintain numerical stability. Parallelization of sparse LU with partial pivoting is also studied in [21]
on a shared memory machine by using static symbolic LU factorization to overestimate nonzero
fill-ins and avoid dynamic variation of LU data structures. This approach leads to good speedups
for up to 6 processors on a Sequent machine and further work is needed to assess the performance
of the sequential code.
As far as we know, there are no published results for parallel sparse LU on popular commercial
distributed memory machines such as Cray-T3D/T3E, Intel Paragon, IBM SP/2, TMC CM-5 and
Meiko CS-2. One difficulty in the parallelization of sparse LU on these machines is how to utilize a
sophisticated uniprocessor architecture. The design of a sequential algorithm must take advantage
of caching, which makes some previously proposed techniques less effective. On the other hand,
a parallel implementation must utilize the fast communication mechanisms available on these ma-
chines. It is easy to get speedups by comparing a parallel code to a sequential code which does
not fully exploit the uniprocessor capability, but it is not as easy to parallelize a highly optimized
sequential code. One such sequential code is SuperLU [7] which uses a supernode approach to
conduct sequential sparse LU with partial pivoting. The supernode partitioning makes it possible
to perform most of the numerical updates using BLAS-2 level dense matrix-vector multiplications,
and therefore to better exploit memory hierarchies. SuperLU performs symbolic factorization and
generates supernodes on the fly as the factorization proceeds. UMFPACK is another competitive
sequential code for this problem and neither SuperLU nor UMFPACK is always better than the
other [3, 4, 7]. MA41 is a code for sparse matrices with symmetric patterns. All of them are regarded
as of high quality and deliver excellent megaflop performance. In this paper we focus on the
performance analysis and comparison with SuperLU code since the structure of our code is closer
to that of SuperLU.
In this paper, we present an approach called S that considers the following key strategies together
in parallelizing the sparse LU algorithm:
1. Adopt a static symbolic factorization scheme to eliminate the data structure variation caused
by dynamic pivoting.
2. data regularity from the sparse structure obtained by the symbolic factorization
scheme so that efficient dense operations can be used to perform most of computation and
the impact of nonzero fill-in overestimation on overall elimination time is minimized.
3. Develop scheduling techniques for exploiting maximum irregular parallelism and reducing
memory requirements for solving large problems.
We observe that on most current commodity processors with memory hierarchies, a highly optimized
BLAS-3 subroutine usually outperforms a BLAS-2 subroutine in implementing the same numerical
operations [6, 9]. We can afford to introduce some extra BLAS-3 operations in re-designing the LU
algorithm so that the new algorithm is easy to be parallelized but the sequential performance of this
code is still competitive to the current best sequential code. We use the static symbolic factorization
technique first proposed in [20, 21] to predict the worst possible structures of L and U factors
without knowing the actual numerical values, then we develop a 2-D L/U supernode partitioning
technique to identify dense structures in both L and U factors, and maximize the use of BLAS-3
level subroutines for these dense structures. We also incorporate a supernode amalgamation [1, 10]
technique to increase the granularity of the computation.
In exploiting irregular parallelism in the re-designed sparse LU algorithm, we have experimented
with two mapping methods, one of which uses 1-D data mapping and the other uses 2-D data
mapping. One advantage of using 1-D data mapping is that the corresponding LU algorithm can be
easily modeled by directed acyclic task graphs (DAGs). Graph scheduling techniques and efficient
run-time support are available to schedule and execute DAG parallelism [15, 16]. Scheduling and
executing DAG parallelism is a difficult job because parallelism in sparse problems is irregular and
execution must be asynchronous. The important optimizations are overlapping computation with
communication, balancing processor loads and eliminating unnecessary communication overhead.
Graph scheduling can do an excellent job in exploiting irregular parallelism but it leads to extra
memory space per node to achieve the best performance. Also the 1-D data mapping can only expose
limited parallelism. Due to these restrictions, we have also examined a 2-D data mapping method
and an asynchronous execution scheme which exploits parallelism under memory constraints. We
have implemented our sparse LU algorithms and conducted experiments with a set of nonsymmetric
benchmark matrices on Cray-T3D and T3E. Our experiments show that our approach is quite
effective in delivering good performance in terms of high megaflop numbers. In particular, the 1-D
code outperforms the current 2-D code when processors have sufficient memory. But the 2-D code
has more potential to solve larger problems and produces higher megaflop numbers.
The rest of the paper is organized as follows. Section 2 gives the problem definition. Section 3
describes structure prediction and 2-D L/U supernode partitioning for sparse LU factorization.
Section 4 describes program partitioning and data mapping schemes. Section 5 addresses the
asynchronous computation scheduling and execution. Section 6 presents the experimental results.
Section 7 concludes the paper.
Find m such that ja mk
(03) if a then A is singular, stop;
row k with row m;
with a ik 6= 0
with a kj 6= 0
with a ik 6= 0
Figure
1: Sparse Gaussian elimination with partial pivoting for LU factorization.
Preliminaries
Figure
shows how a nonsingular matrix A can be factored into two matrices L and U using GEPP.
The elimination steps are controlled by loop index k. For elements manipulated at step k, we use i
for row indexing and j for column indexing. This convention will be used through the rest of this
paper. During each step of the elimination process, a row interchange may be needed to maintain
numerical stability. The result of LU factorization process can be expressed by:
L is a unit lower triangular matrix, U is a upper triangular matrix, and P is a permutation matrix
which contains the row interchange information. The solution of a linear system
be solved by two triangular solvers: y. The triangular solvers are much less
time consuming than the Gaussian elimination process.
Caching behavior plays an important role in achieving good performance for scientific computations.
To better exploit memory hierarchy in modern architectures, supernode partitioning is an important
technique to exploit the regularity of sparse matrix computations and utilize BLAS routines to
speed up the computation. It has been successfully applied to Cholesky factorization [26, 30,
31]. The difficulty for the nonsymmetric factorization is that supernode structure depends on
pivoting choices during the factorization thus cannot be determined in advance. SuperLU performs
symbolic factorization and identifies supernodes on the fly. It also maximizes the use of BLAS-
level operations to improve the caching performance of sparse LU. However, it is challenging to
parallelize SuperLU on distributed memory machines. Using the precise pivoting information at each
elimination step can certainly optimize data space usage, reduce communication and improve load
balance, but such benefits could be offset by high run-time control and communication overhead.
The strategy of static data structure prediction in [20] is valuable in avoiding dynamic symbolic
factorization, identifying the maximum data dependence patterns and minimizing dynamic control
overhead. We will use this static strategy in our S approach. But the overestimation does introduce
extra fill-ins and lead to a substantial amount of unnecessary operations in the numerical
factorization. We observe that in SuperLU [7] the DGEMV routine (the BLAS-2 level dense matrix
vector multiplication) accounts for 78% to 98% of the floating point operations (excluding the symbolic
factorization part). It is also a fact that BLAS-3 routine DGEMM (matrix-matrix multiplication)
is usually much faster than BLAS-1 and BLAS-2 routines [6]. On Cray-T3D with a matrix of size
\Theta 25, DGEMM can achieve 103 MFLOPS while DGEMV only reaches 85 MFLOPS. Thus the key idea
of our approach is that if we could find a way to maximize the use of DGEMM after using static symbolic
factorization, even with overestimated nonzeros and extra numerical operations, the overall
code performance could still be competitive to SuperLU which mainly uses DGEMV.
3 Storage prediction and dense structure identification
3.1 Storage prediction
The purpose of symbolic factorization is to obtain structures of L and U factors. Since pivoting
sequences are not known until the numerical factorization, the only way to allocate enough storage
space for the fill-ins generated in the numerical factorization phase is to overestimate. Given a
sparse matrix A with a zero-free diagonal, a simple solution is to use the Cholesky factor L c of
A T A. It has been shown that the structure of L c can be used as an upper bound for the structures
of L and U factors regardless of the choice of the pivot row at each step [20]. But it turns out
that this bound is not very tight. It often substantially overestimates the structures of the L and
U factors (refer to Table 1). Instead we consider another method from [20]. The basic idea is
to statically consider all possible pivoting choices at each step. The space is allocated for all the
possible nonzeros that would be introduced by any pivoting sequence that could occur during the
numerical factorization. We summarize the symbolic factorization method briefly as follows.
The nonzero structure of a row is defined as a set of column indices at which nonzeros or fill-ins
are present in the given n \Theta n matrix A. Since the nonzero pattern of each row will change as the
factorization proceeds, we use R k
i to denote the structure of row i after step k of the factorization
and A k to denote the structure of the matrix A after step k. And a k
ij denotes the element a ij in A k .
Notice that the structures of each row or the whole matrix cover the structures of both L and U
factors. In addition, during the process of symbolic factorization we assume that no exact numerical
cancelation occurs. Thus, we have
ij is structurally nonzerog:
We also define the set of candidate pivot rows at step k as follows:
ik is structurally nonzerog:
We assume that a kk is always a nonzero. For any nonsingular matrix which does not have a zero-free
diagonal, it is always possible to permute the rows of the matrix so that the permuted matrix has a
zero-free diagonal [11]. Though the symbolic factorization does work on a matrix that contains zero
entries in the diagonal, it is not preferable because it makes the overestimation too generous. The
symbolic factorization process will iterate n steps and at step k, for each row its structure
will be updated as:
R
Essentially the structure of each candidate pivot row at step k will be replaced by the union of the
structures of all the candidate pivot rows except those column indices less than k. In this way it is
guaranteed that the resulting structure A n will be able to accommodate the fill-ins introduced by
any possible pivot sequence. A simple example in Figure 2 demonstrates the whole process.
Nonzero
Fill-in
A
Figure
2: The first 3 steps of the symbolic factorization on a sample 5 \Theta 5 sparse matrix. The
structure remains unchanged at steps 4 and 5.
This symbolic factorization is applied after an ordering is performed on the matrix A to reduce
fill-ins. The ordering we are currently using is the multiple minimum degree ordering for A T A. We
also permute the rows of the matrix using a transversal obtained from Duff's algorithm [11] to make
A have a zero-free diagonal. The transversal can often help reduce fill-ins [12].
We have tested the storage impact of overestimation for a number of nonsymmetric testing matrices
from various sources. The results are listed in Table 1. The fourth column in the table is original
number of nonzeros, and the fifth column measures the symmetry of the structure of the original
matrix. The bigger the symmetry number is, the more nonsymmetric the original matrix is. A unit
symmetry number indicates a matrix is symmetric, but all matrices have nonsymmetric numerical
values. We have compared the number of nonzeros obtained by the static approach and the number
of nonzeros obtained by SuperLU, as well as that of the Cholesky factor of A T A, for these matrices.
The results in Table 1 show that the overestimation usually leads to less than 50% extra nonzeros
than SuperLU scheme does. Extra nonzeros do imply additional computational cost. For example,
one has to either check if a symbolic nonzero is an actual nonzero during a numerical factorization,
or directly perform arithmetic operations which could be unnecessary. If we can aggregate these
floating point operations and maximize the use of BLAS-3 subroutines, the sequential
code performance will still be competitive. Even the fifth column of Table 1 shows that the floating
operations from the overestimating approach can be as high as 5 times, the results in Section 6
will show that actual ratios of running times are much less. Thus it is necessary and beneficial to
identify dense structures in a sparse matrix after the static symbolic factorization.
It should be noted that there are some cases that static symbolic factorization leads to excessive
overestimation. For example, memplus matrix [7] is such a case. The static scheme produces 119
times as many nonzeros as SuperLU does. In fact, for this case, the ordering for SuperLU is applied
based on A T + A instead of A T A. Otherwise the overestimation ratio is 2.34 if using A T A for
SuperLU also. For another matrix wang3 [7], the static scheme produces 4 times as many nonzeros
as SuperLU does. But our code can still produce 1 GFLOPS for it on 128 nodes of T3E. This paper
focuses on the development of a high performance parallel code when overestimation ratios are not
too high. Future work is to study ordering strategies that minimize overestimation ratios.
factor entries/jAj S =SuperLU
Matrix
9 e40r0100 17281 553562 1.000 14.76 17.32 26.48 1.17 3.11
Table
1: Testing matrices and their statistics.
3.2 2-D L/U supernode partitioning and dense structure identification
Supernode partitioning is a commonly used technique to improve the caching performance of sparse
code [2]. For a symmetric sparse matrix, a supernode is defined as a group of consecutive columns
that have nested structure in the L factor of the matrix. Excellent performance has been achieved
in [26, 30, 31] using supernode partitioning for Cholesky factorization. However, the above definition
is not directly applicable to sparse LU with nonsymmetric matrices. A good analysis for defining
unsymmetric supernodes in an L factor is available in [7]. Notice that supernodes may need to be
further broken into smaller ones to fit into cache and to expose more parallelism. For the SuperLU
approach, after L supernode partitioning, there are no regular dense structures in a U factor that
could make it possible to use BLAS-3 routines (see Figure 3(a)). However in the S approach, there
are dense columns (or subcolumns) in a U factor that we can identify after the static symbolic
factorization (see Figure 3(b)). The U partitioning strategy is explained as follows. After an L
supernode partition has been obtained on a sparse matrix A, i.e., a set of column blocks with
possible different block sizes, the same partition is applied to the rows of the matrix to further
break each supernode panel into submatrices. Now each off-diagonal submatrix in the L part is
either a dense block or contains dense blocks. Furthermore, the following theorem identifies dense
structure patterns in U factors. This is the key to maximizing the use of BLAS-3 subroutines in
our algorithm.
(a) (b)
Figure
3: (a) An illustration of dense structures in a U factor in the SuperLU approach; (b) Dense
structures in a U factor in the S approach.
In the following theorem, we show that the 2-D L/U partitioning strategy is successful and there is
a rich set of dense structures to exploit. The following notations will be used through the rest of
the paper.
ffl The L and U partitioning divides the columns of A into N column blocks and the rows of A
into N row blocks so that the whole matrix is divided into N \Theta N submatrices. For submatrices
in the U factor, we denote them as U ij for 1 . For submatrices in the L factor, we
denote them as L ij for 1 denotes the diagonal submatrix. We use
A ij to denote a submatrix when it is not necessary to distinguish between L and U factors.
ffl Define S(i) as the starting column (or row) number of the i-th column (or row) block. For
convenience, we define S(N
ffl A subcolumn (or subrow) is a column (or row) in a submatrix. For simplicity, we use a global
column (or row) index to denote a subcolumn (or subrow) in a submatrix. For example,
by subcolumn k in the submatrix block U ij , it means the subcolumn in this submatrix with
the global column index k where 1). Similarly we use a ij to indicate an
individual nonzero element based on global indices. A compound structure in L or U is a
submatrix, a subcolumn, or a subrow.
ffl A compound structure is nonzero if it contains at least one nonzero element or fill-in. We
use A ij 6= 0 to indicate that block A ij is nonzero. Notice that an algorithm only needs to
operate on nonzero compound structures. A compound structure is structurally dense if all of
its elements are nonzeros or fill-ins. In the following we will not differentiate between nonzero
and fill-in entries. They are all considered as nonzero elements.
Theorem 1 Given a sparse matrix A with a zero-free diagonal, after the above static symbolic
factorization and 2-D L/U supernode partitioning are performed on A, each nonzero submatrix in
the U factor of A contains only structurally dense subcolumns.
Proof: Recall that P k is the set of candidate pivot rows at symbolic factorization step k. Given a
supernode spanning from column k to k + s, from its definition and the fact that after step k the
static symbolic factorization will only affect the nonzero patterns in submatrix a k+1:n;k+1:n , and A
has a zero-free diagonal, we have
Notice at each step k, the final structures of row i (i 2 P k ) are updated by the symbolic factorization
procedure as
R
For the structure of a row i where k - i - k +s, we are only interested in nonzero patterns of the U
part (excluding the part belonging to L kk ). We call this partial structure as UR i . Thus for
UR
It can be seen that after the k-th step updating, UR k
Knowing that the structure
of row k is unchanged after step k, we only need to prove that UR k
k+s as
shown below. Then we can infer that the nonzero structures of rows from k to k + s are same
and subcolumns at the U part are either structurally dense or zero. Now since P k oe P k+1 , and
it is clear that:
Similarly we can show that UR k+s
k .
The above theorem shows that the L/U partitioning can generate a rich set of structurally dense
subcolumns or even structurally dense submatrices in a U factor. We also further incorporate
this result with supernode amalgamation in Section 3.3 and our experiments indicate that more
than 64% of numerical updates is performed by the BLAS-3 routine DGEMM in S , which shows
the effectiveness of the L/U partitioning method. Figure 4 demonstrates the result of a supernode
partitioning on a 7 \Theta 7 sample sparse matrix. One can see that all the submatrices in the upper
triangular part of the matrix only contain structurally dense subcolumns.
Based on the above theorem, we can further show a structural relationship between two submatrices
in the same supernode column block, which will be useful in implementing our algorithm to detect
nonzero structures efficiently for numerical updating.
Corollary 1 Given two nonzero submatrices U ij , U
k in U ij is structurally dense, then subcolumn k in U i 0 j is also structurally dense.
Nonzero
Figure
4: An example of L/U supernode partitioning.
Proof: The corollary is illustrated in Figure 5. Since L i 0 i is nonzero, there must be a structurally
dense subrow in L i 0 i . This will lead to a nonzero element in the subcolumn k in U
the subcolumn k of U ij is structurally dense. According to Theorem 1, subcolumn k in U i 0 j is
structurally dense.
U
Figure
5: An illustration for Corollary 1.
Corollary 2 Given two nonzero submatrices U ij , U
is structurally dense, U must be structurally dense.
Proof: That is straightforward using Corollary 1.
3.3 Supernode amalgamation
For most tested sparse matrices, the average size of a supernode after L/U partitioning is very small,
about 1:5 to 2 columns. This results in very fine grained tasks. Amalgamating small supernodes
can lead to great performance improvement for both parallel and sequential sparse codes because
it can improve caching performance and reduce interprocessor communication overhead.
There could be many ways to amalgamate supernodes [7, 30]. The basic idea is to relax the
restriction that all the columns in a supernode must have exactly the same nonzero structure below
diagonal. The amalgamation is usually guided by a supernode elimination tree. A parent could
be merged with its children if the merging does not introduce too many extra zero entries into
a supernode. Row and column permutations are needed if the parent is not consecutive with its
children. However, a column permutation introduced by the above amalgamation method could
undermine the correctness of the static symbolic factorization. We have used a simpler approach
that does not require any permutation. This approach only amalgamates consecutive supernodes
if their nonzero structures only differ by a small number of entries and it can be performed in a
very efficient manner which only has a time complexity of O(n) [27]. We can control the maximum
allowed differences by an amalgamation factor r. Our experiments show that when r is in the range
of gives the best performance for the tested matrices and leads to improvement
on the execution times of the sequential code. The reason is that by getting bigger supernodes, we
are getting larger dense structures, although there may be a few zero entries in them, and we are
taking more advantage of BLAS-3 kernels. Notice that after applying the supernode amalgamation,
the dense structures identified in the Theorem 1 are not strictly dense any more. We call them
almost-dense structures and can still use the result of Theorem 1 with a minor revision. That is
summarized in the following corollary. All the results presented in Section 6 are obtained using this
amalgamation strategy.
Corollary 3 Given a sparse matrix A, if supernode amalgamation is applied to A after the static
symbolic factorization and 2-D L/U supernode partitioning are performed on A, each nonzero sub-matrix
in the U factor of A contains only almost-structurally-dense subcolumns.
4 Program partitioning, task dependence and processor
mapping
After dividing a sparse matrix A into submatrices using the L/U supernode partitioning, we need to
partition the LU code accordingly and define coarse grained tasks that manipulate on partitioned
dense data structures.
Program partitioning. Column block partitioning follows supernode structures. Typically there
are two types of tasks. One is F actor(k), which is to factorize all the columns in the k-th column
block, including finding the pivoting sequence associated with those columns. The other is
Update(k; j), which is to apply the pivoting sequence derived from F actor(k) to the j-th column
block, and modify the j-th column block using the k-th column block, where
Instead of performing the row interchange to the right part of the matrix right after each pivoting
search, a technique called "delayed-pivoting" is used [6]. In this technique, the pivoting sequence
is held until the factorization of the k-th column block is completed. Then the pivoting sequence is
applied to the rest of the matrix, i.e., interchange rows. Delayed-pivoting is important, especially to
the parallel algorithm, because it is equivalent to aggregating multiple small messages into a larger
one. Here the owner of the k-th column block sends the column block packed together with the
pivoting information to other processors.
An outline of the partitioned sparse LU factorization algorithm with partial pivoting is described
in
Figure
6. The code of F actor(k) is summarized in Figure 7. It uses BLAS-1 and BLAS-
subroutines. The computational cost of the numerical factorization is mainly dominated by
tasks. The function of task Update(k; j) is presented in Figure 8. The lines (05) and
are using dense matrix multiplications.
(2) Perform task F actor(k);
Perform task Update(k; j);
Figure
partitioned sparse LU factorization with partial pivoting.
(3) Find the pivoting row t in column m;
row t and row m of the column block k;
(5) Scale column m and update rest of columns in this column block;
Figure
7: The description of task F actor(k).
We use directed acyclic task graphs (DAGs) to model irregular parallelism arising in this partitioned
sparse LU program. The DAGs are constructed statically before numerical factorization.
Previous work on exploiting task parallelism for sparse Cholesky factorization has used elimination
trees (e.g. [28, 30]), which is a good way to expose the available parallelism because pivoting is not
required. For sparse LU, an elimination tree of A T A does not directly reflect the available paral-
lelism. Dynamically created DAGs have been used for modeling parallelism and guiding run-time
execution in a nonsymmetric multi-frontal method [5, 25].
Given the task definitions in Figures 6, 7 and 8 we can define the structure of a sparse LU task
graph in the following.
These four properties are necessary.
ffl There are N tasks F actor(k), where 1 - k - N .
ffl There is a task Update(k; . For a dense matrix, there will
be a total of N(N \Gamma 1)=2 updating tasks.
ffl There is a dependence edge from F actor(k) to task Update(k; j), where
(02) Interchange rows according to the pivoting sequence;
be the lower triangular part of L kk ;
(04) if the submatrix U kj is dense
else for each dense subcolumn c u of U kj
for each nonzero submatrix A ij
if the submatrix U kj is dense
else for each dense subcolumn c u of U kj
b be the corresponding dense subcolumn of A ij ;
Figure
8: A description of task Update(k; j).
ffl There is a dependence from Update(k; k 0 ) to F actor(k 0 ), where
exists no task Update(t; k 0 ) such that
We add one more property, that while not necessary, simplifies implementation. This property
essentially does not allow exploiting commutativity among Update() tasks. However, according to
our experience with Cholesky factorization [16], the performance loss due to this property is not
substantial, about 6% in average when graph scheduling is used.
ffl There is a dependence from Update(k; j) to Update(k there exists no
task Update(t; j) such that
Figure
9(a) shows the nonzero pattern of the partitioned matrix shown in Figure 4. Figure 9(b) is
the corresponding task dependence graph.
1-D data mapping. In the 1-D data mapping, all submatrices, from both L and U part, of the
same column block will reside in the same processor. Column blocks are mapped to processors in a
cyclic manner or based on other scheduling techniques such as graph scheduling. Tasks are assigned
based on owner-compute rule, i.e., tasks that modify the same column block are assigned to the
same processor that owns the column block.
One disadvantage of this mapping is that it serializes the computation in a single F actor(k) or
In other words, a single F actor(k) or Update(k; task will be performed by
(b)3 4 5125
Figure
9: (a) The nonzero pattern for the example matrix in Figure 4. (b) The dependence graph
derived from the partitioning result. For convenience, F () is used to denote F actor(), U() is used
to denote Update().
one processor. But this mapping strategy has an advantage that both pivot searching and subrow
interchange can be done locally without any communication. Another advantage is that parallelism
modeled by the above dependence structure can be effectively exploited using graph scheduling
techniques.
data mapping. In the literature 2-D mapping has been shown more scalable than 1-D for
sparse Cholesky [30, 31]. However there are several difficulties to apply the 2-D block-oriented
mapping to the case of sparse LU factorization even the static structure is predicted. Firstly,
pivoting operations and row interchanges require frequent and well-synchronized interprocessor
communication when submatrices in the same column block are assigned to different processors.
Effective exploitation of limited irregular parallelism in the 2-D case requires a highly efficient
asynchronous execution mechanism and a delicate message buffer management. Secondly, it is
difficult to utilize and schedule all possible irregular parallelism from sparse LU. Lastly, how to
manage a low space complexity is another issue since exploiting irregular parallelism to a maximum
degree may need more buffer space.
Our 2-D algorithm uses a simple standard mapping function. In this scheme, p available processors
are viewed as a two dimensional grid: c . A nonzero submatrix block A ij (could be
an L block or a U block) is assigned to processor P i mod pr ; j mod pc . The 2-D data mapping
is considered more scalable than 1-D data mapping because it enables parallelization of a single
F actor(k) or Update(k; j) task on p r processors. We will discuss how 2-D parallelism is exploited
using asynchronous schedule execution.
5 Parallelism exploitation
5.1 Scheduling and run-time support for 1-D methods
We discuss how 1-D sparse LU tasks are scheduled and executed so that parallel time can be
minimized. George and Ng [21] used a dynamic load balancing algorithm on a shared memory
machine. For distributed memory machines, dynamic and adaptive load balancing works well for
problems with very coarse grained computations, but it is still an open problem to balance the
benefits of dynamic scheduling with the run-time control overhead since task and data migration
cost is too expensive for sparse problems with mixed granularities. We use task dependence graphs
to guide scheduling and have investigated two types of scheduling schemes.
ffl Compute-ahead scheduling (CA). This is to use block-cyclic mapping of tasks with a
compute-ahead execution strategy, which is demonstrated in Figure 10. This idea has been
used to speed up parallel dense factorizations [23]. It executes the numerical factorization layer
by layer based on the current submatrix index. The parallelism is exploited for concurrent
updating. In order to overlap computation with communication, the F actor(k
executed as soon as F actor(k) and Update(k; k so that the pivoting
sequence and column block k for the next layer can be communicated as early as possible.
ffl Graph scheduling. We order task execution within each processor using the graph scheduling
algorithms in [36]. The basic optimizations are balancing processor loads and overlapping
computation with communication to hide communication latency. These are done by utilizing
global dependence structures and critical path information.
(01) if column block 1 is local
(02) Perform task F actor(1);
Broadcast column block 1 and the pivoting sequence;
local
Receive column block k and the pivoting choices;
rows according to the pivoting sequence;
Perform task F actor(k
Broadcast column block k and the pivoting sequence;
local
if column block k has not been received
Receive column block k and the pivoting choices;
rows according to the pivoting sequence;
Perform task Update(k; j);
Figure
10: The 1-D code using compute-ahead schedule.
Graph scheduling has been shown effective in exploiting irregular parallelism for other applications
(e.g. [15, 16]). Graph scheduling should outperform the CA scheduling for sparse LU because it
does not have a constraint in ordering F actor() tasks. We demonstrate this point using the LU
task graph in Figure 9. For this example, the Gantt charts of the CA schedule and the schedule
derived by our graph scheduling algorithm are listed in Figure 11. It is assumed that each task
has a computation weight 2 and each edge has communication weight 1. It is easy to see that
our scheduling approach produces a better result than the CA schedule. If we look at the CA
schedule carefully, we can see that the reason is that CA can look ahead only one step so that the
execution of task F actor(3) is placed after Update(1; 5). On the other hand, the graph scheduling
algorithm detects that F actor(3) can be executed before Update(1; 5) which leads to better overlap
of communication with computation.
P1(a)
Figure
11: (a) A schedule derived by our graph scheduling algorithm. (b) A compute-ahead schedule.
For convenience F () is used to denote F actor(), U() is used to denote Update().
However the implementation of the CA algorithm is much easier since the efficient execution of
a sparse task graph schedule requires a sophisticated run-time system to support asynchronous
communication protocols. We have used the RAPID run-time system [16] for the parallelization of
sparse LU using graph scheduling. The key optimization is to use Remote Memory Access(RMA) to
communicate a data object between two processors. It does not incur any copying/buffering during a
data transfer since low communication overhead is critical for sparse code with mixed granularities.
RMA is available in modern multi-processor architectures such as Cray-T3D [34], T3E [32] and
Meiko CS-2 [15]. Since the RMA directly writes data to a remote address, it is possible that the
content at the remote address is still being used by other tasks and then the execution at the remote
processor could be incorrect. Thus for a general computation, a permission to write the remote
address needs to be obtained before issuing a remote write. However in the RAPID system, this
hand-shaking process is avoided by a carefully designed task communication protocol [16]. This
property greatly reduces task synchronization cost. As shown in [17], the RAPID sparse code can
deliver more than 70% of the speedup predicted by the scheduler on Cray-T3D. In addition, using
RAPID system greatly reduces the amount of implementation work to parallelize sparse LU.
(01) Let (my rno; my cno) be the 2-D coordinates of this processor;
Perform ScaleSwap(k);
Perform Update
Perform Update 2D(k; j);
Figure
12: The SPMD code of 2-D asynchronous code.
5.2 Asynchronous execution for the 2-D code
As we discussed previously, 1-D data mapping can not expose parallelism to a maximum extent.
Another issue is that a time-efficient schedule may not be space-efficient. Specifically, to support
concurrency among multiple updating stages in both RAPID and CA code, multiple buffers are
needed to keep pivoting column blocks of different stages on each processor. Therefore for a given
problem, the per processor space complexity of the 1-D codes could be as high as O(S 1 ), where S 1
is the space complexity for a sequential algorithm. For sparse LU, each processor in the worst case
may need a space for holding the entire matrix. The RAPID system [16] also needs extra memory
space to hold dependence structures.
Based on the above observation, our goal for the 2-D code is to reduce memory space requirement
while exploiting a reasonable amount of parallelism so that it can solve large problem instances in
an efficient way. In this section, we present an asynchronous 2-D algorithm which can substantially
overlap multi-stages of updating but its memory requirement is much smaller than that of 1-D methods
Figures
12 shows the main control of the algorithm in an SPMD coding style. Figure 13 shows
the SPMD code for F actor(k) which is executed by processors of column k mod p c . Recall that
the algorithm uses 2-D block-cyclic data mapping and the coordinates for the processor that owns
are (i mod Also we divide the function of Update() (in Figure 8) into
two parts: ScaleSwap() which does scaling and delayed row interchange for submatrix A k:N; k+1:N
as shown in Figure 14; Update 2D() which does submatrix updating as shown in Figure 15. In all
figures, the statements which involve interprocessor communication are marked with .
It can be seen that the computation flow of this 2-D code is still controlled by the pivoting tasks
F actor(k). The order of execution for F actor(k), is sequential, but Update 2D()
tasks, where most of the computation comes from, can execute in parallel among all processors. The
asynchronous parallelism comes from two levels. First a single stage of tasks Update 2D(k;
Find out local maximum element of column m;
(05)* Send the subrow within column block k containing the local maximum
to processor P k mod pr ; k mod pc ;
(06) if this processor owns L kk
(07)* Collect all local maxima and find the pivot row t;
(08)* Broadcast the subrow t within column block k along this processor column
and interchange subrow t and subrow m if necessary.
Scale local entries of column m;
Update local subcolumns from column
(12)* Multicast the pivot sequence along this processor row;
(13)* if this processor owns L kk then Multicast L kk along this processor row;
(14)* Multicast the part of nonzero blocks in L k+1:N; k owned by this processor
along this processor row;
Figure
13: Parallel execution of F actor(k) for the 2-D asynchronous code.
can be executed concurrently on all processors. In addition, different stages of Update 2D() tasks
from Update 2D(k; can also be overlapped.
The idea of compute-ahead scheduling is also incorporated, i.e., F actor(k + 1) is executed as soon
as Update finishes.
Some detailed explanation for pivoting, scaling and swapping is given below. In line (5) of Figure
13, the whole subrow is communicated when each processor reports its local maximum to
the processor that owns the L kk block. Let m be the current global column
number on which the pivoting is conducted, then without further synchronization, processor
locally swap subrow m with subrow t which contains the selected pivoting
element. This shortens the waiting time to conduct further updating with a little more communication
volume. However in line (08), processor P k mod pr ; k mod pc must send the original subrow m to
the owner of subrow t for swapping, and the selected subrow t to other processors as well for updat-
ing. In F actor() tasks, synchronizations take place at lines (05), (07) and (08) when each processor
reports its local maximum to P k mod pr ; k mod pc , and P k mod pr ; k mod pc broadcasts the subrow
containing global maximum along the processor column. For task ScaleSwap(), the main role is to
scale U k; k+1:N and perform delayed row interchanges for remaining submatrices A k+1:N; k+1:N .
We examine the degree of parallelism exploited in this algorithm by determining number of updating
stages that can be overlapped. Using this information we can also determine the extra buffer space
needed per processor to execute this algorithm correctly. We define the stage overlapping degree
then receive the pivot sequence from P my rno; k mod pc ;
(04) if This processor own a part of row m or the pivot row t for column m
(05)* Interchange nonzero parts of row t and row m owned by this processor;
(08)* if my cno 6= k mod p c then receive L kk from P my rno; k mod pc ;
Scale nonzero blocks in U k; k:N owned by this processor;
(10)* Multicast the scaling results along this processor column;
(11)* if my cno 6= k mod p c then receive L k:N; k from P my rno; k mod pc ;
(12)* if my rno 6= k mod p r then receive U k; k:N from P k mod pr ; my cno ;
Figure
14: Task ScaleSwap(k) for the 2-D asynchronous code.
(1) Update 2D(k;
using L ik and U kj ;
Figure
15: Update 2D(k; j) for the 2-D asynchronous code.
for updating tasks as
There exist tasks Update 2D(k; ) and Update 2D(k executed concurrently.g
Here Update 2D(k; ) denotes a set of Update 2D(k; tasks where
Theorem 2 For the asynchronous 2-D algorithm on p processors where p ? 1 and
the reachable upper bound of overlapping degree is p c among all processors; and the reachable upper
bound of overlapping degree within a processor column is min(p r \Gamma
Proof: We will use the following facts in proving the theorem:
ffl Fact 1. F actor(k) is executed at processors with column number k mod p c . Processors on
this column are synchronized. When a processor completes F actor(k), this processor can still
do Update shown in Figure 13, but all Update tasks belonging to
this processor where t ? 1 must have been completed on this processor.
ffl Fact 2. ScaleSwap(k) is executed at processors with row number k mod p r . When a processor
completes ScaleSwap(k), all Update tasks belonging to this processor where t ? 0
must have been completed on this processor.
Part 1. First we show that Update 2D() tasks can be overlapped to a degree of p c among all
processors.
When trivial based on Fact 1. When p c ? 1, we can imagine a scenario in which all
processors in column 0 have just finished task F actor(k), and some of them are still working on
Update processors in column 1 could go ahead and execute Update 2D(k; )
tasks. After processors in column 1 finish Update 2D(k; k+1) task, they will execute F actor(k+1).
Then after finishing Update 2D(k; ) tasks, processors in column 2 could execute Update 2D(k
Finally, processors in column p c \Gamma 1 could
execute F actor(k moment, processors in
column 0 may be still working on Update Thus the overlapping degree is p c .
Now we will show by contradiction that the maximum overlapping degree is p c . Assume that at
some moment, there exist two updating stages being executed concurrently: Update 2D(k; ) and
Update must have been completed. Without loss of
generality, assuming that processors in column 0 execute F actor(k 0 ), then according to Fact 1 all
Update should be completed before this moment. Since block cyclic
mapping is used, it is easy to see each processor column has performed one of the F actor(j) tasks
should be completed on all processors.
Then for any concurrent stage Update 2D(k; ), k must satisfy which is a contradiction.
Part 2. First, we show that overlapping degree min(p r \Gamma can be achieved within a processor
column. For the convenience of illustration, we consider a scenario in which all delayed
row interchanges in ScaleSwap() take place locally without any communication within a processor
column. Therefore there is no interprocessor synchronization going on within a processor column
except in F actor() tasks. Assuming , we can imagine at some moment, processors in
column 0 have completed F actor(s), and P 0;0 has just finished ScaleSwap(s), and starts executing
Update 2D(s; ), where s mod processors in column 1 will execute
Update 1), after which P 1;0 can start ScaleSwap(s
then Update 2D(s Following this reasoning, after Update 2D(s
been finished on processors of column could complete previous
Update 2D() tasks and ScaleSwap(s+p r \Gamma 1), and start Update 2D(s+p r \Gamma 1; ). Now P 0;0 may be
still working on Update 2D(s; ). Thus the overlapping degree is obviously the
above reasoning will stop when processors of column
and F actor(s 1). In that case when P pc \Gamma1;0 is to start Update 2D(s+
pr \Gamma1;0 could be still working on Update 2D(s \Gamma because of the compute ahead scheduling.
Hence the overlapping degree is p c .
Now we need to show that the upper bound of overlapping degree within a processor column is
We have already shown in the proof of Part 1 that the overall overlapping degree
is less than p c , so is the overlapping degree within a processor column. To prove it is also less
than 1, we can use the similar proof as that for part 1, except using ScaleSwap(k) to replace
F actor(k), and using Fact 2 instead of Fact 1.
Knowing degree of overlapping is important in determining the amount of memory space needed to
accommodate those communication buffers on each processor for supporting asynchronous execu-
tion. Buffer space is additional to data space needed to distribute the original matrix. There are
four types of communication that needs buffering:
1. Pivoting along a processor column (lines (05), (07), and (08) in Figure 13), which includes
communicating pivot positions and multicasting pivot rows. We call the buffer for this purpose
Pbuffer.
2. Multicasting along a processor row (line (12), (13) and (14) in Figure 13). The communicated
data includes L kk , local nonzero blocks in L k+1:N; k , and pivoting sequences. We call the buffer
for this purpose Cbuffer.
3. Row interchange within a processor column (line (05) in Figure 14). We call this buffer Ibuffer.
4. Multicasting along a processor column (line (10) in Figure 14). The data includes local nonzero
blocks of a row panel. We call the buffer Rbuffer.
Here we assume that p r - because based on our experimental results, setting p r -
always leads to better performance. Thus the overlapping degree of Update 2D() tasks within a
processor row is at most p c , and the overlapping degree within a processor column is at most p r \Gamma 1.
Then we need p c separate Cbuffer's for overlapping among different columns and
Rbuffer's for overlapping among different rows.
We estimate the size of each Cbuffer and Rbuffer as follows. Assuming that the sparsity ratio of a
given matrix is s after fill-in and the maximum block size is BSIZE, each Cbuffer is of size:
maxfspace for local nonzero blocks of L k:N;k
Similarly each Rbuffer is of size:
local nonzero blocks of U
We ignore the buffer size for Pbuffer and Ibuffer because they are very small (the size of Pbuffer is
only about BSIZE \Delta BSIZE and the size of Ibuffer is about s \Delta n=p c ). Thus the total buffer space
needed for the asynchronous execution is: C
Notice that the sequential space complexity In practice, we set p c =p 2. Therefore the
buffer space complexity for each processor is 2:5
which is very small
for a large matrix. For all the benchmark matrices we have tested, the buffer space is less than
100 K words. Given a sparse matrix, if the matrix data is evenly distributed onto p processors, the
total memory requirement per processor is S 1 =p +O(1) considering n AE p and n AE BSIZE. This
leads us to conclude that the 2-D asynchronous algorithm is very space scalable.
6 Experimental studies
Our experiments were originally conducted on a Cray-T3D distributed memory machine at San
Supercomputing Center. Each node of the T3D includes a DEC Alpha EV4(21064) processor
with 64 Mbytes of memory. The size of the internal cache is 8 Kbytes per processor. The BLAS-3
matrix-matrix multiplication routine DGEMM can achieve 103 MFLOPS, and the BLAS-2 matrix-vector
multiplication routine DGEMV can reach 85 MFLOPS. These numbers are obtained assuming
all the data is in cache and using cache read-ahead optimization on T3D, and the matrix block
size is chosen as 25. The communication network of the T3D is a 3-D torus. Cray provides a
shared memory access library called shmem which can achieve 126 Mbytes/s bandwidth and 2:7-s
communication overhead using shmem put() primitive [34]. We have used shmem put() for the
communications in all the implementations.
We have also conducted experiments on a newly acquired Cray-T3E at San Diego Supercomputing
Center. Each T3E node has a clock rate of 300 MHZ, an 8Kbytes internal cache, 96Kbytes second
level cache, and 128 Mbytes main memory. The peak bandwidth between nodes is reported as
500 Mbytes/s and the peak round trip communication latency is about 0.5 to 2 -s [33]. We
have observed that when block size is 25, DGEMM achieves 388 MFLOPS while DGEMV reaches 255
MFLOPS. We have used block size 25 in our experiments since if the block size is too large, the
available parallelism will be reduced. In this section we mainly report results on T3E. In some
occasions that the absolute performance is concerned, we also list the results on T3D to see how
our approach scales when the underline architecture is upgraded. All the results are obtained on
T3E unless explicitly stated.
In calculating the MFLOPS achieved by our parallel algorithms, we do not include extra floating
point operations introduced by the overestimation. We use the following formula:
Achieved
Operation count obtained from SuperLU
Parallel time of our algorithm on T3D or T3E :
The operation count for a matrix is reported by running SuperLU code on a SUN workstation with
large memory since SuperLU code cannot run for some large matrices on a single T3D or T3E node
due to memory constraint. We also compare the S sequential code with SuperLU to make sure
that the code using static symbolic factorization is not too slow and will not prevent the parallel
version from delivering high megaflops.
6.1 Impact of static symbolic factorization on sequential performance
We study if the introduction of extra nonzero elements by the static factorization substantially
affects the time complexity of numerical factorization. We compare the performance of the S
sequential code with SuperLU code performance in Table 2 1 for those matrices from Table 1 that
1 The times for S in this table do not include symbolic preprocessing cost while the times for SuperLU include
symbolic factorization because SuperLU does it on the fly. Our implementation for static symbolic preprocessing is
can be executed on a single T3D or T3E node. We also introduce two other matrices to show how
well the method works for larger matrices and denser matrices. One of the two matrices is b33 5600
which is truncated from BCSSTK33 because of the current sequential implementation is not able to
handle the entire matrix due to memory constraint, and the other one is dense1000.
Matrix S Approach SuperLU Exec. Time Ratio
Seconds Mflops Seconds Mflops S /SuperLU
sherman5 2.87 0.94 8.81 26.9 2.39 0.78 10.57 32.4 1.21 1.22
sherman3 6.06 2.03 10.18 30.4 4.27 1.68 14.46 36.7 1.56 1.21
jpwh991 2.11 0.69 8.24 25.2 1.62 0.56 10.66 31.0 1.34 1.23
goodwin 43.72 17.0 15.3 39.4 -
dense1000 10.48 4.04 63.6 165.0 19.6 8.39 34.0 79.4 0.53 0.48
Table
2: Sequential performance: S versus SuperLU. A "-" implies the data is not available due
to insufficient memory.
Though the static symbolic factorization introduces a lot of extra computation as shown in Table 1,
the performance of S after 2-D L/U partitioning is consistently competitive to that of highly optimized
SuperLU. The absolute single node performance that has been achieved by the S approach
on both T3D and T3E is consistently in the range of 5 \Gamma 10% of the highest DGEMM performance for
those matrices of small or medium sizes. Considering the fact that sparse codes usually suffer poor
cache reuse, this performance is reasonable. In addition, the amount of computation for the testing
matrices in Table 2 is small, ranging from to 107 million double precision floating operations.
Since the characteristic of the S approach is to explore more dense structures and utilize BLAS-3
kernels, better performance is expected on larger or denser matrices. This is verified on a matrix
b33 5600. For even larger matrices such as vavasis3, we cannot run S on one node, but as shown
later, the 2-D code can achieve 32.8 MFLOPS per node on 16 T3D processors. Notice that the
megaflops performance per node for sparse Choleksy reported in [24] on 16 T3D nodes is around
40 MFLOPS, which is also a good indication that S single-node performance is satisfactory.
We present a quantitative analysis to explain why S can be competitive to SuperLU. Assume the
speed of BLAS-2 kernel is ! 2 second=f lop and the speed of BLAS-3 kernel is ! 3 second=f lop. The
total amount of numerical updates is C f lops for SuperLU and C 0 f lops for the S . Apparently
simplicity, we ignore the computation from the scaling part within each column because
it contributes very little to the total execution time. Hence we have:
very efficient. For example, the preprocessing time is only about 2.76 seconds on a single node of T3E for the largest
matrix we tested (vavasis3).
where T symbolic is the time spent on dynamic symbolic factorization in the SuperLU approach, ae
is the percentage of the numerical updates that are performed by DGEMM in S . Let j be the
ratio of symbolic factorization time to numerical factorization time in SuperLU, then we simplify
Equation (1) to the following:
We estimate that η ≈ 0.82 for the tested matrices based on the results in [7]. In [17], we have also measured ρ as approximately 0.67. The ratios of the number of floating point operations performed in S* and SuperLU for the tested matrices are available in Table 1; on average, the value of C'/C is 3.98. Plugging these typical parameters into Equations (2) and (3), together with the measured BLAS-2 and BLAS-3 speeds, gives T_S*/T_SuperLU ≈ 1.93 for T3D and T_S*/T_SuperLU ≈ 1.68 for T3E. These estimates are close to the ratios obtained in Table 2. The discrepancy is caused by the fact that the submatrix sizes of supernodes are non-uniform, which leads to different caching performance. If submatrices were of uniform sizes, we would expect our prediction to be more accurate. For instance, in the dense case, C'/C is exactly 1, and the ratio T_S*/T_SuperLU is calculated as 0.48 for T3D and 0.42 for T3E, which is almost the same as the ratios listed in Table 2.
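To make the parameter interplay concrete, the short sketch below evaluates one plausible form of the simplified cost model, T_S*/T_SuperLU ≈ (C'/C)·(ρ·ω_3 + (1-ρ)·ω_2)/((1+η)·ω_2). This particular form and the ω values are illustrative assumptions only, since Equations (1)-(3) themselves are not reproduced above; the sketch merely shows how the measured parameters combine.

    /* Illustrative sketch: one plausible form of the simplified cost model.
     * The model form and the omega values are assumptions, not the paper's
     * Equations (1)-(3). */
    #include <stdio.h>

    int main(void) {
        double rho     = 0.67;  /* fraction of S* updates done by DGEMM (measured in [17]) */
        double eta     = 0.82;  /* SuperLU symbolic/numerical time ratio (estimated from [7]) */
        double c_ratio = 3.98;  /* average C'/C from Table 1 */
        double w2      = 1.0;   /* BLAS-2 seconds per flop (normalized, assumed) */
        double w3      = 0.8;   /* BLAS-3 seconds per flop relative to w2 (assumed) */

        double ratio = c_ratio * (rho * w3 + (1.0 - rho) * w2) / ((1.0 + eta) * w2);
        printf("predicted T_S*/T_SuperLU = %.2f\n", ratio);  /* about 1.9 with these inputs */
        return 0;
    }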
The above analysis shows that using BLAS-3 as much as possible makes S* competitive with SuperLU. On a machine where DGEMM outperforms DGEMV substantially, and when the fraction of the computation performed by DGEMM is high enough, S* can even be faster than SuperLU for some matrices; the last two entries in Table 2 already show this.
6.2 Parallel performance of 1-D codes
In this subsection, we report a set of experiments conducted to examine the overall parallel performance
of 1-D codes, the effectiveness of scheduling and supernode amalgamation.
Performance: We list the MFLOPS numbers of the 1-D RAPID code obtained on various numbers of processors for several testing matrices in Table 3 (a "-" entry implies the data is not available due to memory constraints; the same applies below). We know that the megaflops rate of DGEMM on T3E is about 3.7 times as large as that on T3D, and the RAPID code, after moving to the upgraded machine, is sped up about 3 times on average, which is satisfactory. For the same machine, the performance of the RAPID code increases as the number of processors increases, and speedups compared to the pure sequential S* code (where applicable) reach up to 17.7 on 64 T3D nodes and 24.1 on 64 T3E nodes. From 32 to 64 nodes, the performance gain is small except for matrices goodwin, e40r0100 and b33 5600, which are much larger problems than the rest. The reason is that the small tested matrices do not have a sufficient amount of computation and parallelism to saturate a large number of processors as the elimination process proceeds toward the end. It is our belief that better and more scalable performance can be obtained on larger matrices, but currently the available memory on each node of T3D or T3E limits the problem size that can be solved with the current version of the RAPID code.
Matrix     P=2            P=4            P=8            P=16           P=32           P=64
           T3D    T3E     T3D    T3E     T3D    T3E     T3D    T3E     T3D    T3E     T3D    T3E
sherman5   14.7   44.4    25.8   79.0    40.8   133.1   53.8   168.6   64.9   210.7   68.4   229.9
sherman3   16.4   51.4    30.0   90.7    45.7   143.5   61.1   192.8   64.3   199.0   66.3   212.7
jpwh991    13.3   41.4    23.2   75.6    40.5   124.2   51.2   173.9   58.0   193.2   60.0   217.3
orsreg1    17.4   53.4    30.6   90.6    51.2   160.3   68.7   215.6   75.3   223.3   75.3   231.6
goodwin    29.6   73.6    54.0   135.7   87.9   238.0   136.4  373.7   182.0  522.6   218.1  655.8

Table 3: Absolute performance (MFLOPS) of the 1-D RAPID code on T3D and T3E.
Effectiveness of Graph Scheduling: We compare the performance of the 1-D CA code with the 1-D RAPID code in Figure 16. The Y axis shows the relative parallel time improvement of the RAPID code over the compute-ahead code. For 2 and 4 processors, in certain cases, the compute-ahead code is slightly faster than the RAPID code. But for more than 4 processors, the RAPID code runs faster, and the more processors involved, the bigger the performance gap tends to be. The reason is that for a small number of processors there are sufficient tasks to keep all processors busy, so the compute-ahead schedule performs well while the RAPID code suffers a certain degree of system overhead. For a larger number of processors, schedule optimization becomes important since there is limited parallelism to exploit.
Effectiveness of supernode amalgamation: We have examined how effective our supernode amalgamation strategy is using the 1-D RAPID code. Let PT_a and PT be the parallel times with and without supernode amalgamation, respectively. The parallel time improvement ratios (PT - PT_a)/PT on T3E for several testing matrices are listed in Table 4, and similar results on T3D are in [17]. Apparently the supernode amalgamation has brought significant improvement due to the increase of supernode size, which implies an increase in task granularities. This is important for obtaining good parallel performance [22].
Figure 16: Impact of different scheduling strategies on the 1-D code approach (two panels comparing the 1-D RAPID code with the 1-D CA code; left panel: sherman5 and sherman3, right panel: jpwh991 and goodwin; x-axis: #proc).
Matrix P=1 P=2 P=4 P=8 P=16 P=32
sherman5 47% 47% 46% 50% 40% 43%
sherman3 20% 25% 23% 28% 22% 14%
jpwh991 48% 48% 48% 50% 47% 40%
Table 4: Parallel time improvement obtained by supernode amalgamation.
6.3 2-D code performance
As mentioned before, our 2-D code exploits more parallelism while maintaining a lower space complexity, and thus has much more potential to solve large problems. We show the absolute performance obtained for some large matrices on T3D in Table 5. Since some matrices cannot fit in memory on a small number of processors, we only list results on 16 or more processors. The maximum absolute performance achieved on 64 nodes of T3D is 1.48 GFLOPS, which translates to 23.1 MFLOPS per node. For 16 nodes, the per-node performance is 32.8 MFLOPS.
Table 6 shows the performance numbers on T3E for the 2-D code. We have achieved up to 6.878 GFLOPS on 128 nodes. For 64 nodes, the megaflops rates on T3E are from 3.1 to 3.4 times as large as those on T3D. Again considering that the DGEMM megaflops rate on T3E is about 3.7 times as large as that on T3D, our code performance after moving to the upgraded machine is good. Notice that 1-D codes cannot solve the last six matrices of Table 6. For those matrices solvable using both the 1-D RAPID and 2-D codes, we compare the average parallel time differences, and the result is in Figure 17. The 1-D RAPID code achieves
Matrix     P=16                 P=32                 P=64
           Time(Sec)  Mflops    Time(Sec)  Mflops    Time(Sec)  Mflops
goodwin    6.0        110.7     4.6        145.2     3.6        184.8
ex11       87.9       305.0     53.4       501.8     33.4       802.6
raefsky4   129.8      242.9     76.0       413.8     43.2       719.2

Table 5: Performance results of the 2-D code for large matrices on T3D.
Matrix P=8 P=16 P=32 P=64 P=128
Time Mflops Time Mflops Time Mflops Time Mflops Time Mflops
goodwin 3.1 215.2 1.9 344.6 1.3 496.3 1.1 599.2 0.9 715.2
ex11 50.7 528.8 28.3 946.2 16.2 1654.2 9.9 2703.1 6.4 4182.2
raefsky4 79.4 391.2 43.2 718.9 24.1 1290.7 13.9 2233.3 8.6 3592.9
inaccura 16.8 244.6 9.9 415.2 6.3 655.8 3.9 1048.0 3.0 1391.4
af23560 22.3 285.4 12.9 492.9 8.12 784.3 5.7 1123.2 4.2 1512.7
Table 6: Performance results of the 2-D asynchronous algorithm on T3E. All times are in seconds.
better performance because it uses a sophisticated graph scheduling technique to guide the mapping of column blocks and the ordering of tasks, which results in better overlapping of communication with computation. The performance difference is larger for the matrices listed in the left panel of Figure 17 than for those in the right panel. We partially explain the reason by analyzing the load balance factors of the 1-D RAPID code and the 2-D code in Figure 18. The load balance factor is defined as work_total/(P · work_max) [31]. Here we only count the work from the updating part because it is the major part of the computation. The 2-D code has better load balance, which can partially make up for its lack of efficient task scheduling. This is verified by Figure 17 and Figure 18. One can see that when the load balance factor of the 2-D code is close to that of the RAPID code (e.g., lnsp3937), the performance of the RAPID code is much better than that of the 2-D code; when the load balance factor of the 2-D code is significantly better than that of the RAPID code (e.g., jpwh991 and orsreg1), the performance differences are smaller.
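For reference, the load balance factor quoted above is straightforward to compute from per-processor update workloads; the sketch below does so for illustrative (assumed) workload values.

    /* Sketch: load balance factor work_total / (P * work_max), as defined
     * above; the per-processor workloads are illustrative values. */
    #include <stdio.h>

    double load_balance(const double work[], int P) {
        double total = 0.0, max = 0.0;
        for (int i = 0; i < P; i++) {
            total += work[i];
            if (work[i] > max) max = work[i];
        }
        return total / (P * max);   /* 1.0 means perfectly balanced */
    }

    int main(void) {
        double work[] = {120.0, 95.0, 110.0, 80.0};  /* Mflop of updates per processor (assumed) */
        printf("load balance factor = %.2f\n", load_balance(work, 4));
        return 0;
    }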
Synchronous versus asynchronous 2-D code. Using a global barrier in the 2-D code at each elimination step can simplify the implementation, but it cannot overlap computations among different updating stages. We have compared the parallel time reductions of the asynchronous code over the synchronous code for some testing matrices in Table 7. It shows that the asynchronous design improves performance significantly, especially on a large number of processors on T3E, which demonstrates the importance of exploiting parallelism through asynchronous execution. The experiment
Figure 17: Performance improvement of the 1-D RAPID code over the 2-D code (two panels: sherman5 and sherman3, jpwh991 and goodwin; x-axis: #proc).
Figure 18: Comparison of the load balance factors of the 1-D RAPID code and the 2-D code (two panels: sherman3 and jpwh991; x-axis: #proc; y-axis: load balance factor).
data on T3D is in [14].
7 Concluding remarks
In this paper we present an approach for parallelizing sparse LU factorization with partial pivoting on distributed memory machines. The major contribution of this paper is that we integrate several techniques, namely static symbolic factorization, scheduling for asynchronous parallelism, and 2-D L/U supernode partitioning, to effectively identify dense structures and maximize the use of BLAS-3 subroutines in the algorithm design. Using these ideas, we are able to exploit more data regularity for this open irregular problem and achieve up to 6.878 GFLOPS on 128 T3E nodes. This is the highest performance known for this challenging problem; the previous record was 2.583 GFLOPS on shared memory machines [8].
Matrix P=2 P=4 P=8 P=16 P=32 P=64
sherman5 7.7% 6.4% 19.4% 28.1% 25.9% 24.1%
sherman3 10.2% 12.4% 20.3% 22.7% 26.0% 25.0%
jpwh991 8.7% 10.0% 23.8% 33.3% 35.7% 28.6%
orsreg1 6.1% 7.7% 17.5% 28.0% 20.5% 28.2%
goodwin 5.4% 14.1% 14.2% 24.6% 26.0% 30.2%
Table 7: Performance improvement of the 2-D asynchronous code over the 2-D synchronous code.
The comparison results show that the 2-D code has better scalability than the 1-D codes because 2-D mapping exposes more parallelism with a carefully designed buffering scheme. But the 1-D RAPID code still outperforms the 2-D code when there is sufficient memory, since the scheduling and execution techniques for the 2-D code are simple and not competitive with graph scheduling. Recently we have conducted research on developing space-efficient scheduling algorithms while retaining good time efficiency [18]. It is still an open problem to develop advanced scheduling techniques that better exploit parallelism for 2-D sparse LU factorization with partial pivoting. There are other issues related to this work that need to be studied further, for example, alternatives for parallel sparse LU based on Schur complements [13] and static estimation and parallelism exploitation for sparse QR [29, 35].
It should be noted that the static symbolic factorization could fail to be practical if the input matrix has a nearly dense row, because this will lead to an almost complete fill-in of the whole matrix. It might be possible to use a different matrix reordering to avoid that. Fortunately, this is not the case in most of the matrices we have tested. Therefore our approach is applicable to a wide range of problems using a simple ordering strategy. It would be interesting in the future to study ordering strategies that minimize overestimation ratios so that S* can consistently deliver good performance for various classes of sparse matrices.
Acknowledgment
This work is supported by NSF RIA CCR-9409695, NSF CDA-9529418, the UC MICRO grant with
a matching from SUN, NSF CAREER CCR-9702640, and ARPA DABT-63-93-C-0064 through the
Rutgers HPCD project.
We would like to thank Kai Shen for the efficient implementation of the static symbolic factorization
algorithm, Xiaoye Li and Jim Demmel for helpful discussions and providing us their testing
matrices and SuperLU code, Cleve Ashcraft, Tim Davis, Apostolos Gerasoulis, Esmond Ng, Ed
Rothberg, Rob Schreiber, Horst Simon, Chunguang Sun, Kathy Yelick and anonymous referees for
their valuable comments.
--R
The Influence of Relaxed Supernode Partitions on the Multifrontal Method.
Progress in Sparse Matrix Methods for Large Sparse Linear Systems on Vector Supercomputers.
User's guide for the Unsymmetric-pattern Multifrontal Package (UMFPACK)
Personal Communication
An Unsymmetric-pattern Multifrontal Method for Sparse LU factor- ization
Numerical Linear Algebra on Parallel Processors.
A Supernodal Approach to Sparse Partial Pivoting.
An Asynchronous Parallel Supernodal Algorithm for Sparse Gaussian Elimination.
An Extended Set of Basic Linear Algebra Subroutines.
The Multifrontal Solution of Indefinite Sparse Symmetric Systems of Equations.
On Algorithms for Obtaining a Maximum Transversal.
Personal Communication
Structural Representations of Schur Complements in Sparse Matrices
A Comparison of 1-D and 2-D Data Mapping for Sparse LU Factorization with Partial Pivoting
Efficient Run-time Support for Irregular Task Computations with Mixed Granularities
Sparse LU Factorization with Partial Pivoting on Distributed Memory Machines.
Space and Time Efficient Execution of Parallel Irregular Computations.
The Parallel Solution of Nonsymmetric Sparse Linear Systems Using H
Symbolic Factorization for Sparse Gaussian Elimination with Partial Pivoting.
Parallel Sparse Gaussian Elimination with Partial Pivoting.
On the Granularity and Clustering of Directed Acyclic Task Graphs
Scientific Computing: An Introduction with Parallel Computing Compilers
Highly Scalable Parallel Algorithms for Sparse Matrix Factorization.
A Parallel Unsymmetric-pattern Multifrontal Method
Parallel Algorithms for Sparse Linear Systems
Parallel sparse gaussian elimination with partial pivoting and 2-d data mapping
Computational Models and Task Scheduling for Parallel Sparse Cholesky Factorization.
Distributed Sparse Gaussian Elimination and Orthogonal Factorization.
Exploiting the Memory Hierarchy in Sequential and Parallel Sparse Cholesky Factorization.
Improved Load Distribution in Parallel Sparse Cholesky Fac- torization
Synchronization and Communication in the T3E Multiprocess.
The Cray T3E Network: Adaptive Routing in a High Performance 3D Torus.
Decoupling Synchronization and Data Transfer in Message Passing Systems of Parallel Computers.
Parallel Sparse Orthogonal Factorization on Distributed-memory Multiprocessors
PYRROS: Static Task Scheduling and Code Generation for Message-Passing Multiprocessors
--TR
--CTR
Kai Shen , Xiangmin Jiao , Tao Yang, Elimination forest guided 2D sparse LU factorization, Proceedings of the tenth annual ACM symposium on Parallel algorithms and architectures, p.5-15, June 28-July 02, 1998, Puerto Vallarta, Mexico
Xiaoye S. Li , James W. Demmel, Making sparse Gaussian elimination scalable by static pivoting, Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), p.1-17, November 07-13, 1998, San Jose, CA
Patrick R. Amestoy , Iain S. Duff , Jean-Yves L'excellent , Xiaoye S. Li, Analysis and comparison of two general sparse solvers for distributed memory computers, ACM Transactions on Mathematical Software (TOMS), v.27 n.4, p.388-421, December 2001
Xiaoye S. Li , James W. Demmel, SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems, ACM Transactions on Mathematical Software (TOMS), v.29 n.2, p.110-140, June | dense structures;Sparse LU factorization;gaussian elimination with partial pivoting;irregular parallelism;asynchronous computation scheduling |
275845 | Strong Interaction Fairness Via Randomization. | AbstractWe present MULTI, a symmetric, distributed, randomized algorithm that, with probability one, schedules multiparty interactions in a strongly fair manner. To our knowledge, MULTI is the first algorithm for strong interaction fairness to appear in the literature. Moreover, the expected time taken by MULTI to establish an interaction is a constant not depending on the total number of processes in the system. In this sense, MULTI guarantees real-time response. MULTI makes no assumptions (other than boundedness) about the time it takes processes to communicate. It, thus, offers an appealing tonic to the impossibility results of Tsay and Bagrodia, and Joung concerning strong interaction fairness in an environment, shared-memory, or message-passing, in which processes are deterministic and the communication time is nonnegligible. Because strong interaction fairness is as strong a fairness condition that one might actually want to impose in practice, our results indicate that randomization may also prove fruitful for other notions of fairness lacking deterministic realizations and requiring real-time response. | Introduction
A multiparty interaction is a set of I/O actions executed jointly by a number of processes, each of
which must be ready to execute its own action for any of the actions in the set to occur. An attempt
to participate in an interaction delays a process until all other participants are available. After
the actions are executed, the participating processes continue their local computation. Although a
relatively new concept, the multiparty interaction has found its way into various distributed programming
languages and algebraic models of concurrency. See [11] for a taxonomy of programming
languages offering linguistic support for multiparty interaction.
Although multiparty interactions are executed synchronously, the underlying model of communication
is usually asynchronous and bipartied. The multiparty interaction scheduling problem then
is concerned with synchronizing asynchronous processes to satisfy the following requirements: (1) an
interaction can be established only if it is enabled (i.e., all of its participants are ready), and (2) a
process can participate in only one interaction at a time. Moreover, some notion of fairness is typically
associated with the implementation to prevent "unfair" computations that favor a particular
process or interaction.
Several important fairness notions have been proposed in the literature [1, 2, 3], including: weak
interaction fairness, where if an interaction is continually enabled, then some of its participants
will eventually engage in an interaction; and strong interaction fairness, where an interaction that
is infinitely often enabled will be established infinitely often. A distinguishing characteristic of weak interaction fairness is that it is much weaker than most known fairness notions, while strong interaction fairness is much stronger.
In general, stronger fairness notions induce more liveness properties, but are also more difficult
to implement. Therefore, it is not surprising to see that only weak interaction fairness has been
widely implemented (e.g., [18, 15, 4, 14, 12, 20, 10]). It is also not surprising to see that all of
these algorithms are asymmetric and deterministic, as weak interaction fairness (and thus strong
interaction fairness) has been proven impossible by any symmetric, deterministic, and distributed
algorithm [8, 13]. Given that a process decides autonomously when it will attempt an interaction,
and at a time that cannot be predicted in advance, strong interaction fairness is still not possible
even if the symmetry requirement is dropped [19, 9].
Note that these impossibility results do not depend on the type of communication primitives
(e.g., message-passing or shared-memory) provided by the underlying execution model. They hold
as long as one process's readiness for multiparty interaction can be known by another only through
communications, and the time it takes two processes to communicate is nonnegligible (but can be
finitely bounded).
In the case of CSP communication guard scheduling, a special case of the multiparty interaction
scheduling problem where each interaction involves exactly two processes, randomization has proven
to be an effective technique for coping with the aforementioned impossibility phenomena. For ex-
ample, the randomized algorithm of Reif and Spirakis [17] is symmetric, weak interaction fair with probability 1, and guarantees real-time response: if two processes are continuously willing to interact with each other within a time interval Δ, then they establish an interaction within Δ time with high likelihood, and the expected time to establish an interaction is constant.
The randomized algorithm of Francez and Rodeh [8] is simpler: a process p_i expresses its willingness to establish an interaction with a process p_j by setting a Boolean variable shared by the two processes; p_i may then need to wait a certain amount of time δ before reaccessing the variable to determine whether p_j is likewise interested in the interaction. The authors show that, under the proviso that the time to access a shared variable is negligible compared to δ, the algorithm is weak interaction fair with probability 1. Note, however, that this assumption, combined with the fact that no lower bound on δ is provided, may significantly limit the algorithm's practicality. Furthermore, strong interaction fairness is not claimed by either algorithm.
In this paper, we present Multi, an extension of Francez and Rodeh's randomized algorithm to
the multiparty case. We prove that Multi is weak interaction fair with probability 1. We also show
that if the transition of a process to a state in which it is ready for interaction is independent of the
random draws of the other processes, then, with probability 1, Multi is strong interaction fair. To
our knowledge, Multi is the first algorithm for strong interaction fairness to appear in the literature.
We also present a detailed timing analysis of Multi and establish a lower bound on how long a
process must wait before reaccessing a shared variable. Consequently, our algorithm can be fine-tuned
for optimal performance. Moreover, we show that the expected time to establish an interaction is a
constant not depending on the total number of processes in the system. Thus, Multi also guarantees
real-time response.
Because strong interaction fairness is as strong a fairness condition that one might actually want
to impose in practice, our results indicate that randomization may also prove fruitful for other notions
of fairness lacking deterministic realizations and requiring real-time response.
The rest of the paper is organized as follows. Section 2 describes the multiparty interaction
scheduling problem in a more anthropomorphic setting known as Committee Coordination. Our
randomized algorithm is presented in Section 3 and analyzed in Section 4. Section 5 concludes.
2 The Committee Coordination Problem
The problem of scheduling multiparty interactions in asynchronous systems has been elegantly characterized
by Chandy and Misra as one of Committee Coordination [5]:
Professors (cf. processes) in a certain university have organized themselves into committees
(cf. interactions) and each committee has a fixed membership roster of one or more
professors. From time to time, a professor may decide to attend a committee meeting;
it starts waiting and continues to wait until a meeting of a committee of which it is a
member is established.
To state the Committee Coordination problem formally, we need the following two assumptions:
(1) a professor attending a committee meeting will not leave the meeting until all other members of
the committee have attended the meeting; and (2) given that all members have attended a committee
meeting, each committee member leaves the meeting in finite time. The problem then is to devise
an algorithm satisfying the following three requirements:
Synchronization: When a professor attends a committee meeting, all other members of the committee
will eventually attend the meeting as well.
Exclusion: No two committees meet simultaneously if they have a common member.
Weak Interaction Fairness: If all professors of a committee are waiting, then eventually some
professor will attend a committee meeting.
We shall also consider strong interaction fairness, i.e., a committee that is infinitely often
enabled will be established infinitely often. A committee is enabled if every member of the committee
is waiting, and is disabled otherwise.
The overall behavior of a professor can be described by the state transition diagram of Figure 1,
where state T corresponds to thinking, W corresponds to waiting for a committee meeting, and E
means that the professor is actively engaged in a meeting.
Note that any algorithm for the problem should only control the transition of a professor from
state W to state E, but not the other two transitions. That is, the transitions from T to W and from E to T are autonomous to each professor. Moreover, we do not assume any upper bound on the time
Figure 1: State transition diagram of a professor (states T, W, and E; transitions: ready for meeting, start meeting, finish meeting).
a professor can spend in thinking. Otherwise, an algorithm for the problem could simply wait long
enough until all professors become waiting, and then schedule a committee meeting of its choosing.
All three requirements for the problem and strong interaction fairness would then be easily satisfied.
3 The Algorithm
In this section, we present Multi, our randomized algorithm for committee coordination. In the algorithm, we associate with each committee M a counter CM whose value ranges over [0 .. |prof.M| - 1], where prof.M is the set of professors involved in M. CM can be accessed only by the professors in prof.M and only through the TEST&OP instruction as follows:
    result := TEST&OP(CM, zero-op, nonzero-op)
The effect of this instruction is to apply to CM the operation zero-op if its value is zero and the operation nonzero-op otherwise, and to assign to the variable result the old value (i.e., the value before the operation) of CM. The operations used here are no-op, inc, and dec, where no-op leaves the counter unchanged, inc increases it by 1, and dec decreases it by 1 (modulo |prof.M|). For example, if CM = 2, then TEST&OP(CM, inc, inc) sets CM to 3 (modulo |prof.M|) and returns 2; if CM = 0, then TEST&OP(CM, no-op, dec) leaves CM unchanged and returns 0. To simplify the presentation of the algorithm, we assume that the execution of the TEST&OP instruction is "atomic." This assumption is removed in Section 4.4, where a more concurrent implementation of TEST&OP is considered.
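As a concrete illustration of the TEST&OP semantics just described (and of the atomicity assumed here), the following C sketch guards each counter with a mutex; the type and function names are illustrative, not part of the paper.

    /* Sketch of the TEST&OP semantics under the atomicity assumption:
     * each counter is protected by a mutex so that the read and the
     * conditional update appear as one step.  Names are illustrative. */
    #include <pthread.h>

    typedef enum { NO_OP, INC, DEC } op_t;

    typedef struct {
        int value;              /* ranges over [0 .. |prof.M| - 1] */
        int modulus;            /* |prof.M| */
        pthread_mutex_t lock;
    } counter_t;

    static int apply(int v, op_t op, int m) {
        switch (op) {
        case INC: return (v + 1) % m;
        case DEC: return (v - 1 + m) % m;
        default:  return v;     /* NO_OP leaves the counter unchanged */
        }
    }

    /* Apply zero_op if the counter is 0 and nonzero_op otherwise;
     * return the counter's old value. */
    int test_and_op(counter_t *c, op_t zero_op, op_t nonzero_op) {
        pthread_mutex_lock(&c->lock);
        int old = c->value;
        c->value = apply(old, old == 0 ? zero_op : nonzero_op, c->modulus);
        pthread_mutex_unlock(&c->lock);
        return old;
    }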
Algorithm Multi can be informally described as follows. Initially, all the shared counters are set to zero. When a professor p_i decides to attend a committee meeting, it randomly chooses a committee M of which it is a member. It then attempts to start a meeting of M by increasing the value of CM by 1 (all increments and decrements are to be interpreted modulo |prof.M|). If the new value of CM is 0 (i.e., CM was |prof.M| - 1 before the increment), then professor p_i concludes that each of the other members of M has increased CM by one and is waiting for p_i to convene M. So p_i goes to state E to start the meeting.
If the new value of CM is not zero, then at least one of the professors in prof.M is not yet ready. p_i waits for some period of time (hoping that its partners will become ready) and then reaccesses CM. If CM has now been set to 0, then all the professors that were not ready for M are now ready, and so p_i can attend the meeting. If CM is still not zero, then some professor is still not ready for M, so p_i withdraws its attempt to start M by decreasing the value of CM by 1 and tries another committee.
The algorithm to be executed by each professor p_i is presented in Figure 2, where waiting (line 1) is a Boolean flag indicating whether or not p_i is waiting for a committee meeting. The constant δ (line 6) is the amount of time a professor waits before reaccessing a counter. We will later require (see Section 4) that δ be greater than a lower bound determined by |prof.M| and η_max, where η_max is the maximum amount of time a professor can spend in executing lines 2 and 3.¹ Note that the algorithm is symmetric in the sense that all professors execute the same code and make no use of their process ids.
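For concreteness, here is a sketch of the professor's loop of Figure 2 built on the counter_t, op_t, and test_and_op helpers sketched earlier; my_committees, delay_us, and attend_meeting are illustrative placeholders, and the loop simply returns once a meeting has been attended.

    /* Sketch of one professor's waiting loop (Figure 2), reusing the
     * counter_t, op_t, and test_and_op() helpers from the previous sketch.
     * my_committees, delay_us, and attend_meeting() are placeholders. */
    #include <stdlib.h>
    #include <unistd.h>

    void attend_meeting(counter_t *cm);   /* placeholder: join the meeting of M */

    void professor_loop(counter_t *my_committees[], int num_committees,
                        unsigned delay_us /* the delta parameter, in microseconds */) {
        for (;;) {
            /* line 2: random draw among this professor's committees */
            counter_t *cm = my_committees[rand() % num_committees];
            /* line 3: access the counter */
            if (test_and_op(cm, INC, INC) == cm->modulus - 1) {
                attend_meeting(cm);            /* lines 4-5 */
                return;
            }
            usleep(delay_us);                  /* line 6: wait delta */
            /* line 7: reaccess the counter */
            if (test_and_op(cm, NO_OP, DEC) == 0) {
                attend_meeting(cm);            /* lines 8-9 */
                return;
            }
            /* otherwise loop back and try another committee */
        }
    }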
4 Analysis of the Algorithm
In this section we prove that Multi satisfies the synchronization and exclusion requirements of the
Committee Coordination problem, and, with probability 1, is weak and strong interaction fair. We
also analyze the expected time Multi takes to schedule a committee meeting.
4.1 Definitions
We assume a discrete global time axis where, to an external observer, the events of the system are
totally ordered. Internally, however, processors may execute instructions simultaneously at the same
time instance. Simultaneous access to a shared counter will be arbitrated in the implementation of
¹More precisely, η_max should also include the time it takes to execute line 1. To simplify the analysis, we assume that the Boolean flag waiting only serves to indicate the state of the executing professor, and so no explicit test of the flag is needed. Moreover, we assume that an action is executed instantaneously at some time instance. The time it takes to execute an action is the difference between the time the action is executed and the time the previous action (of the same professor) was executed.
1.  while waiting do {
2.      randomly choose a committee M from {M | p_i ∈ prof.M};
3.      if TEST&OP(CM, inc, inc) = |prof.M| - 1
4.      then /* a committee meeting is established */
5.          attend the meeting of M
6.      else { wait δ;
7.          if TEST&OP(CM, no-op, dec) = 0
8.          then /* a committee meeting is established */
9.              attend the meeting of M;
10.         /* else try another committee */ }
11. }
Figure 2: Algorithm Multi for professor p_i.
the TEST&OP instruction, which we assume is executed atomically.
Since the time axis is discrete, it is meaningless to say that "there are infinitely many time instances in some finite time interval such that a given property holds." Therefore, throughout this paper, the phrase "there are infinitely many time instances" refers to the interval [0, ∞).
For analysis purposes, we present in Figure 3 a refinement of the state transition diagram of Figure 1, where state W is refined into three sub-states W0, W1, and W2. The actions taken by the professors from these sub-states are:
W0: randomly choose a new committee.
W1: execute the instruction TEST&OP(CM, inc, inc).
W2: after waiting δ time, execute the instruction TEST&OP(CM, no-op, dec).
We say that a professor accesses counter CM when it executes the TEST&OP instruction of state W 1 ,
reaccesses CM when it executes the TEST&OP instruction of state W 2 , and monitors committee M
while it is in state W 2 waiting for reaccess to CM .
According to the algorithm, if at time t a professor p accesses a counter CM by TEST&OP(CM ; inc; inc)
in state W 1 , then it will be in state W 2 or E right after 2 t, depending on the value of CM . Further-
2 If an action, which transits a professor p from state s1 to state s2 , occurs at time t, then we say that p is in state
s1 just before t, and is in state s2 right after t. For p's state to be defined at every time instance, we stipulate that p's
Figure 3: State transition diagram of a professor executing the algorithm (states T, W0, W1, W2, and E; transitions: ready for meeting, random draw, access CM, reaccess CM (= 0 or ≠ 0), finish meeting).
Figure 4: Timing constraints on the actions executed by a professor (start waiting, random draws, accesses and reaccesses; each access is followed by a reaccess δ time later).
more, if p enters state W2 at time t to monitor a committee M, then at time t + δ it will reaccess CM by TEST&OP(CM, no-op, dec). Depending on the value of CM, after time t + δ the professor will either return to state W0 to choose another committee, or enter state E to attend the meeting of M.
In executing algorithm Multi, a professor starts waiting for a committee meeting in state W0 and then repeatedly cycles through states W0 through W2 until entering a committee meeting via a transition from state W1 or W2. The actions it performs along this cycle are subject to the timing constraints depicted in Figure 4. In particular, the interval between consecutive access and reaccess actions must be of length δ, while the interval between consecutive reaccess and access actions must have length no greater than η_max. We shall sometimes refer to the former as a "δ-interval." As will be made explicit in Section 4.3, the duration of a δ-interval may vary from iteration to iteration of the algorithm; we will require only that δ is greater than a lower bound determined by η_max and the number of professors in the committee currently under consideration.
Figure
5 illustrates a possible scenario for four professors executing the algo-
state at time t is s2 if p executes the action at time t. For example, if p accesses CM at time t and then reaccesses CM at time t + δ, then p is in state W2 at any time instance in [t, t + δ). Note that the interval is open at t + δ. So if we say that p is in state W2 at time t, then p must have accessed CM at some time in (t - δ, t].
Figure 5: A partial computation of four professors (each professor's state over time, together with the committee it chooses at each random draw).
rithm, where p_1 and p_4 are involved in committee M_14; p_1, p_2, and p_3 are involved in M_123; and p_2, p_3, and p_4 are involved in M_234. For each professor, we explicitly depict its state (on the Y-axis) at each global time instance (on the X-axis). For example, at time 1 professor p_1 starts waiting for a committee meeting and so it enters state W0 from state T. At time 2, it randomly chooses M_14 and then accesses C_M14 at time 3. Since C_M14 = 0 before the access, p_1 enters state W2 to monitor M_14 for δ time units and then reaccesses C_M14 at time 6. Since p_4 has not accessed C_M14 in the meantime, p_1 returns to state W0 to try another committee. Later at time 12, p_1 chooses committee M_123 and then accesses C_M123 at time 13. When p_1 reaccesses C_M123 at time 16, it finds that both p_2 and p_3 are willing to start the meeting of M_123. So p_1 enters state E to attend the meeting. The meeting of M_123 ends at time 19, after which the committee members can return to state T at a time of their own choosing. The shaded area between time 17 and 19 represents a synchronization interval for the three professors.
4.2 Properties of the Algorithm That Hold with Certainty
We now analyze the correctness of the algorithm. We begin with an invariant about the value of a
shared counter CM , which we will use in proving that Multi satisfies the synchronization condition
of the Committee Coordination Problem.
Lemma 1 If at time t there are k professors in state W2 monitoring committee M and no professor, since last entering state W0, has entered state E to attend a meeting of M, then the value of CM at time t is k and k < |prof.M|. If, however, at time t some professor has entered state E to attend a meeting of M, then by time t + δ all professors in prof.M will have entered state E to attend the meeting of M.
Proof: We prove the lemma by induction on t_i, the time at which the i-th system event occurs. The lemma holds at time t_0 because initially every professor in prof.M is in state T and CM = 0. For the induction hypothesis, assume the following at time t_{i-1}: Q ⊆ prof.M is the set of professors in state W2 monitoring committee M, no professor, since last entering state W0, has entered state E to attend a meeting of M, and CM = |Q| < |prof.M|.
Consider now the nearest time t j , j - i, at which some professor p accesses or reaccesses CM .
Since no professor accesses or reaccesses CM in [t the induction hypothesis holds as well in this
interval. Suppose first that p accesses CM through the instruction TEST&OP(CM ; inc; inc). If before
the access CM ! jprof after the access (which is less than jprof
and p enters state W 2 . That is, after time t j there are jQj professors in state W 2 monitoring
committee M , and Conversely, if before the access after the
access enters state E. Since just before t j the other professors in Q are all monitoring
M , by time reaccessed CM by TEST&OP(CM ; no-op; dec). Moreover, when they
reaccess CM they will find that CM = 0 and so they will all enter state E to start M .
To see why this last statement is true, recall that by the first of the two assumptions we put forth
in defining the Committee Coordination problem (Section 2), no professor attending M will leave M
before all of M 's members have entered state E to attend M . Consequently, no professor attending
M can leave M to attempt another instance of M (by accessing CM ), from which the desired result
follows.
Suppose now that p reaccesses CM through the instruction TEST&OP(CM ; no-op; dec). Then p 2 Q.
Since CM 6= 0, right after p's reaccess, returns to state W 0 . So right after t j ,
professors are in state W 2 monitoring M . 2
Theorem 1 (Synchronization) If a professor in prof.M enters state E at time t to attend a meeting of M, then within δ time all professors in prof.M will have entered state E to attend the meeting of M.
Proof: The theorem follows immediately from Lemma 1. □
Theorem 2 (Exclusion) No two committees meet simultaneously if they have a common member.
Proof: The result follows from the fact that a professor monitors one committee at a time. □
4.3 Properties of the Algorithm That Hold with Probability 1
We move on to prove that Multi is weak and strong interaction fair, and analyze its time complexity.
For this we need the following two assumptions:
A1: The δ-interval a professor chooses to wait for committee M exceeds a lower bound determined by |prof.M| and η_max, where η_max is the maximum amount of time a professor can spend in executing lines 2 and 3 of Multi.
A2: A professor's transition from thinking to waiting (see Figure 1) does not depend on the random draws performed by other professors.
Note that A2 is required only for strong interaction fairness.
We also require some definitions about the "random draw" a professor performs in state W0 when deciding which meeting to attempt. Recall that we say that a professor accesses a counter CM when it executes the instruction TEST&OP(CM, inc, inc) (line 3 of the algorithm) and reaccesses CM when it executes TEST&OP(CM, no-op, dec) (line 7). Now suppose that professor p accesses some counter in the time interval (t - δ, t]; if there is more than one such access, choose the most recent one. Then the choice of counter must be the result of the random draw performed before the access (line 2). Let D_{t,p} denote this random draw; D_{t,p} is not defined if p does not access any counter in (t - δ, t]. Furthermore, let D_{t,prof.M} = {D_{t,p} | p ∈ prof.M and D_{t,p} is defined}.
For example, if p is in state W2 at time t, then p must have accessed some counter in the interval (t - δ, t], and so D_{t,p} must be defined. As we shall see in Lemma 4, the definition of D_{t,p} guarantees that if D_{t,p} is defined for all p ∈ prof.M and these random draws yield the same outcome M, then M will be established.
Henceforth we shall use ψ_{p,M} to denote the fixed non-zero probability that professor p ∈ prof.M chooses committee M in a random draw. Thus,
    ψ_M = ∏_{p ∈ prof.M} ψ_{p,M}
is the probability that a set of mutually independent random draws, one by each professor in prof.M, yields the same outcome M.
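As a small numeric illustration, if each professor is assumed to choose uniformly among the committees it belongs to (an assumption made here only for the example; the paper only requires the probabilities to be fixed and non-zero), ψ_M is the product of the reciprocals of the members' committee counts. For the committee M_123 of Figure 5, where p_1, p_2, and p_3 each belong to two committees, this gives ψ_M123 = 1/8.

    /* Sketch: psi_M under the illustrative assumption that each professor
     * chooses uniformly among the committees it belongs to. */
    #include <stdio.h>

    double psi_M(const int committees_per_member[], int members) {
        double psi = 1.0;
        for (int p = 0; p < members; p++)
            psi *= 1.0 / committees_per_member[p];   /* psi_{p,M} = 1/c_p */
        return psi;
    }

    int main(void) {
        int c[] = {2, 2, 2};   /* M_123 of Figure 5: p1, p2, p3 each in 2 committees */
        printf("psi_M = %.3f\n", psi_M(c, 3));   /* prints 0.125 */
        return 0;
    }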
The following three lemmas are used in the fairness proofs. The first one says that D_{t,prof.M} and D_{t',prof.M} must refer to mutually disjoint sets of random draws if t and t' are at least δ apart.
Lemma 2 If |t - t'| ≥ δ, then D_{t,prof.M} and D_{t',prof.M} are disjoint.
Proof: Directly from the definition of D_{t,p}. □
Lemma 3 Assume A1 and that committee M is continuously enabled in a sufficiently long interval of length ℓ beginning at t_i (that is, M is enabled at any time instance in the interval). Then the interval contains a time instance t such that |D_{t,prof.M}| = |prof.M|.
Since M is continuously enabled in [t every professor of M is, by definition, in a
-state throughout this interval. Clearly, either one of the following holds:
such that p is in state W 2 throughout
such that p is in state W 0 or W 1 throughout
In case (i), let t g. Then D t;p is defined for every p 2 prof :M and every
Given that t and that ' - j max , we thus have that there exists some t,
j. So the lemma is proven.
For case (ii), suppose that some professor p 1 2 prof :M stays in W 0 or W 1 in [t
then accesses some counter at t i +m 1 , where
by the assumed lower bound on ', t
If D t i +m 1 ;p is defined for all p 2 prof :M , then we are done. Otherwise, there must exist another
professor 2. By the lower bound on ',
still enabled at time t i +m 1 . So p 2 is in a W-state at t i +m 1 .
cannot be in state W 2 , for otherwise D t i +m 1 ;p 2
would be defined. So p 2 is in state W 0 or
must access some counter within j max time. Assume that p 2 accesses some counter at
We argue that D t i +m 1 +m 2 ;p 1
is also defined. To see this, recall that
By the lower bound on ' and the fact that jprof :M j - 2, we have
still in a W -state at any time instance in
some counter at t i +m 1 , p 1 must have entered state W 2 after the access, and stays in W 2 throughout
must be defined for all t in [t i and the fact
that
Note that t defined for every other professor
in prof :M , then we are done. Otherwise, similar to the above reasoning, there must exist a third
professor prof :M such that D t i +m 1 +m 2 ;p 3 is not defined. Using the same argument, we can show
that there exists m 3 where
is defined for all
Continuing in this fashion, we can show that there exists k professors in prof :M , and
and D t i +m 1 +m 2 +:::+m k ;p l is defined for all 1 - l - k. Given that there are only a finite number of
professors in prof :M , eventually we will establish that there exists some t, t
D t;p is defined for each p 2 prof :M . The lemma is then proven. 2
The proof of Lemma 3 is illustrated in Figure 6 for a committee of size 4, with δ taken to be the smallest value allowed by A1.
As a consequence of Lemma 3, different professors can choose different values for δ; these values need only satisfy the lower bound established by A1.³ Therefore, the clocks used by the professors to implement time-outs need not be adjusted to the same accuracy.
Lemma 3 says that, under assumption A1, if a committee M is continuously enabled sufficiently long, then there exists an interval of length ℓ within which every professor in prof.M performs a random draw. The following lemma ensures that if their random draws yield the same outcome, then they must establish M.
Lemma 4 If |D_{t,prof.M}| = |prof.M| and all the random draws in D_{t,prof.M} yield the same outcome
³As such, the δ referred to in the definition of D_{t,prof.M} and in the statement of Lemma 1 should be understood as the minimum and the maximum of the relevant δ values, respectively.
Figure 6: Illustration of the proof of Lemma 3 for a committee of size 4 (ready, access, and reaccess events of the professors while M is continuously enabled): the maximum possible interval throughout which M is enabled but |D_{t,prof.M}| ≠ |prof.M|, with δ equal to the smallest value allowed by A1.
M, then by time t some professor must have already entered state E to start M, and by time t + δ all professors in prof.M will enter state E to start M.
Proof: Assume the hypotheses described in the lemma. Let p i 2 prof :M be the first professor
which, after performing its random draw in D t;prof :M , accesses CM by TEST&OP(CM ; inc; inc), and
prof :M be the last professor to do so. Let t i and t j be the time at which p i and p j ,
respectively, access CM . Then, t.
CM at time t i , it will not reaccess CM until
professors that access CM in [t remain in state W 2 before p j accesses CM . By Lemma 1,
just before access. So when p j accesses CM at t j , it will set CM to zero and
enter state E to start M . Moreover, by time ffi, every other professor of M will learn that M
has been started when it reaccesses CM by TEST&OP(CM ; no-op; dec), and so will also enter state E
to start M . Since ffi, the lemma is thus established. 2
Note that Lemma 4 relies on the fact that the access involved in the definition of D_{t,p} occurs in the interval that is open at t - δ and closed at t. If we were to relax the definition to the closed interval [t - δ, t], then the correctness of Lemma 4 would depend on how an access/reaccess conflict to the same counter is resolved in the implementation. To see this, suppose that p_1 accesses a counter at time t - δ and p_2 accesses a counter at t. So by the new definition both D_{t,p_1} and D_{t,p_2} are defined. Suppose further that both random draws yield the same outcome M, which involves only p_1 and p_2, so that p_1 and p_2 access the same counter CM at times t - δ and t, respectively. Assume that CM = 0 before p_1's access. By the algorithm, p_1 will wait δ time and then reaccess CM at t, causing a conflict with p_2's access at the same time. Clearly, M will be established only if the conflict is resolved in favor of the access; i.e., p_2 gets to go first.
Theorem 3 (Weak Interaction Fairness) Assume A1 and that all members of a committee M
are waiting for committee meetings. Then the probability is 1 that eventually a meeting involving
some member of M will be started.
Proof: Assume A1, and suppose that M is enabled at t. Let
jprof :M j. Consider the probability that M is continuously enabled in [t; t
M is continuously enabled in [t; t exists a time instance t 1 ,
that jD t 1 ;prof :M j. If the random draws in D t 1 ;prof :M yield the same outcome M , then, by
Lemma 4, M must be disabled at t 1 . (Even if the random draws do not yield the same outcome,
some professor of M may still establish another committee meeting M 0 if its random draw has the
outcome M 0 and at the same time all other professors of M 0 are also interested in M 0 .) So the
probability that the random draws in D t 1 ;prof :M do not cause any committee involving a member of
M to be started is no greater than and so the probability that M is continuously enabled
in [t; t
Similarly, if M is still enabled after t 1 , then by Lemmas 2 and 3 there must exist another time
instance t 2 such that D t 2 ;prof :M contains a completely new set of random draws of size jprof :M j.
Again, the probability that M remains enabled after these random draws is no greater than
given that the random draws in D t 1 ;prof :M do not cause any member of M to attend a meeting. So
the probability that M is continuously enabled up to time t 2 is no greater than
if M is still enabled at t 2 then there will be another new set of random draws D t 3 ;prof :M of size
jprof :M j. In general, the probability that M remains continuously enabled after i mutually disjoint
sets of random draws D t 1 ;prof As i tends to infinity,
tends to zero. So the probability is zero that M remains enabled forever. 2
Intuitively, A1 requires that the δ parameter used in the algorithm is large enough so that a
professor will not reaccess a counter before the other professors get a chance to access the counter.
Figure 7: Two professors wait forever without establishing a meeting due to a bad choice of δ (each professor's accesses and reaccesses interleave so that neither ever sees a counter value set by the other).
If this assumption is removed from Theorem 3, then a set of professors could access and reaccess
a counter forever without ever establishing a committee meeting. To illustrate, consider Figure 7.
Each professor reaccesses a counter before the other professor could access the same counter. So no
matter what committees they choose in their random draws, at no time can a professor see the result
of a counter set by the other professor.
The strong interaction fairness property of the algorithm additionally requires assumption A2
and a lemma on the probabilistic behavior of a large number of random draws.
Lemma 5 Assume A2. If there are infinitely many t's such that |D_{t,prof.M}| = |prof.M|, then the probability is 1 that all the random draws in D_{t,prof.M} produce the same outcome M for infinitely many t's.
Proof: Let t_1 < t_2 < ... be an infinite sequence of increasing time instances at which |D_{t_i,prof.M}| = |prof.M|. W.l.o.g. assume that for all i, t_{i+1} - t_i ≥ δ; then by Lemma 2 the sets D_{t_i,prof.M} are pairwise disjoint.
Consider the random draws in set D_{t_i,prof.M}. Let E_M denote the event that the random draws in D_{t_i,prof.M} produce the same outcome M. By A2, the probability of E_M's occurrence is independent of the time these random draws are made and is given by ψ_M. Define random variable A_i to be 1 if E_M occurs at t_i, and 0 otherwise. Then A_1, A_2, ... are mutually independent with expectation ψ_M.
By the Law of Large Numbers (see, for example, [6]), for any ε > 0 we have
    lim_{n→∞} Pr( |(A_1 + ... + A_n)/n - ψ_M| < ε ) = 1.
That is, as n tends to infinity, the probability is 1 that (A_1 + ... + A_n)/n tends to ψ_M. Therefore, with probability 1, the set {i | A_i = 1} is infinite. Hence, with probability 1, there are infinitely many i's such that the random draws in D_{t_i,prof.M} produce the same outcome M. □
Theorem 4 (Strong Interaction Fairness) Assume A1 and A2. Then if a committee is enabled
infinitely often, with probability 1 the committee will be convened infinitely often.
Proof: Since the algorithm satisfies weak interaction fairness, we assume that there are infinitely
many i's such that M becomes enabled at time instance t i . Let either (1) there
are infinitely many i's such that M is continuously enabled in each interval [t or (2) starting
from some point t i 0 onward, whenever M becomes enabled at t i , some professor in prof :M attends
a committee meeting in the interval
Consider Case (1). By Lemma 3 and A1, there are infinitely many i's such that each interval
contains some time instance t such that jD t;prof :M j. Then by Lemma 5 and A2,
with probability 1 there are infinitely many t's such that all the random draws in D t;prof :M produce
the same outcome. So by Lemma 4, with probability 1 M is convened infinitely often.
Consider Case (2). W.l.o.g. assume each interval (t contains no time instance t such that
by the previous argument we can also show that M will be
convened infinitely often with probability 1. Let Q prof :M be the set of professors that have
accessed a counter between the time t i at which they are waiting for M until the time at which
they attend a committee meeting. For each q 2 Q i , let a be q's first such access, and let D 1
denote q's random draw performed right before a. For each
p's latest random draw performed before time t i . Note that since p is in a W -state at t i , D 2
defined and its outcome must cause p to attend a committee meeting at some time after t i . Let
g. By A2, the random draws in D 0
are
mutually independent and have a nonzero probability /M to yield the same outcome M . Therefore,
by the Law of Large Numbers (see the proof of Lemma 5), the probability is 1 that there are infinitely
many i's such that all the random draws in D 0
yield the same outcome M . If all the random
draws in D 0
yield the same outcome M , then either a meeting of M will be established, or
each professor in prof :M will still be waiting for M and will perform another random draw to access
a new counter. By a technique similar to Lemma 3, it can be seen that in the later case we would
be able to find a time instance t in (t j. By the assumption
of the case, we conclude that with probability 1 M is convened infinitely often. 2
Note that if Assumption A2 is dropped from Theorem 4, then a conspiracy against strong interaction fairness can be devised. To illustrate, consider a system of two professors p_1 and p_2, and three committees: M_1, involving only p_1; M_2, involving only p_2; and M_12, which involves both p_1 and p_2. Suppose that p_1 becomes waiting, and then tosses a coin to choose either M_1 or M_12. The malicious professor p_2 could remain thinking until p_1 has selected M_1; then p_2 becomes waiting just before p_1 starts the meeting of M_1 (that is, p_2's random draw is performed only if p_1's latest random draw yields outcome M_1). Since p_1 convenes M_1 once it selects M_1, M_12 will not be started if p_1 remains in its meeting while p_2 is waiting. However, M_12 is enabled as soon as p_2 becomes waiting. Similarly, p_1 could also remain thinking until p_2 has selected M_2. So if this scenario is repeated ad infinitum, then the resulting computation would not be strong interaction fair.
The time complexity of the algorithm is analyzed in the following theorem, which assumes a worst-case scenario in which a professor spends η_max time in executing lines 2 and 3 of Multi.
Theorem 5 (Time Complexity) Assume that each professor spends η_max time in executing lines 2 and 3 of Multi, and assume A1, i.e., the amount of time δ a professor spends in monitoring an interaction exceeds the lower bound of A1. Then the expected time it takes any member of a committee M to start a meeting from the time that M becomes enabled is no greater than (η_max + δ)/ψ_M.
Proof: Suppose that M becomes enabled at time t. Consider first that there exists a time instance
Assume first that while M is
enabled, no conflicting committee is enabled simultaneously. (Two committee conflict if they share
a common member.) So when M is enabled, each professor in prof :M can only attend a meeting of
M .
By Lemma 4, if the random draws in jD t 1 ;prof :M j yield the same outcome M (an event that
occurs with probability /M ), then some professor in prof :M will start a meeting of M by time t 1 .
Otherwise, each professor in prof :M must perform another random draw and access the selected
counter within j (from the time it reaccesses the previous selected counter). So there must
exist another time instance t 2 , contains a completely new
set of random draws, one by each professor in prof :M . Once again, if the new random draws yield
the same outcome M , then some professor will start a committee meeting. So the probability that
M will be disabled by time t 2 is
In general, let D t i ;prof :M be the i th set of random draws performed by the professors, where
ffi). Then the probability that M will be disabled by time t i is
ffi). So the expected
time starting from t until some member of M enters state E to start a meeting is no greater than
We assumed above that no committee conflicting with M can be enabled while M is enabled; this
implies that for each set of random draws D t i ;prof :M , the random draws must yield the same outcome
M for the members of M to start a meeting. If a conflicting committee is enabled simultaneously,
then some of the random draws in D t i ;prof :M may still lead to a committee meeting even if they do
not yield outcome M . Hence, the expected time starting from t until some member of M enters state
E to start a meeting will actually be less than (j conflicting committee is enabled
simultaneously.
Assume next that there exists no time instance t 1 ,
must be disabled before t
's disabledness must be the result of some professor's random draw leading to the establishment
of some committee meeting involving that professor. So the disabling of M at some time before
must also be a probabilistic event. Therefore, in this case the expected time
starting from t until some member of M enters state E to start a meeting is no greater than
j. Given that that the expected time is
no greater than
Therefore, in either case, the expected time starting from t until some member of M enters state
E to start a meeting is no greater than (j
Intuitively, Theorem 5 states that the expected time for any member of prof.M to start a committee meeting when they are all waiting is no greater than the amount of time to execute one round of the while-loop of Multi (i.e., η_max + δ) divided by the probability that these professors in their random draws all choose the same committee M (i.e., ψ_M).
Note that η_max is a constant determined by the size (number of members) of the largest committee; call this value S_max. The probability ψ_M is a constant determined by the size of M and the maximum number of committees of which a professor can be a member (call this value C_max), since ψ_M = ∏_{p ∈ prof.M} ψ_{p,M}. Finally, δ is a constant determined by S_max and η_max. Therefore, the time complexity of the algorithm is bounded by a constant determined by S_max, C_max, and η_max alone. While in the worst case S_max could be equal to the total number of professors, and C_max could be equal to the total number of committees in the system (which, in turn, could be dependent on the total number of professors), in practice it is generally known that both parameters must be kept small and independent of the total number of professors in the system [7].⁴
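To illustrate how the bound of Theorem 5 stays independent of the number of professors, the sketch below evaluates (η_max + δ)/ψ_M for a few assumed values of S_max, C_max, η_max, and δ; all of the numbers, and the uniform-choice lower bound on ψ_M, are assumptions for illustration only.

    /* Sketch: evaluating the Theorem 5 bound (eta_max + delta)/psi_M for
     * illustrative parameter values; none of these numbers come from the paper. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double eta_max = 1.0;   /* max time for lines 2-3 (assumed time unit) */
        double delta   = 4.0;   /* a delta satisfying A1 for these parameters (assumed) */
        int    S_max   = 3;     /* size of the largest committee (assumed) */
        int    C_max   = 2;     /* max committees per professor (assumed) */

        /* worst-case psi_M under uniform committee choice */
        double psi_min = pow(1.0 / C_max, S_max);
        printf("expected-time bound <= %.1f time units\n", (eta_max + delta) / psi_min);
        return 0;
    }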
In contrast, deterministic algorithms for Committee Coordination such as [15, 12, 10]⁵ have time complexity that depends on a constant c_0 and on N, the total number of professors in the system. The time complexity of these algorithms depends explicitly on N because they use priority to break the symmetry among professors. As such, a lower priority professor may have to wait for a higher priority professor if they attempt to establish a conflicting committee, and the higher priority professor in turn may have to wait for another higher priority professor, and so on. (Recall, from Section 1, that there is no symmetric, deterministic, and distributed solution for Committee Coordination.)
If C_max and S_max are kept small and independent of N, then Multi, in addition to guaranteeing strong interaction fairness, outperforms deterministic algorithms while providing real-time response.
4.4 A Non-Atomic Implementation of TEST&OP
As promised in Section 3, we now present a non-atomic, and hence more concurrent, implementation
of the TEST&OP instruction. Recall that the execution of the statement TEST&OP(CM, zero-op, nonzero-op) actually involves two⁶ actions: read CM, then apply to CM the operation zero-op if CM = 0 and the operation nonzero-op otherwise. More precisely, the actions are read followed by inc when a professor executes TEST&OP(CM, inc, inc) to access a counter, and read followed by dec/no-op when it executes TEST&OP(CM, no-op, dec) to reaccess the counter.
Clearly, we can apply a mutual exclusion algorithm (see [16] for a survey) to ensure that each access and reaccess to a counter proceeds atomically. This, however, is overkill. For example,
4 A scheme of synchrony loosening is therefore proposed in [7] for reducing the size of an interaction in practical
applications.
5 These algorithms and Multi all allow professors to distributedly establish a committee meeting on their own. Other
deterministic algorithms such as [5, 4, 14] employ "managers" to coordinate committee meetings; the time complexity
of these algorithms then depends on the number of managers they use.
6 Three, if you count the Boolean test.
accesses to a counter can be executed concurrently. 7 To see this, consider three possible interleaved
executions of two TEST&OP(CM ; inc; inc)'s:
read 1 , read 2 , inc 1 , inc 2
read 1 , read 2 , inc 2 , inc 1
read 1 , inc 1 , read 2 , inc 2
Observe that the first two executions have the same effect: both will cause the executing professors
to enter state W 2 to monitor M because the value of CM returned by both reads is less than
value before the two accesses is less than jprof then the third
execution, in which the two accesses proceed atomically, will also have the same effect.
If CM 's value before the two accesses is jprof :M \Gamma 2j, then in the third execution the professor
executing the first access will enter W 2 to monitor M , while the other professor will enter state E
to start a meeting of M . When the first professor's ffi-interval expires, it will learn that a meeting of
M has been established when it reaccesses CM , and so will also enter state E to start a meeting of
M . The situation is similar in the first two executions: both professors will enter state E to start
a meeting of M when they reaccess the counter. So all three interleaved executions preserve the
synchronization property of the algorithm. (Breaking the atomicity of TEST&OP clearly has no effect
to the algorithm's exclusion and fairness properties.)
Note that the system performance may be increased if we reverse the order of execution of the
read and inc actions in the implementation of TEST&OP(CM ; inc; inc). To see this, consider again
the case where two professors attempt to access CM simultaneously. The following are two possible
interleaved executions:
inc 1 , inc 2 , read 1 , read 2
inc 1 , inc 2 , read 2 , read 1
Suppose that CM 's value before the access is jprof after the two increments,
So each professor, upon reading the value of CM , learns that all professors of M are now interested
in M and can enter state E to start a meeting of M . Moreover, the new implementation of
7 We assume that basic machine-level instructions such as inc, dec, load , and store are executed atomically. Thus, if
two such instructions are executed concurrently, the result is equivalent to their sequential execution in some unknown
order.
8 When concurrent accesses to the same counter are allowed, more than one professor may access CM simultaneously
and then all enter state W2 to monitor M because CM ! jprof before the accesses. Likewise, Lemma 1, which
assumes that access to a counter is atomic, needs to be slightly changed to reflect the possibility that
professors in prof :M are in state W2 at the same time.
inc) still ensures Multi's synchronization property regardless of how the actions
of overlapping TEST&OP(CM ; inc; inc) instructions are interleaved. 9
Similarly, interleaving read and dec/no-op of different professors' reaccesses to the same counter
cannot invalidate the algorithm's synchronization property. Only simultaneous access and reaccess
to the same counter may conflict. To illustrate, suppose that p 1 wishes to access CM while p 2 wishes
to reaccess. Suppose further that the value of CM before the attempt is jprof
access proceeds atomically before p 2 's reaccess, both professors will enter state E to start the
meeting. However, if the four constituent actions are interleaved as follows:
read 2 , inc 1 , read 1 , dec 2
decrement CM
by one and go to state W 0 to select a new committee. On the other hand, since
before p 2 's decrement, it will discover that thus enter state E to start M . Hence, the
synchronization requirement is violated.
To ensure that access and reaccess to the same counter are mutually exclusive while at the same
time allowing concurrent accesses and concurrent reaccesses, we can implement TEST&OP(CM ; inc; inc)
and TEST&OP(CM , no-op; dec) using the algorithm shown in Figures 8 and 9. The algorithm is
based on Dekker's algorithm for the bi-process critical section problem [16] and, as discussed above,
now returns the new value of CM . CM access count is a counter recording the
number of professors that are attempting to access CM , while CM reaccess count records the number
of professors attempting to reaccess CM . Both counters are initialized to zero. Furthermore, variable
CM turn, initialized to access, is used for resolving conflicts between accesses and reaccesses.
It can be seen that a professor enters the critical section to access CM only if CM access count
only professors attempting to access CM may modify
CM access count , and they test CM reaccess count only when CM access count ? 0, it follows that
when some professor p enters the critical section to access CM , no other professor can simultaneously
enter the critical section to reaccess CM . Moreover, no professor can enter the critical section to
9 If this new implementation is adopted, then line 3 of Figure 2 needs to be changed to "if TEST&OP(CM ; inc; inc) = 0"
because TEST&OP(CM ; inc; inc) now returns the value of CM after the access. Note, however, that TEST&OP(CM , no-
op, dec) must still return the value of CM before the access. This is because if it returns the value of CM after the
access, then when the returned value is zero, the executing professor p i would not be able to tell if (1) only p i itself is
interested in M (so that p i should decrease CM by one and then return to state W0 to retry another committee), or
(2) all members in prof :M are interested in M (so that p i should leave CM unchanged and then enter state E to start
a meeting of M .)
1. inc (CM access count) ;
2. while CM reaccess count ? 0 do
3. if CM turn = reaccess then f
4. dec (CM access count) ;
5. while CM turn = reaccess do no-op ;
6. inc (CM access count) ; g
7. /* beginning of critical section */
8. inc(C M
9. return read(C M
10. /* end of critical section */
11. dec (CM access count) ;
12. if CM access count = 0 then CM turn := reaccess ;
Figure
8: Implementation of TEST&OP(CM ; inc; inc).
1. inc (CM reaccess count) ;
2. while CM access count ? 0 do
3. if CM turn = access then f
4. dec (CM reaccess count) ;
5. while CM turn = access do no-op ;
6. inc (CM reaccess count) ; g
7. /* beginning of critical section */
8. if read(C M
9. else f return read(C M
10. /* end of critical section */
11. dec (CM reaccess count) ;
12. if CM reaccess count
Figure
9: Implementation of TEST&OP(CM ; no-op; dec).
reaccess CM while p is already in the critical section but has not yet left the critical section. Similarly,
if a professor is in the critical section to reaccess CM , then no other professor can enter the critical
section to access CM . The mutual exclusion property therefore holds.
Note that it is possible that while a professor is in the critical section (say, to access CM ), some
professor has already "flipped" CM turn to reaccess (line 12 of Figures 8). However, the premature
flipping of CM turn cannot invalidate the algorithm's mutual exclusion property because the entering
of the critical section to reaccess CM does not depend on the value of CM turn, but rather on the value
of CM access count : as long as some professor is in the critical section to access CM , CM access count
remains greater than 0, and so no professor can exit the while-loop of Figures 9 (lines 2-6) to reaccess
CM .
The algorithm is also deadlock-free. To see this, consider an arbitrary time instance at which
A ' prof :M is the set of professors wishing to access CM and R ' prof :M is the set of professors
wishing to reaccess CM definition). Consider now the plight of some p 2 A (similar
reasoning applies in the case of reaccess). If obviously, p will succeed. Otherwise what
happens next depends on the value of CM turn: If CM turn = access then each professor in R must
undo its increment of CM reaccess count , and wait in line 5 of Figure 9 until CM turn is flipped
to reaccess . When CM reaccess count has been reset to zero, p can then enter the critical section.
Conversely, if CM turn = reaccess , then p and the other professors in A must undo their increments
of CM access count , collectively resetting the value of this variable to zero, and wait in line 5 of
Figure
8 until the professors in R enter the critical section and then flip CM turn to access .
Moreover, the algorithm permits concurrent access (and concurrent reaccess, too), meaning that
if a professor p 1 attempts to access CM while some other professor p 2 is accessing the counter, then
may succeed even if there is already a third professor waiting for reaccess to CM . This is because
is in the critical section while CM turn = access , then all professors waiting for reaccess to
CM are blocked at line 5 of Figure 9, and CM reaccess count = 0. So p 1 can immediately enter the
critical section.
Note that allowing subsequent professors to concurrently access a counter cannot indefinitely
delay a professor waiting for reaccess to the same counter because (1) the number of professors in a
committee is finite, and (2) a professor p's access to CM must be followed by a reaccess to the same
counter (unless p's access leads to a committee meeting; and if this is the case, then the professor
must enter state E to wait for the other members to finish their reaccesses so that they can start a
meeting). By assumption A1 the time between the access and reaccess, i.e., the ffi-interval, must be
long enough for other professors to finish their accesses.
Note further that permitting concurrent accesses is highly desirable because it increases the
likelihood of establishing committee meetings. For example, suppose that two sets of professors are
waiting for access and reaccess to CM , respectively, while some professor is already accessing the
counter. Deferring the reaccesses until all accesses have proceeded can only help the members of M
reach consensus, while scheduling the accesses and reaccesses in a fair manner (e.g., alternatively)
adds no help at all to the establishment of M 's meeting.
Conclusions
We have presented Multi, a new randomized algorithm for scheduling multiparty interactions. We
have shown that by properly setting the value of ffi (the amount of time a process is willing to wait
for an interaction to be established), our algorithm is both weak and strong interaction fair with
probability 1. Our results hold even if the time it takes to access a shared variable (the communication
delay) is nonnegligible. To our knowledge, this makes Multi the first algorithm for strong interaction
fairness to appear in the literature.
Strong interaction fairness has been proven impossible by any deterministic algorithm. Our results
therefore indicate that randomization is a feasible and efficient countermeasure to such impossibility
phenomena. Furthermore, since most known fairness notions are weaker than strong interaction
fairness, they too can be implemented via randomization. For example, strong process fairness [1],
where a process infinitely often ready for an enabled interaction shall participate in an interaction
infinitely often, is also realized by our randomized algorithm in spite of the fact that it cannot be
implemented by any deterministic multiparty interaction scheduling [19, 9].
Multi is an extension of Francez and Rodeh's randomized algorithm for CSP-like biparty inter-
actions. Francez and Rodeh were able to claim only weak interaction fairness for their algorithm,
and then only under the limiting assumption that the communication time is negligible compared to
ffi. In this case, strong interaction fairness would be possible even in a deterministic setting.
We have also analyzed the time complexity of our algorithm. Like Reif and Spirakis's real-time
algorithm [17], the expected time taken by Multi to establish an interaction is a constant not
depending on the total number of processes in the system.
Although Multi is presented in a shared-memory model, it can be easily converted to a message-passing
algorithm by letting some processes maintain the shared variables, and other processes
communicate with them by message passing to obtain the values of these variables. The time to
read/write a shared variable then accounts for the time it takes to deliver a message. The ffi parameter
in Assumption A1 can be properly adjusted to reflect the new communication delay so that both
weak and strong interaction fairness notions can still be guaranteed with probability 1.
Multi, as originally described in Section 3, uses an operation TEST&OP for processes to access
a shared counter atomically. This operation is rather complex and will not be generally available.
Moreover, it unnecessarily eliminates potential concurrency. So, in Section 4.4, we have proposed an
implementation of TEST&OP that uses the more basic atomic instructions inc, dec, load , and store.
Implementing Multi on a machine that does not support the atomic execution of these instructions,
as could well be the case for inc and dec, is an interesting open problem.
As discussed in Section 4.4, the implementation of TEST&OP would be much simpler if a general-purpose
mutual exclusion algorithm was used instead. However, we know of no mutual exclusion
algorithm that allows concurrent accesses to the critical section if the accesses themselves do not
conflict with one another. Therefore we had to design our own solution.
Finally, unlike deterministic algorithms, randomized algorithms such as Multi only "guarantee"
average-case behavior, not a worst-case bound. It would therefore be interesting to conduct simulation
studies on Multi to measure its response time in practical settings. Experiments in which the
size of S max and C max (see Section 4.3) vary from small constants to large values approaching the
number of professors in the system would be especially insightful.
Acknowledgments
. We would like to thank the anonymous referees for their careful reading of
the manuscript and their valuable comments.
--R
Appraising fairness in languages for distributed program- ming
On fairness as an abstraction for the design of distributed systems.
Fairness and hyperfairness in multi-party interactions
Process synchronization: Design and performance evaluation of distributed algo- rithms
A Foundation of Parallel Program Design.
A Course in Probability Theory.
Interacting Processes: A Multiparty Approach to Coordinated Distributed Programming.
A distributed abstract data type implemented by a probabilistic communication scheme.
Characterizing fairness implementability for multiparty interaction.
Coordinating first-order multiparty interactions
A comprehensive study of the complexity of multiparty interaction.
An implementation of N-party synchronization using tokens
On the advantage of free choice: A symmetric and fully distributed solution to the dining philosophers problem (extended abstract).
A distributed synchronization scheme for fair multi-process handshakes
A new and efficient implementation of multiprocess synchronization.
Algorithms for Mutual Exclusion.
Real time synchronization of interprocess communications.
Distributed algorithms for ensuring fair interprocess communications.
Some impossibility results in interprocess synchronization.
--TR
--CTR
Catuscia Palamidessi , Oltea Mihaela Herescu, A randomized encoding of the -calculus with mixed choice, Theoretical Computer Science, v.335 n.2-3, p.373-404, 23 May 2005
Rafael Corchuelo , Jos A. Prez , Antonio Ruiz-Corts, Aspect-oriented interaction in multi-organisational web-based systems, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.41 n.4, p.385-406, 15 March | multiparty interaction;weak interaction fairness;randomized algorithms;committee coordination;distributed algorithms;strong interaction fairness |
275853 | Scheduling Block-Cyclic Array Redistribution. | AbstractThis article is devoted to the run-time redistribution of one-dimensional arrays that are distributed in a block-cyclic fashion over a processor grid. While previous studies have concentrated on efficiently generating the communication messages to be exchanged by the processors involved in the redistribution, we focus on the scheduling of those messages: how to organize the message exchanges into "structured" communication steps that minimize contention. We build upon results of Walker and Otto, who solved a particular instance of the problem, and we derive an optimal scheduling for the most general case, namely, moving from a CYCLIC(r) distribution on a P-processor grid to a CYCLIC(s) distribution on a Q-processor grid, for arbitrary values of the redistribution parameters P, Q, r, and s. | Introduction
Run-time redistribution of arrays that are distributed in a block-cyclic fashion over a multidimensional
processor grid is a difficult problem that has recently received considerable attention. This
interest is motivated largely by the HPF [13] programming style, in which scientific applications
are decomposed into phases. At each phase, there is an optimal distribution of the data arrays
onto the processor grid. Typically, arrays are distributed according to a CYCLIC(r) pattern along
one or several dimensions of the grid. The best value of the distribution parameter r depends on
the characteristics of the algorithmic kernel as well as on the communication-to-computation ratio
of the target machine [5]. Because the optimal value of r changes from phase to phase and from
one machine to another (think of a heterogeneous environment), run-time redistribution turns out
to be a critical operation, as stated in [10, 21, 22] (among others).
Basically, we can decompose the redistribution problem into the following two subproblems:
Message generation The array to be redistributed should be efficiently scanned or processed in
order to build up all the messages that are to be exchanged between processors.
Communication scheduling All the messages must be efficiently scheduled so as to minimize
communication overhead. A given processor typically has several messages to send, to all
other processors or to a subset of these. In terms of MPI collective operations [16], we must
schedule something similar to an MPI ALLTOALL communication, except that each processor
may send messages only to a particular subset of receivers (the subset depending on the
sender).
Previous work has concentrated mainly on the first subproblem, message generation. Message
generation makes it possible to build a different message for each pair of processors that must
communicate, thereby guaranteeing a volume-minimal communication phase (each processor sends
or receives no more data than needed). However, the question of how to efficiently schedule the
messages has received little attention. One exception is an interesting paper by Walker and Otto [21]
on how to schedule messages in order to change the array distribution from CYCLIC(r) on a P -
processor linear grid to CYCLIC(Kr) on the same grid. Our aim here is to extend Walker and Otto's
work in order to solve the general redistribution problem, that is, moving from a CYCLIC(r)
distribution on a P -processor grid to a CYCLIC(s) distribution on a Q-processor grid.
The general instance of the redistribution problem turns out to be much more complicated
than the particular case considered by Walker and Otto. However, we provide efficient algorithms
and heuristics to optimize the scheduling of the communications induced by the redistribution
operation. Our main result is the following: For any values of the redistribution parameters P , Q,
r and s, we construct an optimal schedule, that is, a schedule whose number of communication
steps is minimal. A communication step is defined so that each processor sends/receives at most one
message, thereby optimizing the amount of buffering and minimizing contention on communication
ports. The construction of such an optimal schedule relies on graph-theoretic techniques such as
the edge coloring number of bipartite graphs. We delay the precise (mathematical) formulation of
our results until Section 4 because we need several definitions beforehand.
Without loss of generality, we focus on one-dimensional redistribution problems in this article.
Although we usually deal with multidimensional arrays in high-performance computing, the problem
reduces to the "tensor product" of the individual dimensions. This is because HPF does not
allow more than one loop variable in an ALIGN directive. Therefore, multidimensional assignments
and redistributions are treated as several independent one-dimensional problem instances.
The rest of this article is organized as follows. In Section 2 we provide some examples of
redistribution operations to expose the difficulties in scheduling the communications. In Section 3
we briefly survey the literature on the redistribution problem, with particular emphasis given to
the Walker and Otto paper [21]. In Section 4 we present our main results. In Section 5 we report
on some MPI experiments that demonstrate the usefulness of our results. Finally, in Section 6, we
state some conclusions and future work directions.
Motivating Examples
Consider an array X[0:::M \Gamma 1] of size M that is distributed according to a block cyclic distribution
CYCLIC(r) onto a linear grid of P processors (numbered from Our goal is to
redistribute X using a CYCLIC(s) distribution on Q processors (numbered from
For simplicity, assume that the size M of X is a multiple of Qs), the least common
multiple of P r and Qs: this is because the redistribution pattern repeats after each slice of L
elements. Therefore, assuming an even number of slices in X will enable us (without loss of
generality) to avoid discussing side effects. Let L be the number of slices.
Example 1
Consider a first example with 5. Note that the new grid
of Q processors can be identical to, or disjoint of, the original grid of P processors. The actual
total number of processors in use is an unknown value between 16 and 32. All communications are
summarized in Table 1, which we refer to as a communication grid. Note that we view the source
and target processor grids as disjoint in Table 1 (even if it may not actually be the case). We see
that each source processor messages and that each processor
receives 7 messages, too. Hence there is no need to use a full all-to-all
communication scheme that would require 16 steps, with a total of 16 messages to be sent per
processor (or more precisely, 15 messages and a local copy). Rather, we should try to schedule
the communication more efficiently. Ideally, we could think of organizing the redistribution in 7
steps, or communication phases. At each step, 16 messages would be exchanged, involving
disjoint pairs of processors. This would be perfect for one-port communication machines, where
each processor can send and/or receive at most one message at a time.
Note that we may ask something more: we can try to organize the steps in such a way that at
each step, the 8 involved pairs of processors exchange a message of the same length. This approach
is of interest because the cost of a step is likely to be dictated by the length of the longest message
exchanged during the step. Note that message lengths may or may not vary significantly. The
numbers in Table 1 vary from 1 to 3, but they are for a single slice vector. For a vector X of length
lengths vary from 1000 to 3000 (times the number of
bytes needed to represent one data-type element).
A schedule that meets all these requirements, namely, 7 steps of 16 disjoint processor pairs
exchanging messages of the same length, will be provided in Section 4.3.2. We report the solution
schedule in Table 2. Entry in position (p; q) in this table denotes the step (numbered from a to g
for clarity) at which processor p sends its message to processor q.
In
Table
3, we compute the cost of each communication step as (being proportional to) the
length of the longest message involved in this step. The total cost of the redistribution is then the
sum of the cost of all the steps. We further elaborate on how to model communication costs in
Section 4.3.1.
Table
1: Communication grid for 5. Message lengths are indicated for
a vector X of size
Communication grid for
of msg.
Nbr of msg. 7 7
Example 2
The second example, with shows the usefulness of an efficient
schedule even when each processor communicates with every other processor. As illustrated in
Table
4, message lengths vary with a ratio from 2 to 7, and we need to organize the all-to-all
exchange steps in such a way that messages of the same length are communicated at each step.
Again, we are able to achieve such a goal (see Section 4.3.2). The solution schedule is given in
Table
5 (where steps are numbered from a to p), and its cost is given in Table 6. (We do check
that each of the 16 steps is composed of messages of the same length.)
Example 3
Our third motivating example is with As shown in Table 7, the
communication scheme is severely unbalanced, in that processors may have a different number of
messages to send and/or to receive. Our technique is able to handle such complicated situations.
We provide in Section 4.4 a schedule composed of 10 steps. It is no longer possible to have messages
of the same length at each step (for instance, processor messages only of length 3 to send,
while processor messages only of length 1 or 2), but we do achieve a redistribution in
communication steps, where each processor sends/receives at most one message per step. The
number of communication steps in Table 8 is clearly optimal, as processor
to send. The cost of the schedule is given in Table 9.
Table
2: Communication steps for
Communication steps for
9 - a - b - d f - g e - c
Table
3: Communication costs for
Communication costs for
Step a b c d e f g Total
Cost
Example 4
Our final example is with P 6= Q, just to show that the size of the two processor grids need not be
the same. See Table 10 for the communication grid, which is unbalanced. The solution schedule
(see Section 4.4) is composed of 4 communication steps, and this number is optimal, since processor
messages to receive. Note that the total cost is equal to the sum of the message lengths
that processor must receive; hence, it too is optimal.
3 Literature overview
We briefly survey the literature on the redistribution problem, with particular emphasis given to
the work of Walker and Otto [21].
Table
4: Communication grid for are indicated
for a vector X of size
of msg.
Nbr of msg.
3.1 Message Generation
Several papers have dealt with the problem of efficient code generation for an HPF array assignment
statement like
where both arrays A and B are distributed in a block-cyclic fashion on a linear processor grid. Some
researchers (see Stichnoth et al.[17], van Reeuwijk et al.[19], and Wakatani and Wolfe [20]) have
dealt principally with arrays distributed by using either a purely scattered or cyclic distribution
(CYCLIC(1) in HPF) or a full block distribution (CYCLIC(d n
is the array size and p the
number of processors).
Recently, however, several algorithms have been published that handle general block-cyclic
distributions. Sophisticated techniques involve finite-state machines (see Chatterjee et
al. [3]), set-theoretic methods (see Gupta et al. [8]), Diophantine equations (see Kennedy et al. [11,
12]), Hermite forms and lattices (see Thirumalai and Ramanujam [18]), or linear programming (see
Ancourt et al. [1]). A comparative survey of these algorithms can be found in Wang et al. [22],
where it is reported that the most powerful algorithms can handle block-cyclic distributions as
efficiently as the simpler case of pure cyclic or full-block mapping.
At the end of the message generation phase, each processor has computed several different
messages (usually stored in temporary buffers). These messages must be sent to a set of receiving
processors, as the examples of Section 2 illustrate. Symmetrically, each processor computes the
number and length of the messages it has to receive and therefore can allocate the corresponding
memory space. To summarize, when the message generation phase is completed, each processor
Table
5: Communication steps for
Communication steps for
9 g a
Table
Communication costs for
Communication costs for
Step a b c d e f
Cost
has prepared a message for all those processors to which it must send data, and each processor
possesses all the information regarding the messages it will receive (number, length, and origin).
3.2 Communication Scheduling
Little attention has been paid to the scheduling of the communications induced by the redistribution
operation. Simple strategies have been advocated. For instance, Kalns and Ni [10] view the
communications as a total exchange between all processors and do not further specify the operation.
In their comparative survey, Wang et al. [22] use the following template for executing an array
assignment statement:
1. Generate message tables, and post all receives in advance to minimize operating systems
overhead
2. Pack all communication buffers
3. Carry out barrier synchronization
Table
7: Communication grid for 5. Message lengths are indicated for
a vector X of size
14 Nbr of msg.
Nbr of msg. 6 9 6 6 9 6 6 9 6 6 9 6 6 9 9
4. Send all buffers
5. Wait for all messages to arrive
6. Unpack all buffers
Although the communication phase is described more precisely, note that there is no explicit
scheduling: all messages are sent simultaneously by using an asynchronous communication pro-
tocol. This approach induces a tremendous requirement in terms of buffering space, and deadlock
may well happen when redistributing large arrays.
The ScaLAPACK library [4] provides a set of routines to perform array redistribution. As
described by Prylli and Tourancheau [15], a total exchange is organized between processors, which
are arranged as a (virtual) caterpillar. The total exchange is implemented as a succession of steps.
At each step, processors are arranged into pairs that perform a send/receive operation. Then the
caterpillar is shifted so that new exchange pairs are formed. Again, even though special care is taken
in implementing the total exchange, no attempt is made to exploit the fact that some processor
pairs may not need to communicate.
The first paper devoted to scheduling the communications induced by a redistribution is that
of Walker and Otto [21]. They review two main possibilities for implementing the communications
induced by a redistribution operation:
Wildcarded nonblocking receives Similar to the strategy of Wang et al. described above, this
asynchronous strategy is simple to implement but requires buffering for all the messages to
be received (hence, the total amount of buffering is as high as the total volume of data to be
redistributed).
Table
8: Communication steps for
Communication steps for
Table
9: Communication costs for
Communication costs for
Step a b c d e f g h i j Total
Cost
Synchronous schedules A synchronized algorithm involves communication phases or steps. At
each step, each participating processor posts a receive, sends data, and then waits for the
completion of the receive. But several factors can lead to performance degradation. For
instance, some processors may have to wait for others before they can receive any data. Or
hot spots can arise if several processors attempt to send messages to the same processor at
the same step. To avoid these drawbacks, Walker and Otto propose to schedule messages
so that, at each step, each processor sends no more than one message and receives no more
than one message. This strategy leads to a synchronized algorithm that is as efficient as the
asynchronous version, as demonstrated by experiments (written in MPI [16]) on the IBM
SP-1 and Intel Paragon, while requiring much less buffering space.
Walker and Otto [21] provide synchronous schedules only for some special instances of the
redistribution problem, namely, to change the array distribution from CYCLIC(r) on a P -processor
linear grid to CYCLIC(Kr) on a grid of same size. Their main result is to provide a schedule
composed of K steps. At each step, all processors send and receive exactly one message. If K is
smaller than P , the size of the grid, there is a dramatic improvement over a traditional all-to-all
implementation.
Table
10: Communication grid for Message lengths are indicated
for a vector X of size
msg.
Nbr of msg. 2 4
Our aim in this article is to extend Walker and Otto's work in order to solve the general re-distribution
problem, that is, moving from a CYCLIC(r) distribution on a P -processor grid to a
CYCLIC(s) distribution on a Q-processor grid. We retain their original idea: schedule the communications
into steps. At each step, each participating processor neither sends nor receives more
than one message, to avoid hot spots and resource contentions. As explained in [21], this strategy
is well suited to current parallel architectures. In Section 4.3.1, we give a precise framework to
model the cost of a redistribution.
4 Main Results
4.1 Problem Formulation
Consider an array X[0:::M \Gamma 1] of size M that is distributed according to a block-cyclic distribution
CYCLIC(r) onto a linear grid of P processors (numbered from Our goal is
to redistribute X by using a CYCLIC(s) distribution on Q processors (numbered from
1). Equivalently, we perform the HPF assignment is CYCLIC(r) on a
-processor grid, while Y is CYCLIC(s) on a Q-processor grid 1 .
The block-cyclic data distribution maps the global index i of vector X (i.e., element X[i]) onto a
processor index p, a block index l, and an item index x, local to the block (with all indices starting
at 0). The mapping i \Gamma! (p; l; x) may be written as
bi=rc
We derive the relation
1 The more general assignment Y [a can be dealt with similarly.
Table
11: Communication steps for
Communication steps for
Table
12: Communication costs for
Communication costs for
Step a b c d Total
Cost
Similarly, since Y is distributed CYCLIC(s) on a Q-processor grid, its global index j is mapped as
y. We then get the redistribution equation
Qs) be the least common multiple of P r and Qs. Elements i and L+i of X are
initially distributed onto the same processor (because L is a multiple of P r, hence
r divides L, and P divides L \Xi r). For a similar reason, these two elements will be redistributed
onto the same processor In other words, the redistribution pattern repeats after
each slice of L elements. Therefore, we restrict the discussion to a vector X of length L in the
following. Let rQs). The bounds in equation (3) become
s:
Given the distribution parameters r and s, and the grid parameters P and Q, the
redistribution problem is to determine all the messages to be exchanged, that is, to find all
values of p and q such that the redistribution equation (3) has a solution in the unknowns l, m,
x, and y, subject to the bounds in Equation (4). Computing the number of solutions for a given
processor pair (p; q) will give the length of the message.
We start with a simple lemma that leads to a handy simplification:
Lemma 1 We can assume that r and s are relatively prime, that is,
Proof The redistribution equation (3) can be expressed as
Equation (3) can be
expressed as
If it has a solution for a given processor pair (p; q), then \Delta divides z, z = \Deltaz 0 , and we deduce a
solution for the redistribution problem with r 0 , s 0 , P , and Q.
Let us illustrate this simplification on one of our motivating examples:
Back to Example 3
Note that we need to scale message lengths to move from a redistribution operation where r and s
are relatively prime to one where they are not. Let us return to Example 3 and assume for a while
that we know how to build the communication grid in Table 7. To deduce the communication grid
for say, we keep the same messages, but we scale all lengths by
This process makes sense because the new size of a vector slice is \DeltaL rather than L. See Table 13
for the resulting communication grid. Of course, the scheduling of the communications will remain
the same as with while the cost in Table 9 will be multiplied by \Delta.
4.2 Communication Pattern
Consider a redistribution with parameters r, s, P , and Q, and assume that
Qs). The communication pattern induced by the redistribution operation is a
complete all-to-all operation if and only if
Proof We rewrite Equation (5) as ps \Gamma because P r:l \Gamma Qs:m is an arbitrary multiple
of g. Since z lies in the interval [1 \Gamma whose length is r guaranteed that a
multiple of g can be found within this interval if Conversely, assume that g - r s:
we will exhibit a processor pair (p; q) exchanging no message. Indeed, is the
desired processor pair. To see this, note that pr \Gamma (because g divides P r); hence,
no multiple of g can be added to pr \Gamma qs so that it lies in the interval [1 \Gamma Therefore, no
message will be sent from p to q during the redistribution. 2
In the following, our aim is to characterize the pairs of processors that need to communicate
during the redistribution operation (in the case s). Consider the following function
2 For another proof, see Petitet [14].
Table
13: Communications for are indicated for
a vector X of size
14 Nbr of msg.
Nbr of msg. 6 9 6 6 9 6 6 9 6 6 9 6 6 9 9
Function f maps each processor pair (p; q) onto the congruence class of pr \Gamma qs modulo g.
According to the proof of Lemma 2, p sends a message to q if and only if f(p;
(modg). Let us illustrate this process by using one of our motivating examples.
Back to Example 4
In this example, We have (as in the proof
of Lemma 2). If
receives no message from p. But if
does receive a message (see Table 10 to check this).
To characterize classes, we introduce integers u and v such that
r \Theta
(the extended Euclid algorithm provides such numbers for relatively prime r and s). We have the
following result.
Proposition 1 Assume that
r
' u
mod
g:
Proof First, to see that PQ
indeed is an integer, note that
Since g divides both P r and Qs, it divides PQ.
Two different classes are disjoint (by definition). It turns out that all classes have the same
number of elements. To see this, note that for all k 2 [0;
integer d 0 , and
Since there are g classes, we deduce that the number of elements in each class is PQ
.
Next, we see that (p - ; q -s mod
(because
Finally, (p
both P r and Qs divide
divides )rs. We deduce
that PQ
divides hence all the processors pairs (p
are distinct. We
have thus enumerated class(0).
Definition 3 Consider a redistribution with parameters r, s, P , and Q, and assume that
1. Let length(p; q) be the length of the message sent by processor p to processor q to redistribute a
single slice vector X of size
As we said earlier, the communication pattern repeats for each slice, and the value reported in
the communication grid tables of Section 2 are for a single slice; that is, they are equal to length(p; q).
are interesting because they represent homogeneous communications: all processor pairs in
a given class exchange a message of same length.
Proposition 2 Assume that Qs) be the length of the vector X
to be redistributed. Let vol(k) be the piecewise function given by Figure 1 for k 2 [1 \Gamma
(recall that if (p; q) 2 class(k) where
sends no message to q).
vol(k
s
volr
Figure
1: The piecewise linear function vol.
Proof We simply count the number of solutions to the redistribution equation pr
easily derive the piecewise linear vol function
represented in Figure 1.
We now know how to build the communication tables in Section 2. We still have to derive a
schedule, that is, a way to organize the communications as efficiently as possible.
4.3 Communication Schedule
4.3.1 Communication Model
According to the previous discussion, we concentrate on schedules that are composed of several
successive steps. At each step, each sender should send no more than one message; symmetrically,
each receiver should receive no more than one message. We give a formal definition of a schedule
as follows.
Definition 4 Consider a redistribution with parameters r, s, P , and Q.
ffl The communication grid is a P \Theta Q table with a nonzero entry length(p; q) in position
(p; q) if and only if p has to send a message to q.
ffl A communication step is a collection of pairs
t, and length(p t.
A communication step is complete if senders or all receivers are
active) and is incomplete otherwise. The cost of a communication step is the maximum value
of its entries, in other words, maxflength(p
ffl A schedule is a succession of communication steps such that each nonzero entry in the communication
grid appears in one and only one of the steps. The cost of a schedule may be
evaluated in two ways:
1. the number of steps NS, which is simply the number of communication steps in the
schedule; or
2. the total cost TC, which is the sum of the cost of each communication step (as defined
above).
The communication grid, as illustrated in the tables of Section 2, summarizes the length of the
required communications for a single slice vector, that is, a vector of size Qs). The
motivation for evaluating schedules via their number of steps or via their total cost is as follows:
ffl The number of steps NS is the number of synchronizations required to implement the sched-
ule. If we roughly estimate each communication step involving all processors (a permutation)
as a measure unit, the number of steps is the good evaluation of the cost of the redistribution.
ffl We may try to be more precise. At each step, several messages of different lengths are
exchanged. The duration of a step is likely to be related to the longest length of these
messages. A simple model would state that the cost of a step is ff
where ff is a start-up time and - the inverse of the bandwidth on a physical communication
link. Although this expression does not take hot spots and link contentions into account, it
has proven useful on a variety of machines [4, 6]. The cost of a redistribution, according to
this formula, is the affine expression
ff \Theta NS
with motivates our interest in both the number of steps and the total cost.
4.3.2 A Simple Case
There is a very simple characterization of processor pairs in each class, in the special case where r
and Q, as well as s and P , are relatively prime.
Proposition 3 Assume that
respectively denote the inverses of s and r modulo g).
Proof relatively prime with Qs, hence with g. Therefore
the inverse of r modulo g is well defined (and can be computed by using the extended Euclid algorithm
applied to r and g). Similarly, the inverse of s modulo g is well defined, too. The condition
easily translates into the conditions of the proposition.
In this simple case, we have a very nice solution to our scheduling problem. Assume first that
1. Then we simply schedule communications class by class. Each class is composed
of PQ
processor pairs that are equally distributed on each row and column of the communication
grid: in each class, there are exactly Q
sending processors per row, and P
receiving processors per
column. This is a direct consequence of Proposition 3. Note that g does divide P and Q: under
the hypothesis gcd(r;
To schedule a class, we want each processor
g, to send
a message to each processor
(or equivalently, if we look at the receiving side). In other words, the
processor in position p 0 within each block of g elements must send a message to the processor
in position q 0 within each block of g elements. This can be done in max(P;Q)
complete steps of
messages. For instance, if there are five blocks of senders three blocks
of receivers blocks of senders send messages to 3 blocks of
receivers. We can use any algorithm for generating the block permutation; the ordering of the
communications between blocks is irrelevant.
If we have an all-to-all communication scheme, as illustrated in Example 2, but
our scheduling by classes leads to an algorithm where all messages have the same length at a given
step. If 1. In this case we simply regroup classes
that are equivalent modulo g and proceed as before.
We summarize the discussion by the following result
Proposition 4 Assume that scheduling each
class successively leads to an optimal communication scheme, in terms of both the number of steps
and the total cost.
Proof Assume without loss of generality that P - Q. According to the previous discussion, if
(the number of classes) times P
(the number of steps for each class)
communication steps. At each step we schedule messages of the same class k, hence of same length
vol(k). If times P
communication steps, each composed of messages of the
same length (namely,
processing a given class k 2 [0;
Remark 1 Walker and Otto [21] deal with a redistribution with We have
shown that going from r to Kr can be simplified to going from
the technique described in this section enables us to retrieve the results of [21].
4.4 The General Case
When gcd(s; P entries of the communication grid may not be evenly distributed on the
rows (senders). Similarly, when entries of the communication grid may not be
evenly distributed on the columns (receivers).
Back to Example 3
We have 5. We see in Table 7 that some rows of the communication
grid have 5 nonzero entries (messages), while other rows have 10. Similarly,
hence r 3. Some columns of the communication grid have 6 nonzero entries, while other columns
have 10.
Our first goal is to determine the maximum number of nonzero entries in a row or a column of
the communication grid. We start by analyzing the distribution of each class.
, and in any class class(k), k 2 [0; 1], the processors pairs are
distributed as follows:
ffl There are P 0
entries per column in Q 0 columns of the grid, and none in the remaining columns.
ffl There are Q 0
entries per row in P 0 rows of the grid, and none in the remaining rows.
Proof First let us check that
Since r" is relatively prime with Q 0 (by definition of r 0 ) and with s" (because
have
There are PQ
elements per class. Since all classes are obtained by a translation of class(0),
we can restrict ourselves to discussing the distribution of elements in this class. The formula
in Lemma 1 states that
r
mod
. But
-s mod P can take only those values that are multiple of s 0 and -r mod Q can take only those
values that are multiple of r 0 , hence the result. To check the total number of elements, note that
Let us illustrate Lemma 3 with one of our motivating examples.
Back to Example 3
Elements of each class should be located on P 0
columns of the
processor grid. Let us check class(1) for instance. Indeed we have the following.
Lemma 3 shows that we cannot use a schedule based on classes: considering each class separately
would lead to incomplete communication steps. Rather, we should build up communication steps
by mixing elements of several classes, in order to use all available processors. The maximum number
of elements in a row or column of the communication grid is an obvious lower bound for the number
of steps of any schedule, because each processor cannot send (or receive) more than one message
at any communication step.
Proposition 5 Assume that (otherwise the communication
grid is full). If we use the notation of Lemma 3,
1. the maximum number mR of elements in a row of the communication grid is
d
and
2. the maximum number mC of elements in a column of the communication grid is
d
e:
Proof According to Lemma 1, two elements of class(k) and class(k
are on the same row of the
communication grid if -s in the interval [0; PQ
Necessarily, s 0 , which divides P and
is relatively prime with u. A fortiori s 0 is relatively prime with u. Therefore s 0 divides
share the same rows of the processor grid if they are congruent modulo s 0 . This induces
a partition on classes. Since there are exactly Q 0
elements per row in each class, and since the
number of classes congruent to the same value modulo s 0 is either b r+s\Gamma1
c or d r+s\Gamma1
e, we deduce
the value of mR . The value of mC is obtained similarly.
It turns out that the lower bound for the number of steps given by Lemma 5 can indeed be
achieved.
Theorem 1 Assume that (otherwise the communication grid
is full), and use the notation of Lemma 3 and Lemma 5. The optimal number of steps NS opt for
any schedule is
Proof We already know that the number of steps NS of any schedule is greater than or equal to
g. We give a constructive proof that this bound is tight: we derive a schedule whose
number of steps is maxfmR ; mC g. To do so, we borrow some material from graph theory. We view
the communication grid as a graph
is the set of sending processors, and
is the set of receiving processors; and
only if the entry (p; q) in the communication grid is nonzero.
G is a bipartite graph (all edges link a vertex in P to a vertex in Q). The degree of G, defined as
the maximum degree of its vertices, is g. According to K-onig's edge coloring
theorem, the edge coloring number of a bipartite graph is equal to its degree (see [7, vol. 2, p.1666]
or Berge [2, p. 238]). This means that the edges of a bipartite graph can be partitioned in d G
disjoint edge matchings. A constructive proof is as follows: repeatedly extract from E a maximum
matching that saturates all maximum degree nodes. At each iteration, the existence of such a
maximum matching is guaranteed (see Berge [2, p. 130]). To define the schedule, we simply let the
matchings at each iteration represent the communication steps.
Remark 2 The proof of Theorem 1 gives a bound for the complexity of determining the optimal
number of steps. The best known maximum matching algorithm for bipartite graphs is due to
Hopcroft and Karp [9] and has cost O(jV j 5
there are at most max(P; Q) iterations to
construct the schedule, we have a procedure in O((jP j
2 to construct a schedule whose
number of steps is minimal.
4.5 Schedule Implementation
Our goal is twofold when designing a schedule:
ffl minimize the number of steps of the schedule, and
ffl minimize the total cost of the schedule.
We have already explained how to view the communication grid as a bipartite graph E).
More accurately, we view it as an edge-weighted bipartite graph: the edge of each edge (p; q) is the
length length(p; q) of the message sent by processor p to processor q.
We adopt the following two strategies:
stepwise If we specify the number of steps, we have to choose at each iteration a maximum
matching that saturates all nodes of maximum degree. Since we are free to select any of such
matchings, a natural idea is to select among all such matchings one of maximum weight (the
weight of a matching is defined as the sum of the weight of its edges).
greedy If we specify the total cost, we can adopt a greedy heuristic that selects a maximum
weighted matching at each step. We might end up with a schedule having more than NS opt
steps but whose total cost is less.
To implement both approaches, we rely on a linear programming framework (see [7, chapter
30]). Let A be the jV j \Theta jEj incidence matrix of G, where
ae 1 if edge j is incident to vertex i
Since G is bipartite, A is totally unimodular (each square submatrix of A has determinant 0, 1 or
\Gamma1). The matching polytope of G is the set of all vectors x 2 Q jEj such that
ae
(intuitively, is selected in the matching). Because the polyhedron determined
by Equation 7 is integral, we can rewrite it as the set of all vectors x 2 Q jEj such that
To find a maximum weighted matching, we look for x such that
where c 2 N jEj is the weight vector.
If we choose the greedy strategy, we simply repeat the search for a maximum weighted matching
until all communications are done. If we choose the stepwise strategy, we have to ensure that, at
each iteration, all vertices of maximum degree are saturated. This task is not difficult: for each
vertex v of maximum degree in position i, we replace the constraint (Ax)
translates into Y t is the number of maximum degree vertices and Y 2 f0; 1g jV j
whose entry in position i is 1 iff the ith vertex is of maximum degree. We note that in either case
we have a polynomial method. Because the matching polyhedron is integral, we solve a rational
linear problem but are guaranteed to find integer solutions.
To see the fact that the greedy strategy can be better than the stepwise strategy in terms of
total cost, consider the following example.
Example 5
Consider a redistribution problem with 3. The communication grid
is given in Table 14. The stepwise strategy is illustrated in Table 15: the number of steps is equal
to 10, which is optimal, but the total cost is 20 (see Table 16). The greedy strategy requires more
steps, namely, 12 (see Table 17), but its total cost is only (see Table 18).
Table
14: Communication grid for Message lengths are indicated
for a vector X of size
of msg.
Nbr of msg.
4.5.1 Comparison with Walker and Otto's Strategy
Walker and Otto [21] deal with a redistribution where We know that going
from r to Kr can be simplified to going from we apply the
results of Section 4.3.2 (see Remark 1). In the general case (s are evenly
distributed among the columns of the communication grid (because r 1), but not necessarily
among the rows. However, all rows have the same total number of nonzero elements because s 0
divides In other words, the bipartite graph is regular. And since
maximum matching is a perfect matching.
Because messages have the same length: length(p;
(p; q) in the communication grid. As a consequence, the stepwise strategy will lead to an optimal
schedule, in terms of both the number of steps and the total cost. Note that NS opt = K under
the hypotheses of Walker and Otto: using the notation of Lemma 5, we have
We have
d
s
Note that the same result applies when Because the graph is regular and all
entries in the communication grid are equal, we have the following theorem, which extends Walker
and Otto main result [21].
Table
15: Communication steps (stepwise strategy) for
Stepwise strategy for
Table
Communication costs (stepwise strategy) for
Stepwise strategy for
Step a b c d e f g h i j Total
Cost
Proposition 6 Consider a redistribution problem with arbitrary P , Q and s). The
schedule generated by the stepwise strategy is optimal, in terms of both the number of steps and the
total cost.
The strategy presented in this article makes it possible to directly handle a redistribution from
an arbitrary CYCLIC(r) to an arbitrary CYCLIC(s). In contrast, the strategy advocated by Walker
and Otto requires two redistributions: one from CYCLIC(r) to CYCLIC(lcm(r,s)) and a second
one from CYCLIC(lcm(r,s)) to CYCLIC(s).
5 MPI Experiments
This section presents results for runs on the Intel Paragon for the redistribution algorithm described
in Section 4.
Table
17: Communication steps (greedy strategy) for
Greedy strategy for
Table
Communication costs (greedy strategy) for
Greedy strategy for
Step a b c d e f g h i j k l Total
Cost
5.1 Description
Experiments have been executed on the Intel Paragon XP/S 5 computer with a C program calling
routines from the MPI library. MPI is chosen for portability and reusability reasons. Schedules
are composed of steps, and each step generates at most one send and/or one receive per processor.
Hence we used only one-to-one communication primitives from MPI.
Our main objective was a comparison of our new scheduling strategy against the current re-distribution
algorithm of ScaLAPACK [15], namely, the "caterpillar" algorithm that was briefly
summarized in Section 3.2. To run our scheduling algorithm, we proceed as follows:
1. Compute schedule steps using the results of Section 4.
2. Pack all the communication buffers.
3. Carry out barrier synchronization.
4. Start the timer.
5. Execute communications using our redistribution algorithm (resp. the caterpillar algorithm).
6. Stop the timer.
7. Unpack all buffers.
The maximum of the timers is taken over all processors. We emphasize that we do not take the
cost of message generation into account: we compare communication costs only.
Instead of the caterpillar algorithm, we could have used the MPI ALLTOALLV communication
primitive. It turns out that the caterpillar algorithm leads to better performance than the MPI ALLTOALLV
for all our experiments (the difference is roughly 20% for short vectors and 5% for long vectors).
We use the same physical processors for the input and the output processor grid. Results are
not very sensitive to having the same grid or disjoint grids for senders and receivers.
5.2 Results
Three experiments are presented below. The first two experiments use the schedule presented in
Section 4.3.2, which is optimal in terms of both the number of steps NS and the total cost TC.
The third experiment uses the schedule presented in Section 4.4, which is optimal only in terms of
NS.
Back to Example 1
The first experiment corresponds to Example 1. The redistribution schedule requires 7 steps (see Table 3). Since all messages have the same length, the theoretical improvement over the caterpillar algorithm, which has 16 steps, is 7/16 ≈ 0.44. Figure 2 shows that there is a significant difference between the two execution times. The theoretical ratio is obtained for very small vectors (e.g., of size 1200 double-precision reals). This result is not surprising because start-up times dominate the cost for small vectors. For larger vectors the ratio varies between 0.56 and 0.64. This is due to contention problems: our scheduler needs only 7 steps, but each step generates 16 communications, whereas each of the 16 steps of the caterpillar algorithm generates fewer communications (between 6 and 8 per step), thereby generating less contention.
Back to Example 2
The second experiment corresponds to Example 2. Our redistribution schedule requires 16 steps, and its total cost is 77 (see Table 6). The caterpillar algorithm requires 16 steps, too, but at each step at least one processor sends a message of length (proportional to) 7, hence a total cost of 112. The theoretical gain 77/112 ≈ 0.69 is to be expected for very long vectors only (because of start-up times). We do not obtain anything better than 0.86, because of contention. Experiments on an IBM SP2 or on a network of workstations would most likely lead to more favorable ratios.
Back to Example 4
The third experiment corresponds to Example 4. This experiment is similar to the first one in that our redistribution schedule requires far fewer steps (4) than does the caterpillar algorithm (12). There are two differences, however: P is not equal to Q, and our algorithm is not guaranteed to be optimal in terms of total cost. Instead of obtaining the theoretical ratio of 4/12 ≈ 0.33, we obtain results close to 0.6. To explain this, we need to take a closer look at the caterpillar algorithm. As shown in Table 19, 6 of the 12 steps of the caterpillar algorithm are in fact empty steps, so the theoretical ratio is rather 4/6 ≈ 0.66.
Figure 2: Comparing redistribution times (in microseconds) on the Intel Paragon, caterpillar versus optimal scheduling, as a function of the global size of the redistributed vector (64-bit double-precision reals).
Table 19: Communication costs with the caterpillar schedule; per-step costs for steps a through l and their total.
6 Conclusion
In this article, we have extended Walker and Otto's work in order to solve the general redistribution
problem, that is, moving from a CYCLIC(r) distribution on a P -processor grid to a CYCLIC(s)
distribution on a Q-processor grid. For any values of the redistribution parameters P , Q, r, and s,
we have constructed a schedule whose number of steps is optimal. Such a schedule has been shown
optimal in terms of total cost for some particular instances of the redistribution problem (that
include Walker and Otto's work). Future work will be devoted to finding a schedule that is optimal
in terms of both the number of steps and the total cost for arbitrary values of the redistribution
problem. Since this problem seems very difficult (it may prove NP-complete), another perspective
is to further explore the use of heuristics like the greedy algorithm that we have introduced, and
to assess their performance.
We have run a few experiments, and these produced encouraging results. One of the next releases
of the ScaLAPACK library may well include the redistribution algorithm presented in this article.
Figure 3: Time measurements (in microseconds) for the caterpillar and greedy schedules for different sizes of the redistributed vector (64-bit double-precision reals).
--R
A linear algebra framework for static HPF code distribution.
Graphes et hypergraphes.
Generating local addresses and communication sets for data-parallel programs
A portable linear algebra library for distributed memory computers - design issues and performance
Software libraries for linear algebra computations on high performance computers.
Matrix computations.
Handbook of combinatorics.
Compiling array expressions for efficient execution on distributed-memory machines
Processor mapping techniques towards efficient data redistribution.
Efficient address generation for block-cyclic distributions
A linear-time algorithm for computing the memory access sequence in data-parallel programs
Steele Jr.
Algorithmic redistribution methods for block cyclic decompositions.
Efficient block-cyclic data redistribution
MPI the complete reference.
Generating communication for array state- ments: design
Fast address sequence generation for data-parallel programs using integer lattices
An implementation framework for HPF distributed arrays on message-passing parallel computer systems
Redistribution of block-cyclic data distributions using MPI
Redistribution of block-cyclic data distributions using MPI
Runtime performance of parallel array assignment: an empirical study.
--TR
--CTR
Prashanth B. Bhat , Viktor K. Prasanna , C. S. Raghavendra, Block-cyclic redistribution over heterogeneous networks, Cluster Computing, v.3 n.1, p.25-34, 2000
Stavros Souravlas , Manos Roumeliotis, A pipeline technique for dynamic data transfer on a multiprocessor grid, International Journal of Parallel Programming, v.32 n.5, p.361-388, October 2004
Ching-Hsien Hsu , Shih-Chang Chen , Chao-Yang Lan, Scheduling contention-free irregular redistributions in parallelizing compilers, The Journal of Supercomputing, v.40 n.3, p.229-247, June 2007
Hyun-Gyoo Yook , Myong-Soon Park, Scheduling GEN_BLOCK Array Redistribution, The Journal of Supercomputing, v.22 n.3, p.251-267, July 2002
Ching-Hsien Hsu, Sparse Matrix Block-Cyclic Realignment on Distributed Memory Machines, The Journal of Supercomputing, v.33 n.3, p.175-196, September 2005
Minyi Guo , Yi Pan, Improving communication scheduling for array redistribution, Journal of Parallel and Distributed Computing, v.65 n.5, p.553-563, May 2005
Minyi Guo , Ikuo Nakata, A Framework for Efficient Data Redistribution on Distributed Memory Multicomputers, The Journal of Supercomputing, v.20 n.3, p.243-265, November 2001
Neungsoo Park , Viktor K. Prasanna , Cauligi S. Raghavendra, Efficient Algorithms for Block-Cyclic Array Redistribution Between Processor Sets, IEEE Transactions on Parallel and Distributed Systems, v.10 n.12, p.1217-1240, December 1999
Ching-Hsien Hsu , Yeh-Ching Chung , Don-Lin Yang , Chyi-Ren Dow, A Generalized Processor Mapping Technique for Array Redistribution, IEEE Transactions on Parallel and Distributed Systems, v.12 n.7, p.743-757, July 2001
Ching-Hsien Hsu , Yeh-Ching Chung , Chyi-Ren Dow, Efficient Methods for Multi-Dimensional Array Redistribution, The Journal of Supercomputing, v.17 n.1, p.23-46, Aug. 2000
Saeri Lee , Hyun-Gyoo Yook , Mi-Soo Koo , Myong-Soon Park, Processor reordering algorithms toward efficient GEN_BLOCK redistribution, Proceedings of the 2001 ACM symposium on Applied computing, p.539-543, March 2001, Las Vegas, Nevada, United States
Ching-Hsien Hsu , Kun-Ming Yu, A Compressed Diagonals Remapping Technique for Dynamic Data Redistribution on Banded Sparse Matrix, The Journal of Supercomputing, v.29 n.2, p.125-143, August 2004
Emmanuel Jeannot , Frdric Wagner, Scheduling Messages For Data Redistribution: An Experimental Study, International Journal of High Performance Computing Applications, v.20 n.4, p.443-454, November 2006
PeiZong Lee , Wen-Yao Chen, Generating communication sets of array assignment statements for block-cyclic distribution on distributed memory parallel computers, Parallel Computing, v.28 n.9, p.1329-1368, September 2002
Antoine P. Petitet , Jack J. Dongarra, Algorithmic Redistribution Methods for Block-Cyclic Decompositions, IEEE Transactions on Parallel and Distributed Systems, v.10 n.12, p.1201-1216, December 1999
Jih-Woei Huang , Chih-Ping Chu, An Efficient Communication Scheduling Method for the Processor Mapping Technique Applied Data Redistribution, The Journal of Supercomputing, v.37 n.3, p.297-318, September 2006 | block-cyclic distribution;MPI;distributed arrays;scheduling;HPF;redistribution |
275931 | Timestep Acceleration of Waveform Relaxation. | Dynamic iteration methods for treating certain classes of linear systems of differential equations are considered. It is shown that the discretized Picard--Lindelf (waveform relaxation) iteration can be accelerated by solving the defect equations with a larger timestep, or by using a recursive procedure based on a succession of increasing timesteps. A discussion of convergence is presented, including analysis of a discrete smoothing property maintained by symmetric multistep methods applied to linear wave equations. Numerical experiments indicate that the method can speed convergence. | Introduction
. Much of modern chemical and physical research relies on the numerical
solution of various wave equations. Since these problems are extremely demanding of both
storage and cpu-time, new numerical methods and fast algorithms are needed to make optimal
use of advanced computers. The dynamic iteration or waveform relaxation (WR) method [9,
11] is an iterative decoupling scheme for ordinary differential equations which can facilitate
concurrent processing of large ODE systems for applications such as VLSI circuit simulation
[6, 15] and partial differential equations [1, 4].
In this article, accelerated dynamic iteration schemes are used to solve systems of linear
differential equations, with emphasis on the ordinary differential equations arising from discretization
of linear wave equations. Although our experiments use finite differences for the
spatial derivatives, other spatial discretizations could be used. For time discretization, we use
symmetric multistep methods, although other choices may also be appropriate. As is the case
for stationary iterative methods applied to spatially discretized elliptic PDEs, it is found that
finer fixed step (time) discretizations slow the convergence of the WR iteration, while large
timesteps can be used to resolve the slow modes. The idea that is explored here is to use
a coarse timestep on the defect equations to speed up convergence of the fine grid iteration.
(Author footnote: Department of Mathematics, University of Kansas, Lawrence, KS 66045, leimkuhl@math.ukans.edu. This work was supported by NSF grant DMS-9303223.)
Nevanlinna already pointed out [14, 13] that for general applications of WR it makes sense from
an efficiency standpoint to use coarser discretization in the early sweeps (when the iteration
error is large), and then incrementally to refine the time discretization near convergence. Our
point of view is rather to vary the timestep to resolve different modes present in the solution,
using two time stepsizes (or multiple stepsizes). The current article is related to recent work of
Horton and Vandewalle [4] and Horton, Vandewalle and Worley [5] which considered space-time
multigrid methods for solving parabolic equations.
The new scheme will be referred to as timestep acceleration since it relies on adjustment of
the integration timestep to accelerate the dynamic iteration. This approach shares some features
of multigrid methods. For the convenience of the reader generally familiar with multigrid
methods, we outline the algorithm in the abstract setting of solving an unspecified dynamical
system as follows:
Accelerated Waveform Relaxation. Given: fine timestep h and an approximation u 0 to
the solution with fixed stepsize h.
1. Smoothing Starting from u 0 , perform a fixed number of iterations of a smoothing waveform
relaxation iteration with timestep h.
2. Correction Compute the defect (residual) in this solution on the fine time mesh.
If the timestep sufficiently large, solve the discrete defect equation restricted
to the coarse time mesh directly (i.e. without relaxation).
Else recursively apply some number of iterations of the algorithm using stepsize H to
the defect equation restricted to the coarse time mesh.
Next, correct the solution after prolonging onto the fine time mesh.
3. Smoothing Apply a fixed number of iterations of the fine stepsize smoothing iteration.
(In Section 6, we present and analyze a more precisely defined version of this algorithm, TAWR.)
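The cycle outlined above can be summarized in the following schematic sketch. The routines smooth, defect, restrict, prolong and coarse_solve, as well as the sweep counts nu1 and nu2, are placeholders for the operations made precise later in the paper; this is not a definitive implementation.

    # Schematic two-level version of the accelerated waveform relaxation cycle.
    def accelerated_wr(u, f, h, H, nu1, nu2,
                       smooth, defect, restrict, prolong, coarse_solve):
        for _ in range(nu1):                 # 1. pre-smoothing with fine timestep h
            u = smooth(u, f, h)
        d = defect(u, f, h)                  # 2. defect (residual) on the fine time mesh
        dH = restrict(d, h, H)               #    restrict the defect to the coarse mesh
        eH = coarse_solve(dH, H)             #    solve (or relax) the defect equation with step H
        u = u + prolong(eH, H, h)            #    correct after prolonging to the fine mesh
        for _ in range(nu2):                 # 3. post-smoothing with fine timestep h
            u = smooth(u, f, h)
        return u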
A major barrier to efficient solution of large scale wave equations is the need for small
timesteps. Due to the sequential character of standard ODE methods, this effectively reduces
the potential for parallel speedups. Compared to standard timestepping schemes, the method
discussed here directly addresses this problem by enabling the use of larger timesteps to recover
at least a portion of the dynamics. Another important obstacle to computation-particularly
in the case of high dimensional problems-is the necessary storage. The new method actually
exacerbates this problem since solution information at many points must be stored. However, in
waveform relaxation based on a block splitting, the storage is naturally segmented according to
the decoupling, so the scheme may be appropriate for a parallel computer based on a distributed
memory architecture.
Although standard analytical results for multigrid methods or coarse-grid acceleration are
typically developed for finite dimensional Hermitian positive definite problems, these can be
relaxed to give at least partial convergence results. In fact proving theoretical convergence
for timestep acceleration is easier than for standard multigrid due to the strong smoothing
properties of the Picard-Lindelöf operator (it is a contraction on small intervals). Analysis of
the behavior of the iteration on special linear model problems is also possible and is briefly
discussed here.
The scheme is found to work well in simple numerical experiments with linear wave equa-
tions. Although our experiments are conducted in one space dimension, nothing in principle
prevents application in higher dimensions (although many practical issues will need to be dealt
with).
2. Waveform Relaxation. Consider a second order linear system of differential equations

    d²u/dt² = A u + f(t),                                              (1)

where the eigenvalues of the matrix A are assumed to lie in the left half plane. A special case that we will frequently refer to is the 1-D wave equation U_tt = U_xx discretized with finite differences on the unit square with periodic boundary conditions, for which

    A = (1/Δx²) tridiag(1, −2, 1)   (with periodic wrap-around)

is called the discrete Laplacian. Here u = (u_1, ..., u_N)^T is a vector of approximations to U at the nodes x_i = iΔx, Δx = 1/N.
Another potential application is to the Schrödinger equation. Discretizing, for example, with finite differences leads to

    dψ/dt = i (A − V(t)) ψ,                                            (2)

where A is the discrete Laplacian. If v(x, t) is the potential energy function of the corresponding classical system, we have V(t) = diag(v(x_i, t)). In simplified settings v(x, t) is time-independent, hence so is V.
The waveform relaxation method for (1) is based on a splitting A = A_+ + A_−, which results in the ODE IVPs

    d²u^{k+1}/dt² = A_+ u^{k+1} + A_− u^k + f(t),                      (3)

with the initial values of u^{k+1} and its derivative taken from those of (1). For example, we might choose A_+ to be the diagonal of A (Jacobi splitting), a block-diagonal part of A (block-Jacobi splitting), the lower triangular part of A (Gauss-Seidel splitting), etc. Much work involving the discrete Laplacian in elliptic PDEs is based on Gauss-Seidel splitting in red-black ordering. Another useful splitting is the damped Jacobi splitting, in which A_+ is a multiple (determined by a damping parameter ω ∈ (0, 1]) of D, the diagonal of A. The extreme case A_+ = 0 is called the Picard splitting.
When referring to (1) and (3) we will generally limit discussion to the case where A and A_+ are symmetric negative semidefinite matrices.
The WR iteration proceeds as follows: starting from a given initial waveform u^0 (which may be constant), we solve (3) with k = 0 as a forced linear system for u^1 over some time interval, say [0, T]. (This interval is referred to as the window.) The function u^1 then
yields a forcing for the next iteration or sweep, and the process repeats. In practice, the systems
are solved numerically over the entire interval, and the storage of the resulting discrete
approximation is an important drawback of the method which may place severe limitations
on the size of the time window. On the other hand, we gain in two ways: first, the systems
we solve at each iteration can be decoupled into problems of reduced dimension, and second,
the decoupled problems can often be solved on separate processors of a parallel computer. An
alternative approach would be based on solving the linear equations that result at each step
of a standard discretization using a parallel algorithm, however, depending on the computer
architecture employed, the flexibility in the choice of window size may reduce the overall communication
cost, e.g. by eliminating some of the time spent in initializing the transfer of data
between processors.
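As an illustration of what a single sweep looks like in the discrete setting, the following sketch applies one WR sweep for u'' = Au + f(t) using the leapfrog scheme (discussed in Section 4 as Störmer's rule) for the time discretization. The array layout and names are illustrative assumptions, not part of the method's specification.

    import numpy as np

    # One waveform relaxation sweep for u'' = A u + f(t) with the splitting
    # A = Aplus + Aminus (e.g. Jacobi: Aplus = diag(A)), discretized by the
    # leapfrog / Stormer rule.  uk holds the previous iterate as an array of
    # shape (N+1, m) sampled at t_n = n*h; u0, u1 are the fixed starting values.
    def wr_sweep_leapfrog(uk, Aplus, Aminus, f, h, u0, u1):
        N = uk.shape[0] - 1
        u = np.empty_like(uk)
        u[0], u[1] = u0, u1                        # starting values are held fixed
        for n in range(1, N):
            accel = Aplus @ u[n] + Aminus @ uk[n] + f(n * h)
            u[n + 1] = 2.0 * u[n] - u[n - 1] + h * h * accel
        return u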
Preliminary convergence results for WR appear in the paper by Lelarasmee et al [9].
Miekkala and Nevanlinna [11, 12] and Nevanlinna [13] have developed an extensive theory for
studying waveform relaxation for linear systems. Lubich and Osterman proposed to combine
the WR method with spatial multigrid schemes [10]. Recent work by Horton and Vandewalle
[4] and by Horton, Vandewalle and Worley [5] has shown that a careful implementation of (spa-
tial) multigrid-WR methods for parabolic PDEs can provide excellent parallel speedups. The
use of waveform relaxation for solving hyperbolic partial differential equations and relations to
domain decomposition were explored by Bjørhus [1].
3. Mathematical Background. In this section we state some elementary results concerning
the iteration (3). The reader is directed to the papers of Nevanlinna and Miekkala for
basic theory.
The waveform relaxation method for (1) can be viewed as an iteration u^{k+1} = S u^k + φ. As shown in [11], on a finite window the iteration operator has spectral radius zero, implying superlinear convergence. On the other hand, for stiff dissipative linear systems, it makes sense to allow T → ∞, in which case meaningful spectral information is obtained [11]. Since the solution to the equations (1) and (2) does not in general lie in L²([0, ∞)), this approach must be modified. A reasonable practical approach is that taken in [13], where an exponential weighting function e^{−αt} is inserted into the usual L² norm. For α > 0, the space L²_α is normed by

    ‖u‖_α := ( ∫_0^∞ |e^{−αt} u(t)|² dt )^{1/2},

and ρ_α refers to the spectral radius in that space.
If we take the Laplace transform of (3), we obtain

    û^{k+1}(z) = (z² I − A_+)^{-1} A_− û^k(z) + terms depending on f, u(0), and u'(0),

so the symbol of the iteration is S(z) = (z² I − A_+)^{-1} A_−. The following results are proved by Nevanlinna and Miekkala [11]:

    ρ_α(S) = sup_{Re z ≥ α} ρ(S(z)) = sup_{Re z = α} ρ(S(z)),

which follows from the Paley-Wiener theorem (the second expression follows from a maximum principle after a suitable remapping of the domain), and

    ‖S‖_α = sup_{Re z = α} ‖S(z)‖,

which follows from Parseval's identity.
We now provide some simple estimates for the response of the iteration operator in the weighted 2-norm. First, consider the behavior of the solution operator of (1) in the weighted space, i.e. the symbol L(z) = (z² I − A)^{-1} with z standing for d/dt. Examining the spectral radius of the normal matrix L(z) along the line Re z = α, we find the eigenvalues are

    μ_i(α + iy) = 1 / ((α + iy)² − λ_i),

where the λ_i are the eigenvalues of A. Hence

    |μ_i(α + iy)|² = 1 / ((α² − y² − λ_i)² + 4α²y²).

By maximizing these functions over y, we can compute the moduli of the eigenvalues of the solution operator in the weighted space.
Theorem 3.1. Define

    μ̄_i = max_y |μ_i(α + iy)| = 1/(2α√(−λ_i)) if −λ_i ≥ α²,   μ̄_i = 1/(α² − λ_i) if −λ_i < α².

Then

    ρ_α(L) = max_i μ̄_i.

In particular eigenvalues near zero have the strongest influence. When A is the discrete Laplacian, or any symmetric negative semidefinite matrix with an eigenvalue at zero, we have

    ρ_α(L) = 1/α².

We can use this to estimate the norm of the iteration matrix, since

    ‖S‖_α ≤ (max_i μ̄_i) ‖A_−‖,

where the μ̄_i are determined from Theorem 3.1 with λ_i the eigenvalues of A_+ rather than A.
Let us consider the wave equation with periodic boundary conditions on the square as a model problem. We will use damped Jacobi iteration with parameter ω ∈ (0, 1]. In this case A, A_+ and A_− are all diagonalized by the discrete Fourier transform, so the eigenvalues of S(z), and hence the spectral radius along Re z = α, can be computed explicitly in terms of the eigenvalues of the discrete Laplacian. We are interested in moderate weights α, which we define to mean α < √ρ(A). (Intuitively, this corresponds to looking in the time domain on intervals greater than the smallest period of the motion.)
In the standard theory, one uses the value of ω in the damped Jacobi splitting to enhance a smoothing property: a damping in the iteration of the modes corresponding to larger eigenvalues. However, the important consideration for timestep acceleration is not the way in which the smoother acts on fast "spatial" modes but rather the response of the smoother to high frequency forcings. In fact, the real smoothing property we are interested in has to do with the shape of the graph of ρ(S(α + iy)) as a function of y. For example, when a damped Jacobi splitting is applied to solve the semidiscrete wave equation, we find that the spectral radius of S achieves its maximum on Re z = α at a point well away from y = 0. For the Picard splitting, it is easy to see rather that the maximum occurs at y = 0. In this case, we say that the iteration has a smoothing property with respect to high frequency forcings. It is not necessary to use a slowly converging splitting such as the Picard splitting to obtain a good smoothing property. A typical feature of a good splitting for this purpose is that A_+ would have an eigenvalue at or near the origin. Thus the smoothing property of a block-Jacobi splitting of the discrete Laplacian would improve with the block size.
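The shape of ρ(S(α + iy)) can be examined numerically. The sketch below assembles the 1-D periodic discrete Laplacian, forms a (damped) Jacobi splitting — the convention A_+ = ω·diag(A) used here is an assumption, with ω = 1 giving plain Jacobi — and evaluates the spectral radius of the symbol S(z) = (z²I − A_+)^{-1}A_− along Re z = α.

    import numpy as np

    # Numerical sketch of the smoothing-property discussion: the spectral radius
    # of S(z) = (z^2 I - A_+)^{-1} A_- along Re z = alpha for a (damped) Jacobi
    # splitting of the 1-D periodic discrete Laplacian.  N, omega, alpha and the
    # range of y are illustrative choices.
    def spectral_radius_curve(N=32, omega=1.0, alpha=1.0,
                              ys=np.linspace(0.0, 80.0, 400)):
        dx = 1.0 / N
        A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
             + np.diag(np.ones(N - 1), -1)) / dx**2
        A[0, -1] = A[-1, 0] = 1.0 / dx**2       # periodic boundary conditions
        Aplus = omega * np.diag(np.diag(A))     # assumed damped Jacobi convention
        Aminus = A - Aplus
        rho = []
        for y in ys:
            z = alpha + 1j * y
            S = np.linalg.solve(z**2 * np.eye(N) - Aplus, Aminus)
            rho.append(max(abs(np.linalg.eigvals(S))))
        return ys, np.array(rho)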
To illustrate this smoothing concept, consider the time-dependent Schrödinger equation (2). The iteration becomes a forced linear system for ψ^{k+1} at each sweep, obtained from a splitting of the operator A − V(t). We could again use a Jacobi or damped Jacobi splitting, but in practice, a more useful choice might be to place the Laplacian (possibly together with an approximation to the potential) in the implicitly treated part of the splitting, or, more simply, the Laplacian alone. After time discretization, these choices lead to equations at each timestep which can be efficiently solved by, for example, using a parallel implementation of the fast Fourier transform.
Still another possibility is to work directly in the Fourier coefficients. Let QAQ^H = Λ, where Q is the (unitary) discrete Fourier transform matrix and Λ is diagonal, and let Ψ = Qψ, so that the equations become

    dΨ/dt = i (Λ − Q V(t) Q^H) Ψ.

We can then apply a Jacobi splitting to this problem. One finds that the diagonal of Q V(t) Q^H is d̄ I, where d̄ is the average of v(x_i, t) over the grid points. The computation can be implemented efficiently using the FFT.
The symbol of the WR iteration operator R for the Schrödinger equation with V constant can be written down explicitly, and its weighted spectral radius ρ_α(R) evaluated along the line Re z = α. Using the Laplacian+potential splitting, or one of its cousins, can be expected to yield a good smoothing property with respect to high frequency forcings.
4. Discretization. In this section, we focus on (1) and apply a discrete transform as in
[12] to analyze the symmetric multistep methods commonly used for integrating oscillatory
problems.
Multistep methods construct an approximating sequence {u_n} to {u(t_n)} at successive time points t_n = nh. We use {u_n^k} to refer to the numerical solution generated at the kth sweep of waveform relaxation. Symmetric multistep methods for u'' = f(t, u) take the form

    Σ_{j=0}^{k} α_j u_{n+j} = h² Σ_{j=0}^{k} β_j f_{n+j},              (5)

with coefficient sequences for which α_j = α_{k−j} and β_j = β_{k−j}. These methods are used for integration of second order oscillatory problems. An important feature of this class of methods is their time-reversibility. Note that the multistep methods require k starting values.
In discretizing dissipative problems it is sensible to replace the space L² by l²_h with norm ‖u‖_h = ( h Σ_n |u_n|² )^{1/2}. For our investigations, we use the weighted space with norm

    ‖u‖_{h,α} := ( h Σ_n |e^{−nhα} u_n|² )^{1/2},

which can be viewed as a discretization of the L²_α norm.
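For concreteness, the weighted discrete norm of a scalar sequence can be evaluated as follows; this is a direct transcription of the definition above, with the truncation to finitely many samples as the only approximation.

    import numpy as np

    # Weighted discrete norm ||u||_{h,alpha} = ( h * sum_n |exp(-n h alpha) u_n|^2 )^{1/2}
    # for a 1-D sequence u = (u_0, u_1, ..., u_N).
    def weighted_l2_norm(u, h, alpha):
        n = np.arange(len(u))
        return np.sqrt(h * np.sum(np.exp(-2.0 * alpha * h * n) * np.abs(u)**2))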
Following the usual practice we define operators a and b on sequences by

    (a u)_n = Σ_{j=0}^{k} α_j u_{n+j},     (b u)_n = Σ_{j=0}^{k} β_j u_{n+j}.

We also use the symbols a and b to refer to the corresponding characteristic polynomials:

    a(ζ) = Σ_{j=0}^{k} α_j ζ^j,     b(ζ) = Σ_{j=0}^{k} β_j ζ^j.

To preserve the intuitive correspondence of results from the continuous-time to discrete worlds, define a discrete transform which takes {u_n} to û(z) by û(z) = h Σ_n u_n e^{−nhz} (essentially a discretization of the Laplace transform, and equivalent to the discrete Laplace or z-transform).
Applying (5) to the linear problem (3) and computing the discrete transform, we find

    û^{k+1}(z) = S_h(z) û^k(z) + φ(z),     S_h(z) = (P_h(z) I − A_+)^{-1} A_−,     P_h(z) = a(e^{hz}) / (h² b(e^{hz})),

where φ includes the effects due to the k starting values. We are going to assume that these starting values are exact (for the unsplit discrete problem) so they do not affect the convergence of the iteration. For example, for Störmer's rule introduced below, a(ζ) = ζ² − 2ζ + 1 and b(ζ) = ζ.
In the general case, discrete versions of the Paley-Wiener theorem and Parseval's identity give (after modifying results in [12] to take into account the exponential weight):

    ρ_{h,α}(S_h) = sup_{Re z ≥ α} ρ(S_h(z)) = sup_{Re z = α} ρ(S_h(z))

and

    ‖S_h‖_{h,α} = sup_{Re z = α} ‖S_h(z)‖.

In order for the discretized operator to be bounded, we evidently need to require that P_h(z) − λ stay away from zero along Re z = α for any λ ∈ σ(A_+).
We now consider an example. Ignoring rounding error, the popular leapfrog method for second order systems is equivalent to Störmer's rule (also known as the Verlet method), a symmetric 2-step method with α_0 = α_2 = 1, α_1 = −2, β_1 = 1, β_0 = β_2 = 0. Applying this scheme to the WR iteration for the linear problem and taking the discrete transform gives the symbol S_h(z) with

    P_h^{St}(z) = (e^{hz} − 2 + e^{−hz}) / h² = 2(cosh(hz) − 1) / h².

The function P_h^{St}(z) is an O(h²z⁴) approximation to z². The poles of the transformed discrete iteration operator occur at

    z = (1/h) cosh^{-1}(1 + h²λ/2),

with λ an eigenvalue of A_+. Explicit multistep schemes are always conditionally stable, meaning that the stability of the scheme will depend on the stepsize being restricted roughly in inverse relation to the square root of the spectral radius of A_+. For the Störmer method, the stability condition is that λ ≤ 0 and h²(−λ) ≤ 4, which is also the condition that the poles of S_h remain on the imaginary axis. The function Im cosh^{-1}(ξ) is monotone in the real variable ξ on [−1, 1], hence the ordering of the poles is preserved along the imaginary axis.
Another popular second order method is the (implicit) trapezoidal rule, which has transform P_h^{t.r.}(z), again a second order approximation to z².
4.1. Decay of the discrete symbol. Theory due to Miekkala and Nevanlinna [12] compares
the convergence of the discrete iteration in l 2
h to that of the continuous time iteration
for dissipative problems and for methods that are not weakly stable. We need to modify this
mechanism to cover convergence for stable methods for second order differential equations in
the weighted spaces rather than L 2 and l 2
h . In what follows, it is assumed that the k starting
values are held fixed as we iterate. These could also be obtained by some convergent process,
but this does not seem a meaningful generalization.
In the case of the discretized iteration we need to examine the images under P_h of the segments Re z = α, |Im z| ≤ π/h. The situation for a representative value of α is shown in Figure 1, where we see the images of this line for the trapezoidal and Störmer discretizations, for various values of the stepsize.
Putting real arguments z = α into each of the functions P_h^{St}, P_h^{t.r.} and z², one can show that for sufficiently small hα

    P_h^{t.r.}(α) < α² < P_h^{St}(α),

which means, somewhat surprisingly, that in the neighborhood of y = 0, the Störmer discretization actually leads to a slightly more stable overall iteration than that generated by the trapezoidal rule.
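The comparison of the two approximations to z² can be reproduced with a few lines. The trapezoidal symbol used below, (2/h · tanh(hz/2))², is the square of the trapezoidal rule's first-derivative transform, which is an assumption about the precise form intended here.

    import numpy as np

    # Compare the approximations of z^2 produced by Stormer's rule and the
    # trapezoidal rule along the line Re z = alpha (cf. Figure 1).
    def second_derivative_symbols(h, alpha=1.0, ys=np.linspace(-30.0, 30.0, 601)):
        z = alpha + 1j * ys
        P_stormer = (np.exp(h * z) - 2.0 + np.exp(-h * z)) / h**2
        P_trap = (2.0 / h * np.tanh(h * z / 2.0))**2       # assumed trapezoidal form
        return z**2, P_stormer, P_trap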
Fig. 1. Approximation of z² by Störmer's rule (dashed lines) and the trapezoidal rule (dotted lines) for several stepsizes (h = .1 to .4).
The situation for large h is more dramatic. For nonstiff problems with eigenvalues λ very near the origin in the complex plane, large steps should be possible and one might suppose that the Störmer and trapezoidal rule discretizations would behave similarly with respect to WR convergence. In fact, this is not the case, and it turns out that the Störmer method yields a much more stable WR iteration than the trapezoidal rule over comparable time intervals. Figure 2 shows the image of the line Re z = 1 under P_h^{St} and P_h^{t.r.}, compared with z². Figure 2 also indicates that results such as Proposition 9 of [10] and Theorem 3.4 of [12], which bound the spectral radius of the discrete iteration in terms of that of the continuous iteration for A-stable multistep methods, will not typically hold in our setting.
The problem with generalizing the results of [12] is that they were based on the strengthened
stability assumption that the stability region includes a disk on the negative real axis touching
the origin, whereas many of the symmetric methods we consider (e.g. St-ormer's rule) do not
satisfy this condition.
We will use the exponential weight to correct for the weak stability of the method. For γ ≥ 0, define the γ-stability region Ω_γ of the method as the set of all μ in the extended complex plane such that the roots ζ of a(ζ) − μ b(ζ) = 0 lie in the disk |ζ| ≤ e^γ and are simple on the boundary. The iteration operator S_h is bounded in l²_{h,α} if h²λ lies in Ω_{hα} for every eigenvalue λ of A_+.

Fig. 2. Large time-step comparison of the images of the line Re z = 1 under the Störmer rule (s.r.) and the trapezoidal rule (t.r.) with h = 2, together with z².
Now observe that
We can directly relate the spectral radius of the discrete iteration to that of the continuous
time iteration. In fact,
ae ff;h (S h
Rez-ff
ae
Rez-ff
ae
s
Let the notation bdyW be used to indicate the boundary of the set W . Since f
r
bdy\Omega hff , we have, analogous to a result in [12].
Theorem 4.1. Suppose
int\Omega ffh , then
ae ff;h (S h
int\Omega ffh
and
@\Omega ffh g:
Theorem 4.2. If the dynamic iteration converges in L²_α and the symmetric multistep method is irreducible and convergent, then the discretized iteration converges in l²_{α,h} for sufficiently small h, and

    ρ_{α,h}(S_h) = ρ_α(S) + O(h).
We will outline a proof of this result since the reasoning is somewhat different than that
used in [12].
be the k zeros of a, with being the principle root counted with
multiplicity two. For simplicity, assume that these zeros all lie on the unit circle S 1 and that
they are ordered counterclockwise about the unit circle, thus fi
(It would not be difficult to treat the case where some zeros lie inside the unit circle.) From
consistency, we must have that fi double root, while all of the other roots are simple.
We can view ja(e hff w)j 2 as a function of w on S 1 . For at the
zeros of a; for h sufficiently small and ff ? 0, it has local minima located near the points
. We can expand a in Taylor's series about the fi i to obtain
Only multiple root of a, hence a 0 1. This means that, for
e hff w in the vicinity of fi i , must have a(e hff must
also hold at the local minimum.
Using this, we can prove a small lemma which shows that the spectral radius is determined
for small h by the approximation property of the principle root.
Lemma 4.3. For h sufficiently small,
ae(S
ae(S
and a similar result holds for kS h k.
Proof: By symmetry, ' sufficiently small,
the global minimum of ja(e hff e ihy )j 2 on I h must occur at one of its local minima over that
interval or at the endpoints. Since b(e hz ) can be uniformly bounded in any bounded region, it
is straightforward to see that the quantity - h (y) defined by
s
satisfies
min
and hence that
ae(S
Due to symmetry, the behavior of ae is the same on the intervals ['
other words, we need only look in the latter subinterval for the maximizing value. 2.
Given -
consistency implies that
0-y-y
Choose a large enough rectangular neighborhood N of the origin so that e.g. ae(S(z)) ! ae ff (S)=2
for z outside N . Now j- h (' 2 =h)j - h \Gamma1=2 , thus for h sufficiently small, the curve \Gamma h := f- h (y) :
leaves N . After leaving N , it cannot reenter N (or j- h (y) 2 j would have another
local minimum). Within N , the curve \Gamma h will approximate the line segment ff + iy to O(h).
Thus for h sufficiently small, the maximum value of ae(S h (z)) will occur when - h (y) lies within
N , and since this point lies within O(h) of ff iy we can see that asymptotically, the spectral
radius of S h in the weighted space can differ by only O(h) from that of S. This concludes the
proof of the theorem.
5. Aliasing effects. Consider the transformed discrete iterator S_h(z) = (P_h(z) I − A_+)^{-1} A_− on the vertical line z = α + iλ, λ ∈ R. The degree to which an eigenvalue −ω² of A_+ has an impact on the solution at frequency λ depends inversely on the separation between P_h(α + iλ) and −ω². Those frequencies λ for which P_h(α + iλ) lies far from the spectrum of A_+ will be only weakly propagated by the iteration.
For any multistep method, the function P_h(α + iλ) is actually periodic in λ with period 2π/h. This aliasing effect means that high frequencies can be excited with large stepsizes: frequencies λ and λ + 2πk/h give the same response. Actually, the situation is even somewhat worse due to the symmetry about the real axis: the response to −λ, and hence to 2π/h − λ, will be the same as the response to λ. Of course, if there are no frequencies present in the forcing function above, say, π/h, then these anomalous excitations do no harm.
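A quick numerical check of the periodicity and symmetry just described, using Störmer's symbol and illustrative values of h, α and λ:

    import numpy as np

    h, alpha, lam = 0.25, 1.0, 3.7
    P = lambda z: (np.exp(h * z) - 2.0 + np.exp(-h * z)) / h**2   # Stormer symbol
    z1 = alpha + 1j * lam
    z2 = alpha + 1j * (lam + 2.0 * np.pi / h)
    assert np.isclose(P(z1), P(z2))                       # 2*pi/h periodicity (aliasing)
    assert np.isclose(P(z1), np.conj(P(alpha - 1j * lam)))  # symmetry about the real axis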
We will illustrate with the wave equation example. We first look at the response of the discrete solution operator L_h for the (unsplit) spatially discretized wave equation along the line 1 + iy. The curves shown in Figure 3 show the spectral radii (hence also the norms) of L_h(1 + iy) versus y for h = 1, 0.5, 0.25. The maximum value is achieved near y = 0, as expected from Theorems 3.1 and 4.2. An increase in the stepsize h provides accuracy for small y while introducing some extraneous excitation at 2πk/h, k ∈ Z.

Fig. 3. Spectral radius of L_h(1 + iy), wave equation, N = 32 (solid: h = 1; dash-dot: h = .5; dotted: h = .25).
We next examine the spectral radius of the transformed discrete iteration operator S_h(z) for the Jacobi splitting of the spatially discretized wave equation along the line 1 + iy. The curves shown in Figure 4 show the spectral radii versus y for h in the progression 1, 0.5, 0.25. As h decreases, the spectral radius has increasing maxima achieved at increasing values of y.

Fig. 4. Spectral radius of S_h(1 + iy), wave equation, N = 32 (solid: h = 1; dash-dot: h = .5; dotted: h = .25).
Using a large stepsize to resolve the small y response will not apparently improve the
convergence of the small step Jacobi iteration, since the large stepsize solution operator does
not even act on the high frequencies (where the spectral radius is large). The exception to this
will be in the case that h is so small that the "coarse" grid is not coarse at all (in which case,
little is gained through iteration). Moreover, unless ρ(S_h(z)) is small outside of an interval about the origin of length roughly 2π/H, the artifacts introduced at the high frequencies on the coarse grid will not be damped out.
To see a substantial improvement, our iteration operator should be designed to achieve its maximum at or near y = 0. As mentioned previously, for the wave equation, a natural (if slow) choice is standard Picard iteration.
If we turn to the Schrödinger equation and consider, for example, the splitting (4) for V constant: in case V is not too large, we would expect here that the maximum of ρ(R(z)) is achieved near y = 0 (since the implicitly treated operator has an eigenvalue near the origin), and that substantial improvement may be possible by exploiting a coarse time step solution.
6. Timestep Acceleration of WR. The examples of the previous section suggest that
an approach in which different time meshes are used at each sweep could be successful. The
goal, as for standard multigrid is to iterate on successively coarser grids, thus resolving those
components of the residual that are most difficult to obtain on the fine grid. We envision
ultimately combining spatial multigrid with this timestep acceleration scheme. For the formulation
and analysis of standard multigrid methods in the context of elliptic PDEs, the reader
is referred to [2].
We will now define the steps of the algorithm described in the introduction. Let b and a represent the operators which define the discretization. We use h to represent the fine timestep, and H to represent the coarse timestep. Normally, in solving elliptic PDEs, one would take the coarsening factor to be two (H = 2h). In our case, this choice may or may not be appropriate; for the purposes of discussing an algorithm, we assume that the stepsize changes by a common factor at each level, but this is perhaps not essential in practice. Let I_h^H and I_H^h denote the restriction and prolongation operators, respectively, which act between the fine and coarse time-meshes, thus I_h^H: l²_{h,α} → l²_{H,α} and I_H^h: l²_{H,α} → l²_{h,α}.
Note that whether we wish to solve problems with or without forcing, a description for a forced problem permits an easy recursive definition.
Algorithm TAWR(h). Given: fine timestep h, a sequence {f_n} ∈ l²_{h,α}, an approximation u^0 to the solution with fixed stepsize h, and a splitting A = A_+ + A_−, the following algorithm solves the discretized problem subject to k prescribed starting values φ_i, i = 1, ..., k.
1. Small-timestep Pre-Smoothing. Starting from u^0, perform ν sweeps of WR iteration with timestep h,

    a u^{l+1} = h² b (A_+ u^{l+1} + A_− u^l + f),   l = 0, 1, ..., ν − 1,        (8)

where u^{l+1} inherits the prescribed starting values.
2. Large-timestep Correction. Compute the defect {d_n}, the residual of the fine-mesh discretization at the current iterate u^ν (the defect vanishes at the starting values, which are held exact). If the timestep H is sufficiently large, solve the defect equation restricted to the coarse time mesh,

    R_H e = I_h^H d,                                                             (9)

where R_H denotes the discretization of the problem with timestep H, directly (i.e. without relaxation), using zeros for starting values. Else apply μ iterations of TAWR(H) to (9), using zeros for starting values. Next, correct: u^ν ← u^ν + I_H^h e.
3. Small-timestep Post-Smoothing. Apply ν iterations of the fine mesh smoothing operation (8).
Notes:
- For μ = 1 this is the V-cycle; for μ = 2, it is called the W-cycle.
- Different numbers of smoothing steps could be used in the pre- and post-smoothings.
- To solve the original problem using timestep acceleration, we first compute {f_n} := {f(t_n)}.
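One common concrete choice for the transfer operators I_h^H and I_H^h — full weighting restriction and piecewise linear interpolation, as used in the experiments of Section 8 — can be sketched as follows for a waveform sampled with H = 2h. The stencil ¼[1 2 1] and the handling of the interval endpoints are assumptions of this sketch.

    import numpy as np

    # Full-weighting restriction and piecewise-linear interpolation in time,
    # applied to a waveform u_fine of odd length 2m+1 sampled at spacing h.
    def restrict_full_weighting(u_fine):
        uc = u_fine[::2].copy()                         # injection at the endpoints
        uc[1:-1] = 0.25 * (u_fine[1:-2:2] + 2.0 * u_fine[2:-1:2] + u_fine[3::2])
        return uc

    def prolong_linear(u_coarse):
        n = u_coarse.shape[0]
        u_fine = np.zeros((2 * n - 1,) + u_coarse.shape[1:])
        u_fine[::2] = u_coarse                               # copy coarse points
        u_fine[1::2] = 0.5 * (u_coarse[:-1] + u_coarse[1:])  # interpolate in between
        return u_fine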
7. Convergence Analysis. In this section we present an elementary general convergence
result regarding two-grid acceleration. This result could be easily extended to the full
timestep acceleration iteration. The iteration operator in the two stepsize case can be written
as S -
h where S h represents the smoothing sweep and C h;H represents the coarse-grid
correction. In general it is enough to understand
h . The operator C h;H can be
where we have denoted the prolongation and restriction operators by p and r, respectively. On
the other hand,
It is enough to show that
is O(h \Gamma2 ) in l 2
ff;h , while the norm of
is O(h 2 OE(-)), where OE tends to zero as - ! 0. In fact we anticipate that the situation is often
rather better than this result would indicate, but this approach allows us to state a quite general
convergence result.
Based on the relation R \Gamma1
h , the fact that b is a bounded operator, and the
theorems of the last section, we have:
Lemma 7.1. Suppose consistent, stable linear multistep is used and the restriction and prolongation
operators are bounded operators. Then kM sufficiently
small.
The proof follows since (i) kL \Gamma1
O(h), (ii) the same thing holds for h
replaced by H and qh, and (iii) the restriction and prolongation operators are bounded.
2.
For the smoothing, we have ‖S_h^ν‖_{α,h} → 0 as ν → ∞ whenever ρ_{α,h}(S_h) < 1. We therefore have

    ‖M S_h^ν‖_{α,h} ≤ ‖M‖_{α,h} ‖S_h^ν‖_{α,h}.

This converges to zero provided

    ρ_{α,h}(S_h) < 1

and ν is taken large enough. Thus we can state:
Theorem 7.2. If the undiscretized smoothing iteration is convergent (ρ_α(S) < 1), a consistent, stable linear multistep method is used, the restriction and prolongation are bounded operators between l²_{h,α} and l²_{H,α}, and enough smoothing iterations are performed, then the timestep-accelerated waveform relaxation algorithm converges.
Because of the strong contractivity of the Picard operator on small time intervals, it would
be straightforward to extend this result to the full multiple mesh recursive acceleration scheme.
On the other hand, besides proving asymptotic convergence, this simplified approach provides
no practical estimates of convergence.
7.1. Treatment of Model Problems. A key observation is that two modes are coupled via restriction. It is possible to write a formula for the "symbol" of the iteration operator as a matrix operating on the pair of coupled modes e^{hnz} and e^{hn(z + iπ/h)}. As an example, taking full weighting for the restriction r and linear interpolation for the prolongation p, and assuming any symmetric multistep method (a, b), we find that the action of M on the pair of modes is given by a 2 × 2 matrix-valued symbol (entering through a Kronecker product with the spatial blocks in the system case).
Now for Jacobi or Picard iteration on the wave equation, for example, the matrix M̃ is easily reduced to a diagonal matrix of 2 × 2 blocks, so the asymptotic convergence behavior can be determined relatively easily. For red-black Gauss-Seidel iteration on the square, we get a further pairing of the spatial modes, so M̃ actually is reduced to 4 × 4 blocks. By studying the spectra of these blocks, various ODE discretizations could be compared for their effect on asymptotic rate of convergence, as could other choices of restriction/prolongation.
Note that besides the restriction and prolongation having an adjoint relationship
if we choose the second order, two-step discretization
then we find that also rR h This appears to be the only consistent, stable two-step
scheme for which this property holds with the given r and p. These resemble the conditions
for "variational form", however the operator R and its discretization are not self-adjoint in our
setting, so we do not have the space decomposition
l 2
and the standard theoretical results cannot be directly applied.
8. Numerical Experiments. We performed experiments using the two-grid iteration
on linear wave equations. We found that the performance improvements were sensitive to
many factors, including timestep, time window length, and splitting. Unfortunately, we cannot
expect to have complete flexibility in the choice of the time interval or "window" as this may
be determined from a storage or communication limitation. Similarly, the timestep is typically
chosen for accuracy reasons.
Consider the standard 1D wave equation (1), N=16, using Jacobi iteration for the smooth-
ing. We used the discretization (10) together with full weighting restriction and piecewise linear
interpolation. We did not anticipate very good behavior since the smoothing property is relatively
weak for this splitting (fast modes are not very strongly damped). Indeed, this is what
we observed. For most values of the stepsize, the 2-grid acceleration improved convergence,
but not by very much. (In some cases performance was even slightly degraded.) In each of
the Figures the 2-norm of the error is graphed as a function of the sweep number s and the
timestep n. In Figure 5 the error in Jacobi WR is indicated for stepsize
shows the mild improvement in the error when a coarse grid correction is applied at each Jacobi
WR sweep.
We next examined a modified wave equation in which the discrete Laplacian A is perturbed by a scalar multiple λI of the identity, with λ a scalar parameter. We used "Laplacian splitting" into A and λI. It is easy to see that this splitting possesses a strong "smoothing property". We first chose a value of λ giving a substantial perturbation of the discrete Laplacian.

Fig. 5. Errors (versus timestep and sweep) in Jacobi WR, h = .025, 40 steps, without correction.

Fig. 6. Errors with coarse grid correction, h = .025, 40 steps, showing poor acceleration. The benefit of coarse grid correction is diminished by the poor smoothing property of the Jacobi smoother.

Initial data excited the first two eigenfunctions of the Laplacian (slow
modes), although this choice was not critical to the results we obtained. Twenty timesteps were used. In this case, the coarse grid corrections offer substantial improvement, as shown in Figure 7. The left figure shows the log10 error versus timestep and sweep number; on the right we have shown the log10 ratio of the errors with and without the two-grid acceleration. The improvement evidenced here is as much as a factor of 10^5 in 20 sweeps, or a little under a factor of 2 per sweep on average.
The improvement is evident until the error reaches the level of roundoff. At the weaker perturbation the effect was somewhat diminished (Figure 8). If we instead increased the strength of the applied field (λ = 100), the coarse grid correction continued to offer substantial acceleration; this is indicated in Figure 9. At larger or smaller timesteps, the improvement slightly diminished. A linear acceleration effect was observed on longer time intervals (Figure 10).
Fig. 7. (a) log error and (b) log error ratio, 20 steps. The splitting provides a strong smoothing property, and a substantial improvement is possible with the two-grid iteration.

Fig. 8. (a) log error and (b) log error ratio (weaker perturbation).
Although these experiments suggest that time-mesh coarsening accelerations hold promise
for improving the parallel waveform relaxation algorithm, they certainly do not settle all the
issues. In particular, we do not have an easy and robust mechanism for determining what
Fig. 9. (a) log error and (b) log error ratio (stronger applied field, λ = 100).

Fig. 10. (a) log error and (b) log error ratio, longer time interval. On longer time intervals, the convergence acceleration factor (ratio of errors with and without acceleration) becomes approximately linear in the sweep.
splittings will benefit from acceleration, or for determining various parameters such as number
of smoothing sweeps, optimal coarsening, etc. We also have not yet experimented with the use
of more than two levels of time-mesh acceleration.
Acknowledgements
: The author is indebted to Pawel Szeptycki for several helpful discussions
during the early stages of this work. Stefan Vandewalle read a preliminary version of
the manuscript and contributed several very useful comments. The computers of the Kansas
Institute for Theoretical and Computational Science were used for the numerical experiments.
--R
New York
Solving Ordinary Differential Equations
A spacetime multigrid method for parabolic PDEs
An algorithm with polylog parallel complexity for solving parabolic partial differential equations
Zukowski D.
Estimating waveform relaxation convergence
The waveform relaxation method for time-domain analysis of large scale integrated circuits
Multigrid dynamic iteration for parabolic equations
Convergence of dynamic iteration methods for initial value problems
Sets of convergence and stability regions
Remarks on Picard Lindel-of iteration
Power bounded prolongations and Picard-Lindel-of iteration
Partitioning algorithms and parallel implementation of waveform relaxation algorithms for circuit simulation
--TR
--CTR
D. Guibert , D. Tromeur-Dervout, Parallel adaptive time domain decomposition for stiff systems of ODEs/DAEs, Computers and Structures, v.85 n.9, p.553-562, May, 2007 | multigrid methods;waveform relaxation;wave equation |
275958 | A Fast Iterative Algorithm for Elliptic Interface Problems. | A fast, second-order accurate iterative method is proposed for the elliptic equation \[ \grad\cdot(\beta(x,y) \grad u) =f(x,y) \] in a rectangular region $\Omega$ in two-space dimensions. We assume that there is an irregular interface across which the coefficient $\beta$, the solution u and its derivatives, and/or the source term f may have jumps. We are especially interested in the cases where the coefficients $\beta$ are piecewise constant and the jump in $\beta$ is large. The interface may or may not align with an underlying Cartesian grid. The idea in our approach is to precondition the differential equation before applying the immersed interface method proposed by LeVeque and Li [ SIAM J. Numer. Anal., 4 (1994), pp. 1019--1044]. In order to take advantage of fast Poisson solvers on a rectangular region, an intermediate unknown function, the jump in the normal derivative across the interface, is introduced. Our discretization is equivalent to using a second-order difference scheme for a corresponding Poisson equation in the region, and a second-order discretization for a Neumann-like interface condition. Thus second-order accuracy is guaranteed. A GMRES iteration is employed to solve the Schur complement system derived from the discretization. A new weighted least squares method is also proposed to approximate interface quantities from a grid function. Numerical experiments are provided and analyzed. The number of iterations in solving the Schur complement system appears to be independent of both the jump in the coefficient and the mesh size. | Introduction
. Consider the elliptic equation

    ∇ · (β(x, y) ∇u) = f(x, y),   (x, y) ∈ Ω,

with given boundary conditions on ∂Ω, in a rectangular domain Ω in two space dimensions. Within the region, suppose there
is an irregular interface Γ across which the coefficient β is discontinuous. Referring to Fig. 1, we assume that β(x, y) has a constant value in each sub-domain:

    β(x, y) = β⁻ if (x, y) ∈ Ω⁻,   β(x, y) = β⁺ if (x, y) ∈ Ω⁺.         (1.2)

The interface Γ may or may not align with an underlying Cartesian grid. Depending on the properties of the source term f(x, y), we usually have jump conditions across the interface Γ:

    [u]_Γ = u⁺(X(s), Y(s)) − u⁻(X(s), Y(s)) = w(s),                     (1.3)
    [β u_n]_Γ = β⁺ u_n⁺ − β⁻ u_n⁻ = v(s),                               (1.4)

where (X(s), Y(s)) is the arc-length parameterization of Γ, and the superscripts − or + denote the limiting values of a function from one side or the other of the interface.

(Author footnote: This work was supported by URI grant #N00014092-J-1890 from ARPA, NSF Grant DMS-9303404, and DOE Grant DE-FG06-93ER25181. Department of Mathematics, University of California at Los Angeles, Los Angeles, CA 90095. zhilin@math.ucla.edu.)

Fig. 1. Two typical computational domains and interfaces with uniform Cartesian grids.
These two jump conditions can be either obtained by physical reasoning or derived from the differential equation; see [2, 9, 14], etc. Note that in potential theory, v(s) ≢ 0 corresponds to a single layer source along the interface Γ, while w(s) ≢ 0 corresponds to a double layer source. The normal derivative u_n usually has a kink across the interface due to the discontinuity in the coefficient β. If w(s) ≢ 0, then the solution would be discontinuous across the interface.
There are many applications in solving elliptic equations with discontinuous coef-
ficients, for example, steady state heat diffusion or electrostatic problems, multi-phase
and porous flow, solidification problems, and bubble computations etc. There are two
main concerns in solving (1.1)-(1.4) numerically:
- How to discretize (1.1)-(1.4) to a certain accuracy. It is difficult to study the consistency and the stability of a numerical scheme because of the discontinuities across the interface.
- How to solve the resulting linear system efficiently. Usually if the jump in the coefficient is large, then the resulting linear system is ill-conditioned, and the number of iterations in solving such a linear system is large and proportional to the jump in the coefficient.
There are a few numerical methods designed to solve elliptic equations with discontinuous
coefficients, for example, harmonic averaging, smoothing method, and finite
element approach etc., see [2] for a brief review of different methods. Most of these
methods can be second order accurate in the l^1 or the l^2 norm, but not in the l^∞ norm, since they may smooth out the solution near the interface.
A. Mayo and A. Greenbaum [14, 15] have derived an integral equation for elliptic
interface problems with piecewise coefficients. By solving the integral equation, they
can solve such interface problems to second order accuracy in the l^∞ norm using
the techniques developed by A. Mayo in [13, 14] for solving Poisson and biharmonic
equations on irregular regions. The total cost includes solving the integral equation
and a regular Poisson equation using a fast solver, so this gives a fast algorithm. The
possibility of extension to variable coefficients is mentioned in [14].
R.J. LeVeque and Z. Li have recently developed a different approach for discretizing
elliptic problems with irregular interfaces called the immersed interface method (IIM)
[2, 9], which can handle both discontinuous coefficients and singular sources. This
approach has also been applied to three dimensional elliptic equations [7], parabolic
wave equations with discontinuous coefficients [4, 5],
and the incompressible Stokes flow problems with moving interfaces [3, 6]. L. Adams [1]
has successfully implemented a multi-grid algorithm for the immersed interface method.
However, there are some numerical examples with large jumps in the coefficients in
which the immersed interface method may fail to give accurate answers or converge
very slowly.
In this paper, we propose a fast algorithm for elliptic equations with large jumps
in the coefficients. The idea is to precondition the elliptic equation before using the
immersed interface method. In order to take advantage of fast Poisson solvers on
rectangular regions, we introduce an intermediate unknown function [u n ](s) which is
defined only on the interface. Then we discretize a corresponding Poisson equation,
which has different sources from the original one, using the standard five-point stencil
with some modification in the right hand side. Our discretization is equivalent to using
a second order difference scheme to approximate the Poisson equation in the interior
a second order discretization for the Neumann-like interface
condition
Thus from the error analysis for elliptic equations with Neumann boundary conditions,
for example, see [17], we would have second order accurate solution at all grid points
including those near or on the interface. A GMRES method is employed to solve
the Schur complement system derived from the discretization. A new weighted least
squares method is proposed to approximate interface quantities such as u \Sigma
n from a
grid function defined on the entire domain. This new technique has been successfully
applied in the multi-grid method for interpolating the grid function between different
levels [1] with remarkable improvement in the computed solution. These ideas will be
discussed in detail in the following sections. The method described in this paper seems
to be very promising not only because it is second order accurate, but also because the
number of iterations in solving the Schur complement system is almost independent
of both the jumps in the coefficients and the mesh size. This has been observed from
our numerous numerical experiments, though we have not been able to prove this
theoretically. Our new method has been used successfully for the computation of some
inverse problems [20].
This paper is organized as follows. In Section 2, we precondition (1.1)-(1.4) to
get an equivalent problem. In Section 3, we use the IIM idea to discretize the equivalent
problem and derive the Schur complement system. The weighted least squares
approach to approximate u \Sigma
n from the grid function u ij is discussed in Section 4. Some
implementation details are addressed in Section 5. Brief convergence analysis is given
in Section 6. An efficient preconditioner for the Schur complement system is proposed
in Section 7. Numerical experiments and analysis can be found in Section 8. Some
new approaches in the error analysis involving interfaces are also introduced there.
2. Preconditioning the PDE to an equivalent problem. The problem we
intend to solve is the following:
Problem (I).

    ∇ · (β(x, y) ∇u) = f(x, y),   (x, y) ∈ Ω,                           (2.6a)
    given BC on ∂Ω,                                                      (2.6b)

with specified jump conditions along the interface Γ:

    [u]_Γ = w(s),                                                        (2.7a)
    [β u_n]_Γ = v(s).                                                    (2.7b)

Consider the solution set u_g(x, y) of the following problem as a functional of g(s).
Problem (II).

    Δu = f(x, y)/β⁻  if (x, y) ∈ Ω⁻,   Δu = f(x, y)/β⁺  if (x, y) ∈ Ω⁺,  (2.8a)
    given BC on ∂Ω,                                                      (2.8b)

with specified jump conditions^1

    [u]_Γ = w(s),                                                        (2.9a)
    [u_n]_Γ = g(s).                                                      (2.9b)

Let the solution of Problem (I) be u*(x, y), and define

    g*(s) := [u*_n](s)

along the interface Γ. Then u*(x, y) satisfies the elliptic equation (2.8a)-(2.8b) and
jump conditions (2.9a)-(2.9b) with g(s) ≡ g*. In other words, u_{g*}(x, y) ≡ u*(x, y), and the flux condition

    [β ∂u_{g*}/∂n]_Γ = v(s)                                              (2.11)

is satisfied. Therefore, solving Problem (I) is equivalent to finding the corresponding g*(s), and then u_{g*}(x, y) in Problem (II). Notice that g* is only defined along the interface, so it is one dimension lower than u(x, y): Problem (II) is an elliptic interface problem which is much easier to solve because the jump condition [u_n] is given instead
of [β u_n]. With the immersed interface method, it is much easier to construct a second order scheme which also satisfies the conditions of the maximum principle. In this paper, we suppose β is piecewise constant as in (1.2), so Problem (II) is a Poisson
equation with a discontinuous source term and given jump conditions. We can then
use the standard five-point stencil to discretize the left hand side of (2.8a), but modify
the right hand side to get a second order scheme, see [2, 9] for the detail. Thus we
can take advantage of fast Poisson solvers for the discrete system. The cost in solving
1 The jump conditions (2.9a) and (2.9b) depend on the singularities of the source term f(x; y)
along the interface. However, in the expression of (2.8a), we do not need information of f(x; y) on
the interface \Gamma, so there is no need to write f(x; y) differently.
Problem (II) is just a little more than that in solving a regular Poisson equation on the
rectangle with a smooth solution. For a more general variable coefficient, the discussions in this paper are still valid, except that we cannot use a fast Poisson solver because of the convection term (∇β · ∇u)/β in (2.8a). However, a multi-grid approach developed by L. Adams [1] can perhaps be used to solve Problem (II).
We wish to find numerical methods with which we can compute u_{g*}(x, y) to second order accuracy. We also hope that the total cost in computing g* and u_{g*} is less than that in computing u* through the original Problem (I). The key to success is computing g* efficiently. Below we begin to describe our method for solving for g*. Once g* is found, we just need one more fast Poisson solver to get the solution u*(x, y).
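Schematically, the resulting two-stage strategy amounts to an outer Krylov iteration for g* wrapped around a fast Poisson solver. The sketch below assumes two placeholder routines, solve_problem_II (one fast solve of the discretized Problem (II) for a given interface jump g) and flux_jump (interpolation of β⁺u_n⁺ − β⁻u_n⁻ at the control points from the grid values), standing in for the discretizations developed in the following sections.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # Schematic outer loop for computing g* along the interface; v is the vector
    # of prescribed flux jumps at the n_b control points.
    def solve_interface_problem(v, nb, solve_problem_II, flux_jump):
        def schur_matvec(g):
            u = solve_problem_II(g)          # one fast Poisson solve
            return flux_jump(u, g)           # discrete flux jump at control points
        r0 = schur_matvec(np.zeros(nb))      # the map is affine; split off the constant part
        A = LinearOperator((nb, nb), matvec=lambda g: schur_matvec(g) - r0,
                           dtype=float)
        g_star, info = gmres(A, v - r0)
        u_star = solve_problem_II(g_star)    # one final fast Poisson solve
        return u_star, g_star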
3. Discretization. We use a uniform grid on the rectangle [a, b] × [c, d] where Problem (I) is defined:

    x_i = a + i h_x,  i = 0, 1, ..., m,      y_j = c + j h_y,  j = 0, 1, ..., n.

We assume that h_x = h_y = h for simplicity. We use a cubic spline (X̃(s), Ỹ(s)) passing through a number of control points (X_k, Y_k), k = 1, 2, ..., n_b, to express the immersed interface, where s is the arc-length of the interface and (X_k, Y_k) is the position of the k-th point on the interface Γ. Other representations of the interface are possible. A level set formulation is currently under investigation.
Any other quantity q(s) defined on the interface such as w(s) and g(s) can also
be expressed in terms of a cubic spline with the same parameter s. Since cubic splines
are twice differentiable we can gain access to the value of q(s) and its first or second
derivatives at any point on the interface in a continuous manner.
We use upper case letters to indicates the solution of the discrete problem and
lower case letters for the continuous solutions.
Given W k and G k , the discrete form of jump conditions (2.9a) and (2.9b), with
the immersed interface method, the discrete form of (2.8a) can be written as
where
is the discrete Laplacian operator using the standard five-point stencil. Note that if
happens to be on the interface, then f ij =fi ij is defined as the limiting value
from a pre-chosen side of the interface. C ij is zero except at those irregular grid
points where the interface cuts through the five-point stencil. A fast Poisson solver,
for example, FFT, ADI, Cyclic reduction, or Multi-grid, can be applied to solve (3.12).
The solution U ij depends on G k , W k , continuously. In matrix and vector
form we have
is the discrete linear system for the Poisson equation when W k , G k
are all zero. The solution is smooth for such a system. B(W;G) is a mapping from
(3.12). From [2, 9] we
6 Z. LI
know that B(W;G) depends on the first and second derivatives of w(s), and the first
derivative of g(s), where the differentiation is carried out along the interface. At this
stage we do not know whether such a mapping is linear or not. However in the discrete
case, all the derivatives are obtained by differentiating the corresponding splines which
are linear combination of the values on those control points. Therefore B(W;G) is
indeed linear function of W and G and can be written as
are two matrices with real entries. Thus (3.13) becomes
The solution U of the equation above certainly depends on G and we are interested in
finding G which satisfies the discrete form of (2.7b)
where the components of the vectors U
are discrete approximation of the
normal derivative at control points from each side of the interface. In the next section,
we will discuss how to use the known jump G, and sometimes also V , to interpolate
U ij to get U
n in detail. As we will see in the next section, U
depend
on U , G and V linearly
where E, D, and -
are some matrices and
P . Combine (3.14) and (3.16) to
obtain the linear system of equations for U and G:
G
F
The solution U and G are the discrete forms of u g (x; y) and g , the solution of
Problem (II) which satisfies (2.11).
The next question is how to solve (3.17) efficiently. The GMRES method applied
to (3.17) directly or the multi-grid approach [1] are two attractive choices. However,
in order to take advantage of fast Poisson solvers, we have decided to solve G in (3.17)
first, and then to find the solution U by using one more fast Poisson solver. Eliminating
U from (3.17) gives a linear system for G
(D
F
This is an n b \Theta n b system for G, a much smaller linear system compared to the one
for U . The coefficient matrix is the Schur complement of D in (3.17). In practice,
the matrices A, B, E, D, P , and the vectors -
F are never formed. The matrix and
vector form are merely for theoretical purposes. Thus an iterative method, such as
the GMRES iteration [18], is preferred. The way we compute (3.16) will dramatically
change the condition number of (3.18).
A FAST ALGORITHM FOR INTERFACE PROBLEMS 7
4. A weighted least squares approach for computing interface quantities
from a grid function. When we apply the GMRES method to solve the Schur
complement system of (3.18) for G , we need to compute the matrix-vector multiplica-
tion, which is equivalent to computing U \Gamma
n with the knowledge of U ij and the
jump condition [U n ]. This turns out to be a crucial step in solving the linear system
(3.18) for G . Our approach is based on a weighted least squares formulation. The
idea described here can also be, and has been, applied to the case, where we want to
approximate some quantities on the interface from a grid function. For example, interpolating
U ij to the interface to get U \Gamma (X; Y ) or U
on the interface. This new approach has also been successfully applied to the multi-grid
method for interpolating the grid function between different levels by L. Adams[1]
with remarkable improvement in the computed solution.
We start from the continuous situation, the discrete version can be obtained ac-
cordingly. Let u(x; y) be a piecewise smooth function, with discontinuities only along
the interface. We want to interpolate u(x get approximations to the normal
derivatives
are only defined on the interface, to second
order.
Our approach is inspired by Peskin's method in interpolating a velocity field u(x; y)
to get the velocity of the interface using a discrete delta function. The continuous and
discrete forms are the following:
ZZ\Omega
where ~
is a discrete delta function. A commonly used one is
Notice that ffi h (x) is a smooth function of x. Peskin's approach is very robust and only
a few neighboring grid points near ~
are involved. However this approach is only first
order accurate and may smear out the solution near the interface.
Our interpolation formula for
n , for example, can be written in the following form
(j ~
where d ff (r) is a function of the distance measured from the point ~
X,
d ff
Q is a correction term which can be determined once fl ij are known. Although we are
trying to approximate the normal derivatives here, the same principle also applies to
the function values U as well with different choices of fl ij and Q. Note no extra effort
is needed to decide which grid points should be involved. Therefore, expression (4.21)
is robust and depends on the the grid function u ij continuously, two very attractive
8 Z. LI
properties of Peskin's formula (4.20). In addition to the advantages of Peskin's ap-
proach, we also have flexibility in choosing the coefficients fl ij and the correction term
Q to achieve second order accuracy. The parameter ff in (4.21) can be fixed or chosen
according to problems, see Section 8.
Below we discuss how to use the immersed interface method to determine the
coefficients and the correction term Q. They are different from point to point on
the interface. So they should really be labeled as fl ij; ~
etc. But for simplicity of
notation we will concentrate on a single point ~
drop the subscript ~
X.
Since the jump condition is given in the normal direction, we introduce local
coordinates at (X; Y ),
where ' is the angle between the x-axis and the normal direction. Under such new
coordinates, the interface can be parameterized by -(j); j. Note that
and, provided the interface is smooth at (X; Y well. The solution of
the Poisson equation Problem (II) will satisfies the following interface relations, see
[2, 9] for the derivation,
be the -j coordinates of
where the sign depends on whether (- lies on the side of \Gamma.
Using Taylor expansion of (4.25) about (X; Y ) in the new coordinates, after collecting
terms we have
(j ~
a 9
where the a j are given by
A FAST ALGORITHM FOR INTERFACE PROBLEMS 9
(j ~
a
(j ~
a
(j ~
a
(j ~
a
(j ~
a
(j ~
(j ~
a
(j ~
a
(j ~
a
(j ~
(j ~
a
(j ~
From the interface relations (4.24) we know that all the jumps in the expression above
can be expressed in terms of the known information. Since
- , we obtain the
linear system of equations for the coefficients
a 9
Note that we would have the exact same equation if we want to interpolate a smooth
function to get an approximation u n at ~
X to second order accuracy. The discontinuities
across the interface only contribute to the correction term Q. This agrees
with our analysis in [2, 9] for Poisson equations with discontinuous and/or singular
sources, where we can still use the classical five-point scheme but add a correction
term to the right hand side at irregular grid points.
If the linear system (4.26) has a solution, then we can obtain a second order
approximation to the normal derivative
n by choosing an appropriate correction term
Q. Therefore we want to choose ff big enough, say ff - 1:6h, such that at least six
grid points are involved. Usually we have an under-determined linear system which
has infinitely many solutions. We should then choose the one fl
ij with the minimal
2-norm
subject to (4:26):
For such a solution, each fl
ij will have roughly the same magnitude O(1=h); so
ij d ff (j ~
is roughly a decreasing function of the distance measured from ~
X.
This is one of desired properties of our interpolation. In practice, only a hand full of
grid points, controlled by the parameter ff, are involved. Those grid points which are
closer to (X; Y ) have more influence than others which are further away.
Z. LI
When we know the coefficients we also know the a k 's. From the a k 's and the
interface relations (4.24), we can determine the correction term Q easily,
Thus we are able to compute
n to second order accuracy. We can derive a formula
for
n in exactly the same way. However, with the relation u
g, we can write
down a second order interpolation scheme for
immediately
(j ~
is the solution we computed for
n . In the next section, we will explain an
important modification of either (4.21) or (4.28) depends on the magnitude of fi \Gamma or
We should mention another intuitive approach, one-sided interpolation, in which
we only use grid points on the proper side of the interface in computing a limiting
value at the interface:
This approach does not make use of the interface relations (4.24), so we have to have
at least six points from each side in order to have a second order scheme. Note that
we can also use the least squares technique described in this section for one-sided
interpolation. This approach has been tested already. The weighted least squares
approach using the interface relations (4.24) appears superior in practice. It has the
following advantages:
ffl Fewer grid points are involved. When we make use of the interface relation,
compared to the one-sided interpolation, the number of grid points which are
involved is reduced roughly by half.
ffl Second order accuracy with smaller error constant. The grid points involved
in our approach are clustered around the point (X; Y ) on the interface, and
those which are closer to (X; Y ) have more influence than those which are
further away in our weighted least squares approach. We have smaller error
constant in the Taylor expansions compared to the one-sided interpolation.
The error constant can be as much as 8 - 27 times smaller as the one-sided
interpolation. In two dimensional computation, we can not take m and n to
be very large, to have a smaller error constant sometimes is as important as
to have a high order accurate method.
ffl Robust and smoother error distribution. We have a robust way in choosing
the grid points which are involved. The interpolation formulas (4.21) and
depend continuously on the location (X; Y ) and the grid points
and so does the truncation error for these two interpolation schemes. In other
words, we will have a smooth error distribution. This is very important in
moving interface problems where we do no want to introduce any non-physical
oscillations.
A FAST ALGORITHM FOR INTERFACE PROBLEMS 11
downs. In one-sided interpolation, sometimes we can not find enough
grid points in one particular side of the interface, then the one-sided interpolation
will break down. In our approach, every grid point on one side is
connected to the other by the interface relations (4.24). So no break down will
occur.
ffl Trade off or disadvantages. The only trade off of our weighted least squares
approach is that we have to solve a under-determined 6 by p linear system of
equation (where p - instead of solving one that is 6 by 6. The larger ff is,
the more computational cost in solving (4.26). Fortunately, the linear system
has full row rank and can be solved by the LR-RU method [8] or other efficient
least squares solvers.
5. Some details in implementation. The main process of our algorithm is to
solve the Schur complement system (3.18) using the GMRES method with an initial
guess
G (0)
We need to derive the right hand side, and explain how to compute the matrix-vector
multiplication of the system without explicitly forming the coefficient matrix. The
right hand side needs to be computed just once which is described below.
5.1. Computing the right hand side of the Schur complement system.
If we take apply one step of the immersed interface method to solve
Problem (II) to get U(0), then
With the knowledge of U(0) and G = 0; we can compute the normal derivatives on
each side of the interface to get U \Sigma
using the approach described in the previous
section. Thus the right hand side of the Schur complement system is
F
The last two equalities are obtained from (3.16) and (3.18) with Now we are
able to compute the right hand side of the Schur complement system.
5.2. Computing the matrix-vector multiplication of the Schur comple-
ment. Now consider the matrix-vector multiplication
(D
of the Schur complement, where Q is an arbitrary vector of dimension n b . This involves
essentially two steps.
1. A fast Poisson solver for computing
which is the solution of Problem (II) with
Z. LI
2. The weighted least squares interpolation to compute U
The residual vector in the flux jump condition is
which is the same residual vector of the second equation in (3.17) from our definition.
In other words, see also (3.16)
The matrix-vector multiplication (5.29) then can be computed from the last equality
of the following derivation:
F from (5.30)
V from (5.32):
Note that from the second line to the third line we have used the following
which is defined in (3.18).
It worth to point out that once our algorithm is successfully terminated, which
means that the residual vector is close to the zero vector, we not only have an approximation
Q to the solution G , an approximation U(Q) to the solution U , bult also
approximations U \Sigma
n (Q) to the normal derivatives from each side of the interface. The
normal derivative information is very useful for some moving interface problems where
the velocity of the interface depends on the normal derivative of the pressure.
6. Convergence Analysis. As to this point, we have had a complete algorithm
for solving the original elliptic equations of the form Problem (I). We have transformed
the original elliptic equation to a corresponding Poisson equation with different source
term and jump conditions, or internal boundary conditions, (2.9b) and (2.11). The
jump condition (2.11) is Neumann-like boundary condition which involves the normal
derivatives from each side of the interface. In our algorithm, the classical five-point
difference scheme at regular grid points is used. This discetization is second order
accurate. As discussed in Section 4, the Neumann-like internal boundary condition
(2.11) is also discretized to second order. So from the analysis in Chapter 6 of [17] on
Neumann conditions, we should be able to conclude second order convergence globally
for our computed solution, provide that we can solve the Poisson Problem (II) to second
order accuracy. This is confirmed in our early work [2, 9]. Numerical experiments have
confirmed second order accuracy of the computed solution for numerous test problems,
see Section 8.
7. An efficient preconditioner for the Schur complement system. With
the algorithm described in previous sections, we are able to solve Problem I to second
order accuracy. In each iteration we need to solve a Poisson equation with a
modified right hand side. A fast Poisson solver such as a fast Fourier transformation
method etc. [19], can be used. The number of iterations of
A FAST ALGORITHM FOR INTERFACE PROBLEMS 13
the GMRES method depends on the condition number of the Schur complement. If
we make use of both (4.21) and (4.28) to compute U \Sigma
n , the condition number seems
to be proportional to 1=h. Therefore the number of iterations will grow linearly as we
increase the number of grid points.
Below we propose a modification in the way of computing U \Sigma
which seems to
improve the condition number of the Schur complement system dramatically.
If
n and
n are the exact solutions, that is
then we can solve
n or u
n in terms of v, It is easy to get
or
The idea is simple and intuitive. We use one of the formulas (4.21) or (4.28) obtained
from the weighted least squares interpolation to approximate
n or u
n , and then use
or (7.33) to approximate u
n or
n to force the solution to satisfy the flux jump
condition. This is actually an acceleration process, or a preconditioner for the Schur
complement system (3.18).
With this modification, the number of iterations for solving the Schur complement
system seems to be independent of the mesh size h, and almost independent of the
jump [fi] in the coefficient as well, see the next section for more details. Although
we have not been able to prove this claim, the algorithm seems to be extraordinary
successful.
Whether we use the pair (4.21) and (7.34) or the other (4.28) and (7.33) have only
a little affect on the accuracy of the computed solutions and the number of iterations.
The algorithm otherwise behaves the same and the analysis in the next section seems
to be true no mater what pair we choose.
We have been using the following criteria to choose the desired pair
n is determined by (4.28)
n is determined by (4.21)
which seems always better than the choice of the other way around.
8. Numerical Experiments. We have done many numerical experiments with
different interfaces and various test functions. Since our scheme can handle jumps in
the solution, we have great flexibility in choosing test problems. From the numerical
tests we intend to determine:
ffl The accuracy of computed solutions. Are they second order accurate?
14 Z. LI
ffl The numbers of iterations as we change the mesh size h and the ratio of the
discontinuous coefficient,
ffl The ability of the algorithm to deal with complicated interfaces and large
jumps in the coefficient.
All the experiments are computed with double precision. The computational parameters
include:
Computational rectangle [a; b] \Theta [c; d].
ffl The number of grid points m and n in the x- and y- directions respectively,
we assume that is the mesh size.
ffl The number of control points n b . The interface is expressed in terms of cubic
splines passing through the control points.
ffl The parameter ff in the weighted least squares interpolation. We take
specified differently.
The maximum norm is used to measure the errors in the computed solution U ij ,
and the normal derivatives U \Gamma
n p from each side of the interface at the p-th
control point. The relative errors are defined as follows
where ~
is one of control points on the interface. Each grid point is labeled
as either in the
or the
outside\Omega + of the interface and the exact solution is
determined accordingly. In other words, the exact solution is not determined from the
exact interface but the discrete one. In Table 1, r i , 3 is the ratio of successive
errors. A ratio of 4 corresponds to second-order accuracy. In Table 1, k is the number
of iterations required in solving the Schur complement system (3.18). The ratio of
coefficients is defined . In the figures, we use S to express the
slopes of least squares line of experimental data (log(h i ); log(E i )).
Example 1. Consider the following interface
where
within the computational domain Fig 1 shows some interfaces with
different parameters r 0 , . Dirichlet boundary conditions, as well as
the jump conditions [u] and [fiu n ] along the interface, are determined from the exact
A FAST ALGORITHM FOR INTERFACE PROBLEMS 15
solution
if (x; y)
r 4
if (x; y)
. The source term can be determined accordingly:
if (x; y)
if (x; y)
We provide numerical results for three typical cases below.
Case A. The interface parameters are chosen as r
the interface is a circle centered at the origin, see Fig 2(a). With C
the solution is continuous everywhere, but u n and fiu n are discontinuous across the
circle. It is easy to verify that [fiu n when we take C Fig 3(a) is the
plot of the solution \Gammau with
Case B. The interface parameters are chosen as r
20,
Fig 2(a). We shift the center of the interface a little bit to have a non-symmetric
solution relative to the grid. We want our test problems to be as general
as possible. The interface is irregular but the curvature has modest magnitude. So
with a reasonable number of points on the interface, we can express it well. Now it is
almost impossible to find an exact solution which is continuous but not smooth across
the interface, so we simply set C Fig 3(b) is the plot of the
solution \Gammau with
Case C. The interface parameters are chosen as r
20,
Fig 2(b). Now the magnitude of the curvature is very large at some
points on the interface and we have to have enough control points to resolve it. The
solution parameters are set to be the same as in Case B.
Fig 4-6 and Table 1 are some plots and data from the computed solutions which
we will analyze below.
8.1. Accuracy. Table 1 shows the results of grid refinement analysis for Case
A with two very different ratio 0:5, the ratio r i
are very close to 4 indicating second order convergence. With
the error in the solution drops much more rapidly. This is because the solution in
approaches a constant as fi becomes large, and it is quadratic
order accurate method would give high accurate solution in both regions. So it is not
surprising to see the ratio r 1 is much larger than 4. For the normal derivatives, we
expect second order accuracy again since fi
n is not quadratic and has magnitude of
O(1). This agrees with the results r 2 and r 3 in Table 1.
In Fig 4 we consider the opposite case when fi . In this case
the solution is not quadratic so we see the expected second order accuracy. Fig 4(a)
Z. LI
(a)
-0.4
A
(b)
-0.4
Fig. 2. Different interfaces of Example 1. (a) Case A and B. (b) Case C.
(a)
-0.4
-0.3
-0.2
-0.4
-0.3
-0.2
-0.1Fig. 3. The solutions \Gammau of Example 1 with 1. (a) Case A, a circular
interface where the solution is continuous but [fiun Case B, an irregular interface where
both the solution and the flux [fiun ] are discontinuous.
Table
Numerical results and convergence analysis for Case A with
A FAST ALGORITHM FOR INTERFACE PROBLEMS 17
(a)
(b)
-5
log(h)
log(E
Fig. 4. (a): Error distribution for Case A. (b): Errors E i vs the mesh size h in log-log scale for
Case A with
(a)
log(h)
log(E
(b)
-6.4 -6.2 -6 -5.8 -5.6 -5.4 -5.2 -5 -4.8
-5
log(h)
log(E)
Fig. 5. Errors E i vs the mesh size h in log-log scale for Case B with
The solid line: n
. The dotted line: n (b) The solid line and
dotted line are the same as in (a) but on a different scale. The dash-dotted line is the result obtained
with
log(h)
log(E
2:71 2:07
Fig. 6. Errors E i vs the mesh size h in log-log scale for Case C with 1. The
solid line: fixed n b (n 520). The dotted line: n
Z. LI
plots the error distribution over the region. The error seems to change continuously
even though the maximum error occurs on or near the interface. Usually if the curvature
is very big in some part of an interface, for example, near a corner, then we would
observe large errors over the neighborhood of that part of the interface.
For interface problems, the errors usually do not decrease monotonously as we
refine the grid unless the interface is aligned with one of the axes. We need to study the
asymptotic convergence rate which is usually defined as the slope of the least squares
line of the experimental data (log(h i ); log(E i )). Fig 4(b) plots the errors versus the
mesh size h in log-log scale for the case n. The asymptotic convergence rate is
about 2:62 compared to 2 for a second order method. As h gets smaller we can see
the curves for the errors become flatter indicating the asymptotic convergence rate will
approach 2.
The dotted curves in Fig 5 and Fig 6 are the results for case B and C, where
the interfaces are more complicated compared to case A. Again we take
The asymptotic convergence rates are far more than two. Such behavior can also be
observed from Example 4 in [16]. Does it mean that our method is better than second
order? Certainly this is not true from our discretization. Below we explain what is
happening.
For interface problems, the errors depend on the solution u(x; y), the mesh size h,
the jump in the coefficient [fi], the interface \Gamma and its relative position to the grid, and
the maximal distance between control points on the interface, h b . We can write the
error in the solution, for example, as follows
The first term in the right hand side of (8.4) is the error from the discretization of
the differential equation. The term C (u; h; h b ; [fi]; \Gamma) has magnitude O(1) but does
not necessarily approach to a constant. The second term in the right hand side of
is the error from the discretization of the interface \Gamma. If we use a cubic spline
interpolation, then q ? 2. For Case A, the interface is well expressed and the first term
in (8.4) is dominant, we have clearly second order convergence. For Case B and C,
the interfaces are more complicated and the second term in (8.4) is dominant. That is
why we have higher than second order convergence. Eventually, the error in the first
term will dominate and we will then observe second order convergence.
To further verify the arguments above, we did some tests with fixed number of
control points n b . For example, we take n the solid line in
Fig 6. Presumably the interface is expressed well enough and the second term in (8.4)
is negligible. We see the slopes of the least squares line of the errors E 1 and E 2 are
2:15 and 2:07 respectively indicating second order convergence. Usually
the error in the normal derivatives
n and u
behaves the same, so we only need to
study one of them. If we let n b change with the same speed as the number of grids m
and n, then the second term in (8.4) is dominant and the slopes of the least squares
line of the errors E 1 and E 2 are 2:71 and 2:69 respectively. Once n b is large enough,
the first term will dominate in (8.4) and the error will decrease quadratically. This
can also be seen roughly from Fig 6. Note that the errors oscillate as n gets large
whether we fix n b or not. But the fluctuation becomes smaller as we refine the grid.
The upper envelop of E 1 behaves the same as the least squares line of the experimental
data (log(h i ); log(E i )). So it is reasonable to use the asymptotic convergence rate to
discuss the accuracy when errors do not behave monotonously.
A FAST ALGORITHM FOR INTERFACE PROBLEMS 19
As another test, we let n b change slower than m and n. The solid lines in Fig 5(a)
are obtained with
Now we have roughly
and the errors decrease quadratically with the mesh size h, but not h b . The
slopes of the least squares line of the errors E 1 and E 2 are 2:23 and 2:22 respectively.
We now discuss the effect of the different choice of the parameter ff in the least
squares approximation described in Section 4 on the solution. Most of the computations
are done with Fig 5(b), the dash-dotted line where
As we can expect, the smaller ff is, the higher accuracy in the computed solution because
the points involved are clustered together and the error in the Taylor expansion
will be smaller. However, the smaller ff is, the more oscillatory in the error as we
refine the grid. For larger ff, the computation cost increases quickly, but the error
behaves much smoother with the mesh size h. Usually we can take small ff for smooth
interfaces, and larger ff if we want a smoother error distribution for more complicated
interfaces.
8.2. The number of iterations versus the mesh size h. Fig 7(a), also see
Fig 10(a) for Example 2, shows the number of iterations versus the number of grid
points m and n for case A, B, and C. It is not surprising to see that the number of
iterations depends on the shape of the interface. The number of iterations required for
Case C is larger than that for Case A and B. But it is wonderful to see that the number
of iterations is almost independent of the mesh size h. For Case A, where the interface
is a circle, we only need about iterations for all choices of the mesh size h for two
extreme cases We will see in the next paragraph that this is also
true for different choices of the ratio . Note that the number of iterations is
about two or three fewer than the numbers of calls of the fast Poisson solver. We need
two or three of them for initial set up of the Schur complement system. In Fig 7(a),
the lowest curve corresponds to case A with the lowest but
the second curve corresponds to 1. For case B, the number of
iterations required is about 17 - 21 for respectively. For case C,
the most complicated interface, the number of iterations is about 46 with reasonable
number of control points on the interface for
8.3. The number of iterations versus the jump ratio Fig 7(b),
also see Fig 10(b) for Example 2 , plots the number of iterations versus the jump ratio
ae in log-log scale with fixed number of grids goes away from
the unit we have larger jump relatively in the coefficient. The number of iterations
increases proportional to jlog(ae)j when ae is small but soon reaches a point after which
the number of iterations will remain as a constant. Such points depends on the shape
of the interface. For Case A, it requires only about 5 - 6 iterations at the most for
iterations for ae ? 1 in solving the Schur complement system
using the GMRES method. For Case B, the numbers are about 17 - 22. For Case C,
the most complicated interface in our examples, the numbers are about 47 - 69. As
we mentioned in the previous paragraph, also see Fig 7(a), for Case C, with only 160
control points we can not express the complicated interface Fig 2(b) very well. If we
take more control points on the interface, then the number of iterations will be about
Z. LI
(a)
A
(b)
A
6 91769
Fig. 7. The number of iterations for Example 1 with vs the number of grids
n. Case A: lower curve, Case B: lower curve,
the ratio of
jumps in log-log scale with
when
Example 2. This geometry of this example is adapted from Problem 3 of [1]. The
solution domain is the [\Gamma1; 1] \Theta [0; 3] rectangle and the interface is determined by
Fig 8(a) show the solution domain and the interface \Gamma with x
Again Dirichlet boundary condition, as well as the jump conditions [u] and [fiu n ] are
determined from the exact solution
The source term can be determined accordingly:
Fig 8(b) is a plot of the computed solution.
This example is different from Example 1 in several ways. The solution is independent
of the coefficient fi. But the magnitude of the jump [fiu n ] and the source
increase with the magnitude of the jump [fi]. However we have observed
similar behaviors in the numerical results as we discussed in Example 1. Example 1
and Example 2 are two extreme samples of elliptic interface problems. So we should
be able to get some insights about the method proposed in this paper.
Fig 9 shows errors E i versus mesh size h in log-log scale with different choice of
b . In Fig 9(a), . The solid lines correspond to a fixed discretization
of the interface, n As we expected, the asymptotic convergence rate for
are 2:1272. They are all close to 2 indicating
A FAST ALGORITHM FOR INTERFACE PROBLEMS 21
(a)
(b)0.51.52.5-11
y
x
Fig. 8. (a) The interface of Example 2. (b) The solution of Example 2.
second order accuracy. The dotted line in Fig 9(a) correspond to a variable n b which
changes in the same rate as the number of grid point m in x-direction. The asymptotic
convergence rate of E i for are S 3:3473. They are
Fig 9(b).
These numbers are all larger than 2 similar to the cases we saw in Fig 5, and Fig 6.
We have explained such phenomena already.
Fig 10(a) plots the number of iteration versus the number of grids n with
Again we consider two extreme cases,
with Once the interface is well expressed somewhere after n ? 180, the
number of iteration will slightly decrease to a constant which is about 28 for
and 34 for Fig 10(b) plots the number of iteration versus the ratio ae with
fixed grid
ae ? 1. We observe the same behavior as in Fig 7(b). Initially the number of iterations
increases proportional to jlog(ae)j as ae goes away from the unit, but it soon approaches
a constant which is about 28 for ae ! 1 and 34 for ae ? 1.
(a)
-6.2 -6 -5.8 -5.6 -5.4 -5.2 -5 -4.8 -4.6
log(h)
3:35
2:14
(b)
-6 -5.9 -5.8 -5.7 -5.6 -5.5 -5.4 -5.3 -5.2 -5.1
log(E
Fig. 9. Errors E i vs the mesh size h in log-log scale for Example 2 with
1: The solid line: fixed n b , dotted line: n
Summary of the numerical experiments. In our computations, the largest
error usually occurs at those points which are close to the part of the interface which
22 Z. LI
(a)
(b)
)28Fig. 10. The number of iterations for Example 2 with vs the number of
grids n. Lower curve: vs the ratio
of jumps in log-log scale with fixed grid
has large curvature. Depending on the shape of the interface, we should take enough
control points on the interface so the error in expressing the interface does not dominate
the global error. However, once such a critical number is decided, we do not need to
double it as we double the number of grid points, which saves some computational cost.
We should still be able to maintain second order accuracy. The number of iteration for
solving the Schur complement system using a GMRES method is almost independent
of both the mesh size h as well as the jump in the coefficient.
9. Conclusions. We have developed a second-order accurate fast algorithm for
a type of elliptic interface problems with large jumps in the coefficient across some
irregular interface. We precondition the original partial differential equation to obtain
an equivalent Poisson problem with different source terms and a Neumann-like interface
condition. The fast Poisson solver proposed in [2, 9] can be employed to solve
the Schur complement system for the intermediate unknown, the jump in the normal
derivative along the interface. Then we proposed a preconditioning technique for the
Schur complement system which seems to be very successful. Numerical tests revealed
that the number of iterations in solving the Schur complement system is independent
of both the mesh size h and the jump in the coefficient, though we have not proved this
strictly in theory. The idea introduced in this paper might be applicable to other related
problem, for example, to domain decomposition techniques. A new least squares
approach to approximate interface quantities from a grid function is also proposed. By
analyzing the numerical experiments, we have discussed some issues in error analysis
involving interfaces.
There is still a lot of room for improving the method described in this paper. For
example, we have used cubic spline interpolations for closed interfaces. There are some
advantages of this approach. But large errors can occur at the connection of the first
and the last control points when we try to make the curve closed. That might also
be one of reasons why the error does not decrease monotonously. As an alternative, a
level set formulation is under investigation.
The next project following this paper is to study the case with variable coefficients.
A FAST ALGORITHM FOR INTERFACE PROBLEMS 23
We can rewrite (1.1) either as (2.8a) or
r
if x
r
f
if x
are the averages of the coefficients fi from each side of the interface.
Whether (2.8a) or (9.5) is used, we shall still introduce an intermediate unknown, the
jump in the normal derivative across the interface if the jump condition is given in the
form of [fiu n ]. In this way, the coefficients of the difference scheme would be very close
to those obtained form the classical five-point stencil. We can not take advantage of
the fast Poisson solvers for variable coefficient anymore, but we can make use of the
multi-grid method developed by L. Adams in [1].
10.
Acknowledgments
. It is my pleasure to acknowledge the encouragements
and advice from various people including Prof. Randy LeVeque, Stanley Osher, Tony
Chan, Loyce Adams, Jun Zou and Barry Merriman. Thanks also to Prof. Yousef Saad
and Dr. Victor Eijkhout for helping me to implement and understand the GMRES
method.
--R
A multigrid algorithm for immersed interface problems.
The immersed interface method for elliptic equations with discontinuous coefficients and singular sources.
Simulation of bubbles in creeping flow using the immersed interface method.
Immersed interface methods for wave equations with discontinuous coefficients.
Finite difference methods for wave equations with discontinuous coefficients.
Immersed interface method for Stokes flow with elastic boundaries or surface tension.
A note on immersed interface methods for three dimensional elliptic equations.
Uniform treatment of linear systems - algorithm and numerical stability
The Immersed Interface Method - A Numerical Approach for Partial Differential Equations with Interfaces
Immersed interface method for moving interface problems.
ADI methods for heat equations with discontinuties along an arbitrary interface.
On the rapid evaluation of heat potentials on general regions.
The fast solution of Poisson's and the biharmonic equations on irregular regions.
The rapid evaluation of Volume
Fast parallel iterative solution of Poisson's and the biharmonic equations on irregular regions.
A fast poisson solver for complex geometries.
Numerical Solution of Partial Differential Equations.
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems.
Fast Poisson solver.
Computing some inverse problems.
--TR
--CTR
Kazufumi Ito , Zhilin Li, Solving a Nonlinear Problem in Magneto-Rheological Fluids Using the Immersed Interface Method, Journal of Scientific Computing, v.19 n.1-3, p.253-266, December
Songming Hou , Xu-Dong Liu, A numerical method for solving variable coefficient elliptic equation with interfaces, Journal of Computational Physics, v.202 n.2, p.411-445, 20 January 2005
M. Oevermann , R. Klein, A Cartesian grid finite volume method for elliptic equations with variable coefficients and embedded interfaces, Journal of Computational Physics, v.219 n.2, p.749-769, December, 2006
Petter Andreas Berthelsen, A decomposed immersed interface method for variable coefficient elliptic equations with non-smooth and discontinuous solutions, Journal of Computational Physics, v.197 n.1, p.364-386, 10 June 2004
Shaozhong Deng , Kazufumi Ito , Zhilin Li, Three-dimensional elliptic solvers for interface problems and applications, Journal of Computational Physics, v.184 n.1, p.215-243,
Peter Schwartz , Michael Barad , Phillip Colella , Terry Ligocki, A Cartesian grid embedded boundary method for the heat equation and Poisson's equation in three dimensions, Journal of Computational Physics, v.211 n.2, p.531-550, 20 January 2006
Xu-Dong Liu , Thomas C. Sideris, Convergence of the ghost fluid method for elliptic equations with interfaces, Mathematics of Computation, v.72 n.244, p.1731-1746, October
Shi Jin , Xuelei Wang, Robust numerical simulation of porosity evolution in chemical vapor infiltration: II. Two-dimensional anisotropic fronts, Journal of Computational Physics, v.179 n.2, p.557-577, July 2002
Ming-Chih Lai , Zhilin Li , Xiaobiao Lin, Fast solvers for 3D Poisson equations involving interfaces in a finite or the infinite domain, Journal of Computational and Applied Mathematics, v.191 n.1, p.106-125, 15 June 2006
Do Wan Kim , Young-Cheol Yoon , Wing Kam Liu , Ted Belytschko, Extrinsic meshfree approximation using asymptotic expansion for interfacial discontinuity of derivative, Journal of Computational Physics, v.221 n.1, p.370-394, January, 2007
I. Klapper , T. Shaw, A large jump asymptotic framework for solving elliptic and parabolic equations with interfaces and strong coefficient discontinuities, Applied Numerical Mathematics, v.57 n.5-7, p.657-671, May, 2007
Carlos J. Garca-Cervera , Zydrunas Gimbutas , Weinan E., Accurate numerical methods for micromagnetics simulations with general geometries, Journal of Computational Physics, v.184 n.1, p.37-52,
B. P. Lamichhane , B. I. Wohlmuth, Mortar finite elements for interface problems, Computing, v.72 n.3-4, p.333-348, May 2004
Chohong Min , Frdric Gibou , Hector D. Ceniceros, A supra-convergent finite difference scheme for the variable coefficient Poisson equation on non-graded grids, Journal of Computational Physics, v.218 n.1, p.123-140, 10 October 2006
Frederic Gibou , Ronald P. Fedkiw , Li-Tien Cheng , Myungjoo Kang, A second-order-accurate symmetric discretization of the Poisson equation on irregular domains, Journal of Computational Physics, v.176 n.1, p.205-227, February 10, 2002
John K. Hunter , Zhilin Li , Hongkai Zhao, Reactive autophobic spreading of drops, Journal of Computational Physics, v.183 n.2, p.335-366, December 10
Y. C. Zhou , G. W. Wei, On the fictitious-domain and interpolation formulations of the matched interface and boundary (MIB) method, Journal of Computational Physics, v.219 n.1, p.228-246, 20 November 2006
Y. C. Zhou , Shan Zhao , Michael Feig , G. W. Wei, High order matched interface and boundary method for elliptic equations with discotinuous coefficients and singular sources, Journal of Computational Physics, v.213 n.1, p.1-30, 20 March 2006
Xiaolin Zhong, A new high-order immersed interface method for solving elliptic equations with imbedded interface of discontinuity, Journal of Computational Physics, v.225 n.1, p.1066-1099, July, 2007
George Biros , Lexing Ying , Denis Zorin, A fast solver for the Stokes equations with distributed forces in complex geometries, Journal of Computational Physics, v.193 n.1, p.317-348, January 2004
Sining Yu , Yongcheng Zhou , G. W. Wei, Matched interface and boundary (MIB) method for elliptic problems with sharp-edged interfaces, Journal of Computational Physics, v.224 n.2, p.729-756, June, 2007 | GMRES method;immersed interface method;discontinuous coefficients;elliptic equation;schur complement;preconditioning;cartesian grid |
275961 | Inner and Outer Iterations for the Chebyshev Algorithm. | We analyze the preconditioned Chebyshev iteration in which at each step the linear system involving the preconditioner is solved inexactly by an inner iteration. We allow the tolerance used in the inner iteration to decrease from one outer iteration to the next. When the tolerance converges to zero, the asymptotic convergence rate is the same as for the exact method. Motivated by this result, we seek the sequence of tolerance values that yields the lowest cost to achieve a specified accuracy. We find that among all sequences of slowly varying tolerances, a constant one is optimal. Numerical calculations that verify our results are presented. Asymptotic methods, such as the W.K. B. method for linear recurrence equations, are used with an estimate of the accuracy of the asymptotic result. | Introduction
The Chebyshev iterative algorithm [1] for solving linear systems of equations often requires
at each step the solution of a subproblem i.e. the solution of another linear system. We
assume that the subproblem is also solved iteratively by an "inner iteration". The term
"outer iteration" refers to a step of the basic algorithm. The cost of performing an outer
iteration is dominated by the cost of solving the subproblem, and it can be measured by the
number of inner iterations. A good measure of the total amount of work needed to solve
the original problem to some accuracy ffl is then, the total number of inner iterations. To
reduce the amount of work, one can consider solving the subproblems "inexactly" i.e. not
to full accuracy. Although this diminishes the cost of solving each subproblem, it usually
slows down the convergence of the outer iteration.
It is therefore interesting to study the effect of solving each subproblem inexactly on the
performance of the algorithm. We consider two measures of performance: the asymptotic
convergence rate and the total amount of work required to achieve a given accuracy ffl.
The accuracy to which the inner problem is solved may change from one outer iteration
to the next. First, we evaluate the asymptotic convergence rate when the tolerance values
converge to 0. Then, we seek the "optimal strategy", that is, the sequence of tolerance
values that yields the lowest possible cost for a given ffl.
The present results, contained in Giladi [2], extend those of Giladi [3]. The asymptotic
convergence rate of the inexact Chebyshev iteration, with a fixed tolerance for the inner
iteration, was derived in Golub and Overton [4] (see also [5], [6], [7], [8], [9], [10]). Previous
work has mainly concentrated on the convergence rate, whereas we emphasize the cost of
the algorithm.
In section 2, we review the Chebyshev method and present the basic error bound for
the inexact algorithm. Then, in section 3 we evaluate the asymptotic convergence rate
when the sequence of tolerance values gradually decreases to j - 0. In section 4 we seek
the "best strategy" i.e the one that yields the lowest possible cost. In section 5, we obtain
an asymptotic approximation for the error bound when the sequence of tolerance values
is slowly varying. In section 6 we analyze the error in this asymptotic approximation and
present a few numerical calculations that demonstrate it's accuracy. In section 7 we use
the analysis of section 5, to show that for the Chebyshev iteration, the optimal strategy is
constant tolerance. We also estimate the optimal constant. Then, in section 8 we present
a few numerical calculations that demonstrate the accuracy of the analysis of section 7. In
Section 9, we generalize this result to other iterative schemes.
iteration
Chebyshev iteration (see Manteuffel [11]) to solve the real n \Theta n system of linear equations
uses the splitting
It requires that the spectrum of M \Gamma1 A be contained in an ellipse, symmetric about the
real axis, in the open right half of the complex plane. We denote the foci of such an ellipse
by l and u. Furthermore, we assume that M \Gamma1 A is diagonalizable.
The exact Chebyshev method is defined by
where
c k+1 (-)
In (3), the initial iterate x 0 is given, and in (7), c k denotes the Chebyshev polynomial of
degree k.
The inexact Chebyshev method is obtained by solving (5) iteratively for z k . This results
in replacing (5) by
In the variable strategy scheme the tolerance ffi k tends to j - 0 as k increases, while in the
constant strategy scheme
is constant.
We denote the error at step k by
We also define K, V , \Sigma and oe j by
We use the same derivation as in [4] to show that when -oe
where ffi represents a sequence of tolerance values fffi k g 1
. In equation (11), ae is defined by
The function -(k; ffi) satisfies the recurrence equation
with initial conditions
The constant \Delta in (13) is given by
ae
The bound (11) is the product of two terms: ae k
and -(k; ffi). The former
is the bound for the exact algorithm and it is exponentially decaying. The latter is a
monotonically increasing term which accounts for the accumulation of errors introduced by
solving the inner problem inexactly. We shall obtain asymptotic approximations to -(k; ffi)
under various assumptions on the sequence ffi k in order to analyze the performance of the
inexact algorithm.
3 Asymptotic convergence rate
We shall now estimate the asymptotic convergence rate of the inexact Chebyshev algorithm
when the sequence of tolerance values for the inner iteration gradually decreases to 0. Our
goal is to show that then, the asymptotic convergence rate of the inexact algorithm is the
same as that of the exact scheme. This is in contrast to the case of constant tolerance for
which the asymptotic convergence rate of the inexact algorithm is lower than that of the
exact algorithm [4].
We base our analysis on the bound (11). Therefore, we wish to compute
ae k -(k; ffi)
In order to do so, we need to estimate the asymptotic behavior for large k of -(k; ffi). By
making mild assumptions on the rate at which ffi k ! 0, we will show that
lim
Upon using (17) in (16), we find that the asymptotic convergence rate of the algorithm is
lim
ae
where ae e is the asymptotic convergence rate of the exact algorithm.
Equation (17) holds for many sequences ffi k of tolerance values. In order to obtain a
general result, we shall assume only that
The positive constant C in (19) is arbitrary. Hence, if C AE 1 the sequence of tolerance
values can decay quite slowly.
We show that (17) holds under assumption (19) in two steps. First we show that -(k; ffi)
in (13), is bounded by the function oe(k; - ffi), where oe(k; -
replaced by
. Then, we show that lim k!1 oe(k; - ffi) 1=k = 1. As a first step, we prove the following
proposition
Proposition 1 Let -(k; ffi) be a solution to (13) and (14) and let oe(k; -
ffi) be the solution to
the same equation with replaced by -
. Assume that - and that oe(0; -
Then,
for all k.
We prove this proposition by induction. For we obtain from (14) that
Then, we assume that assertions (20) and (21) are true for all In view of
and
By the induction hypothesis, oe(N; -
-(N; ffi). Furthermore, -
so the right side of (23) is greater than or equal to the right
side of (24). We conclude that
and that oe(N
We shall now obtain the asymptotic behavior of oe(k; ffi) for large k from (13) with
. We use the method of [12].
We first replace -(k; ffi) by oe(k; -
ffi) in (13) and set -
C=k. Then we introduce the
stretched variables
to obtain
We seek for R(x) an asymptotic approximation valid for ffl - 1 of the form
The functions /(x), K are to be determined so that R(x) satisfies equation
(27). The constant c(ffl) is to be determined so that R(x) is independent of ffl. After
substituting (28) into (27), we express each side of the resulting expression in power series
in ffl 1=2 assuming that /(x 2ffl). can be expanded in
Taylor series in powers of ffl. Then, we equate the coefficients of each power of ffl 1=2 on the left
side of the resulting expression, to the same power of ffl 1=2 on the right side. The coefficients
of ffl and of ffl 3=2 , yield the following equations for /(x; ffi) and K 0 (x; ffi) respectively:
x
x
Upon solving (29) for / we find
2C \Deltax: (31)
Introducing the right side of (31) into (30) and solving the resulting equation for K 0 we
obtain
To find the constant D in (32) we could match (28) to another expansion which satisfies
the initial conditions (14). However, the value of D is unimportant for our purposes since
We substitute (31) for / and (32) for K 0 into (28) for R. Then, we use the change of
variables (26) to obtain
To make the right side of (33) independent of ffl, we require that and we obtain
Therefore,
lim
In a realistic numerical computation ffi k is bounded below by the machine precision j 0 .
Moreover, the analysis of the iteration with sufficiently small,
the performance of the inexact algorithm is for all practical purposes indistinguishable from
that of the exact algorithm. Indeed, solving (13) with
where
It follows from (16), (18) and (36) that the asymptotic convergence rate is
ae e e OE( -
The number N(ffl; - ffi) of outer iterations required to achieve an accuracy ffl with tolerance - ffi
is approximately
log ffl
log
'e:
Hence, if
the inexact scheme requires no more than one more iteration
per thousand than the exact scheme. The difference is undetectable when N(ffl;
This leads us to evaluate the asymptotic convergence rate when To
obtain the behavior of oe(k; ffi) in (13) for large k, when
into (13) to obtain (27) and seek an expansion for R(x) of the form
We introduce (40) into (27) to obtain, after some manipulation, equations for / and K
x
Then, we solve (41) and (42) and substitute the results into (40) to obtain, with \Phi(j)
defined in (37), and D a constant
Hence,
lim
In view of (38) and (44) the asymptotic convergence rate is the same as that with
The results (34) and (43) of this formal analysis can be made rigorous. We summarize
the above analysis in the following theorem:
Theorem 1 Assume that a linear system of equations is solved to accuracy ffl, using the
Chebyshev iteration, with a variable strategy fffi k g. Assume that
that positive constant C. Then, the asymptotic convergence
rate of the Chebyshev iteration with the variable tolerance is the same as the asymptotic
convergence rate of the scheme with the fixed tolerance j.
4 The optimal strategy problem
Motivated by the result of section 3, we now wish to find the "best" sequence of tolerance
values for the inner iterations. More precisely, we seek the sequence of tolerances that
yields the lowest possible cost for the algorithm.
To formalize this problem, we let
, be a sequence of tolerance values. The
jth component of ffi, is the tolerance, required in the solution of the subproblem at outer
iteration j. Therefore and the number of inner iterations at step j is d \Gamma log
e.
In this estimate, ae is the convergence factor of the method which is used in the solution of
the subproblem. Then, we define N(ffl;ffi) to be the number of outer iterations needed to
reduce the initial error by a factor ffl when the problem is solved with strategy ffi: It follows
that the total number of inner iterations required to achieve this accuracy ffl is proportional
to
log
Our objective is to minimize C(ffl; ffi ) with respect to ffi.
We consider the set S of slowly varying strategies
In (46), the function ffi(x) is assumed to be twice continuously differentiable and ffi 0 denotes
it's derivative. The condition
ensures that ffi(fik) varies slowly as a function of k
if fi - 1.
In order to simplify the analysis, we use the fact that
log
Z N(ffl;ffi)log ffi(fit)dt;
and redefine the cost as
Z N(ffl;ffi)log ffi(fit)dt: (47)
We can now restate the problem as follows. Find ffi 2 S such that
5 Error bound for slowly varying strategies
Now we shall approximate the error bound (11), under the assumption that ffi 2 S. First,
we obtain an asymptotic approximation for -(k; ffi), valid for fi - 1. To emphasize the fact
that -(k; ffi) depends on fi, we denote it -(k; ffi; fi).
To simplify the analysis we assume that the function ffi(x) is constant on [0; fi]. This
assumption is not very restrictive since it requires only that we change the value of ffi 0 to
equal . Moreover, since ffi k is slowly varying the impact of this change on the cost is
negligible.
The method we use is similar to the W.K.B method [13] for linear ordinary differential
equations with a small parameter, and the ray method Keller [14] for linear partial differential
equations with a small parameter. These methods have recently been adapted to
linear difference equations with small parameters [12], [15].
We now obtain an approximate solution to equation (13) when belongs to
S. Since we are looking for an asymptotic expansion of -(k; ffi; fi) for small fi, we introduce
the new scaled variables
Upon performing the change of variables (49) in (13), we obtain
We seek an asymptotic expression for R(x; ffi; fi) for small fi, in the form
The functions /(x; ffi), K(x; ffi), K 1 are to be determined to make R satisfy (50).
Substitution of (51) into (50), and multiplication by e \Gamma/=fi yields
e
We now express each side of (52) in powers of fi, assuming that /(x
etc. can be expanded in Taylor series in powers of fi. Then, we equate
coefficients of powers of fi. The coefficients of fi 0 and of fi 1 yield
tanh
Solving (53) for / x yields
with \Phi(ffi) given by (37). Integrating (55) yields, with a a constant of integration
We now rewrite (54) as
cosh / x
sinh / x
Integrating (57), with b a constant of integration, gives
Now, we use expression (55) for / x in (58) to obtain
To obtain the leading order term in -(k; ffi; fi), we substitute the two values (56) for /
into (51) for R and add the two terms. Then, we use the result in (49) and set x j fik to
find
R fik\Phi(ffi(t))dt
Here \Phi(ffi) is defined in (37) and K(x; ffi) is given by (59). The constants A and B are
determined to make (60) satisfy the initial conditions (14):
R fi
R fik\Phi(ffi(t))dt
Since ffi(x) is constant on [0; fi], (59) shows that K(0;
R fi
\Phi(ffi(0)). We substitute (61) into (60) to obtain, after some manipulation,
sinh
Z fik\Phi(ffi(t))dt
R fik\Phi(ffi(t))dt
When ffi is constant, (56) implies that / x is constant and (59) shows that K is also
constant. Hence, (62) simplifies to the exact solution (36) of (13) and (14) when ffi is
a constant.
The exponentially decaying term in (62) can be neglected after a few outer iterations.
Then we drop this term in (62) and introduce the function
sinh
Now, we approximate -(k; ffi) by oe(k; ffi), and the bound for the error in the right hand side
of (11) becomes
In the next section, we shall analyze the validity of the approximation (64).
6 Validity of the asymptotic expansion
Now we shall show that the leading order expression for -(k; ffi), given by (62), is indeed
asymptotic to -(k; ffi) as fi ! 0. We denote this expression by - (k; ffi) and define the residual
associated with it by r(k; ffi):
To evaluate r(k; ffi) we substitute (60) for -(k; ffi) into (65) and then expand / and K in
Taylor series, with remainders up to order fi 3 and fi 2 , respectively. We use (59) and (56)
in the resulting expression to obtain, after some manipulation,
Here and is independent of k and fi.
The error in the asymptotic approximation, e(k; ffi), satisfies equation (67), which
is obtained by subtracting (13) from (65). The initial conditions for e(k; ffi)
are
Our goal is to show that for any constant C and all k - C
To estimate the left side of (69), we obtain an explicit formula for e(k; ffi), by solving
(67) and (68). We use the method of reduction of order [13]. Specifically, we seek a solution
of the form
where -(k; ffi) is the solution to equation (13), (14) and x k is to be determined. Upon
substituting (70) into (69) we find that (69) will hold if
We obtain an expression for x k by substituting (70) for e(k; ffi) into (67). Then, we
eliminate from the resulting expression by using (13) and we find that
Now, we introduce
into (72) to obtain a linear first order equation for X k . The initial conditions (68) yield
The solution of (72) and (74) is
We take the absolute value of each side of (75) and use (66) to obtain
e
R k\Phi(ffi(fit))dt
Here \Phi(ffi) is defined in (37).
In Lemma 1 we shall show that
is bounded by a constant independent of
k and fi. In Lemma 2 we shall show that for a non-increasing strategy ffi(x) in S
e
where the constant P is independent of k and fi. We now use these bounds in the right
side of (76) and conclude that for all k - 1
where the constant C is independent of k and fi.
Equation (73) and the condition for x 1 in (74) determine x k through
To derive the bound (71) for jx k j, we take the absolute value of each side of (79) and use
(78) to obtain
We summarize the above analysis in the following theorem:
Theorem 2 Let -(k; ffi) satisfy (13) and (14). Let - (k; ffi; fi) be the expression on the right
side of (62). Assume that ffi(x) is a non-increasing strategy and that ffi(x) 2 S with S defined
in (46). Then, fi fi fi fi fi
Furthermore, the coefficient of fi 2 in (81) is bounded by a linear function of k.
We now briefly discuss the validity of the approximation (63). When
is constant,
(63) is exact up to an exponentially decaying term, and it is very accurate after a few
iterations. When ffi is not a constant, the approximation is based on (62), which is valid for
Therefore, the accuracy decreases as the number of outer iterations
k !1, and for a fixed k, increases as fi ! 0. At the end of this section we present a few
numerical calculations that demonstrate the accuracy of the expansion for a few variable
strategies in S. As we shall see, even for large values of k, it is very accurate.
Lemma 1 Inequalities (82) and (83) hold.
Proof: Inequality (82) is shown by induction. For it follows from initial conditions
(14). Now assume by induction that (82) holds for all 1. Then from (13)
By the induction hypothesis
We use (85) in (84) to complete the induction.
In order to prove (83), we recall from (46) that ffi k - j and we use this bound in (82)
to obtain
Furthermore, we note that
Y
We use (86) in (87) to obtain
It follows that
Inequality (83) follows from inequality (89).
Lemma 2 Let ffi(x) be a non-increasing strategy such that ffi(x) 2 S, with S defined in
(46). Then
e
R k\Phi(ffi(fit))dt
where the constant P is independent of k and fi.
Proof: We note that when ffi(x) is a non-increasing function of x, it follows from the
monotonicity of \Phi(ffi) in (37) that
We introduce the right side of (91) into (90) and use (37) for \Phi(x), to obtain
e
R k\Phi(ffi(fit))dt
We now seek a lower bound on -(k; ffi). In view of the left condition in (14), we can
as the product
where
It follows from (13) and (14) that ae j satisfies the equation
with
To obtain a lower bound for the product in (93), we introduce the sequence ae
ae
The number ae
k is computed with the aid of the intermediate quantities ae
as follows:
ae
ae
We define
ae
In order to demonstrate (97), we show by induction on j that for all
ae
For it follows from (96), (98) and the fact that ffi(x) is non-increasing that
ae
Now, we assume that (101) is true for all it follows from (95), (99)
and the fact that ffi(x) is non-increasing that
ae
ae
ae
The next step in the proof is to evaluate ae
k explicitly and obtain a lower bound for
it. This is done by solving the non-linear recurrence equation (99) for ae
k;j , subject to the
initial condition (98). We solve this equation with a method analogous to the one described
in section 16.7 of [16] and obtain
ae
where
From equation so that
Furthermore, it follows from (105) and the definition of j in (46)
where the equality on the right defines the constant -. We use (107) and (106) in (104)
and obtain, in view of (100),
ae
Further manipulation of (108) yields
ae
Finally, we note that 1=(1 the latter
inequality follows from (105). We use these bounds in (109) and use (97) to get
We are now ready to prove the lemma. First, we substitute the right side of (93) for
-(k; ffi) in (92). Then, we use (110) and (96) to find
e
The infinite product in (111) is convergent. Hence, the right side
of (111) is bounded by a number P which is independent of fi and k.
We now present a few numerical calculations that demonstrate the accuracy of the
expansion derived in section 5. First, we solve (13) for -(k; ffi; fi) by iteration and then we
compute the approximate solution oe(k; ffi) given by (63), for all 2 - k - 2000. We present
the relative error in this approximation.
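Schematically, this comparison is a single loop over k. In the sketch below, iterate_recurrence and sigma_approx are placeholders standing in for the recurrence (13)-(14) and the closed form (63), which are not reproduced here; only the bookkeeping is shown.

def max_relative_error(iterate_recurrence, sigma_approx, k_max=2000):
    # iterate_recurrence(k): lambda(k, delta, beta) computed by iterating (13)-(14)
    # sigma_approx(k):       sigma(k, delta) from the asymptotic formula (63)
    worst = 0.0
    for k in range(2, k_max + 1):
        exact = iterate_recurrence(k)
        approx = sigma_approx(k)
        worst = max(worst, abs(approx - exact) / abs(exact))
    return 100.0 * worst      # maximum relative error in percent, as in Table 1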
We use strategies from the three parameter family
A
The minimal tolerance in (112) is j. In all our calculations ffi k AE j, and for all
practical purposes j can be neglected. The value of parameters A and fl is fixed at 1. The
parameter B and the value of fi vary from one calculation to the other. The value of \Delta in
(13) is set to 37. We performed analogous calculations with larger values of \Delta and with
obtained similar results.
In table 1, we present the maximum with respect to k, of the absolute value of the
relative error in percent. Each entry in this table corresponds to a calculation with a
different strategy. The strategy is determined by the parameters B and fi. Figure 1 depicts
the relative error in percent between oe(k; ffi) and -(k; ffi; fi), for all 2 - k - 2000. Each graph
corresponds to different values of B and fi. We note that the approximation is accurate
even for large values of k.
Figure 1: The relative error between oe(n; ffi) and -(n; ffi; fi), in percent, for 2 - n - 2000. Each
graph corresponds to different values of B and fi.
1.01 0.74 0.05
1.50 0.74 0.05
2.00 0.73 0.05
5.00 0.72 0.07
10.00 0.71 0.07
100.00 0.70 0.11
Table 1: The maximum over 2 - n - 2000 of the relative error between oe(n; ffi) and -(n; ffi; fi),
in percent. Rows correspond to different values of B; the columns of errors correspond to different
values of fi.
7 Constant strategy is optimal
Using (47) and (64), we seek the optimal strategy for the Chebyshev iteration. The
numbers N(ffl; ffi) and C(ffl; ffi) in (47), are hard to determine precisely. Therefore, we introduce
the quantities NB (ffl; ffi) and CB (ffl; ffi), which are the number of outer iterations required to
reduce the error bound (64) to ffl and the associated cost, respectively. The following
theorem shows that a constant strategy is optimal.
Theorem 3 Suppose that a linear system of equations is solved to accuracy ffl by the Chebyshev
iteration using inner iterations with a sequence of tolerances fffi k g in S. There exists
a constant strategy - ffi(ffi; ffl), for which the cost is smaller, i.e.
Proof: Given the variable strategy ffi and the accuracy ffl used in the solution of the linear
system, we define the associated constant strategy -
R NB (ffl;ffi)
In Lemma 3, we show that NB (ffl; -
Therefore,
In Lemma 4, we show that
Using (115) in the right-hand side of (114) proves the theorem.
Lemma 3
Proof: By definition of NB (ffl; ffi) the bound for the error B(k; ffi) in (64) satisfies
Therefore, to prove (116) it is sufficient to show that after NB (ffl; ffi) outer iterations, the
bound for the error associated with the variable strategy is greater than the one associated
with the constant strategy. Hence we need to show
We see from (64) that (117) is equivalent to the inequality
where oe is defined in (63). To prove (118) we begin by rewriting expression (63) for oe(k; ffi)
with
K(fiNB (ffl; ffi); ffi)
sinh
Then, we note from (37) that \Phi is monotonically increasing and that for all non-negative
(46). Therefore,
R NB (ffl;ffi)
Furthermore, we see that K(fiNB (ffl; ffi); ffi)=K(0; ffi) - 1 from equation (59). Using this and
(121) in the right hand side of (119) we obtain
sinh@ NB (ffl; ffi)\Phi\Phi \Gamma1@
R NB (ffl;ffi)
Lemma 4
Proof: The definition (37) of \Phi shows that \Phi
. Therefore,
strictly convex on the interval ffi)g. It follows
from Jensen's inequality that
log \Phi \Gamma1@
R NB (ffl;ffi)
R NB (ffl;ffi)
Multiplying (123) by NB (ffl; ffi) proves the lemma.
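The mechanism behind Lemma 4 is easy to check numerically. In the sketch below, Phi(delta) = delta**2 is an arbitrary convex increasing stand-in for (37) (the true \Phi is not reproduced here), and the constant tolerance is the discrete analogue of (113) with the integral replaced by an average; the asserted inequality is the discrete form of (123).

import numpy as np

Phi = lambda d: d ** 2                      # convex, increasing stand-in for (37)
Phi_inv = lambda y: np.sqrt(y)

deltas = np.array([0.5, 0.1, 0.05, 0.2, 0.01])   # a variable tolerance sequence
delta_bar = Phi_inv(np.mean(Phi(deltas)))        # discrete analogue of (113)

# cost is proportional to the total of -log(delta_k); the constant strategy is cheaper
assert len(deltas) * (-np.log(delta_bar)) <= np.sum(-np.log(deltas))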
We now show how to estimate the optimal constant - ffi. We note from (64) that for any
iteration N
Then, by equating the right side of (124) to ffl and using (37), we obtain
log
Re(cosh
An estimate of the cost is then
Re(cosh
The right side of (126) can be minimized easily with respect to -ffi using a standard minimization
technique. The original variational problem (48) is thus reduced to a simple
optimization problem. Since B(N; -
ffi) approximates a bound for the error, the tolerance
obtained by this method will be a lower bound for the optimal tolerance. The estimation
of the optimal constant depends on the parameters - and ae in expression (126). These are
often determined adaptively while solving the system [17].
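Any one-dimensional routine will do for this minimization. The sketch below is only an illustration of the intended use: cost_estimate stands in for the right side of (126) (which depends on \Delta, ae and ffl and is not reproduced here), and the toy example supplied to it is arbitrary.

import math
from scipy.optimize import minimize_scalar

def estimate_optimal_constant(cost_estimate, lo=1e-12, hi=0.99):
    # minimize the estimated cost over constant inner tolerances in (0, 1)
    res = minimize_scalar(cost_estimate, bounds=(lo, hi), method="bounded")
    return res.x, res.fun

# toy stand-in for (126): work grows for very tight and for very loose tolerances
delta_bar, cost = estimate_optimal_constant(lambda d: -math.log(d) + 5.0 / (-math.log(d)))
print(delta_bar)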
8 Numerical calculations
We now present a few numerical calculations that verify the analysis of section 7. In
each experiment, we solve a linear system with Chebyshev iteration to accuracy ffl, using a
variable strategy ffi. Then, we solve the same system with the associated constant strategy
defined in (113) with NB (ffl; ffi) replaced by N(ffl; ffi). We recall that N(ffl; ffi)
is the exact number of outer iterations required to achieve an accuracy ffl, when solving the
problem with strategy ffi. This number is obtained from our numerical experiment. Our
goal is to verify that the predictions of lemma 3 and theorem 3 hold in practice.
In section 4 we define the cost at outer iteration j by using (127), i.e. (\Gamma log ffi j )=(\Gamma log ae),
for the number of inner iterations required to achieve accuracy ffi j instead of d(\Gamma log ffi j )=(\Gamma log ae)e. Here,
ae is the convergence factor for the inner iteration. If ae is close to 1, then the relative error
in using (127) is usually small and the cost (45) is truly proportional to the total number
of inner iterations. In this case, we expect good agreement between the analysis and
the numerical calculations. Moreover, we expect some fluctuations around the predicted
behavior when ae - 1. We covered both cases in our experiments.
We solve the symmetric system
arising from the central difference discretization of the operator
in the interval [0; 1] with homogeneous Dirichlet boundary conditions. The right side b in
(128) is chosen at random. The splitting matrix M is obtained from the discretization of
the operator
with homogeneous Dirichlet boundary conditions. The mesh parameter in this discretization
is 1=100. The tolerance for the outer iteration is . The initial iterates
for both the inner and outer iterations are 0.
In all our experiments, we use strategies from the family (112). The values of fl and A
are fixed at 1. The parameter B and the value of fi vary from one experiment to the other.
For each variable strategy ffi, the associated constant strategy - ffi is computed using (113)
with NB (ffl; ffi) replaced by N(ffl; ffi). We note from (113) that \Phi depends on \Delta. We evaluate
exactly but find that - ffi is not very sensitive to the value of \Delta. We performed calculations
with various values of C in (129) and (130) and we shall report on a representative sample
obtained with
We use two methods for the inner iteration: the symmetric Gauss-Seidel method, with
convergence factor 0.993, close to 1, and the symmetric successive over-relaxation method
[18] (S.S.O.R.) with the smaller convergence factor 0.925. In the S.S.O.R. iteration, the
relaxation parameter ! is the optimal parameter ! of S.O.R. In each experiment, we
record the number of outer iterations and the total number of inner iterations for the
variable and constant strategy cases.
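The skeleton of these experiments is a generic inexact inner-outer loop. The sketch below is our own schematic reconstruction and not the code used here: it drives the outer correction with a plain preconditioned Richardson update instead of the actual Chebyshev recurrence, uses forward Gauss-Seidel sweeps as the inner iteration, and assumes the splitting is convergent; its only purpose is to show how the outer and inner iteration counts are recorded.

import numpy as np

def gauss_seidel_inner(M, r, delta):
    # sweep on M z = r until the inner residual is reduced by the factor delta
    z = np.zeros_like(r)
    L, U = np.tril(M), np.triu(M, 1)
    count = 0
    while np.linalg.norm(r - M @ z) > delta * np.linalg.norm(r):
        z = np.linalg.solve(L, r - U @ z)      # one forward Gauss-Seidel sweep
        count += 1
    return z, count

def inexact_outer(A, M, b, deltas, eps):
    x = np.zeros_like(b)
    r = b - A @ x
    r0 = np.linalg.norm(r)
    outer, inner_total = 0, 0
    for delta in deltas:                        # delta_k for k = 1, 2, ...
        z, c = gauss_seidel_inner(M, r, delta)  # approximate M^{-1} r
        inner_total += c
        x, r = x + z, b - A @ (x + z)
        outer += 1
        if np.linalg.norm(r) <= eps * r0:       # outer accuracy reached
            break
    return x, outer, inner_total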
Tables
2-5 correspond to the case where the inner iteration is symmetric Gauss-Seidel.
In table 2 we report the difference in the total number of inner iterations between the
variable strategy case and the associated constant strategy case. All entries in the table
are in (%) and are computed from
[(N in (ffl; ffi) \Gamma N in (ffl; -ffi)) = N in (ffl; -ffi)] \Theta 100: (131)
Here N in (ffl; ffi) is the total number of inner iterations performed when solving the system
to accuracy ffl with strategy ffi.
Each entry in table 2 corresponds to a different strategy. The strategy is determined
by the parameters B and fi. Note that not all strategies are slowly varying since fi 6- 1
in the two rightmost columns of that table. The important thing to note in table 2 is
that all entries are positive. Therefore, the number of inner iterations associated with
the variable strategy is greater than or equal to the number of inner iterations with the
constant strategy. Hence, there is agreement with Theorem 3.
In table 3, we present the difference in the number of outer iterations between the
variable strategy case and the constant strategy case, i.e. N(ffl; ffi) \Gamma N(ffl; -ffi). We see that all
entries are non-negative and there is very good agreement with Lemma 3.
In table 4 we present the total number of inner iterations with the associated constant
strategy. The lowest number of inner iterations is found at the top left entry. This entry
corresponds to the lowest tolerance for the inner iteration. Table 5 presents the total
number of outer iterations. We see that the top left entry maximizes the number of outer
iterations. Hence, among all strategies considered in this table, the strategy which yields
the lowest convergence rate also yields the lowest cost.
Tables
6 and 7 present the difference in number of inner and outer iterations, respec-
tively, when the inner iteration is S.S.O.R. Since the convergence factor is not close to 1
some fluctuations from the predicted behavior are expected. Indeed, two entries in table
6 are negative. However, the fluctuations are small and the constant strategy performs
essentially as well as the variable one.
In our numerical calculations we have used both slowly varying strategies and
"rapidly" varying ones. Although our theory was developed for slowly varying
strategies, the conclusion of theorem 3 is found to hold for all the strategies considered.
9 Generalization to other iterative procedures
We now consider a general iterative algorithm in which, at iteration k, a subproblem is
solved by an inner iteration to accuracy ffi k . The norm of the error at step k, e k , satisfies
Table
2: The difference in number of inner iterations (N in (ffl; ffi) \Gamma N in (ffl; -ffi))=N in (ffl; -ffi) in (%).
The tolerances ffi k and -ffi are defined by (112) and (113), respectively. Inner iteration
is symmetric Gauss-Seidel.
Table
3: The difference in number of outer iterations N(ffl; ffi) \Gamma N(ffl; -ffi). The tolerances ffi k and -ffi
are defined by (112) and (113), respectively. Inner iteration is symmetric Gauss-Seidel.
1.50 4458 5144 5369 6203
2.00 4666 5143 5571 6324
5.00 5199 6208 6444 7447
10.00 5697 6722 7167 8093
100.00 8660 9423 10228 11137
Table
4: The number of inner iterations N in (ffl; -ffi) with -ffi given in (113). Inner iteration is
symmetric Gauss-Seidel.
Table
5: Number of outer iterations N(ffl; -ffi) with -ffi given in (113). Inner iteration is
symmetric Gauss-Seidel.
1.01 2.36 8.60 4.29 8.62
1.50 2.26 3.66 4.34 3.42
2.00 0.98 3.43 4.79 4.76
5.00 0.64 5.87 4.64 5.44
10.00 -0.77 -0.67 5.93 0.14
100.00 0.42 0.60 0.78 1.03
Table
6: The difference in number of inner iterations (N in (ffl; ffi) \Gamma N in (ffl; -ffi))=N in (ffl; -ffi) in (%).
The tolerances ffi k and -ffi are defined by (112) and (113), respectively. Inner iteration
is S.S.O.R.
Table
7: The difference in number of outer iterations N(ffl; ffi) \Gamma N(ffl; -ffi). The tolerances ffi k
and -ffi are defined by (112) and (113), respectively. Inner iteration is S.S.O.R.
the relation
In (132), ae(k; x 0 ; ffi), the convergence factor at step k, depends on the initial iterate x 0 and
on the sequence of tolerance values ffi. We assume that ae(k; x 0 ; ffi) is a product of the form (133),
with each factor depending on a single tolerance.
Hence, the only tolerance upon which the k-th factor of ae(k; x 0 ; ffi) depends is the tolerance at outer
iteration k. Furthermore, the dependence of ae(k; x 0 ; ffi) on ffi k is the same at each iteration
of the algorithm. We can prove a result similar to the one of section 7 for an iteration
satisfying (133).
Theorem 4 Consider an iterative algorithm in which at step k, a subproblem is solved
by inner iteration to accuracy ffi k . Assume that the norm of the error satisfies (132), with
of the form (133). Assume that g(OE) is a convex non-decreasing function. Let ffl(N; ffi)
be the reduction of the error after N outer iterations. Then, for any variable
strategy ffi and any number of outer iterations N , there exists a constant -ffi(N; ffi)
with the following properties.
1. After N outer iterations with the constant tolerance - ffi(N; ffi ) for the inner iteration,
the error is reduced by exactly ffl(N; ffi ).
2. The cost (45) of performing N outer iterations with the constant tolerance -ffi(N; ffi)
is lower than the cost of performing N outer iterations with the variable tolerance ffi.
In other words, for such an iteration a constant strategy is optimal.
Proof: From (132) and (133) we find that after N outer iterations of the algorithm with
the variable tolerance ffi, the error is reduced by
Let
and - ffi(N;
Then, it follows from equations (132) and (133) that after N iterations with the constant
strategy -
the error is reduced by
The right hand side of equation (137) is exactly ffl(N; ffi ).
Using (45) and (136) we find that the cost associated with N steps of the constant
tolerance iteration is
while the cost associated with the variable tolerance is
Now, the right side of (138) is no greater than the right side of (139) since g is convex.
The error bound (64) for the Chebyshev iteration is analogous to (135) with the sum
over g(OE) replaced by an integral and the term e
replaced by a function F (k; x 0 ),
independent of ffi. Hence, theorem 3 is essentially a continuous version of theorem 4. The
proof of the former is complicated by the presence of the amplitude term (59) in (63).
Acknowledgements
This work was supported in part by NSF under cooperative agreement no CCR-9120008
and grant CCR-9505393, ONR, and AFOSR.
--R
Chebyshev semi-iterative methods
Hybrid Numerical Asymptotic Methods.
On the interplay between inner and outer iterations for a class of iterative methods.
The convergence of inexact chebyshev and richardson iterative methods for solving linear systems.
On the local convergence of certain two step iterative procedures.
Accelerating the convergence of discretization algorithms.
On the convergence of two-stage iterative process for solving linear equations
Inexact and preconditioned uzawa algorithms for saddle point problems.
The Tchebyshev iteration for nonsymmetric linear systems.
Eulerian number asymptotics.
Advanced Mathematical Methods for Scientists and Engineers
Rays, waves and asymptotics.
The WKB approximation to the G/M/m queue.
Ordinary Differential Equations.
Adaptive procedure for estimation of parameters for the nonsymmetric Tchebychev iteration.
Matrix iterative analysis.
--TR | inexact iteration;iterative methods;inner iteration |
275988 | The Value Function of the Singular Quadratic Regulator Problem with Distributed Control Action. | We study the regularity properties of the value function of a quadratic regulator problem for a linear distributed parameter system with distributed control action. No definiteness assumption on the cost functional is assumed. We study the regularity in time of the value function and also the space regularity in the case of a holomorphic semigroup system. | Introduction
. In this paper we are concerned with a general class of finite
horizon linear quadratic optimal control problems for evolution equations with distributed
control and non-definite cost. More precisely, we consider the following
abstract differential equation over a finite interval [-; T
where A is the infinitesimal generator of a strongly continuous semigroup e At on a
Hilbert space X , B is a linear bounded operator from the control space U to X . With
the dynamics (1.1), we associate the cost functional
F
is the mild solution to equation (1.1) and F is the quadratic
(we denoted by h\Delta; \Deltai inner products in both the spaces X and U ). All the operators
Q, S, R and P 0 contained in the functional (1.2) are linear bounded operators in the
proper spaces, with
. We define as usual the value function
of the problem:
The goal of the present work is
ffl to characterize the property
This research was supported by the Italian Ministero dell'Universit'a e della Ricerca Scientifica e
Tecnologica within the program of GNAFA-CNR.
y Dipartimento di Matematica Applicata, Universit'a di Firenze, Via S. Marta 3, 50139 Firenze,
Italy (fbucci@dma.unifi.it).
z Dipartimento di Matematica, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino,
Italy (lucipan@polito.it), partially supported also by HCM network CEC n. ERB-CHRX-CT93-
ffl to study the regularity properties of the map
on the interval [0; T ], when x 0 is fixed.
We shall consider also the map x in a special case, see x6. It is well
known that if the regulator problem is standard, i.e.
then the solution to the operator Riccati equation corresponding to problem (1.1)-
(1.2) provides the synthesis of the unique optimal control. This problem is well
understood, both in finite and infinite dimensions, over a finite or infinite time horizon
(compare [10], [2, 3]).
The purpose of this paper is to examine the case when (1.5) fails, with special interest
in non-coercive R. We shall see that in this case the function - ! V (-; x 0 ) has only
mild regularity properties, see x4. More regularity is obtained in the coercive case,
see x5.
The study of LQR problems with non-definite cost is related to a large variety
of problems. Among them, we recall the study of dissipative systems (see [20]), the
analysis of the stability of feedback systems ([14]), the analysis of second variations
of non-linear optimization problems (see [5], [15]). When game theory is studied
for linear systems then the quadratic form (1.3) is non-positive. In particular, the
suboptimal H 1 -problem can be recast in this setting ([1]). Finally, very recently
singular control theory has been used to obtain new results on regular control problems
for some class of boundary control systems: systems with input delays first [16], and
later systems described by wave- or plate-like equations with high internal damping
[9].
We recall that the existing results for finite dimensional systems over an infinite
time interval ([19, 21], see also [4]) were extended to distributed systems in [22, 23,
12, 13, 8]. If T ! +1 the only work we know in an infinite dimensional context,
in which a nonpositive cost functional is studied, is [6]. This paper considers even
time-varying systems, but under the restriction
2. A simple example. The interest of the results presented in this paper is
justified by the possible applications that we already quoted, for instance to H 1 -
control theory over a finite time interval, or to the analysis of the second variation
of general cost functionals. However the following example may help the reader to
understand our problem. The example is a bit artificial, since we want to present
a very simple one. Nevertheless it is suggested by non trivial problems in network
theory.
A delay line in its simpler form is described by an input-output relation
and the integral is a Stieltjes integral.
For simplicity we assume that the input u(\Delta) is continuous, a condition that can be
very much relaxed.
The simplest case described by (2.1) is
and corresponds to a jump function j, with jump at \Gamma1. If the system is started at
then the input (2.1) is read only for t ? 0, so that the output v(\Delta) from (2.1) is
given by
The function OE describes the "initial state" of the system (quite often it will be
In the case of equation (2.2) we have in particular
Notice that if OE(\Delta) and u(\Delta) are regular then v(t; solves the first order
hyperbolic equation
The function v can be interpreted as a delayed potential at the output of the network
produced by the potential u(\Delta) at the input. If the delay line is connected to a resistive
load, it produces a current
and the energy disspated by the load in
time T is given by
Z
Since
then
The energy that the load can dissipate is at most
u(\Delta)
We see from this that the load dissipates a finite amount of energy V (OE) if T ! 1,
described by the quadratic functional
Z
Otherwise, the load can dissipate as much energy as we want.
Hence it makes sense to study the energy function E(T )
In this example the function E(T ) is finite only if T ! 1, and in this case E(T ) is the
quadratic functional (2.3).
In this paper we consider an analogous problem in more generality: we study the
dependence on the interval [-; T ] of the "energy" dissipated by a certain linear time
invariant system.
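The delay-line example is easy to reproduce numerically. The sketch below is only our illustration: it assumes the simple kernel (2.2), a unit load resistance, instantaneous power v(t)^2/R, and a trapezoidal quadrature; none of these choices are prescribed by the text above.

import numpy as np

def delayed_output(t, phi, u):
    # v(t): the initial state phi is read while the delayed argument lies in [-1, 0),
    # and the input u afterwards (kernel (2.2), i.e. a pure one-unit delay)
    return phi(t - 1.0) if t < 1.0 else u(t - 1.0)

def dissipated_energy(T, phi, u, R=1.0, n=2000):
    ts = np.linspace(0.0, T, n)
    v = np.array([delayed_output(t, phi, u) for t in ts])
    return np.trapz(v ** 2 / R, ts)

phi = lambda s: np.sin(np.pi * s)        # initial state on [-1, 0)
u = lambda s: 10.0 * np.sin(5.0 * s)     # input potential
# for T < 1 the energy depends on phi only; for T >= 1 it can be driven up through u
print(dissipated_energy(0.8, phi, u), dissipated_energy(2.0, phi, u))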
3. Preliminary results. We recall that the solution to (1.1) is
with
e A(t\Gammas) Bu(s) ds;
continuous
Note that t ! (L - u)(t) is a X-valued continuous function.
The adjoint L
- of
is given by
(L
e A (s\Gammat) f(s) ds;
continuous
Introduce also the bounded operator from U to X
e A(T \Gammas) Bu(s) ds
(which describes the map (3.1) from the input u to the solution of (1.1) at time
with initial time - and x 0). The adjoint of L -;T is the map given by
(L
Using (3.1), one can easily show the following
Lemma 3.1. The cost functional (1.2) can be rewritten as
with
selfadjoint, defined as follows
(R
We first state a Lemma, which will be useful later.
Lemma 3.2. If there exists - 0 and a constant fl such that
then R - fl I for any - 0 .
Proof. It is sufficient to notice that if - 0 we can write
where v(\Delta) is given by . Hence, from (3.7)
it follows that R - fl I for any - 2 [-
We shall use the following general result pertaining continuous quadratic forms
in Hilbert spaces whose proof is given for the sake of completeness.
Lemma 3.3. Let X and U be two Hilbert spaces, and consider
with
1. If there exists x 2 X such that
u2U
2. The infimum of f(x; \Delta) is attained if and only if the equation
is solvable and in this case any solution u of (3.8) gives a minimum.
3. If for each x 2 X there exists a unique u x such that
f(x;
then R is invertible (the inverse R \Gamma1 may not be bounded) and u
so that the transformation x ! u x is linear and continuous from X to U .
4. Let us assume that V (x) ? \Gamma1 for each x 2 X. Then there exists a linear
bounded operator P 2 L(X) such that
Proof. If there exists v such that hRv; vi ! 0 then f(x;
This proves the first item of the Lemma. The second item is well known ([23, Lemma
2.3]). To prove the third item we use item 2: the minimum u x is characterized by (3.8).
This equation is uniquely solvable for every x by assumption. Hence, ker
and im N ' im R. Consequently, acts from the closure
of the image of R. Hence, R \Gamma1 N is bounded since R \Gamma1 is closed and N is bounded.
The proof of the fourth item follows an approach in [7]. If R is coercive, then
it is boundedly invertible, so that f(x; \Delta) admits a unique minimum, namely
\GammaR
Hence, (3.9) holds true and we have obtained an explicit expression for P , i.e.
If we simply have R - 0, we consider the function
Now
n I is coercive, hence
Vn
with Pn 2 L(X). By construction
is a decreasing numerical sequence for any x 2 X , and
hence there exists P 2 L(X) such that
To conclude, it remains to show that V (x) coincides with hx; Pxi for any x 2 X .
Assume by contradiction that V (x) ! hx; Pxi for a given x 2 X , and let ff ? 0 such
that
there exists u 2 U such that
Correspondingly, there exists an integer n 0 2 IN such that
From (3.11) and (3.12) it follows
which is a contradiction, compare (3.10).
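In finite dimensions the content of Lemma 3.3 can be checked directly. The sketch below (our own toy illustration with real matrices; M, N, R are random stand-ins for the operators of Lemma 3.1) computes the minimizer u_x = -R^{-1}N x and the operator P = M - N*R^{-1}N with V(x) = <x, Px>, for a coercive R.

import numpy as np

rng = np.random.default_rng(0)
nx, nu = 4, 3
M = rng.standard_normal((nx, nx)); M = M + M.T      # selfadjoint M on X
N = rng.standard_normal((nu, nx))                   # N : X -> U
R = 2.0 * np.eye(nu)                                # coercive R on U

def f(x, u):
    # f(x, u) = <Mx, x> + 2<Nx, u> + <Ru, u>
    return x @ M @ x + 2.0 * (N @ x) @ u + u @ R @ u

x = rng.standard_normal(nx)
u_x = -np.linalg.solve(R, N @ x)                    # the unique minimizer of f(x, .)
P = M - N.T @ np.linalg.solve(R, N)                 # V(x) = <x, Px>
assert np.isclose(f(x, u_x), x @ P @ x)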
The above lemma and (3.3) imply a first necessary condition for finiteness of the
value function.
Lemma 3.4. If there exists x 0 such that V (-; x
This observation is now used to obtain a necessary condition of more practical interest,
which is well known in the finite dimensional case. The symbol I denotes the identity
operator acting on a space which will be clear from the contest.
Proposition 3.5. If there exists - 0 2 [0; T ) and a constant fl - 0 such that
Consequently,
if there exists x 0 and - 0 such that V (-
(3.
Proof. We first consider the case fl = 0; hence by assumption R -0 - 0. By
contradiction, suppose that there exists a control u 0 2 U and a constant ff ? 0 such
that \Gammaff. Given a small ffl ? 0, choose a control u as follows:
ae
and compute
e A(t\Gammas) Bu 0 ds;
e A(t\Gammas) Bu 0 dsi dt
e A(T \Gammas) Bu 0 ds;
e A(T \Gammas) Bu 0 dsi dt
tends to 0:
Since ffl can be taken arbitrarily small, (3.15) yields hR -0 u; ui ! 0, and this contradicts
the assumption.
Assume instead R -0 - fl I ? 0. By choosing
direct computation yields
which implies hRu
Finally, if V (- , then from Lemma 3.4
it follows that R - is a non-negative operator for - 0 . Therefore from the previous
part of the proof, R - 0.
We now show that the value function satisfies Bellman's optimality principle,
which is known, in the context of linear-quadratic problems, as "Linear Operator
Inequality" (LOI) or "Dissipation Inequality" (DI).
We begin with the following
Lemma 3.6. If for some number - and some x
then we have also V (t; denotes the value
at time t of the function given by (3.1), for any fixed control u(\Delta) on [-; t].
Proof. Let
ds
ds
F
now a control v j 0 on [-; t):
then
ds
and
ds
Conclusion immediately follows since in fact
Theorem 3.7. Let - 2 [0; T ] and x 0 2 X be given. Let V be the value function
of problem (1.1), (1.2) and assume that V (-; x
ds
for any u(\Delta) 2 L 2 (-; T ; U) and any t 2 (-; T ), with x(t) = x(t; -; x 0 ; u). Moreover, the
equality holds true if and only if the control u in (3.17) is optimal.
Proof. We return to the conclusion of the preceding Lemma, and observe again
that
while
hence plugging (3.18) into (3.16) and taking into account (3.19), we get
F
which is nothing but (3.17). Thus, if for a given initial datum x 0 there exists an
optimal control u then we can rewrite (3.16) and (3.19)
with is in fact an equality. Therefore (3.20) becomes an
equality as well. For these arguments compare also [11].
Conversely, assume that (3.17) is satisfied for any control u 2 L 2 (-; T ; U) and it is an
equality for a given u . Then, passing to the limit, as
and assuming for the moment that
lim
we readily get
ds
that is
hence by definition u is optimal.
To conclude, it remains to show that if (x ; u ) satisfies
ds
then (3.21) holds true. From (3.22) it follows that there exists
lim
and by the very definition of the value function it follows
lim
To see this rewrite the above limit as
lim
By contradiction, assume now that
lim
where fl is a suitable positive constant. Then, there exists
for any t 2 (T \Gamma ffi; T ). Recall now that
hence we can rewrite
ds
ds
A3
Take a possibly smaller ffi, in order to get
so that (3.23) yields
Finally, let ffi such that
hQe A(s\Gammat) x (t); e A(s\Gammat) x (t)ids
Fix now t 2 (T \Gamma ffi; T ), so that (3.24) and (3.26) hold true. From (3.25) it follows
that there exists a control such that
that is, by means of (3.3),
with M t , N t , R t defined in (3.4), (3.5) and (3.6), respectively. We know that
R T
ds. Thus we cancel the
term A 1 , we take into account (3.26) and we obtain
In particular this implies that v t 6= 0. Notice now that
const \Deltajv t (\Delta)j 2
and therefore
lim inf
Hence there exists a sequence t n such
so that we see from (3.27) for n largejv t n
In other words J t n
this is a contradiction since by assumption J - (0; u)
is non-negative for any u 2
The next Proposition is an immediate consequence of Lemma 3.1 and of Lemma
3.3. We omit the proof.
Proposition 3.8. Let - 2 [0; T ]. If
there exists a selfadjoint operator W (\Delta) 2 L(X) such that W (T
4. Time regularity of the value function: the non-coercive case. In this
section we investigate the regularity properties of V (-; x 0 ) with respect to the initial
time - .
We note that several regularity results are known for the value function even
of non-linear systems, and with more general cost but under special boundedness
properties, which are not satisfied in the present case, compare [11, Ch. 6].
Our first result is:
Lemma 4.1. Let - be such that V (-
upper semicontinuous at - 0 .
Proof. Fix In order to show that
lim sup
we shall show that for any real number ff ? V (-
is taken small enough. We first consider the case when - 0 . Let u be an admissible
control such that
and define
e A(t\Gammas) Bu(s) ds:
It is readily verified that
1: lim
2: lim
ds
so that if
Finally, if - 0 , choose once more in such a way that (4.1) holds
true. It is now sufficient to repeat the same arguments used before, after replacing u
with -
u defined as follows:
The proof is complete.
As to lower semicontinuity, the following result holds true.
Lemma 4.2. Let x 0 be such that - ! V (-; x 0 ) is finite on [0; T ]. Then
ffl the map - ! V (-; x 0 ) is lower semicontinuous at - 0 provided that for each
element - n of a sequence f- n g which tends monotonically to - 0 there exists a
control such that
ii) there exists
Proof. Let be given, and consider a sequence f- n gn2I N such that - n # - 0 .
Introduce the inputs
and define
Notice that x -n
x -n (t) ! 0, as n !1, for any t, and that its norm is uniformly
bounded in L 2 (-
lim
Therefore
lim inf
where the last equality is due to i). On the other hand ii) implies the existence of an
admissible such that
as n !1. Now the map
is convex and continuous, hence weakly lower semicontinuous, so that
To conclude the proof, we need to consider a sequence fr n gn2I N such that r n " - 0 . In
this case, we introduce
~
ae
Again from ii) it follows that there exists an input v 2 L 2 (- such that ~
A similar argument gives
which finally yields
Consequently, we can conclude
Theorem 4.3. Under the same assumptions as Lemma 4.2, the map - ! V (-; x 0 ) is
continuous for any - 2 [0; T ].
In the case that an optimal control exists for each - near - 0 , Lemma 4.2 takes
a simpler form. We state this form, under the assumption that an optimal control
exists for each - .
Corollary 4.4. Let x 0 2 X be fixed. Assume that
there exists an optimal control u
ii) there exists a constant fl ? 0, independent of - , such that
(4.
Under these conditions, the map
We note explicitly that if there exists an optimal control u for J -
each there exists an optimal control for J - 0 (x(-
It has some interest to see that if the operator A generates a strongly continuous
group then we can prove more:
Theorem 4.5. Let us assume that for each - 2 [0; T ) and each x 0 2 X there
exists a unique optimal control u At is a
strongly continuous group then the value function is continuous from the right.
Proof. We prove continuity from the right at a fixed - We know from
Lemma 3.3 item 2 that x
linear and continuous from X to L 2 (-;
for each - 2 [0; T ).
Now we consider points - 0 . We show that for each fixed - 0 there exists
It is sufficient to see for this that there exists a solution x 1 of
e A(- \Gammas) Bu
If this is true, unicity of the optimal control shows that (4.5) holds.
We noted above that ku so that the norm of the
operator
e A(- \Gammas) Bu ds
can be estimated as follows: kT x 1
We write Eq. (4.6) in the form
sufficiently small, is less than 1; hence Eq. (4.7) can be
continuously solved for x 1 and gives a linear continuous transformation x
which, of course, depends upon - . The vector x 1 ,
is continuous with respect to x 0 and also with respect to - if - is close to - 0 . In
bounded in a neighborhood of - 0 . Therefore,
Right continuity follows from Lemma 4.2.
The previous theorem presents a case in which the quite involved condition of
Lemma 4.2 is satisfied. The next example shows that the condition in that lemma
cannot be avoided if we are to obtain continuity of the value function.
We note first that the value function is not continuous in general, even for finite
dimensional systems: if the cost is jx(T )j 2 and the system is controllable then the
value function has a jump at T . The following example shows that the value function
may be discontinuous even at points - ! T .
Example 4.6. Consider the delay system given by
with initial datum OE 0 =col[x 0). The quadratic functional is
Consequently
In particular J 1
On the other hand, if and it can be
arbitrarily fixed, by means of suitable choices of the control u, within the class of
W 1;2 functions which are zero at 1. This set is dense in L
suitable functions y can be found in order to drive x(t) to zero in time ffl ? 0, namely
remaining uniformly bounded. Therefore we have that
In conclusion, if and the value function is not continuous
at
Remark 4.7. The previous example shows that in the statement of Lemma 4.2-
which concerns lower semicontinuity of V (-; x 0 )-assumption ii) cannot be dispensed
with. In fact that assumption holds in the previous example for but not for
5. Time regularity of the value function: the coercive case. Let - 2 [0; T
be given, and consider the operator R -
- as defined in (3.6). Throughout this section
we shall assume that R - is coercive, i.e.
Our present goal is to show that under assumption (5.1) the value function V (-; x 0 )
displays better regularity properties with respect to - . We start by showing that the
map - ! V (-; x 0 ) is continuous for any - 2 [- ; T ].
We recall that from (5.1), by virtue of Lemma 3.2, it follows that R - fl for any
by continuity also on an interval (- Hence there exists a
constant fl 0 such that
Moreover (5.1) implies that for any initial time - there exists a unique
optimal control u - (\Delta; x 0 ) (written u - (\Delta) for short), explicitly given in terms of the
initial state by
(compare item 3 of Lemma 3.3); and from (5.2) it follows
independent of - :
The following Theorem provides a simple explicit expression of the value function in
terms of the optimal pair which will be useful in the next section.
Theorem 5.1. Let R - be coercive, and let
pair for problem (1.1)-(1.2). Then
e A (t\Gamma- )
dt:
Proof. Since the infimum of the cost is attained at
- for short),
plugging (5.3) into (3.3) we easily obtain
The adjoint operator N
)-function v in
e A (t\Gamma- ) ((Q
hence (5.5) follows from (5.6) by a direct computation.
As a consequence of Corollary 4.4, we first have
Theorem 5.2. Let x 2 X be given. Assume that (5.1) is satisfied. Then - ! V (-; x)
is continuous on [- ; T ].
Actually we are able to show that the value function satisfies a further regularity
property. We first state a preliminary result.
Lemma 5.3. Assume that R - is coercive. If w(\Delta) is a continuous function, then
the function
(R
is continuous for any -
- .
In particular, if R - is coercive then the optimal control is continuous.
Proof. Since R -
- is coercive, R is coercive, so that we can assume that R = I .
Moreover, for any - , R - is coercive, hence invertible.
Let OE(t) := (R \Gamma1
- w)(t), with w(\Delta) continuous: we know that OE(\Delta) is at least an U -
valued
e A (s\Gammat) Q
e A(s\Gammar) BOE(r) dr ds
e A(t\Gammas)
e A (s\Gammat) SOE(s) ds
e A(T \Gammas) BOE(s) ds;
and the right-hand side is clearly a U -valued continuous function.
The second statement follows from (5.3) since (N - x 0 )(\Delta) is a continuous function,
compare (3.5).
Theorem 5.4. Let x 2 D(A) be given. Assume that (5.1) is satisfied. Then the map
- ! V (-; x) is differentiable in [- ; T ].
Proof. Let x
optimal control of
problem (1.1)-(1.2), -
- . As in (5.6)
with M - and N - given by (3.4), (3.5), respectively.
From the very definition of M - it readily follows that the derivative @
@-
exists for any x 0 2 D(A). In order to show that the the second summand in (5.8),
namely
is differentiable with respect to - , we first observe that the factor (N - x 0 )(\Delta) is differ-
entiable, with
@
Moreover, again from (3.5) it follows that (5.10) is a continuous function.
We next want to show that for each t ? - the U -valued function
first derivative with respect to - and that this is continuous. Fix - 0 and consider first
the case - 0 . Introduce the operator -
defined as follows:
By construction
and for instance
Moreover, we take into account (5.10) and we see that
lim
In fact it is sufficient to observe that
Now we compute, via
The first summand in (5.12) tends to
@
'-
, due to (5.11).
As to the second summand, it can be rewritten in the following way:
e A (s\Gammat) Q 1
e
a(-;s)
ds
We rewrite, in turn,
e
(r)dr
c(-;s)
Observe now that as a consequence of Lemma 5.3 we have
lim
while lim - +c(-;
Finally, since (-; s) ! a(-; s) is bounded, we can conclude that 1 converges to
\GammaR
e A (s\Gammat) Qe A(s\Gamma-
as - tends to -
. The convergence of the terms 2 and 3 can be proved even more
easily.
If - 0 we define instead
and rewrite the term R \Gamma1
(R
-0 (R -0
The rest of the proof is completely similar.
Therefore we have proved that for each - there exists @
- (t) and that
@
e A (s\Gammat) Qe A(s\Gamma-
ds
\GammaR
In conclusion we saw that the function (N - x 0 )(t)
- (t) is differentiable with respect
to - , and moreover its derivative is a continuous function in [-
Therefore (5.9) is differentiable, and
@
@-
We are now able to deduce a differential form of the Dissipation Inequality.
Proposition 5.5. Assume that (5.1) holds true. Then there exists a selfadjoint
operator W (\Delta) 2 L(X) such that
d
d-
for any (a; v) 2 D(A) \Theta U , for any - 2 [- ; T ].
Proof. We fix a 2 D(A), v 2 U , and take a control u(\Delta) 2 C 1 ([-; T ]; U) such that
u). It is well known (see for instance [2]) that in
this case x is a strict solution to (1.1), that is x
it satisfies (1.1) on [-; T ].
We write the dissipation inequality (3.17) for namely
F
If we divide in (5.15) by
d
ds
To conclude, substitute
We proved that if we replace an optimal pair in the left hand side of the dissipation
inequality in integral form, then we get an equality. Hence we get an equality also
in the differential form (5.14). In particular, we fix a 2 D(A) and we see that
a) is a minimum of the left hand side of inequality (5.14). Hence we find
that
Since R - is coercive, R is coercive too, and we see that the optimal control has the
well known feedback form
(if a 2 D(A) and, by continuity, for each a 2 X , see item 3 of Lemma 3.3). Moreover,
as a)), the previous equality gives the feedback form of
the optimal control on the interval [0; T ]. We replace this expression for the unique
optimal control in the left hand side of (5.14) and we find a quadratic differential
equation for W (-) which is the usual Riccati equation.
Of course, the Riccati equation can be written provided that R \Gamma1 is a bounded
operator. But, an example in [6] shows that if R is not coercive then the minimum of
the cost may exist and be unique, in spite of the fact that the corresponding Riccati
equation is not solvable on [-; T ].
6. Space regularity of the value function. This section is devoted to the
study of some space regularity properties of the value function in the case that the
optimal control problem is driven by an abstract equation of parabolic type. See [17]
for analogous arguments. More precisely, we shall make the following assumption:
H1: A is the generator of an analytic semigroup e tA on X .
It is well known (see for instance [18]) that in this case there exists a ! 2 IR such
that the fractional powers are well defined for any ff 2 (0; 1), and moreover
there exist constants M ff , fi such that the following estimates hold true
(6.
For the sake of simplicity we assume that the semigroup is exponentially stable, i.e.
that we can choose ! = 0.
We associate the following output to system (1.1):
where y belongs to a third Hilbert space Y and C 2 L(X; Y ),
assume that the cost penalizes the output y i.e. that the quadratic functional F in
(1.3) is given by
so that a special and important case is
We now use similar arguments as in Lemma 3.3. Introduce a regularized optimal
control problem with cost given by
and observe that since the operator
n I is coercive for each n, then there
exists a unique optimal control u
n and
Vn
Arguing as in the proof of statement 4 in Lemma 3.3 we know that
(\Delta). Then we have the following
Lemma 6.1. Let us assume that there
exists a number
Then there exists a constant c such that
Proof. The estimate is easily obtained as follows (note that 0 2 ae(A) since we
assumed that the semigroup is exponentially stable):
Remark 6.2. We stress that since
the estimate (6.4) is uniform with respect to n and - .
Lemma 6.3. Under the same assumptions as Lemma 6.1, there exists a constant
k such that
Proof. We recall that, since by construction the operator R -;n relative
to Jn (- 0 ; u) is coercive for each fixed n, then the regularized control problem admits
a unique optimal pair
Theorem 5.1 yields
e A (t\Gamma- ) C y
The regularity assumptions on C and P 0 imply that Wn (-
Now, as a consequence of (6.4) there exists k such that
uniformly in n. The conclusion follows immediately by choosing -
Consequently we have the following
Theorem 6.4. Under the same assumptions as Lemma 6.1, the operator
admits a bounded extension to X for any fl !
.
--R
Representation and Control of Infinite Dimensional Systems
The Riccati equation
Singular Optimal Control: the Linear-Quadratic Problem
Linear quadratic optimal control of time-varying systems with indefinite costs on Hilbert spaces: the finite horizon problem
Spectral theory of the linear quadratic optimal control problem: discrete-time single-input case
Equivalent conditions for the solvability of the nonstandard LQ-Problem for Pritchard-Salamon systems
A singular control approach to highly damped second-order abstract equations and applications
Differential and Algebraic Riccati Equations with Application to Boundary/Point Control Problems: Continuous Theory and Approximation Theory
Optimal Control Theory for Infinite Dimensional Systems
The frequency theorem for equations of evolutionary type
The Hilbert space regulator problem and operator Riccati equation under stabilizability
Some nonlinear problems in the theory of automatic control
Nonnegativity of a quadratic functional
The standard regulator problem for systems with input delays: an approach through singular control theory
Semigroups of Linear Operators and Applications to Partial Differential Equations
Least squares stationary optimal control and the algebraic Riccati Equation
The frequency theorem in control theory
--TR | distributed systems;value function;quadratic regulator |
276242 | Two-Dimensional Periodicity in Rectangular Arrays. | String matching is rich with a variety of algorithmic tools. In contrast, multidimensional matching has had a rather sparse set of techniques. This paper presents a new algorithmic technique for two-dimensional matching: periodicity analysis. Its strength appears to lie in the fact that it is inherently two-dimensional.Periodicity in strings has been used to solve string matching problems. Multidimensional periodicity, however, is not as simple as it is in strings and was not formally studied or used in pattern matching. In this paper, we define and analyze two-dimensional periodicity in rectangular arrays. One definition of string periodicity is that a periodic string can self-overlap in a particular way. An analogous concept is true in two dimensions. The self-overlap vectors of a rectangle generate a regular pattern of locations where the rectangle may originate. Based on this regularity, we define four categories of periodic arrays--- nonperiodic, lattice periodic, line periodic, and radiant periodic---and prove theorems about the properties of the classes. We give serial and parallel algorithms that find all locations where an overlap originates. In addition, our algorithms find a witness proving that the array does not self-overlap in any other location. The serial algorithm runs in time O(m2) (linear time) when the alphabet size is finite, and in O(m2log m) otherwise. The parallel algorithm runs in time O(log m) using O(m2) CRCW processors. | Introduction
String matching is a field rich with a variety of algorithmic ideas. The early string matching
algorithms were mostly based on constructing a pattern automaton and subsequently using it
to find all pattern appearances in a given text ([KMP-77, AC-75, BM-77]). Recently developed
algorithms [G-85, V-85, V-91] use periodicity in strings to solve this classic string matching problem.
Lately, there has been interest in various two-dimensional approximate matching problems, largely
motivated by low-level image processing ([KS-87, AL-91, AF-91, ALV-90]). Unlike string matching,
the methods for solving multidimensional matching problems are scant. This paper adds a new
algorithmic tool to the rather empty tool chest of multidimensional matching techniques: two-dimensional
periodicity analysis.
String periodicity is an intuitively clear concept and the properties of a string period are simple
and well understood. Two-dimensional periodicity, though, presents some difficulties. Periodicity
in the plane is easy to define. However, we seek the period of a finite rectangle. We have chosen
to concentrate on a periodicity definition that implies the ability for self-overlap. In strings such
an overlap allows definition of a smallest period whose concatenation produces the entire string.
The main contribution of this paper is showing that for rectangles also, the overlap produces a
"smallest unit" and a regular pattern in which it appears in the array. The main differences are
that this "smallest unit" is a vector rather than a sub-block of the array, and that the pattern is
not a simple concatenation. Rather, based on the patterns of vectors that can occur, there are four
categories of array periodicity: non-periodic, line periodic, radiant periodic and lattice periodic. As
in string matching this regularity can be exploited.
The strength of periodicity analysis appears to lie in the fact that it is inherently a two-dimensional
technique whereas most previous work on two-dimensional matching has reduced the matrix problem
to a problem on strings and then applied one-dimensional string matching methods. The
two dimensional periodicity analysis has already proven useful in solving several multi-dimensional
matching problems [ABF-94a, ABF-93, ABF-94b, KR-94]. We illustrate with two examples.
The original motivation for this work was our research in image preserving compression. We wanted
to solve the following problem: Given a two-dimensional pattern P and a two-dimensional text T
which has been compressed, find all occurrences of P in T without decompressing the text. The goal
is a sublinear algorithm with respect to the size of the original uncompressed text. Some initial
success in this problem was achieved in [ALV-90], but their algorithm, being automaton based,
seems to require a large amount of decompression. In [AB-92b, ABF-94b], we used periodicity to
find the first optimal pattern matching algorithm for compressed two-dimensional texts.
Another application is the two-dimensional exact matching problem. Here the text is not com-
pressed. Baker [B-78] and, independently, Bird [Bi-77] used the Aho and Corasick [AC-75] dictionary
matching algorithm to obtain a O(n 2 log j\Sigmaj) algorithm for this problem. This algorithm is
automaton based and therefore the running time of the text scanning phase is dependent on the
size of the alphabet. In [ABF-94a] we used periodicity analysis to produce the first two dimensional
exact matching algorithm with a linear time alphabet independent text scanning phase.
Since the work presented here first appeared [AB-92a], the analysis of radiant periodic patterns
has been strengthened [GP-92, RR-93], and periodicity analysis has additionally proven useful in
providing optimal parallel two dimensional matching algorithms [ABF-93, CCG+93], as well as in
solving a three dimensional matching problem [KR-94].
This paper is organized as follows. In Section 2, we review periodicity in strings and extend this
notion to two dimensions. In Section 3, we give formal definitions, describe the classification scheme
for the four types of two-dimensional periodicity, and prove some theorems about the properties
of the classes. In Section 4 we present serial and parallel algorithms for detecting the type of
periodicity in an array. The complexity of the serial algorithm is O(m 2 ) (linear time) when the
alphabet size is finite, and O(m 2 log m) otherwise. The parallel algorithm runs in time O(log m)
with O(m 2 ) CRCW processors. In addition to knowing where an array can self overlap, knowing
where it can not and why is also useful. If an overlap is not possible, then the overlap produces
some mismatch. Our algorithms find a single mismatch location or witness for each self overlap
that fails.
2 Periodicity in strings and arrays
In a periodic string, a smallest period can be found whose concatenation generates the entire string.
In two dimensions, if an array were to extend infinitely so as to cover the plane, the one-dimensional
notion of a period could be generalized to a unit cell of a lattice. But, a rectangular array is not
infinite and may cut a unit cell in many different ways at its edges.
Instead of defining two-dimensional periodicity on the basis of some subunit of the array, we instead
use the idea of self-overlap. This idea applies also to strings. A string w is periodic if the longest
prefix p of w that is also a suffix of w is at least half the length of w. For example, if
w = abcabcabcabcab, then p = abcabcabcab and since p is over half as long as w, w is periodic. This
definition implies that w may overlap itself starting in the fourth position.
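In one dimension this self-overlap test is the classical border computation. The sketch below (standard KMP failure-function code, included only for illustration) returns the length of the longest prefix that is also a suffix and checks the half-length condition.

def longest_border(w):
    # fail[i] = length of the longest proper prefix of w[:i+1] that is also its suffix
    fail = [0] * len(w)
    k = 0
    for i in range(1, len(w)):
        while k > 0 and w[i] != w[k]:
            k = fail[k - 1]
        if w[i] == w[k]:
            k += 1
        fail[i] = k
    return fail[-1]

def is_periodic(w):
    p = longest_border(w)          # w can overlap itself with shift len(w) - p
    return 2 * p >= len(w)

print(longest_border("abcabcabcabcab"), is_periodic("abcabcabcabcab"))   # 11 True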
The preceding idea is easily generalized to two dimensions, as illustrated in figure 1.
Definition 1 Let A be a two-dimensional array. Call a prefix of A a rectangular subarray that
contains one corner of A. (In the figure, the upper left corner.) Call a suffix of A a rectangular
subarray that contains the diagonally opposite corner of A (in the figure, the lower right corner).
We say A is periodic if the largest prefix that is also a suffix has dimensions at least as large as
some fixed percentage d of the dimensions of A.
In the figure, if d - 5, then A is periodic. As with strings, if A is periodic, then A may overlap
itself if the prefix of one copy of A is aligned with the suffix of a second copy of A. Notice that
both the upper left and lower left corners of A can define prefixes, giving A two directions in which
it can be periodic. As we will describe in the next section, the classification of periodicity type for
A is based on whether it is periodic in either or both of these directions.
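The same test extends directly to arrays. The brute-force sketch below is only meant to make the definitions of the next section concrete (it is quadratic per shift, nothing like the linear-time algorithm of Section 4, and the quadrant bookkeeping is a simplified stand-in for Definition 3): it checks whether a copy of A shifted by (r, c) stays in register and returns a witness mismatch otherwise.

def overlap_witness(A, r, c):
    # None if (r, c) is a symmetry vector, i.e. A[i][j] == A[i+r][j+c] wherever
    # both entries exist; otherwise one mismatching position (a witness)
    m = len(A)
    for i in range(m):
        for j in range(m):
            i2, j2 = i + r, j + c
            if 0 <= i2 < m and 0 <= j2 < m and A[i][j] != A[i2][j2]:
                return (i, j)
    return None

def self_overlaps(A):
    # down-right shifts (c >= 0) correspond to quadrant I symmetry vectors;
    # down-left shifts (c < 0) are, up to sign, the quadrant II symmetry vectors
    m = len(A)
    vectors, witnesses = [], {}
    for r in range(m):
        for c in range(-(m - 1), m):
            w = overlap_witness(A, r, c)
            if w is None:
                vectors.append((r, c))
            else:
                witnesses[(r, c)] = w
    return vectors, witnesses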
a
Figure 1:
a) A periodic pattern. b) A suffix matches a prefix.
3 Classifying arrays
Our goal here is classifying an array A into one of four periodicity classes. For clarity of presentation
we concentrate on square arrays. We later show how to generalize all results to rectangles. We
begin with some definitions of two-dimensional periodicity and related concepts (figure 2).
Definition 2 Let A be an m \Theta m square array. Each element of A contains
a symbol from an alphabet \Sigma. A subarray of A is called a block. Blocks are designated by their
first and last row and column. Thus, the block A[0::m \Gamma is the entire array. Each corner
of A defines a quadrant. Quadrants are labeled counterclockwise from upper left, quadrants I , II ,
III and IV . Each quadrant has size q where 1 - q - d me. (Quadrants may share part of a row
or column). Quadrant I is the block 1]. The choice of q may depend on the
application. For this paper,
Definition 3 Suppose we have two copies of A, one directly on top of the other. The copies are
said to be in register because some of the elements overlap (in this case, all the elements) and
overlapping elements contain the same symbol. If the two copies can be repositioned so that A[0, 0]
of one copy overlaps A[r, c] of the other and the copies are again in register, then we say that the array is
quadrant I symmetric, that A[r, c] is a quadrant I source and that the vector r~y + c~x is a quadrant
I symmetry vector. Here, ~y is the vertical unit vector in the direction of increasing row index and
~x is the horizontal unit vector in the direction of increasing column index. If the two copies can
be repositioned so that A[m - 1, 0] of one copy overlaps A[r, c] of the other and the copies are again in
register, then we say that the array is quadrant II symmetric, that A[r, c] is a quadrant II source
and that (r - (m - 1))~y + c~x is a quadrant II symmetry vector.
Figure 2: Two overlapping copies of the same array.
a) A quadrant I source. b) A quadrant II source. c) The symmetry vectors.
Analogous definitions exist for quadrants III and IV, but by symmetry, if ~v is a quadrant III (IV)
symmetry vector, then -~v is a quadrant I (II) symmetry vector. We will usually indicate a vector
r~y + c~x by the ordered pair (r, c). Note that a symmetry vector (r, c) defines a mapping between
identical elements; that is, (r, c) is a symmetry vector iff A[i, j] = A[i + r, j + c] whenever both
elements are defined. In particular, if (r, c) is a symmetry vector, then it maps the block A[i..j, k..l]
to the identical block A[i + r..j + r, k + c..l + c].
In the remainder of this paper, we use the terms source and symmetry vector interchangeably.
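The mapping property above can be checked directly; the following sketch (our illustration, with a made-up sample array) tests whether an ordered pair (r, c) is a symmetry vector of A.

```python
def is_symmetry_vector(A, r, c):
    """True iff A[i][j] == A[i+r][j+c] whenever both elements are defined."""
    m, n = len(A), len(A[0])
    for i in range(m):
        for j in range(n):
            ii, jj = i + r, j + c
            if 0 <= ii < m and 0 <= jj < n and A[i][j] != A[ii][jj]:
                return False
    return True

A = ["abab",
     "cdcd",
     "abab",
     "cdcd"]
print(is_symmetry_vector(A, 2, 2))   # True: shifting by (2, 2) keeps the copies in register
print(is_symmetry_vector(A, 1, 0))   # False
```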
Definition 4 The length of a symmetry vector is the maximum of the absolute values of its coefficients.
The shortest quadrant I (quadrant II) vector is the smallest one in lexicographic order,
first by row and then by column (first by column and then by absolute value of row). The basis
vectors for array A are the shortest quadrant I vector (r_1, c_1) (if any) and the shortest quadrant II
vector (r_2, c_2) (if any). If the length of a symmetry vector is < p, where p = ⌈m/2⌉, then the vector is
periodic.
We are now ready to classify a square array A into one of four periodicity classes based on the
presence or absence of periodic vectors in quadrants I and II . Following the classification we prove
some theorems about the properties of the classes. In Section 4 we present algorithms for finding
all the sources in an array.
Figure 3: Non-periodic array.
The four classes of two-dimensional periodicity are (figures 3 - 6):
• Non-periodic - The array has no periodic vectors.
• Lattice periodic - The array has periodic vectors in both quadrants. All quadrant I sources
which occur in quadrant I fall on the nodes of a lattice which is defined by the basis vectors.
The same is true for quadrant II sources in quadrant II. Specifically, let ~v_1 = (r_1, c_1) and
~v_2 = (r_2, c_2) be the periodic basis vectors in quadrants I and II respectively. Then, an
element in quadrant I is a quadrant I source iff it occurs at index A[i r_1 + j r_2, i c_1 + j c_2]
for integers i, j. An element in quadrant II is a quadrant II source iff it occurs at a node of the
analogous lattice of integer combinations of ~v_1 and ~v_2 anchored at the quadrant II corner.
Figure 4: Lattice periodic array.
• Line periodic - The array has a periodic vector in only one quadrant and the sources in that
quadrant all fall on one line.
• Radiant periodic - This category is identical to the line periodic category, except that in the
quadrant with the periodic vector, the sources fall on several lines which all radiate from the
quadrant's corner. We do not describe the exact location of the sources for this class, but see
[GP-92] for a detailed analysis of the source locations.
Next, we prove some theorems about the properties of the classes. All the theorems are stated in
terms of square arrays for clarity. At the end of the theorems we explain how they can be modified
to apply to any n × m rectangular array.
Figure 5: Line periodic array.
In Lemmas 1-3, we establish the fact that if we have symmetry vectors for both quadrants I and II ,
and they meet a pair of constraints on the sum of their coefficients, then every linear combination
of the vectors defines another symmetry vector.
Lemma 1 If ~v_1 = (r_1, c_1) and ~v_2 = (r_2, c_2) are symmetry vectors from quadrants I and II respectively, and
they satisfy the constraints on the sums of their coefficients, then ~v_1 + ~v_2 = (r_1 + r_2, c_1 + c_2) is either a quadrant I symmetry vector
(when r_1 ≥ |r_2|) or a quadrant II symmetry vector (when r_1 < |r_2|).
Proof: We prove the case r_1 ≥ |r_2|. The proof for the other case is similar. We show that
S = A[r_1 + r_2, c_1 + c_2] is a quadrant I source.
Figure 6: Radiant periodic array. Three non-colinear sources are starred.
First, by the constraint on the c i , the fact that r 2 is negative and the assumption that r 1 - jr 2 j,
S is an element of A. Next, we show via two pairs of mappings that the quadrant I prefix block
is identical to the suffix block A[r 1
First
maps the resultant block to block A[r 1
Second
maps the resultant block to A[m
are two quadrant k symmetry vectors
and c 1
is also a quadrant k symmetry vector.
Proof: We prove for quadrant I . The proof for the other quadrant is similar. We show that
) is a quadrant I source.
First, by the restraints on the r i and the c i , S is an element of A. Next, by a pair of mappings, we
show that the quadrant I prefix block is identical to the
suffix block A[r 1
Recall that both r 1
and r 2
are postive.
First mapping: (r 1
maps the block
\Gamma 1] to the block
maps the resultant block to the block
Lemma 3 If ~v 1
are symmetry vectors from quadrants I and II respec-
tively, and c 1
is an element of A, (ir 1
) is a quadrant I symmetry vector. Similarly, for all - such
that
is an element of A, (-r 1
) is a quadrant II symmetry
vector.
Proof: We prove for vector (ir 1
equivalent to source S
The proof for the other vector is similar. Consider the lattice of elements in A defined by the
quadrant I and II vectors and with one element at A[0; 0]. (The lattice elements correspond
exactly to the elements S i;j .) Consider the line l that extends from element A[0; 0] through elements
We prove the lemma only for those lattice elements on or to the right of l. The
remaining elements are treated similarly.
Case 1: S i;0 on line l. By induction on i. For S 1;0
) is a symmetry vector by
hypothesis. Now, assume that (ir 1
) is a symmetry vector. For S
are both quadrant I symmetry vectors, by Lemma
) is a quadrant I
symmetry vector.
Case 2: S i;j j - 1 to the right of line l. Elements S i;j fall on lines l j which are parallel to line l. We
show that the uppermost element S i;j is a source. By application of Lemma 2, as in Case
the remaining sources on l j are established.
Consider a cell of the lattice with sides (r 1
corners
with S the
uppermost lattice element on line l j (figure 6):
(j
(j
(j
(j
The following are always true:
Figure 7: A candidate source S in Lemma 3.
is not an element of A. Otherwise e 2
- not S - is the top element on its line.
is an element of A. Otherwise S is not in A, S is not to the right of line l or r 1
Two possibilities remain. Either e 1
is an element of A or it is not. Our proof is by induction on i
and j. For the base cases we use ~v 1
which is either
a quadrant I vector (r 1 - jr 2 j) or a quadrant II vector (r 1
Subcase A: r 1
j.
is not an element of A. By the induction hypothesis, ~v e 4
is a symmetry
vector. Since e 1
is not on A, r e 4
. That is, the row coefficient in ~v e 4
is smaller than the
row coefficient in ~v 1:
Apply Lemma 1 to ~v e 4
and ~v 2
and S is a source.
is an element of A. By the induction hypothesis, ~v e 1
is a quadrant I symmetry
vector. From the base case, ~v 3
is a quadrant I symmetry vector. Apply Lemma 2
to ~v e 1
and ~v 3
and S is a source.
Subcase B: r 1
is not an element of A. Impossible, else S is not in A or S is not right of l.
is an element of A. Note that S is above row r 1
or else e 2
is on the array. The vector
is a quadrant II symmetry vector (because r e 2
is negative) by application
of Subcase A to quadrant II . Now, r e 2
=(the row index of S)- 0 so r 1
or jr e 2
By hypothesis, r 1
Apply Lemma 2 to ~v 1
and ~v e 2
and S is a
source. 2
The proof of Theorem 1 is simplified by the following easily proven observation.
Observation 1 Let ~v_1 = (r_1, c_1) and ~v_2 = (r_2, c_2) be symmetry vectors from quadrants I and II respectively,
satisfying the constraints on the sums of their coefficients, and
let L be an infinite lattice of points on the xy-plane also
with basis vectors (r_1, c_1) and (r_2, c_2). If we put one copy of A on each lattice point by aligning
element A[0; 0] with the lattice point, then the copies are in register and completely cover the plane.
The next Lemma establishes that for a given lattice of elements in A, an element not on the lattice
has a shorter vector to some lattice point than the corresponding basis vector for the lattice. (Note
that a simplified version of the proof appeared in [GP-92] and we use essentially that same proof
here.)
Figure 8: One of the vectors from e_1 to S or from S to e_2 is a quadrant I vector shorter than ~v_1.
Lemma 4 Let L be an infinite lattice in the xy-plane with basis vectors ~v_1 = (r_1, c_1) and ~v_2 = (r_2, c_2)
(quadrants I and II symmetry vectors respectively) where all the r_i and c_i are integers. Then, for
any point S = (x, y) that is not a lattice element, where x and y are integers, there exists a lattice
point e such that the vector ~v from e to S (or S to e) is a quadrant I vector shorter than ~v_1 or a
quadrant II vector shorter than ~v_2.
Proof: Let S be an element that does not fall on a lattice point. Consider the unit cell of the lattice
containing S (figure 8) with nodes labeled e 1
Connect S to the four corners of the unit cell to get four triangles. At least one of these triangles
has a right or obtuse angle. Without loss of generality, let the triangle be on points e_1, e_2,
and S. Then both the vector
from e 1
to S and the vector from e 2
to S is shorter than the vector from e 1
to e 2
. Since at least
one of the two is a quadrant I vector, we have a quadrant I vector shorter than ~v 1: 2
Our first main result is the following Theorem. It establishes that if an array has basis vectors in
both quadrants, then in a certain block of the array, which depends on the coefficients of the basis
vectors, all symmetry vectors are linear combinations of the basis vectors. We state the Theorem
in terms of quadrant I for simplicity. Since the array can be rotated so that any quadrant becomes
quadrant I , it applies to all quadrants.
Theorem 1 Let A be an array with basis vectors (r
in quadrants I and II
respectively with c 1
m. Let L be an infinite lattice with the same basis
vectors and containing the element A[0; 0]. Then, in the block
an element is a quadrant I source iff it is a lattice element.
Proof: By Lemma 3, if is a lattice element, then it is a source. Suppose that S is not a
lattice element, but that it is a quadrant I source. We will show that S can not occur within block
By way of contradiction, assume S does occur in prefix block
There is a quadrant I vector ~v associated with S that is not a linear combination of ~v 1
and ~v 2
By Observation 1, copies of A can be aligned with the points of lattice L and the copies will be
in register and cover the plane. Let A i.e. the suffix block originating at
element Because S is a source, ~v maps . For each copy of
A, remove all but A 0 . The copies of A 0 are in register. Since A 0 has dimensions at least r 1
, it is at least as large as a unit cell of the lattice and therefore, the copies of A 0 also cover
the plane. Now every element of the plane is mapped by ~v from an identical element, and there is
a complete copy of A at S. S falls within some cell of lattice L. By Lemma 4, there is a quadrant
I or quadrant II vector ~v 3
from S to some corner e of the cell (or from e to S) which is shorter
than the corresponding basis vector of L. Since there are complete copies of A at S and e, ~v 3
is a
symmetry vector and therefore ~v 1
and ~v 2
are not both basis vectors of A as assumed. 2
Since our quadrants are of size ⌈m/2⌉ × ⌈m/2⌉, they are no greater in size than the smallest block that
can contain only lattice point sources. The region that contains only lattice point sources can be
larger than the block described in Theorem 1, see [GP-92].
Next, we prove the following important trait about radiant periodic arrays that facilitates their
handling in matching applications [AB-92b, ABF-94b, KR-94]. Origins (A[0; 0]) of complete copies
of a radiant periodic array A that overlap without mismatch can be ordered monotonically.
Definition 5 A set of elements of an array B can be ordered monotonically if the elements can
be ordered so that they have column index nondecreasing and row index nondecreasing (ordered
monotonically in quadrant I) or row index nonincreasing (ordered monotonically in quadrant II).
Our theorem is stated in terms of quadrant I , but generalizes to quadrant II .
Theorem 2 Let A be a radiant periodic array with periodic vector in quadrant I. Let S_1, ..., S_k
be quadrant I sources occurring within quadrant I. On each source, place one copy of A by aligning
A[0, 0] with the source. If every pair of copies is in register, then the sources can be ordered
monotonically in quadrant I.
Proof: Suppose two sources A[r_1, c_1] and A[r_2, c_2] cannot be ordered monotonically. That is, c_1 < c_2 but
r_1 > r_2. If there is no mismatch in the copies of A at these sources, then
(r_2 - r_1, c_2 - c_1) is a periodic, quadrant II symmetry vector and,
by definition, A is lattice periodic, a contradiction. □
As stated earlier, our classification scheme applies to any rectangular array. The major modification
is a new definition of length.
Definition 6 The length of a symmetry vector of a rectangular array is the maximum of the
absolute values of its coefficients scaled to the dimensions of the array. Let A be n rows by m
columns with m ≥ n. Let ~v = (r, c) be a symmetry vector in A. Then the length of ~v scaled to the
dimensions of the array is max(|r| · m/n, |c|).
4 Periodicity and Witness Algorithms
In this section, we present two algorithms, one serial and one parallel for finding all sources in an
array A. In addition, for each location in A which is not a source, our algorithms find a witness
that proves that the overlapping copies of A are not in register.
We want to fill out an array Witness[0..m-1, 0..m-1]. For each location A[i, j] that is a
quadrant I source, Witness[i, j] records that fact; for each location A[i, j] that is not a source, Witness[i, j] records the location (r, c) of
some mismatch, specifically a pair with A[r, c] ≠ A[r - i, c - j] (figure 9). For each location A[i, j] that is a
quadrant II source, or non-source, the analogous information is recorded for the quadrant II alignment.
Figure 9: The witness table gives the location of a mismatch (if one exists) for two overlapping patterns.
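For intuition only, a brute-force sketch of what the quadrant I part of the witness table records is given below; it is O(m^4) and is not the algorithm of this section, and the convention (None for a source) is our own.

```python
def naive_witness(A):
    """Brute-force illustration of the quadrant I witness table:
    for each shift (i, j), store None if the overlapping copies are in register,
    otherwise store one position (r, c) with A[r][c] != A[r - i][c - j].
    Algorithm A of Section 4.1 achieves the same in O(m^2 log |Sigma|) time."""
    m = len(A)
    witness = [[None] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            for r in range(i, m):
                for c in range(j, m):
                    if A[r][c] != A[r - i][c - j]:
                        witness[i][j] = (r, c)
                        break
                if witness[i][j] is not None:
                    break
    return witness

A = ["abab",
     "cdcd",
     "abab",
     "cdcd"]
W = naive_witness(A)
print(W[2][2])   # None: (2, 2) is a quadrant I source
print(W[1][0])   # (1, 0): A[1][0] = 'c' differs from A[0][0] = 'a'
```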
4.1 The Serial Algorithm
Our serial algorithm (Algorithm A) makes use of two algorithms (Algorithms 1 and 2) from [ML-84]
which are themselves variations of the KMP algorithm [KMP-77] for string matching. Algorithm 1
takes as input a pattern string w of length m and builds a table lppattern[0::m \Gamma 1] where lppattern[i]
is the length of the longest prefix of w starting at w i . Algorithm 2 takes as input a text string t of
length n and the table produced by Algorithm 1 and produces a table lptext[0::n \Gamma 1] where lptext[i]
is the length of the longest prefix of w starting at t i .
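The table lppattern is, in effect, the Z-function of the pattern. The sketch below is a generic linear-time version over ordinary characters (our own illustration); in Algorithm A the "characters" are row prefixes and suffixes compared through the suffix tree.

```python
def longest_prefix_table(w):
    """z[i] = length of the longest prefix of w that starts at position i
    (the role played by lppattern in the text), computed with the Z-algorithm."""
    n = len(w)
    z = [0] * n
    if n:
        z[0] = n
    l = r = 0
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and w[z[i]] == w[i + z[i]]:
            z[i] += 1
        if i + z[i] > r:
            l, r = i, i + z[i]
    return z

print(longest_prefix_table("abcabcab"))   # [8, 0, 0, 5, 0, 0, 2, 0]
```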
The idea behind Algorithm A is the following: We convert the two-dimensional problem into a
problem on strings (figure 10). Let the array A be processed column by column and suppose we
are processing column j. Assume we can convert the suffix block A[0..m-1, j..m-1] into a string
T_j = t_0 t_1 ... t_{m-1}, where t_i represents the suffix of row i starting in column j. This will serve as the
text string. Assume also that we can convert the prefix block A[0..m-1, 0..m-j-1] into a string
W_j = w_0 w_1 ... w_{m-1}, where w_i represents the prefix of row i of length m - j. This will serve as the
pattern string. Now, use Algorithm 1 to produce the table lppattern for W_j and Algorithm 2 to
produce the table lptext for T_j. If a copy of the pattern starting at t_i matches in every row to t_{m-1},
then A[i, j] is a source. If the pattern doesn't match and the first pattern row
to mismatch is row k = lptext[i], then A[i, j] is not a source. The mismatch occurs
between the prefix of pattern row k and the suffix of text row i + k; we need merely locate the
mismatch to obtain the witness.
In order to treat the suffix and prefix of a row as a single character, we will build a suffix tree for
the array. A suffix tree is a compacted trie of the suffixes of a string S. Each node
v has associated with it the indices [a, b] of some substring S(v) = S[a..b] of S. If u is the Least
Common Ancestor (LCA) of two nodes v and w, then S(u) is the longest common prefix of S(v)
and S(w) [LV-85]. A tree can be preprocessed in linear time to answer LCA queries in constant
Figure 10: Representing a block of the array by a string. T_j is the text and W_j is the pattern.
time [HT-84]. Thus, we can answer questions about the length of S(u) in constant time.
Algorithm A Serial algorithm for building a witness array and deciding periodicity class.
Step A.1: Build a suffix tree by concatenating the rows of the array. Preprocess the suffix tree
for least common ancestor queries in order to answer questions about the length of the common
prefix of any two suffixes.
Step A.2: For each column j, fill out Witness[0..m-1, j]:
Step A.2.1: Use Algorithm 1 to construct the table lppattern for W_j = w_0 w_1 ... w_{m-1}, where
w_i is the prefix of row i of length m - j. We can answer questions about the equality
of two characters by consulting the suffix tree. If the common prefix of the two characters
has length at least m - j then the characters are equal.
Step A.2.2: Use Algorithm 2 to construct the table lptext for T_j = t_0 t_1 ... t_{m-1}, where t_i
is the suffix of row i starting in column j (also of length m - j). Again we test for equality
by reference to the suffix tree.
Step A.2.3: For each row i, if lptext[i] ≥ m - i then we have found a quadrant I source;
otherwise, using the suffix tree, compare the suffix of text row i + lptext[i]
starting in column j with the prefix of pattern row lptext[i]. The length l of the
common prefix will be less than m - j, and position l of these two rows provides the witness.
Step A.3: Repeat step 2 for quadrant II by building the automata and
processing the columns from the bottom up.
Step A.4: Select quadrant I and quadrant II basis vectors from Witness if they exist.
Step A.5: Use the basis vectors to decide to which of four periodicity classes the pattern belongs.
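As a sketch of step A.4 (our illustration, assuming the convention that Witness[i][j] is None exactly when (i, j) is a quadrant I source), the shortest periodic quadrant I vector can be read off the completed table as follows.

```python
from math import ceil

def quadrant1_basis_vector(witness, m):
    """Return the shortest periodic quadrant I vector (smallest lexicographically,
    first by row and then by column), or None if no periodic vector exists."""
    p = ceil(m / 2)
    for r in range(m):
        for c in range(m):
            if (r, c) == (0, 0) or witness[r][c] is not None:
                continue
            if max(abs(r), abs(c)) < p:      # periodic: length below ceil(m/2)
                return (r, c)
    return None

# For the 4x4 sample A used above this returns None, since its shortest symmetry
# vectors have length 2 = ceil(4/2) and are therefore not periodic.
```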
Theorem 3 Algorithm A is correct and runs in time O(m^2 log |Σ|).
Proof: The correctness of Algorithm A follows from the correctness of Algorithms 1 and 2 [ML-84].
The suffix tree construction [W-73] takes time O(m^2 log |Σ|) while the preprocessing for least common
ancestor queries [HT-84] can be done in time linear in the size of the tree. Queries to the suffix
tree are processed in constant time. The tables lppattern and lptext can be constructed in time
O(m) [ML-84]. For each of m columns, we construct two tables so the total time for steps 2 and 3 is
O(m 2 ). Step 4 can be done in one scan through the witness array and step 5 requires comparing all
vectors to the basis vectors in order to distinguish between the radiant and line periodic classes, so
the time for steps 4 and 5 is O(m^2). The total complexity of the pattern preprocessing is therefore O(m^2 log |Σ|).
Recently, [GP-92] gave a linear time serial algorithm for the witness computation.
4.2 The Parallel Algorithm
Our parallel algorithm (Algorithm B) makes use of the parallel string matching algorithm (Algo-
rithm 3) from [V-85]. Algorithm 3 takes as input a pattern string w of length m and a text string t
of length n and produces a boolean table true if a complete
copy of the pattern starts at t i . Algorithm 3 first preprocesses the pattern and then processes the
text.
First, for a text of length m, we show how to modify Algorithm 3 to compute match[0::m \Gamma 1],
is a prefix of the pattern. For simplicity, we assume m is a
power of 2. Let
Figure
is a prefix of t i
is a suffix of w 0
For example,
is the prefix of w of length mand S 1
is a suffix of t of the same length. The
following observation embodies the key idea (figure 11):
is a prefix of w of length between m and m, then P 1
is a prefix of
is a suffix of w Similarly, if is a prefix of w of length
between mand m, then the prefix and suffix are P 2
, etc.
Now, for each k - 1, we attempt to match P k in S k\Gamma1 and S k in P k\Gamma1 . If a matched prefix begins
at t i and a matched suffix ends at is a prefix of w.
Using Algorithm 3, we first preprocess the P k and S k as patterns and then use these to process
the appropriate segments as text. We can additionally modify Algorithm 3 so that at every index
where a prefix or suffix does not match, we obtain the location of a mismatch. Since the sum of
the lengths of the P i and S i are no more than a linear multiple of the length of w, the modification
does not increase the complexity of the algorithm and therefore the time complexity of the modified
Algorithm 3 is O(log m) using O( m
log m ) CRCW processors, the same as the unmodified algorithm
[V-85]. In our parallel algorithm, only steps 2 differs from the serial algorithm.
Algorithm B Parallel algorithm for finding sources and building a witness array
Step B.2: For each column j, fill out
Step B.2.1: For each
Step B.2.1.1: Use W j to form P k and P k\Gamma1 and T j to form S k and S k\Gamma1 . Use modified
Algorithm 3 to match P k in S k\Gamma1 and S k in P k\Gamma1 . As in the serial algorithm, use the
suffix tree to answer questions about equality.
Step B.2.1.2: For each row i for
beginning
at t i and S k matches ending at using
the row r of mismatch from modified Algorithm 3, refer to the suffix tree to find the
column c of mismatch and set
Theorem 4 Algorithm B is correct and runs in time O(log m) using O(m^2) CRCW processors.
Proof: The suffix tree construction [AILSV-87] and preprocessing for LCA queries [SV-88] is done
in time O(log m) using O(m^2) CRCW processors. Step 2 is done in time O(log m) using O(m^2 / log m)
CRCW processors [V-85]. Finding the basis vectors is done by prefix minimum [LF-80] in time
O(log m) using O(m^2 / log m) processors. Distinguishing the line and radiant periodic cases can be done
in constant time using O(m^2) processors. The total complexity is therefore O(log m) time using
O(m^2) processors. □
--R
"Two-Dimensional Periodicity and its Application"
"Efficient Two-Dimensional Compressed Matching"
"Optimal Parallel Two Dimensional Text Searching on a CREW PRAM,"
"An Alphabet Independent Approach to Two-Dimensional Matching"
"Optimal Two Dimensional Compressed Match- ing,"
"Efficient String Matching"
"Efficient 2-dimensional Approximate Matching of Non-rectangular Figures"
"Fast Parallel and Serial Multidimensional Approximate Array Matching"
"Efficient Pattern Matching with Scaling"
"Parallel Construction of a Suffix Tree with Applications"
"A Technique For Extending Rapid Exact-Match String Matching to Arrays of More Than One Dimension"
"Two Dimensional Pattern Matching"
"A Fast String Searching Algorithm"
"Optimally fast parallel algorithms for preprocessing and pattern matching inone and two dimensions,"
"Note on two dimensional string matching by optimal parallel algorithms,"
"Optimal Parallel Algorithms for String Matching"
"Truly Alphabet Independent Two-Dimensional Pattern Match- ing,"
"Fast Algorithms for Finding Nearest Common Ancestors"
"Alphabet Independent Optimal Parallel Search for 3-Dimensional Patterns,"
"Fast Pattern Matching in Strings"
"Efficient Two Dimensional Pattern Matching in the Presence of
"Parallel Prefix Computation"
"Efficient string matching in the presence of errors"
"An O(n log n) Algorithm for Finding all Repetitions in a String"
"A Unifying Look at d-Dimensional Periodicities and Space Coverings,"
"On Finding Lowest Common Ancestors: Simplification and Parallelization"
"Optimal Parallel Pattern Matching in Strings"
"Deterministic new technique for fast pattern matching"
"Linear Pattern Matching Algorithms"
--TR
--CTR
Richard Cole , Zvi Galil , Ramesh Hariharan , S. Muthukrishnan , Kunsoo Park, Parallel two dimensional witness computation, Information and Computation, v.188 n.1, p.20-67, 10 January 2004
Amihood Amir , Gad M. Landau , Dina Sokol, Inplace 2D matching in compressed images, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Amihood Amir , Gad M. Landau , Dina Sokol, Inplace 2D matching in compressed images, Journal of Algorithms, v.49 n.2, p.240-261, November
Chiara Epifanio , Filippo Mignosi, A multidimensional critical factorization theorem, Theoretical Computer Science, v.346 n.2, p.265-280, 28 November 2005
Filippo Mignosi , Antonio Restivo , Pedro V. Silva, On Fine and Wilf's theorem for bidimensional words, Theoretical Computer Science, v.292 n.1, p.245-262, January
Amihood Amir , Gad M. Landau , Dina Sokol, Inplace run-length 2d compressed search, Theoretical Computer Science, v.290 n.3, p.1361-1383, 3 January
Chiara Epifanio , Michel Koskas , Filippo Mignosi, On a conjecture on bidimensional words, Theoretical Computer Science, v.299 n.1-3, p.123-150, | string matching;witness;parallel algorithm;periodicity;sequential algorithm;two-dimensional |
276469 | Convergence Analysis of Pseudo-Transient Continuation. | Pseudo-transient continuation ($\Psi$tc) is a well-known and physically motivated technique for computation of steady state solutions of time-dependent partial differential equations. Standard globalization strategies such as line search or trust region methods often stagnate at local minima. \ptc succeeds in many of these cases by taking advantage of the underlying PDE structure of the problem. Though widely employed, the convergence of \ptc is rarely discussed. In this paper we prove convergence for a generic form of \ptc and illustrate it with two practical strategies. | Introduction
. Pseudo-transient continuation (\Psitc) is a method for computation of steady-state
solutions of partial differential equations. We shall interpret the method in the context of
a method-of-lines solution, in which the equation is discretized in space and the resulting finite
dimensional system of ordinary differential equations is written as x
@x
@t
and the the discretized spatial derivatives are contained in the nonlinear term F (x). Marching out
the steady state by following the physical transient may be unnecessarily time consuming if the
intermediate states are not of interest. On the other hand, Newton's method for F
usually not suffice, as initial iterates sufficiently near the root are usually not available. Standard
globalization strategies [12, 17, 25], such as line search or trust region methods often stagnate at
local minima of ||F|| [20]. This is particularly the case when the solution has complex features
such as shocks that are not present in the initial iterate (see [24], for example). \Psitc succeeds in
many of these cases by taking advantage of the PDE structure of the problem.
1.1. The Basic Algorithm. In the simple form considered in this paper, \Psitc numerically integrates
the initial value problem
(1.1)   dx/dt = -V^{-1} F(x),   x(0) = x_0,
to steady state using a variable timestep scheme that attempts to increase the timestep as F(x)
approaches 0. V is a nonsingular matrix used to improve the scaling of the problem. It is typically
diagonal and approximately equilibrates the local CFL number (based on local cell diameter and
local wave speed) throughout the domain. In a multicomponent system of PDEs, not already
Version of April 20, 1997.
y North Carolina State University, Department of Mathematics and Center for Research in Scientific Computation,
Box 8205, Raleigh, N. C. 27695-8205 (Tim Kelley@ncsu.edu). The research of this author was supported by
National Science Foundation grant #DMS-9321938.
z Computer Science Department, Old Dominion University, Norfolk, VA 23519-0162 (keyes@cs.odu.edu)
and ICASE, MS 132C, NASA Langley Research Center, Hampton, Virginia 23681-0001. The research of this author
was supported by National Science Foundation grant #ECS-9527169, and NASA grants NAGI-1692 and NAS1-19480,
the latter while the author was in residence at ICASE.
properly nondimensionalized, V might be a block diagonal matrix, with blocksize equal to the
number of components.
We can define the Ψtc sequence {x_n} by
(1.2)   x_{n+1} = x_n - (δ_n^{-1} V + F'(x_n))^{-1} F(x_n),
where F' is the Jacobian (or the Fréchet derivative in the infinite-dimensional situation). Algorithmically,
ALGORITHM 1.1.
1. Set x = x_0 and δ = δ_0.
2. While ||F(x)|| is too large:
(a) Solve (δ^{-1} V + F'(x)) s = -F(x).
(b) Set x = x + s.
(c) Evaluate F(x).
(d) Update δ.
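A minimal dense-algebra sketch of Algorithm 1.1 with the SER update of δ described in § 1.2, assuming user-supplied callables for F and its Jacobian; the parameter values, the cap δ_max, and the toy problem are our own choices, not those used in § 4.

```python
import numpy as np

def ptc(F, J, x0, V, delta0=1e-4, delta_max=1e6, tol=1e-10, max_steps=200):
    """Pseudo-transient continuation (Algorithm 1.1) with an SER timestep update."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    fx = F(x)
    for _ in range(max_steps):
        if np.linalg.norm(fx) <= tol:
            break
        # Step 2a: solve (delta^{-1} V + F'(x)) s = -F(x)
        s = np.linalg.solve(V / delta + J(x), -fx)
        x = x + s                        # Step 2b
        fx_new = F(x)                    # Step 2c
        # Step 2d: SER update (1.5), capped at delta_max
        delta = min(delta * np.linalg.norm(fx) / np.linalg.norm(fx_new), delta_max)
        fx = fx_new
    return x

# Toy usage: componentwise F(x) = x^3 - b
b = np.array([8.0, 27.0])
F = lambda x: x**3 - b
J = lambda x: np.diag(3 * x**2)
print(ptc(F, J, np.array([1.0, 1.0]), V=np.eye(2), delta0=1.0))   # converges to [2, 3]
```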
The linear equation
(1.3)   (δ^{-1} V + F'(x)) s = -F(x)
for the Newton step that is solved in step 2a is the discretization of a PDE and is usually very large.
As such, it is typically solved by an iterative method which terminates on small linear residuals.
This results in an inexact method [11], and a convergence theory for \Psitc must account for this; we
provide such a theory in x 3. We have not yet explained how ffi is updated, nor explored the reuse
of Jacobian information from previous iterations. These options must be considered to explain the
performance of \Psitc as practiced, and we take them up in x 3, too. In order to most clearly explain
the ideas, however, we begin our analysis in x 2 with the most simple variant of \Psitc possible,
solving (1.3) exactly, and then extend those results to the more general situation.
As a method for integration in time, Ψtc is a Rosenbrock method ([14], p. 223) if δ is fixed.
One may also think of this as a predictor-corrector method, where the simple predictor (result from
the previous timestep) and a Newton corrector are used. To see this consider the implicit Euler step
from x_n with timestep δ_n,
(1.4)   z_{n+1} = x_n - δ_n V^{-1} F(z_{n+1}).
In this formulation z_{n+1} would be the root of
G(ξ) = ξ - x_n + δ_n V^{-1} F(ξ).
Finding a root of G with Newton's method would lead to the iteration
ξ_{k+1} = ξ_k - (I + δ_n V^{-1} F'(ξ_k))^{-1} (ξ_k - x_n + δ_n V^{-1} F(ξ_k)).
If we take ξ_0 = x_n, the first Newton iterate is ξ_1 = x_n - (δ_n^{-1} V + F'(x_n))^{-1} F(x_n), which is x_{n+1} from (1.2).
This leaves open the possibility of taking more corrector iterations, which would lead to a different
form of \Psitc than that given by (1.2). This may improve stability for some problems [16].
PSEUDO-TRANSIENT CONTINUATION 3
1.2. Time Step Control. We assume that ffi n is computed by some formula like the "switched
evolution relaxation" (SER) method, so named in [21], and used in, e.g. [19], [24], and [33]. In
its simplest, unprotected form, SER increases the timestep in inverse proportion to the residual
reduction.
(1.5)   δ_n = δ_{n-1} ||F(x_{n-1})|| / ||F(x_n)||.
Relation (1.5) implies that, for n ≥ 1, δ_n = δ_0 ||F(x_0)|| / ||F(x_n)||.
In some work [16], δ_n is kept below a large, finite bound δ_max. Sometimes δ_n is set to ∞
(called "switchover to steady-state form" in [13]) when the computed value of δ_n becomes sufficiently large.
In view of these practices, we will allow for somewhat more generality in the formulation of the
sequence {δ_n}. We will assume that δ_0 is given and that
(1.6)   δ_n = φ(||F(x_n)||)
for n ≥ 1. The choice in [24] and [33] (equation (1.5)) is
(1.7)   φ(ξ) = δ_0 ||F(x_0)|| / ξ.
Other choices could either limit the growth of δ or allow δ to become infinite after finitely many
steps. Our formal assumption on φ accounts for all of these possibilities.
ASSUMPTION 1.1.
1. φ is nonincreasing and lim_{ξ→0+} φ(ξ) = δ̄ exists in (0, ∞].
2. Either δ̄ < ∞ and the timesteps are held bounded, or δ̄ = ∞ and switchover to steady-state form
is permitted after a finite number of timesteps.
In [16] the timesteps are based not on the norms of the nonlinear residuals ||F(x_n)|| but on
the norms of the steps s_n = x_n - x_{n-1}. This policy has risks in that small steps need not imply small
residuals or accurate results. However, if the Jacobians are uniformly well conditioned, then small
steps do imply that the iteration is near a root. Here formulae of the type
(1.8)   δ_n = φ(||x_n - x_{n-1}||)
are used, where φ satisfies Assumption 1.1.
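For reference, the two update policies can be written as small helper functions; the cap at δ_max and the names are our additions.

```python
def ser_update(delta_prev, f_prev_norm, f_norm, delta_max=float("inf")):
    """Residual-based SER update, relation (1.5), optionally capped at delta_max."""
    return min(delta_prev * f_prev_norm / f_norm, delta_max)

def phi_ser(xi, delta0, f0_norm):
    """The choice (1.7): phi(xi) = delta0 * ||F(x_0)|| / xi.  With xi = ||F(x_n)|| this
    reproduces the SER history (1.6); with xi = ||x_n - x_{n-1}|| it gives a
    step-based rule of the form (1.8)."""
    return delta0 * f0_norm / xi
```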
1.3. Iteration Phases. We divide the \Psitc iteration into three conceptually different and separately
addressed phases.
1. The initial phase. Here ffi is small and x is far from a steady state solution. This phase
is analyzed in x 2.3. Success in this phase is governed by stability and accuracy of the
temporal integration scheme and proper choice of initial data.
2. The midrange. This begins with an accurate solution x and a timestep ffi that may be small
and produces an accurate x and a large ffi. We analyze this in x 2.2. To allow ffi to grow without
loss of accuracy in x we make a linear stability assumption (part 3 of Assumption 2.1).
3. The terminal phase. Here ffi is large and x is near a steady state solution. This is a small
part of the overall process, usually requiring only a few iterations. Aside from the attention
that must be paid to the updating rules for ffi, the iteration is a variation of the chord method
[25, 17].
We analyze the terminal phase first, as it is the easiest, in x 2.1. Unlike the other two phases,
the analysis of the terminal phase does not depend on the dynamics of x (x). The
initial and midrange phases are considered in x 2.3, with the midrange phase considered first to
motivate the goals of the initial phase. This decomposition is similar to that proposed for GMRES
and related iterations in [22] and is supported by the residual norm plots reported in [24, 10].
2. Exact Newton Iteration. In this section we analyze the three phases of the solver in reverse
order. This ordering makes it clear how the output of an earlier phase must conform to the demands
of the later phase.
2.1. Local Convergence: Terminal Phase. The terminal phase of the iteration can be analyzed
without use of the differential equation at all.
LEMMA 2.1. Let {δ_n} be given by either (1.6) or (1.8) and let Assumption 1.1 hold. Let
F(x*) = 0, F'(x*) be nonsingular, and F' be Lipschitz continuous with Lipschitz constant γ in a
ball of radius ε about x*.
Then there are δ* > 0 and ε_1 > 0 such that if δ_0 ≥ δ* and ||x_0 - x*|| ≤ ε_1, then the
sequence defined by (1.2) and (1.6) satisfies ||x_{n+1} - x*|| ≤ ||x_n - x*|| / 2
and x_n → x*; the convergence is q-superlinear if δ_n → ∞.
Proof. Let denote the error. As is standard, [12], [17], we analyze convergence in
terms of the transition from a current iterate x c to a new iterate x+ . We must also keep track of the
change in ffi and will let ffi c and ffi + be the current and new pseudo-timesteps.
The standard analysis of the chord method ([17], p. 76) implies that there are ffl 1 - ffl and K C
such that if
The constant K C depends only ffl 1 , F , and x and does not increase if ffl 1 is reduced.
Now let \Delta \Gamma1and ffl 1 be small enough to satisfy
and, in particular, F increase
if needed so that
PSEUDO-TRANSIENT CONTINUATION 5
where - denotes condition number. Equations (2.1) and (2.2) imply that
If fffi n g is computed with (1.6) we use the following inequality from [17] (p. 72)
and (2.3) to obtain
2:
We then have by Assumption 1.1 that
t is from Assumption 1.1.
If fffi n g is computed with (1.8), we note that
and hence
as before.
In either case, Therefore we may continue the iteration
and conclude that at least q-linearly with q-factor of 1=2.
If we complete the proof by observing that since
superlinear convergence follows from (2.1).
The following simple corollary of (2.1) applies to the choice φ(ξ) = δ_0 ||F(x_0)|| ξ^{-1}.
COROLLARY 2.2. Let the assumptions of Lemma 2.1 hold. Assume that φ(ξ) = δ_0 ||F(x_0)|| ξ^{-1}.
Then the convergence of {x_n} to x* is q-quadratic.
2.2. Increasing the Time Step: Midrange Phase. Lemma 2.1 states that the second phase
of \Psitc should produce both a large ffi and an accurate solution. We show how this happens if the
initial phase can provide only an accurate solution. This clarifies the role of the second phase
in increasing the timestep. We do this by showing that if the steady state problem has a stable
solution, then \Psitc behaves well. We now make assumptions that not only involve the nonlinear
function but also the initial data and the dynamics of the IVP (1.1).
ASSUMPTION 2.1.
1. F is everywhere defined and Lipschitz continuously Fr- echet differentiable, and there is
2. There is a root x of F at which F 0
then the solution of the initial value problem
converges to x as t !1.
3. There are ffl 2
for all
The analysis of the midrange uses part 3 of Assumption 2.1 in an important way to guarantee
stability. The method for updating ffi is not important for this result.
THEOREM 2.3. Let {δ_n} be given by either (1.6) or (1.8) and let Assumption 1.1 hold. Let
Assumption 2.1 hold. Let δ_max be large enough for the conclusions of Lemma 2.1 to hold. Then
there is an ε_3 > 0 such that if ||x_0 - x*|| ≤ ε_3, then either inf_n δ_n = 0
or x_n → x*.
Proof. Let 1 is from Lemma 2.1 and ffl 2 is from part 3 of Assumption
2.1. Note that
Now there is a c ? 0 such that
for all x such that kek - ffl 1 . Hence, reducing ffl 3 further if needed so that ffl 3 ! fi=(2c), we have
If
for all n - 1 and hence x n converges to x q-linearly with q-factor (1
This convergence implies that x
(1.8) is used.
This result says that once the iteration has found a sufficiently good solution, either convergence
will happen or the iteration will stagnate with inf_n δ_n = 0. The latter failure mode is, of
course, easy to detect. Moreover, the radius ε_3 of the ball about the root in Theorem 2.3 does not
depend on inf_n δ_n.
2.3. Integration to Steady State: Initial Phase. Theorem 2.3 requires an accurate estimate
of x , but asks nothing of the timestep. In this section we show that if ffi 0 is sufficiently small, and
(1.6) is used to update the timestep, then the dynamics of (1.1) will be tracked sufficiently well for
such an approximate solution to be found. It is not clear how (1.8) allows for this.
THEOREM 2.4. Let {δ_n} be given by (1.6) and let Assumption 1.1 hold. Let Assumption 2.1
hold. There is a δ̄ > 0 such that if δ_0 ≤ δ̄, then there is an n such that ||x_n - x*|| ≤ ε_3.
Proof. Let S be the trajectory of the solution to (2.5). By Assumption 2.1 x satisfies the
assumptions [12, 17, 25], for local quadratic convergence of Newton's method and therefore there
are ffl 4 and ffl f such that if
then suffice for the conclusions of Theorem 2.3 to hold. Let
We will show that if ffi 0 sufficiently small, then . By
(1.7), if
then
as long as kF (x)k - ffl f and x k is within ffl 4 of the trajectory S.
Let z be the solution of (2.5). Let be such that for all t ? T ,
Consider the approximate integration of (2.5) by (1.2). Set
If holds. This cannot happen if n ?
(which implies that t n ? T ). Therefore the proof will be complete if we can show that
for all
Note that (1.2) may be written as
There is an m 1 such that the last term in (2.9) satisfies, for sufficiently small and
)k. Then we have, by our assumptions on F , that there is an
such that
Finally, there is an m 2 such that for for sufficiently small and
Setting we have for all n - 1 (as long as ffi n is sufficiently small and
As long as (2.7) holds, this implies that
Consequently, as is standard [14, 15],
and using (2.8),
then This completes the proof.
The problem with application of this proof to the update given by (1.8) is that bounds on ffi like
(2.7) do not follow from the update formula.
3. Inexact Newton Iteration. In this section we look at \Psitc as implemented in practice. There
are two significant differences between the simple version in x 2 and realistic implementations:
1. The Fréchet derivative F'(x_n) is not
recomputed with every timestep.
2. The equation for the Newton step is solved only inexactly.
Before showing how the results in x 2 are affected by these differences, we provide some more
detail and motivation.
Item 1 is subtle. If one is solving the equation for the Newton step with a direct method,
then evaluation and factorization of the Jacobian matrix is not done at every timestep. This is a
common feature of many ODE and DAE codes, [30, 26, 27, 3]. Jacobian updating is an issue in
continuation methods [31, 28], and implementations of the chord and Shamanskii [29] methods for
general nonlinear equations [2, 17, 25]. When the Jacobian is slowly varying as a function of time
or the continuation parameter, sporadic updating of the Jacobian leads to significant performance
gains. One must decide when to evaluate and factor the Jacobian using iteration statistics and (in
the ODE and DAE case) estimates of truncation error. Temporal truncation error is not of interest
to us, of course, if we seek only the steady-state solution.
CONTINUATION 9
In [16] a Jacobian corresponding to a lower-order discretization than that for the residual was
used in the early phases of the iteration and in [19], in the context of a matrix-free Newton method,
the same was used as a preconditioner.
The risks in the use of inaccurate Jacobian information are that termination decisions for
the Newton iteration and the decision to reevaluate and refactor the Jacobian are related and one
can be misled by rapidly varying and ill-conditioned Jacobians into premature termination of the
nonlinear iteration [30, 32, 18]. In the case of iterative methods, item 1 should be interpreted to
mean that preconditioning information (such as an incomplete factorization) is not computed at
every timestep.
means that the equation for the Newton step is solved inexactly in the sense of [11], so
that instead of
where s is given by (1.3), step s satisfies
for some small j, which may change as the iteration progresses. Item 1 can also be viewed as
an inexact Newton method with j reflecting the difference between the approximate and exact
Jacobians.
The theory in x 2 is not changed much if inexact computation of the step is allowed. The proof
of Lemma 2.1 is affected in (2.1), which must be changed to
This changes the statement of the lemma to
LEMMA 3.1. Let fffi n g be given by either (1.6) or (1.8) and let Assumption 1.1. Let F (x
be nonsingular, and F 0 be Lipschitz continuous with Lipschitz constant fl in a ball of radius
ffl about x .
Then there are ffl 1 ? 0, -
j for all n, and
then the sequence defined by (3.1), (3.2), and (1.6) satisfies
and x
Corollary 2.2 becomes
COROLLARY 3.2. Let the assumptions of Lemma 3.1 hold. Assume that φ(ξ) = δ_0 ||F(x_0)|| ξ^{-1}.
Then the convergence of {x_n} to x* is q-superlinear if η_n → 0, and
locally q-quadratic if η_n = O(||F(x_n)||).
The analysis of the midrange phase changes in (2.6), where we obtain
for some K j ? 0. This means that -
j must be small enough to maintain the q-linear convergence
of fx n g during this phase. The inexact form of Theorem 2.3 is
THEOREM 3.3. Let {x_n} be given by (3.1) and (3.2) and let {δ_n} be given by either (1.6)
or (1.8). Let Assumption 1.1 hold. Let Assumption 2.1 hold. Let δ_max be large enough for the
conclusions of Lemma 2.1 to hold. Then there are ε_3 > 0 and η̄ > 0 such that if η_n ≤ η̄ for all n
and ||x_0 - x*|| ≤ ε_3, then either inf_n δ_n = 0
or x_n → x*.
Inexact Newton methods, in particular Newton-Krylov solvers, have been applied to ODE/DAE
solvers in [1], [5], [4], [6], [7], and [9]. The context here is different in that the nonlinear residual
F (x) does not reflect the error in the transient solution but in the steady state solution.
The analysis of the initial phase changes through (2.10). We must now estimate
and hence, assuming that the operators are uniformly bounded, there is m 3
such that
ks
and hence
We express (3.5) as
Hence, if and the inexact form of Theorem
2.4:
THEOREM 3.4. Let {x_n} be given by (3.1) and (3.2), let {δ_n} be given by (1.6), and let
Assumption 1.1 hold. Let Assumption 2.1 hold. Assume that the operators (δ_n^{-1} V + F'(x_n))^{-1}
are uniformly bounded in n. Let ε > 0. There are δ̄ and η̄ such that if δ_0 ≤ δ̄ and η_n ≤ η̄, then there
is an n such that ||x_n - x*|| ≤ ε.
The restrictions on η in Theorem 3.4 seem to be stronger than those in the results on the
midrange and terminal phases. This is consistent with the tight defaults on the forcing terms for
Newton-Krylov methods when applied in the ODE/DAE context [1, 5, 6, 7, 9].
4. Numerical Experiments. In this section we examine a commonly used \Psitc technique,
switched evolution/relaxation (SER) [21], applied to a Newton-like method for inviscid compressible
flow over a four-element airfoil in two dimensions. Three phases corresponding roughly to
the theoretically-motivated iteration phases of x 2 may be identified. We also compare SER with a
different \Psitc technique based on bounding temporal truncation error (TTE) [20]. TTE is slightly
PSEUDO-TRANSIENT CONTINUATION 11
FIG. 4.1. Unstructured grid around four-element airfoil in landing configuration - near-field view.
more aggressive than SER in building up the time step in this particular problem, but the behavior
of the system is qualitatively the same.
The physical problem, its discretization, and its algorithmic treatment in both a nonlinear
defect correction iteration and in a Newton-Krylov-Schwarz iteration - as well as its suitability
for parallel computation - have been documented in earlier papers, most recently [10] and the
references therein. Our description is correspondingly compact.
The unknowns of the problem are nodal values of the fluid density, velocities, and specific
total energy, at N vertices in an unstructured grid of triangular cells
(see Fig. 4.1). The system F(x) = 0 is a discretization of the steady Euler equations:
(4.1)   ∇ · (ρu) = 0,
(4.2)   ∇ · (ρu ⊗ u) + ∇p = 0,
(4.3)   ∇ · ((ρE + p)u) = 0,
where the pressure p is supplied from the ideal gas law, and γ is the
ratio of specific heats. The discretization is based on a control volume approach, in which the
control volumes are the duals of the triangular cells - nonoverlapping polygons surrounding each
vertex whose perimeter segments join adjacent cell centers to midpoints of incident cell edges.
Integrals of (4.1)-(4.3) over the control volumes are transformed by the divergence theorem to
contour integrals of fluxes, which are estimated numerically through an upwind scheme of Roe
type. The effective scaling matrix V for the \Psitc term is a diagonal matrix that depends upon the
mesh.
The boundary conditions correspond to landing configuration conditions: subsonic (Mach
number of 0.2) with a high angle of attack of 5°. The full adaptively clustered unstructured grid
contains 6,019 vertices, with four degrees of freedom per vertex (giving 24,076 as the algebraic
dimension of the discrete nonlinear problem). Figure 4.1 shows only a near-field zoom on the
full grid, whose far-field boundaries are approximately twenty chords away. The initial pseudo-timestep
of 10^{-4} corresponds to a CFL number of 20. The pseudo-timestep is allowed to
grow up to six orders of magnitude over the course of the iterations. It is ultimately bounded at
δ_max = 10^2, guaranteeing a modest diagonal contribution that aids the invertibility of (δ^{-1} V + F'(x)).
The initial iterate is a uniform flow, based on the far field boundary conditions - constant
density and energy, and constant velocity at a given angle of attack.
The solution algorithm is a hybrid between a nonlinear defect correction and a full Newton
method, a distinction which requires further discussion of the processes that supply F (x) and F 0 (x)
within the code. The form of the vector-valued function F (x) determines the quality of the solution
and is always discretized to required accuracy (second-order in this paper). The form of the
approximate Jacobian matrix F 0 (x), together with the scaling matrix V and time step ffi, determines
the rate at which the solution is achieved but does not affect the quality of a converged result, and
is therefore a candidate for replacement with a matrix that is more convenient. In practice, we perform
the matrix inversion in (1.2) by Krylov iteration, which requires only the action of F 0 (x) on
a series of Krylov vectors, and not the actual elements of F'(x). The Krylov method was restarted GMRES,
preconditioned with 1-cell overlap additive Schwarz (8 subdomains).
Following [5, 8], we use matrix-free Fréchet approximations of the required action, of the form F'(x)v ≈ [F(x + hv) - F(x)]/h for a small differencing parameter h.
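A sketch of such a finite-difference Jacobian-vector product follows (our own illustration; the differencing-parameter scaling and the test function are our choices, not those of the code described here).

```python
import numpy as np

def jac_vec(F, x, v, fx=None, eps=1e-7):
    """Approximate F'(x) v with a forward difference (F(x + h v) - F(x)) / h,
    where h is scaled by the norms of x and v."""
    fx = F(x) if fx is None else fx
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(fx)
    h = eps * max(1.0, np.linalg.norm(x)) / nv
    return (F(x + h * v) - fx) / h

# Check against an analytic Jacobian on a small smooth test function
F = lambda x: np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**3])
x = np.array([0.3, 0.7])
v = np.array([1.0, -2.0])
J = np.array([[2 * x[0], 1.0], [np.cos(x[0]), 3 * x[1]**2]])
print(jac_vec(F, x, v), J @ v)   # the two agree to about six digits
```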
However, when preconditioning the solution of (1.2), we use a more economical matrix than the
Jacobian based on the true F (x), obtained from a first-order discretization of the governing Euler
system. This not only decreases the number of elements in the preconditioner, relative to a true
Jacobian, but also the computation and (in the parallel context) communication in applying the
preconditioner. It also results in a more numerically diffusive and stable matrix, which is desirable
for inversion. The price for these advantages is that the preconditioning is inconsistent with the
true Jacobian, so more linear subiterations may be required to meet a given linear convergence
tolerance. This has an indirect feedback on the nonlinear convergence rate, since we limit the work
performed in any linear subiteration in an inexact Newton sense.
In previous work on the Euler and Navier-Stokes equations [10, 23], we have noted that a \Psitc
method based on a consistent high-order Jacobian stumbles around with a nonmonotonic steady-state
residual norm at the outset of the nonlinear iterations for a typical convenient initial iterate far
from the steady-state solution. On the other hand, a simple defect correction approach, in which
is based on a regularizing first-order discretization everywhere it appears in the solution of
(1.2), not just in the preconditioning, allows the residual to drop smoothly from the outset. In
this work, we employ a hybrid strategy, in which defect correction is used until the residual norm
has fallen by three orders of magnitude, and inexact Newton thereafter. As noted in x 3, inexact
FIG. 4.2. SER convergence history
iteration based on the true Jacobian and iteration with an inconsistent Jacobian can both be gathered
under the j of (3.2), so the theory extends in principal to both.
With this background we turn our attention to Fig. 4.2, in which are plotted on a logarithmic
scale against the Ψtc iteration number: the steady-state residual norm ||F(x_n)||_2 at the beginning
of each iteration, the norm of the update vector ||x_{n+1} - x_n||_2, and the pseudo-timestep δ_n.
The residual norm falls nearly monotonically, as does the norm of the solution update. Asymptotic
convergence cannot be expected to be quadratic or superlinear, since we do not enforce η_n → 0
in (3.5). However, linear convergence is steep, and our experience shows that overall
execution time is increased if too many linear iterations are employed in order to enforce η_n → 0
asymptotically. In the results shown in this section, the inner linear convergence tolerance was set
at 10 \Gamma2 for the defect correction part of the trajectory, and at 10 \Gamma3 for the Newton part. The work
was also limited to a maximum of 12 restart cycles of 20 Krylov vectors each.
Examination of the pseudo-timestep history shows monotonic growth that is gradual through
the defect correction phase (ending at rapidly growing, and asymptotically
at (beginning at show momentary retreats from ffi max in
response to a refinement on the \Psitc strategy that automatically cuts back the pseudo-timestep by
a fixed factor if a nonlinear residual reduction of less than 3% is achieved at the exhaustion of the
maximum number of Krylov restarts in the previous step (during the terminal Newton phase).
Close examination usually reveals a stagnation plateau in the linear solver, and it is more cost
effective to fall back to the physical transient to proceed than to grind on the ill-conditioned linear
problem. These glitches in the convergence of jjF are not of nonlinear origin.
Another timestep policy, common in the ODE literature, is based on controlling temporal
truncation error estimates. Though we do not need to maintain temporal truncation errors at low
levels when we are not attempting to follow physical transients, we may maintain them at high
levels as a heuristic form of stepsize control. This policy seems rare in external aerodynamic
simulations, but is popular in the combustion community and is implemented in [20].
FIG. 4.3. TTE convergence history
The first neglected term in the implicit Euler discretization of ∂x/∂t is proportional to δ_n^2 ẍ, and a reasonable mixed absolute-relative
bound on the error in the i-th component of x at the n-th step requires this term to be at most a tolerance τ times a mixed absolute-relative measure of |x_i|,
where ẍ can be approximated by differences of successive iterates.
Taking τ as 3% and implementing this strategy in the Euler code in place of SER yields the results in
Fig. 4.3. Arrival at δ_max occurs at the same step as for SER, and arrival at the residual threshold for the switch to the Newton phase
occurs one iteration earlier. However, the convergence difficulties after having arrived at δ_max
are slightly greater.
5. Conclusions. Though the numerical experiments of the previous section do not confirm
the theory in detail, in the sense that we do not verify the estimates in the hypotheses, a reassuring
similarity exists between the observations of the numerics and the conceptual framework of the
theory, which was originally motivated by similar observations in the literature. There is a fairly
long induction phase, in which the initial iterate is guided towards the Newton convergence domain
by remaining close to the physical transient, with relatively small timesteps. There is a terminal
phase which can be made as rapid as the capability of the linear solver permits (which varies from
application to application), in which an iterate in the Newton convergence domain is polished.
Connecting the two is a phase of moderate length during which the time step is built up towards
the Newton limit of ffi max , starting from a reasonably accurate iterate. The division between these
phases is not always clear cut, though exogenous experience suggests that it becomes more so
when the corrector of x 1 is iterated towards convergence on each time step. We plan to examine
PSEUDO-TRANSIENT CONTINUATION 15
this region of parameter space in conjunction with an extension of the theory to mixed steady/\Psitc
systems (analogs of differential-algebraic systems in the ODE context) in the future.
Acknowledgments
. This paper began with a conversation at the DOE Workshop on Iterative
Methods for Large Scale Nonlinear Problems held in Logan, Utah, in the Fall of 1995. The authors
appreciate the efforts and stimulus of the organizers, Homer Walker and Michael Pernice. They
also wish to thank Peter Brown, Rob Nance, and Dinesh Kaushik for several helpful discussions on
this paper. This paper was significantly improved by the comments of a thoughtful and thorough
referee.
--R
The Numerical Solution of Initial Value Problems in Differential-Algebraic Equations
Some efficient algorithms for solving systems of nonlinear equations
VODE: A variable coefficient ODE solver
Using Krylov methods in the solution of large-scale differential-algebraic systems
Hybrid Krylov methods for nonlinear systems of equations
Pragmatic experiments with Krylov methods in the stiff ODE setting
Numerical Methods for Nonlinear Equations and Unconstrained Opti- mization
Towards polyalgorithmic linear system solvers for nonlinear elliptic problems
Numerical Initial Value Problems in Ordinary Differential Equations
Analysis of numerical methods
Robust linear and nonlinear strategies for solution of the transonic Euler equations
Iterative Methods for Linear and Nonlinear Equations
Termination of Newton/chord iterations and the method of lines
Aerodynamic applications of Newton-Krylov-Schwarz solvers
A parallelized elliptic solver for reacting flows
Experiments with implicit upwind methods for the Euler equations
Convergence of Iterations for Linear Equations
Application of Newton-Krylov methodology to a three-dimensional unstructured Euler code
A Newton's method solver for the Navier-Stokes equations
Iterative Solution of Nonlinear Equations in Several Variables
A description of DASSL: a differential/algebraic system solver
Description and use of LSODE
Driven cavity flows by efficient numerical techniques
A modification of Newton's method
Implementation of implicit formulas for the solution of ODEs
An error estimate for the modified newton method with applications to the solution of nonlinear two-point boundary value problems
Accurate and economical solution of the pressure head form of Richards' equation by the method of lines
Newton solution of inviscid and viscous problems
--TR
--CTR
W. K. Anderson , W. D. Gropp , D. K. Kaushik , D. E. Keyes , B. F. Smith, Achieving high sustained performance in an unstructured mesh CFD application, Proceedings of the 1999 ACM/IEEE conference on Supercomputing (CDROM), p.69-es, November 14-19, 1999, Portland, Oregon, United States
T. Coffey , R. J. McMullan , C. T. Kelley , D. S. McRae, Globally convergent algorithms for nonsmooth nonlinear equations in computational fluid dynamics, Journal of Computational and Applied Mathematics, v.152 n.1-2, p.69-81, 1 March
D. Gropp , Dinesh K. Kaushik , David E. Keyes , Barry Smith, Performance modeling and tuning of an unstructured mesh CFD application, Proceedings of the 2000 ACM/IEEE conference on Supercomputing (CDROM), p.34-es, November 04-10, 2000, Dallas, Texas, United States
Feng-Nan Hwang , Xiao-Chuan Cai, A parallel nonlinear additive Schwarz preconditioned inexact Newton algorithm for incompressible Navier-Stokes equations, Journal of Computational Physics, v.204
Howard C. Elman , Victoria E. Howle , John N. Shadid , Ray S. Tuminaro, A parallel block multi-level preconditioner for the 3D incompressible Navier--Stokes equations, Journal of Computational Physics, v.187 n.2, p.504-523, 20 May
Keyes , Lois Curfman Mcinnes , M. D. Tidriri, Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD, International Journal of High Performance Computing Applications, v.14 n.2, p.102-136, May 2000
D. A. Knoll , D. E. Keyes, Jacobian-free Newton-Krylov methods: a survey of approaches and applications, Journal of Computational Physics, v.193 n.2, p.357-397, 20 January 2004 | pseudo-transient continuation;nonlinear equations;global convergence;steady state solutions |
276471 | Wavelet-Based Numerical Homogenization. | A numerical homogenization procedure for elliptic differential equations is presented. It is based on wavelet decompositions of discrete operators in fine and coarse scale components followed by the elimination of the fine scale contributions. If the operator is in divergence form, this is preserved by the homogenization procedure. For periodic problems, results similar to classical effective coefficient theory are proved. The procedure can be applied to problems that are not cell-periodic. | Introduction
In many applications the problem and solution exhibit a number of different scales.
In certain cases we are interested in finding the correct coarse-scale features of the
solution without resolving the finer scales. The fine-scale features may be of lesser
importance, or they may be prohibitively expensive to compute. However, the fine
scales cannot be completely ignored because they contribute to the coarse scale
solution: the high frequencies of solution may combine with the high frequencies of
Research supported by Office of Naval Research grants N00014-92-J-1890 P00003 and N00014-
95-I-0272
y Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm
(mihai@nada.kth.se).
z Department of Mathematics, University of California, Los Angeles, California 90024
(engquist@math.ucla.edu) and Numerical Analysis and Computing Science, Royal Institute of
Technology, Stockholm.
the differential operator to yield low frequency components.
The homogenization problem can be stated in various formulations. A classical
formulation, see e.g. Bensoussan et al. [1], is the following: Consider a family of
operators indexed by the small parameter ", and for a given f , let u " solve the
problem
Assume that homogenization problem is to find an operator
L and f such that:
For example, consider the following operator with oscillating coefficients
d
dx
d
dx
where a(x) is a positive 1-periodic function bounded away from zero. Then it is easy
to show that and the limit u satisfies a constant
coefficients equation. The coefficient is not the average of a(x) over a period, but
rather the harmonical average
a
also called the effective coefficient. The homogenized operator is
since
f:
In practice, we often need to solve the equation (1) for a small but fixed ". Since
close to u, we may solve the homogenized equation (2) instead of the original
equation. The homogenized equation is usually much simpler to solve. In the case
of effective coefficients, the solution of the homogenized equation contains no high
frequency components and thus it is an approximation to the coarse scale behavior
of
In a very interesting paper, M. E. Brewster and G. Beylkin [3] describe a homogenization
procedure based on a multi-resolution analysis (MRA) decomposition. They
consider integral equations, which may arise, e.g., from the Method of Lines discretization
of a PDE, and homogenize over the time-variable. In a MRA, the concept
of different scales is contained in the nested spaces V j . Homogenization is reduced
to projecting the solution of the original equation from the fine resolution space V n
onto the coarse resolution space V 0 . The homogenized operator, if it exists, operates
on the space V 0 , but in general it is not the projection of the original operator onto
the coarse space.
Many classical homogenization techniques are based on the essential assumption
that the coefficients are periodic on the fine scale. However, this does not hold
in many applications. The analytic expansions methods require an 'a priori known
number of scales, which again may be a serious restriction, see e.g., L. Durlofsky [7].
For two-dimensional elliptic operators with cell-periodic coefficients
@
the homogenized operator is
The effective coefficients A ij are found by computing
Z /
is the solution to the cell-problem:
@
@a ik
Wavelet-based homogenization can deal with both non-periodic coefficients on the
fine scale and use the all the scales involved from the fine-scale space V j to the
coarse-scale space V 0 .
Following the construction in [6], using the Haar system, we build a homogenized
operator L J for the discrete operator 1
. The grid-size is
are the forward and backward (undivided) difference operators. We show
that the homogenized operator has a natural structure of the form 1
where H is well approximated by a band diagonal matrix. In some cases, we prove
that H equals the effective coefficients predicted by the classical homogenization the-
ory, modulo a small error-term. In the two-dimensions, we show that our technique
preserves divergence form of operators.
We are concerned with a model of fine- and coarse-resolution spaces. The framework
is multi-resolution analysis or wavelet formalism. In this framework, we have the
concepts of fine and coarse scales together with the locality properties needed in
analyzing operators with variable coefficients.
For the precise definitions of a MRA, we defer the reader to the books by I.
Daubechies [4] and Y. Meyer [9]. We consider a ladder of spaces V J ae V J+1 which
are spanned by the dilates and integer translates of one shape-function
The functions ' J;k form an L 2 -orthonormal basis. The orthogonal complement of
V J in V J+1 is denoted by W J and it is generated by another orthonormal basis
is called the mother wavelet. The transformation
that mapping the basis f' J+1;k g into f/ J;k ; ' J;k g is an orthogonal operator and
we denote its inverse by W ? . The product W J
called the wavelet transform and it can be optimally implemented
(called the fast wavelet transform). We denote by P j and Q j the L 2 -projections onto
If an operator L J+1 is acting on the space V J+1 , it can be decomposed
into four operators L acting on the subspaces W J and V J ,
where
As a shorthand notation we have that
or simply
Note that if evaluated on a basis, the operator notation becomes a legitimate block-
matrix construction.
The identification of a function f 2 V J with the sequence c of coefficients in the
basis ' J;K is an isometry: If
Unless otherwise specified, the jj:jj notation refers to the corresponding 2-norm (con-
tinuous or discrete). The same holds for the inner-product notation.
Our results are proven in the simplest multi-resolution analysis, the Haar system.
The shape function is the indicator function of the interval [0 1] and the mother
wavelet is
The Haar system provides an orthonormal basis in both L 2 (R) and L 2 ([0 1]). The
space V J consists of piece-wise constant functions on a grid with step-size
It is identified with l 2 (or R 2 J
in the finite case).
The Haar transform from V J+1 to W J \Phi V J is simply:
3 Homogenization in the Haar Basis
Discretize the equation
d
dx
a
d
dx
on a uniform grid with using finite volumes. Let diag(a) be the diagonal
matrix containing the values of a(x) at the grid-points. As an operator on V J+1 ,
diag(a) represents multiplication by the grid-function a. The discrete equation
is split by the natural decomposition V
U l
F l
where the indices h and l denote the W J and V J components. The equation (3) is a
discretization of the continuous equation (a(x)u 0
The coarse scale solution of the discrete equation (3) is the projection of U onto V J ,
i.e. U l . Eliminating U h yields the equation for U l :
The homogenized operator is the Schur complement
Let us make some preliminary remarks:
ffl The homogenization procedure is in fact block Gaussian elimination. The idea
is not new, it can be found, e.g., in odd-even reduction techniques. There is a
real gain only if the homogenized operator L J can be well approximated by a
sparse matrix. It is the compression properties of wavelets that maintain the
homogenization procedure efficient, similar to the case of Calderon-Zygmund
operators as seen in [2, 5].
ffl The experience with the non-standard form representation of elliptic operators
indicate that A J has a strong diagonal dominance and thus its inversion will
not be as difficult as inverting the operator L J+1 , see [6].
ffl We expect that the homogenized operator L J will have a similar structure as
the operator L J . In fact we will see that if L J+1 is in divergence form,
where H is a strongly diagonal dominant matrix. We will call H the homogenized
coefficient matrix.
ffl The homogenization procedure can be applied recursively. If we have the
equation
that produces the solution on the scale then we
homogenize the operator L j+1 . This means that we produce the operator L j
on and the right-hand side f j such that the solution of the homogenized
equation
is
ffl If the homogenized operator has a rapid decay away from the diagonal, then
it can be well approximated by a band-diagonal operator. The same applies
for the matrix H.
The structure of the homogenized operator is given by the decomposition of the
discrete operators \Delta
and diag(a).
3.1 multiplication operator
We first examine the multiplication-by-functions operator diag(a). The following
lemma is obvious:
Lemma 1 If ' is the Haar system's shape function and / the mother wavelet, then
For we use the notation W J
a fi v denote the component-wise multiplication of vectors. We have the following
point-wise multiplication rule:
e a
e a fi e
Proof: Set a =
P a J+1;k ' J+1;k and
Using Lemma 1, we have
k;l
a J+1;k v J+1;l ' J+1;k ' J+1;l
a J+1;k v J+1;k
Thus point-wise multiplication of functions is the equivalent to component-wise
multiplication of the coefficients. Then we have:
e a k / J;k
e
which proves the statement.2
The high frequency components of a and v interact and contribute to the low frequency
part of the product av. This is modeled in the Haar basis by correcting the
product a fi v of the coarse scale coefficients with the fine scale contribution e a fi e v.
The structure of the pointwise multiplication operator is given by the following
statement:
Proposition 1 If W J
diag(a) diag(e a)
diag(e a) diag(a)7 5 :
The matrix diag(a) is the point-wise multiplication operator. We have the following
amusing result:
Proposition diag(a) be the multiplication-by-function operator on
. The coarse-grid projection P J M J+1 P J is multiplication by the arithmetical
averages (a 2k + a 2k+1 )=2 The homogenized operator M J is multiplication by the har-
monical averages ff
Proof: The coarse grid projection of diag(a) is 1=
2 diag(a), which is, in each
component, the arithmetical average (a 2k + a 2k+1 )=2 of the corresponding fine-grid
values. The homogenized operator ispi
Component-wise this
a k
2a 2k a 2k+1
which is the harmonical cell-average of the corresponding fine-grid values. 2
3.2 Decomposition of \Delta
We start by computing the decomposition W J \Delta +W
on the basis functions
of W J \Phi
Then we have
Similar computations yield
and then
Let S n be the shift matrix S n defined by S (n)
which is the projection of the
shift operator We have the following proposition:
Proposition 3 The decomposition of 1
in the Haar system is
Obviously, the structure is repeated at each level j. Since
, we have that
Dropping the diag notation in Proposition 1, we have that
p- A J
where A
3.3 Boundary conditions
The notations for the discrete difference operators and their corresponding
matrices. They can describe periodic, Neumann or Dirichlet boundary
conditions. They can also operate (as infinite matrices) on infinite sequences arising
from discretizing problems on the whole real axis.
The derivation of the decompositions of the L J+1 and the homogenized coefficient
matrix H are formally the same. However, in the periodic case, the operator L J+1
is singular and it is not trivial that A J is invertible.
In the periodic case, the matrices \Delta are circulant. This property is preserved
by the transform W J . If we define the shift matrices S \Sigma1 as circulant matrices,
then M is also circulant, and thus all the matrices corresponding to the level J have
the same property. In the infinite case, are trivially circulant.
With periodic boundary conditions, it is easy to show that L J+1 has a 1-dimensional
null-space spanned by the constant grid-functions: Since vanishes only on con-
stants, the ellipticity condition a ? 0 implies that any non-constant zero-eigenfunction
must satisfy constant. It follows that v is monotone, which contradicts
periodicity.
The null-space of L J+1 is transformed by W J into the one-dimensional space N
spanned by [0 is a constant grid-function. The quadratic form
J x is positive whenever x 62 N . In particular, putting
have
[y
for any y 6= 0. This proves that A J is positive definite and therefore invertible
and thus the homogenized operator L J is well-defined even with periodic boundary
conditions.
Both the equations L J+1 u need extra conditions. If we
decide e.g., to fix a boundary value, we can eliminate a row from both systems. This
elimination can be done after the homogenized operator is produced. Thus we need
not track the effect of the boundary condition through the homogenization process.
Other type of conditions, such as Dirichlet boundary conditions in the non-periodic
case or integral conditions can be handled in a similar fashion.
3.4 The homogenized coefficient matrix
Let us consider a discretization on the whole real axis, i.e., the case where the
matrices are infinite. The coarse-scale projection of L J+1 is
L J is the "wrong" operator for an obvious reason: the average coefficient is obtained
using only the even components of the the fine-scale coefficient a
L J is insensitive to variations of the odd-components in the original problem. Even
if the fine-scale is not present in a(x), i.e. e a = 0, L J still has the wrong coefficient
We build the homogenized operator as the Schur complement L
Proposition 4 The operator L J has a natural structure 1
a) (7)
Definition 1 We call H the homogenized coefficient matrix of the operator
The natural question to ask is if there is any connection between the homogenized
coefficient matrix H and the classical homogenized equations.
Proposition 2 gives that the Schur complement of the diagonal matrix a is the
diagonal matrix ff containing the harmonical averages of neighboring values. This
would then agree exactly with the classical homogenization theory, if the Schur
complement of \Delta could be expressed in terms of the Schur complement of the
middle factor a. Unfortunately, this is not the case, so we have to use the form given
in (7).
We look at the extreme case when is the sum of a constant and
the highest frequency represented on the grid, i.e., a(xm We have
that a and e a are represented as constant vectors in the bases of V J and W J . The
fact that a(x) ? 0 implies je aj ! a.
Since a and e a are constant vectors, we have
a)
where
Simple computations yield
and then
The homogenized coefficient matrix defined by (7) is
a)
Classical homogenization theory yields the effective equation ff d 2
where the effective
coefficient is given by the harmonical average:
Z 2ha(x)dx
a 0
In the rest of this section, we will be looking only at the coarse grid function space
J . For simplicity, we will let denote the grid-size of V J .
The following theorem shows that the numerically homogenized operator 1
equals the discrete form ff 1
of the classically homogenized equation, apart
from a second-order term in h.
Theorem 1 Let J+1 be such that a 2 V 0 is a constant and the
oscillatory part e
a 2 W J has constant amplitude and satisfies the condition j e aj ! a.
Let L
and ff be the harmonical average in (9). Then there exists
a constant C independent of the grid-size h such that if v is the discretization of a
function v(x) with a continuous and bounded fourth derivative, then
Proof: Let us show first that the high-frequency operator A J is invertible by showing
that it is diagonal dominant. We have
and the tridiagonal structure of A J is clearly seen. The ellipticity of L J+1 implies
that a. The diagonal entries of A J are larger then 4a while the sum of the
off-diagonal terms is smaller then the same amount. The diagonal dominance of A J
gives a rapid decay of the entries of A \Gamma1
J away from the diagonal. Indeed, we have
e a \Gamma a
1=2, the Neumann expansion for A \Gamma1
J is convergent and (10) reveals the
size of the off-diagonal entries:
A
I
Next we compute the row-sum of H. Note first that since A J is circulant, it has an
and the corresponding eigenvalue is 8a. A \Gamma1
J shares
the eigenvector c, which shows that all its row-sums are 1=(8a). Note that c is also
an eigenvector of I with the corresponding eigenvalue 2. Thus we have
a)
p2a
Finally we estimate L J v. Note that since
I
Assuming v is a discrete smooth function, Taylor expansion around v
Let us estimate the j component of Hv. Applying H to the first term in (12) produces
just ffv j . Due to symmetry, we have that
\Gamman ). Applying
H to the odd-order terms of (12) shifts in the j component quantities with opposite
signs and then adds them. The even-order terms contribute such that
show later that the coefficients fl n have exponential decay
and thus
is convergent for any k. Applying 1
comparing to
which in its turn gives the desired estimate with
It remains to be shown that the constant C is independent of the grid-size h. The
expansion (11) shows that A \Gamma1
J is generated infinite long stencil with exponential
decay rate 1. To build H from A \Gamma1
J , we first apply I+S 1 and I+S \Gamma1 , which
has the effect of adding together neighboring diagonals. Indeed, if A \Gamma1
P a n S n ,
we have
a
(a
, the elements of the product (I
are bounded
by 4K ae n . H is then found by multiplication with
a) 2 and the addition of a
diagonal term. The decay away from the diagonal of the terms fl n is
The exponential decay in fl n dominates the growth of n 2 and thus we find the
constant C:
ae
(log ae) 2
CRemark: The fact that Hv - ffv for smooth functions v can be also seen from
the Fourier analysis of H. By doing a discrete Fourier transform of H, we obtain a
diagonal matrix diag(-g). The diagonal -
g is given by the symbol of H which is
Note that -
g is just the Fourier transform of any row of H. It is therefore no surprise
that -
It turns out that d-g
d!
The approximation error is indeed quadratic in ! since -
If the Fourier coefficients of v decay sufficiently fast, then we have - g-v - ff-v, and by
the inverse transform, Hv - ffv. Note in Figure 1 that - g(!) has a moderate growth
even for large !.
Figure
1: The Fourier components of H and ffI (dashed line). - g(!) behaves like
multiplication with -
boundary conditions are assumed.
In practice, we want to approximate the homogenized operator L J by a sparse
approximation. Due to the diagonal decay, we can approximate L J by a band-
diagonal matrix L J;- where - is the band-width. Let us consider the operator band
defined by
ae
We have in fact two obvious strategies available for producing L J;- : We can set
directly or use the homogenized coefficient form and build
produce small perturbations of
L J . However, important properties, such as divergence form, are lost in the first
approach and numerical experiments show that - needs to be rather large to compensate
for this. The second approach produces L J;- in divergence form. Moreover,
the approximation error can be estimated, as in the following result:
Theorem 2 If the conditions of Theorem 1 are valid, then
If v is the discretization of a smooth function v(x), then
Proof: The exponential decay from the diagonal in H, given in (13), yields
If v is a smooth function, using the commutation property of H (and band(H)), we
have
where - is some point in R. Therefore we have
Taking the supremum over all - and then the maximum over all j yields the desired
estimate. 2
Remark: The above estimates hold also for Dirichlet boundary conditions. In the
case of periodic boundary conditions, the meaning of "away from the diagonal" is
different because the wrap-around effect. The diagonal band of width - is then
defined by 2(ji \Gamma jj ( mod is the size of the matrix.
3.5 Numerical experiments
We test the homogenization procedures on some examples.
ffl With periodic coefficients, wavelet and classical homogenization produce the
same discrete solution. With non-periodic variations of the coefficients, the
effective equations cannot extract the local features of the solution. Due to
the localization of the wavelet basis elements, such local features are preserved
by wavelet homogenization.
ffl Solution with several different scales. The test problem is (a(x; x="
1. We project the solution on spaces that resolve either both the scales "
or just the finest scale " 1 .
ffl Comparison of the solutions of the homogenized forms using the two truncation
strategies, band(L J ; -) and 1
different values for - We
see that truncation of the homogenized coefficient matrix H performs much
better.
3.5.1 Non-periodic variable coefficients
First we compare the exact, classical-homogenized, and wavelet-homogenized solutions
to a periodic problem. We consider the two-point boundary problem
The exact solution solves the discrete equation
We take a(x)
to have alternating values 1 and 100 on a fixed grid. The classical and wavelet
homogenized solutions are pictured in Figure 2.
Exact solution
Wavelet homog.
Classical homog.
solution
Wavelet homog.
Classical homog.
Figure
2:
Exact, classical homogenization and Haar basis homogenization solutions in the
periodic case. grid-points. The plot on the right is a detail of the left
image.
The wavelet solution is computed using 3 levels, i.e. the coarse scale contains eight
times fewer grid-points. The effective coefficient is 200=101 - 1:9802 and thus
classical homogenization yields the approximation
x:
Note the detail in Figure 2 where the wavelet based solution u is essentially a shift
of u eff , i.e. u contains no high frequencies.
Now we take a(x) to be uniformly distributed in the interval [1 100], as plotted
in
Figure
3 (left). The classical homogenized coefficient (effective coefficient) is
computed as
a
dx
solution
Wavelet homog.
Classical homog.
Figure
3:
coefficients a(x) (left) and a comparison of the exact solution u, effective
equations solution u eff and Haar basis homogenization solutions u.
grid-points in these plots.
Figure
3 (right) compares the exact solution u with the wavelet homogenized u and
the result of classic homogenization u eff , where the effective coefficient is a
18:8404. The fine grid has 256 points. Both u eff and u are represented on the
coarse grid using 32 points. However, the wavelet homogenized solution u captures
much more coarse-scale detail then the classic solution u eff .
3.5.2 Homogenization over multiple scales
We test a problem that contains three different scales: Let
The coefficient has three scales, The solution of the equationh 2
contains all the three scales if h resolves the finest scale of a(x). Put
resolve all the scales of the problem. Then we project the exact solution onto V 6 .
Exact sol. on V9
Projection on V3
Projection on V6
-0.06
-0.02Exact sol. on V9
Projection on V3
Projection on V6
-0.06
Figure
4:
Homogenization of several scales. Coefficient a Plot of u 9 , u 6 and
(left). Details of plots (right) shows that u 6 averages the finest scale only and
resolves the coarser scales. u 3 resolves only the coarsest scale.
Figure
4 shows that the finest scale contribution is averaged out, but the coarser
scales are resolved. Projection onto V 3 averages both the finer scales
and the solution has the characteristics of a constant-coefficients
problem.
3.5.3 Banding strategies
We test the accuracy of approximating the homogenized operator by banded matrices
using the two strategies described in Section 3.4. The coefficients a(x) are chosen
at random, uniformly distributed in the interval (0:1 2). The boundary conditions
are
Exact homog.
diagonals
diagonals
diagonals
-0.4
-0.3
-0.2
-0.1Exact homog.
3 diagonals
5 diagonals
7 diagonals
-0.4
-0.3
-0.2
Figure
5:
The homogenized operator approximated by banded matrices. Banding the exact
homogenized operator L J (left) needs a much larger band-width - as compared to
banding the homogenized coefficient matrix H (right). 512 grid-points on the fine
grid, 64 grid-points on the coarse grid.
Figure
5 (left) is the plot the solutions of band(L J ; -)u To
obtain even better accuracy, using the approximation of the homogenized coefficient
considerably fewer diagonals are needed. Figure 5 (right)
plots the solutions of 1
diagonals.
4 2-D Problems
Numerical homogenization for multi-dimensional problems is of great interest since
the analytic methods can only handle periodic micro-structures, see e.g., Bensoussan
et al. [1]. The aim of this section is to show that if a 2-D fine-scale operator is
in discrete divergence form
then the homogenized operator L J acting on the coarser space has the same form. As
we saw in the one-dimensional case, this property is important for efficient truncation
strategies.
2 The operator L J+1 is called discrete elliptic if
1. L J+1 is symmetric, i.e., A
2. The spectrum of L J+1 lies in f0g [ [ffi; +1), where ffi ? 0, and 0 cannot be a
multiple eigenvalue.
4.1 2-D tensor product wavelet spaces
Let us make the notations precise. We consider the tensor product space V
generated by the canonical basis
The coarse space is V
J\Omega V J and it is generated by '
J;k\Omega ' J;l . The orthogonal
complement of V J in V J+1 is the wavelet space
The wavelet transform maps the standard basis of V J+1 into the union of the standard
bases of V J and the three components of W J . If L J+1 is the matrix of a linear
operator on V J+1 , then the orthogonal basis transformation W J yields
The operators A J , B J , C J and L J operate on subspaces:
By elimination, we have that the homogenized operator is
Note that in the finite case, dim(W J
We can continue with the decomposition of V obtain in this
manner a multi-resolution analysis on the product space.
The product of the orthogonal transformations is the
(orthogonal) wavelet transform that maps V J+1 into (\Phi 0-j-J W j
4.2 Invariance of divergence form
The operator
acts on V J+1 and is defined by \Delta x
f)\Omega g, where
is the 1-D forward difference operator. \Delta y
are defined in a similar
manner. We regard the operators A (ij) as multiplication by the discrete functions
a (i;j) (x; y), i.e., A (i;j) ('
l (y). In general, A (i;j) can
be any operator on V J+1 , but then L J+1 is may no longer be a discretization of a
differential operator.
Let us formulate the result:
Theorem 3 Let L J+1 be a discrete elliptic operator in divergence form (14). Assume
periodic boundary conditions in the x and y directions and let L J be the homogenized
operator using the Haar transform. Then L J is also in divergence form.
Proof: We begin by observing that the orthogonal transform W
V J can be written as W is a the corresponding 1-D transform
in the x-direction, and W y is defined analogously. Remark also that W x W
W y W x .
Next we observe that \Delta x
+\Omega I and \Delta y
This gives that \Delta x
and \Delta y
.
The next step is to compute the decomposition of \Delta x
in (W
Using the standard inner-product on tensor-product spaces,
we apply \Delta x
to a basis function and test it against another basis function:
can be any ' J;k or / J;k . Note that the second inner-product is 0 if g 1 6= g 2 .
The first inner-product gives the 1-D decomposition of \Delta + , as in Proposition 3.
Using the notations of Proposition 3, we can synthesis the decomposition of
in
the following table:
W \Theta W W \Theta V V \Theta W V \Theta V
W \Theta W
M\Omega I \Gamma\Delta
+\Omega I
M\Omega I \Gamma\Delta
+\Omega I
V \Theta W \Delta
I \Delta
+\Omega I
I \Delta
+\Omega I
In a similar fashion, we obtain the decomposition of \Delta y
I\Omega M
I\Omega M
I\Omega
and
\GammaM
I \Delta
\Gamma\Omega I
\GammaM
I \Delta
\Gamma\Omega I
\Gamma\Omega I \Delta
\Gamma\Omega I
\Gamma\Omega I \Delta
The essential point is that the last block-row of the decomposition of \Delta x
(or \Delta y
contains only \Delta
+\Omega I (or
entries. For the \Delta x
(or \Delta y
the analogous
holds for the last block-column.
Noting that \Delta
, on the coarse space V J , we have that the decomposition
of the product 1
\Gamma is of the form4h 26 6 6 4
A
are some arbitrary operators. Adding the contributions of
all the terms in the form (14) of L J+1 yields:4h 26 6 6 4
A
where D is in discrete divergence form.
Since L J+1 is elliptic, periodic boundary conditions imply it has a one-dimensional
null-space, spanned by the constant functions. This null-space is mapped by the
transform W J into V J . Since the operator L J+1 has non-negative eigenvalues and
A operates on the complement of V J , it follows that v ? Av ? 0 for any v 6= 0.
Therefore A is invertible and we can build the homogenized operator by block Gauss
elimination. This yields
where \Delta (1)
stands for \Delta x
We have that L J is in divergence form on the coarse
space V J . 2
Remark: The conservation of the divergence form of L J+1 under the 2-D Haar
transformation has important consequences. In the multi-dimensional case, it is
known that the problem
r
admits a homogenized equation but apart from the cell-periodic problem,
there is no general algorithm for deriving the homogenized operator L. In fact, the
nature of L is not known and numerical homogenization can therefore be used not
only as a practical tool, but also to find information about the structure homogenized
operator L.
4.3 A numerical example
We chose a(x; 1. The classical
homogenized equation is
is the harmonical average of a in a cell with the length of a period and
2 is the (arithmetical) average in the same cell:
The homogenized equation has constant coefficients but is strongly anisotropic. Figure
6 displays the exact and wavelet homogenized solutions. The domain is the unit
square and there are Dirichlet boundary conditions on the coordinate axes and Neumann
conditions on the other two sides.
-5
-5
y
Figure
Fine scale (left) and homogenized solution (right). Note that the homogenized
solution captures the effect of the coarse-scale strong anisotropy, averaging only the
fine-scale variations.
Extensions
The homogenization procedure can be carried out on coarser and coarser levels to
produce a sequence of homogenized equations
If we solve the coarse scale problem exactly, then by block back-substitution
we produce the exact solution:
If no truncations are used in building the homogenized operators L j , the above strategy
describes an exact, direct solver, which compares to the reduction techniques
in computational linear algebra, see [8]. If truncations are used, the direct solver
contains an approximation error.
The homogenization procedure can be applied recursively on any number of lev-
els, provided that the initial operator is in discrete divergence form and is elliptic.
These two properties are sufficient for the existence of the Schur complement and
they are are inherited by the homogenized operator. It is not necessary that L J+1
approximates a differential operator as long as it is elliptic and in divergence form.
On coarser scales, the homogenized operator resembles the inverse of a differential
operator and is expected to be dense. The use of wavelets with a high number of
vanishing moments, known to compress well Calderon-Zygmund operators, could
have better compression effects then the Haar system, in the spirit of the ideas
presented in Beylkin, Coifman and Rokhlin's work [2].
In applications, if we want to use the homogenized coefficient matrices, we would
not invert A j , but rather LU-factorize it in the prescribed bandwidth -.
--R
Asymptotic Analysis for Periodic Structures.
Fast wavelet transform and numerical algorithms.
A multiresolution strategy for numerical homoge- nization
Ten Lectures on Wavelets.
Wavelets and Singular Integrals on Curves and Surfaces.
Numerical calculation of equivalent grid block permeability tensors for heterogenuous porous media.
Matrix Computations.
Ondelettes et Op'erateurs
--TR
--CTR
Shafigh Mehraeen , Jiun-Shyan Chen, Wavelet-based multi-scale projection method in homogenization of heterogeneous media, Finite Elements in Analysis and Design, v.40 n.12, p.1665-1679, July 2004
Giovanni Samaey , Ioannis G. Kevrekidis , Dirk Roose, Patch dynamics with buffers for homogenization problems, Journal of Computational Physics, v.213 n.1, p.264-287, 20 March 2006
Assyr Abdulle , E. Weinan, Finite difference heterogeneous multi-scale method for homogenization problems, Journal of Computational Physics, v.191 n.1, p.18-39, 10 October
Pingbing Ming , Xingye Yue, Numerical methods for multiscale elliptic problems, Journal of Computational Physics, v.214 n.1, p.421-445, 1 May 2006
Jiun-Shyan Chen , Hailong Teng , Aiichiro Nakano, Wavelet-based multi-scale coarse graining approach for DNA molecules, Finite Elements in Analysis and Design, v.43 n.5, p.346-360, March, 2007 | wavelets;elliptic operators;homogenization |
276474 | Numerical Integrators that Preserve Symmetries and Reversing Symmetries. | We consider properties of flows, the relationships between them, and whether numerical integrators can be made to preserve these properties. This is done in the context of automorphisms and antiautomorphisms of a certain group generated by maps associated to vector fields. This new framework unifies several known constructions. We also use the concept of "covariance" of a numerical method with respect to a group of coordinate transformations. The main application is to explore the relationship between spatial symmetries, reversing symmetries, and time symmetry of flows and numerical integrators. | Introduction
Recently there has been a lot of interest in constructing numerical integration schemes
for ordinary differential equations (ODEs) in such a way that some qualitative geometrical
property of the solution of the ODE is exactly preserved. This has resulted in much
work on integration schemes that can preserve the symplectic structure for Hamiltonian
ODEs [12, 13, 14, 23, 33]. Other authors have constructed volume-preserving integrators
for divergence-free vector fields [3, 19, 30]. Other authors again have concentrated on
preserving energy and other first integrals [14, 20, 21] or other mathematical properties [7].
In this paper we are interested in constructing integrators that preserve the symmetries
and reversing symmetries of a given ODE. One reason this is important is that nongeneric
bifurcations can become generic in the presence of symmetries, and vice versa. One can
also consider the time step of the integration scheme as a bifurcation parameter, showing
how vitally important it is to stay within the smallest possible class of systems. Reversing
symmetries are particularly important, as they give rise to the existence of invariant tori
and invariant cylinders [15, 22, 25, 28].
One of the first authors to study integrators that preserve a reversing symmetry was
Scovel [24]. His method, as it stands, can only preserve a single reversing symmetry and no
ordinary symmetries. In this paper we give a construction of integration schemes preserving
an arbitrary number of symmetries and reversing symmetries.
In exploring this question, it became clear that constructions were working because
of the effect of certain operators on compositions of maps. This led us to express these
effects in terms of automorphisms and antiautomorphisms of groups. In particular, it turns
out that many different compositions designed to give methods particular features are all
examples of the same general principle. We prepare the basic algebraic material in section
2.
A curious structure emerges several times. Often a subset of a group is not closed under
products AB, but is closed under symmetric products ABA. This is true for symmetric or
antisymmetric matrices, reversible maps, and time-symmetric maps. We have called such
a subset a 'pseudogroup', and it is very useful to us here in integration, although it remains
to be seen whether it is an interesting algebraic object in its own right.
Flows of vector fields have many nice properties, and many of these are desirable in an
integrator as well. Some of these properties are defined for all vector fields, such as covariance
under transformation groups, closure under differentiation [2], and time symmetry
(see (27) below). These are discussed in section 3. Others are defined only for the flows
of a certain class of vector fields, such as symplecticity for Hamiltonian systems, closure
under restriction to closed subsystems [2], and energy conservation [14]. We believe that
in the context of this new viewpoint of numerical methods, it is natural to try to 'lift'
these properties so that they are naturally defined for all vector fields. For example, Ge's
definition of invariance with respect to symplectic transformations [5] is only defined for
Hamiltonian systems, but it can also be seen as a special case of general covariance. In
section 4 we study this lift for symmetries and reversing symmetries, and also look at
methods that make explicit use of the symmetries of the vector field.
The automorphism point of view elucidates the relationship between different prop-
erties, such as time-symmetry and reversibility, which are in fact independent, although
related in some special cases considered previously (Runge-Kutta methods with a linear
reversing symmetry by Stoffer [27], and splitting into reversible pieces by McLachlan [10]).
For example, we show in section 4 how a map covariant under a group larger than strictly
necessary can be more flexible under composition.
Algebraic preliminaries
Let G be a group with elements ' and consider functions G. We only consider
functions which are either automorphisms of the group, that is, A+ is a bijection and
or antiautomorphisms, that is, A \Gamma is a bijection and
G: (2)
Examples of automorphisms are the inner automorphisms
G. Examples of antiautomorphisms are inverse
and, if G is a linear group (a group of matrices), the transpose T '. (This can be generalized
to the quantum case where ' is a linear operator on a Hilbert space.)
Note that the set of automorphisms and antiautomorphisms itself also forms a group,
where the group operation is composition. The automorphisms form a normal subgroup.
More specifically, every such group G \Sigma is homomorphic to Z 2
there is an onto map
We call the number oe(A) the grading of A. In the case at hand, the automorphisms have
grading +1 and the antiautomorphisms have grading \Gamma1. Each such group G \Sigma is generated
by, for example, its antiautomorphisms alone (if it has any), or by one antiautomorphism
together with all the automorphisms.
Define the fixed sets
and
Notice that
but not. However, we do have that the set of group elements fixed by a given
antiautomorphism is closed under the symmetric triple product:
This concept seems to be useful in, for example, constructing integrators, and we shall call
it a pseudogroup 2 .
Example. Let G be the linear group GL(n) and consider the antiautomorphism T ,
transpose. The fixed set of this antiautomorphism is the pseudogroup of nonsingular
symmetric matrices: if X and Y are symmetric matrices, then so is XY X.
1 Unless there are no antiautomorphisms in the group.
A discussion of (what we call) the pseudogroup of maps with time-reversal symmetry, is given in [9].
Example. Let G be any group and consider the inner automorphism N / . The fixed
set of this automorphism is the group of so-called /-equivariant elements: if ' 1 , ' 2 are
/-equivariant (i.e., ' then so is ' 1 ' 2 .
If the antiautomorphism A \Gamma is an involution (i.e., A 2
there is a 'projection' to
because
first introduced in a special case by
Benzel, Ge, and Scovel [1].) This can be generalized to a group G \Sigma of automorphisms and
(not necessarily involutory) antiautomorphisms. Unfortunately, it is difficult to project an
arbitrary group element to fix(G \Sigma ); but QA \Gamma
does map the fixed set of the subgroup G+ of
automorphisms to the fixed set of the entire group.
Proposition: (Generalized Scovel projection) Let G \Sigma be a group of automorphisms
and antiautomorphisms, and let G+ be its subgroup of automorphisms. Let A \Gamma 2 G \Sigma be an
antiautomorphism, so that G
(The proof is given below.) This is not a true projection because it does not satisfy
but it will have some useful properties in our applications.
The 'projection' is not always surjective. For example, let take a group
consisting of one automorphism (the identity) and one antiautomorphism (transpose), that
is, G . Projection to the symmetric matrices produces
symmetric matrices with nonnegative determinant.
The projection Q and the pseudogroup property are in fact both examples of the
following more general relationship, which is of central importance in this paper:
Proposition: (Composition property) Let G \Sigma be a group of automorphisms
fA j+ g and antiautomorphisms of the group G. Let ' 1 ,
for all A
Proof: We only need consider the effect of an antiautomorphism; membership of fix(G
will follow because the antiautomorphisms generate all of G \Sigma .
A
for all k.
From this proposition the pseudogroup property follows by taking
the generalized Scovel projection Q follows by taking ' 2 to be the identity.
As an example of the composition property (12), take G g. All matrices B are
in matrices A are in fix(G \Sigma ), and indeed, BAB T is symmetric. The
reader is invited to check the consequences of the proposition for the sets of matrices given
in table 1.
We note that in most of this work, we can weaken the antiautomorphism requirement
to
G. This allows, for example, the operator A \Gamma
the antisymmetric matrices. The composition property (12) is retained, but the projection
(11) is lost if the identity is not in
Projecting onto the fixed set of automorphisms is more difficult. However, one may
have (as in the case of maps on R n ) that G is a near-ring: (G; +; :) is a near-ring if (G; +)
is a group, (G; :) is a semigroup, and ('
that maps under composition do not form a ring because they are not left-distributive.) If
the automorphisms are linear with respect to addition in the near-ring, and G+ is a finite
group with jG then we can use the following, which is a true projection:
The half-sized group construction
As mentioned above, groups G \Sigma are always homomorphic to Z 2 . In our application
to reversing symmetries, we shall use groups G+ of automorphisms which themselves are
homomorphic to Z 2 . From such a group we can construct a group H \Sigma of the same size as
G+ in the following way. Let A+;oe denote an element of G+ of grading
be a fixed antiautomorphism whose square A 2
\Gamma is an element of G+ with grading +1 which
commutes with G+ .
The automorphisms in H \Sigma are the elements A+;1 . The antiautomorphisms in H \Sigma are
the elements . It is straightforward to check that this gives a group H \Sigma of the
same size as G+ , i.e., half the size of G \Sigma .
Example. Let G be GL(2) and G+ be the inner automorphisms corresponding to a
subgroup of G homomorphic to Z 2 , say rotations by 3-=2. The antiau-
tomorphism I (inverse) satisfies the above requirements: its square (the identity) is in
G+ , does have grading +1, and does commute with G+ . The inner automorphisms N g;+1
together with the antiautomorphisms IN g;\Gamma1 form the group
This group comprises two automorphisms and two antiautomorphisms-it is Z 2 \Theta Z 2 .
Then in order for Q IN -=2
to be a projection to fix(H \Sigma ) we need ' to be fixed under all the
automorphisms in H \Sigma . Here we require
operator sign fixed set type 'projection'
I anti
IT auto AA orthogonal
Table
1. Sets of matrices as fixed sets of (anti)automorphisms. Here I is the identity
matrix and
\GammaI
3 General properties of integration methods
For simplicity we restrict our attention to systems of first-order autonomous ODEs on R n ,
i.e., ODEs of the form
for some n. Let F be a set of vector fields on R n (each vector field need not be on a space
of the same dimension), and let \Phi be a set of functions
An element ' 2 \Phi is called an integrator of f if
When we want to emphasize the functional dependence of ' on - and f we shall call it
a "method;" at other times we may want to think of fixing - and f and looking at the
resulting map.
Examples. If all f 2 F are differentiable, then the exact flow ' -
method Runge-Kutta method, and any one-step method which is
functionally dependent only on f are possible members of \Phi. If we impose additional
conditions on the set of vector fields F , then there is larger family of possible 's; e.g., if
all f 2 F are C r , then the Taylor series method of order r may be in \Phi. Moreover,
functions which are not integrators at all can also be in \Phi. Examples are ' - (f) : y 7! y,
and, (if all f are C r ), elementary differentials such as - 2 f 0 f .
To apply the results of the preceding section, a group is needed. To construct it we let
the set of maps
generate a group G by composition. For example,
We will not specify the sets F and \Phi precisely, as we wish to make the constructions
apply as generally as possible. Minimum requirements will be clear in each application.
Not all the elements of G associate maps with vector fields. For example, let ' - (f) be
an integrator of f . Then even though is an integrator of all vector fields
not an operator on vector fields. In fact the group G may seem at first
to be much bigger than necessary: one could also let, e.g., \Phi generate a group, with group
operation
But we wish to allow flexibility in the choice of time steps (for composition methods) and
vector fields (for splitting methods).
The functions A are now operators on the space of methods. They may or may not
preserve the property of being an integrator, and if they do, they may not preserve the
order of the integrator.
The exact flows of differential equations are important elements of G. It is their properties
that we are trying to mimic in constructing actual numerical methods. Some properties
(e.g., the semigroup property ' - (f)' oe hold and are defined for all f ; others
only hold and are defined for some f (e.g., volume-preservation det(d' -
divergence-free f ). Where possible, we try and extend the definition of these restricted
properties to all f ; but even with the guiding principle that they should hold for flows,
this extension cannot be unique. In this section we consider properties defined for all f , in
particular, those shared by fixed sets of automorphisms and antiautomorphisms of G. 3
Our first such property is "covariance". It is often useful to consider whether a method
produces the same integrator in different coordinate systems. This concept was studied for
symplectic transformations by Ge [5] with the aim of preserving integrals of Hamiltonian
systems by symplectic integrators. It was also used (with general linear transformations
and Runge-Kutta methods) by one of the present authors in [11]. We extend this concept
to general integrators and arbitrary groups of transformations acting on phase space.
Let H be a group with a left action (h; y) 7! hy on phase space R n 3 y. A method
- (f) is covariant with respect to H if the following diagram commutes 8h 2 H:
f=h f
y x=hy \Gamma! ' - ( ~
The top arrow pushes forward the vector field f by h; the bottom arrow conjugates the
Writing out the transformations through a clockwise loop shows that
where
3 Note that if the antiautomorphism A \Gamma is such that flows are in is an integrator of
f of a certain order, then so is A \Gamma ' - (f). This is useful when composing as in (12).
(Compare the coordinate transformation automorphism K h to the inner automorphism
whose fixed set is the equivariant maps.) Now that covariance is
defined for \Phi by (24-26) (i.e., it is defined for the generators of G), the definition extends
to all of G in the natural way.
If ' - (f) is the time- flow of f , then ' is covariant with respect to the group of
diffeomorphisms of R n . In general, however, discrete approximations will only be covariant
with respect to the group of affine transformations or one of its subgroups. Different
methods can then be classified by their covariance group. Many traditional methods, such
as Euler's method, the midpoint and trapezoidal rules, and any Runge-Kutta or Taylor
series method, are covariant with respect to the affine group (the semi-direct product
of GL(n) and the translations R n ). (Some cases have been treated in [5, 27, 31].) Most
methods are covariant with respect to translations, but some are not covariant with respect
to the entire linear group but only with respect to a subgroup. For example, the known
volume-preserving methods [3, 19] are covariant with respect to the subgroup of diagonal
transformations. Splitting methods which partition the coordinates are covariant with
respect to arbitrary transformations which preserve the partitioning.
Our second important property is time-symmetry. A method ' is time-symmetric if
This important property, also known as 'symmetry' or `self-adjointness' or 'reversibility',
is considered in [4, 6, 17, 24, 32, 33]. (We reserve the name 'symmetry' for the spatial
symmetries considered in the next section.) Because the parameter - is included in G, we
can introduce the time-reversal automorphism
so that time symmetry means being in the fixed set of the antiautomorphism IR.
(One can also consider inverse-negative 's with ' -
time-scaling
's with ' c- (f=c). However, since there is a simple projection onto the fixed
set of the time-scaling operator, namely, ' - (f) 7! ' 1 (-f ), and the inverse-negative and
time-symmetry properties are then equivalent, we will not emphasize these properties.)
If our favourite integration method is not time-symmetric, we can use the projection
\Gamma-
to make it so. Now so the group G \Sigma generated by IR is just f1; IRg. It has
no nontrivial automorphisms so all ' are in fix(G + ) and (29) gives a time-symmetric map
for any '. (This projection was first given in [1].)
We can also use the composition property (12) to get time-symmetric methods of the
any method and ' 2 is time-symmetric, which has been used by many authors
starting with Suzuki [29] and Yoshida [33] for constructing higher-order methods.
Now consider the problem of constructing methods which are both covariant and time-
symmetric. Take a group of coordinate transformation automorphisms
and the antiautomorphism IR which together generate G (1)
\Sigma . Firstly, if ' is covariant with
respect to H (so that ' 2 fix(G
for all h 2 H. Secondly, if H, and hence G+ is homomorphic to Z 2 , then the half-sized
group construction gives a subgroup of G (1)
G (2)
with projection
where g is an element with grading \Gamma1 and ' 2 fix(K h ) for all elements h with grading +1.
All exact flows are in fix(G (1)
are desirable properties for integrators
as well. The advantage of also considering G (2)
\Sigma is that then the projection requires less of
'.
4 Integrators that preserve symmetries and reversing
symmetries
In this section we are interested in properties of integrators that do not hold for all vector
fields, but only for those with some given property, such as possessing a symmetry or
reversing symmetry. We first give some definitions:
The ODE (17) is S-symmetric if the vector field f satisfies
The ODE (17) is R-reversible if the vector field f satisfies
Here S and R are arbitrary diffeomorphisms of phase space, i.e., R is not necessarily an
involution.
The map ' is S-symmetric if ' 2 fix(N S ), i.e.
The map ' is R-reversible if ' 2 fix(IN R ), i.e.
If f is S-symmetric, then S-symmetry of ' - (f) is equivalent to S-covariance (' 2 fix(K S )).
If f is R-reversible, then R-reversibility of ' - (f) is equivalent to ' 2 fix(IRKR ).
Of course, a dynamical system may possess more than one (reversing) symmetry. The
set of all symmetries and reversing symmetries of a given dynamical system form a group
under composition, the reversing symmetry group \Gamma [8, 22]. It is homomorphic to Z 2 : the
symmetries have grading +1 and the reversing symmetries have grading \Gamma1. The flow
exp(-f) of f has the same reversing symmetry group as f .
The question we now address is: Given a vector field f , possessing a reversing symmetry
group \Gamma, how does one construct integrators ' - (f) that possess the same reversing symmetry
group? For the case of a single reversing symmetry R, Scovel [24] introduced the following
projection: If R
- R is R-reversible. This is another instance of the
projection (11), in this case, Q IN R
. (Just as for the antiautomorphism IR, the only
generated automorphism is the identity.) For the case of a larger reversing symmetry
group \Gamma, the half-sized group construction means that we can use 's which are not covariant
with respect to all of \Gamma, but only with respect to its symmetries. Then for any reversing
symmetry R in the group, the projection Q IN R
R possesses the full reversing
symmetry group \Gamma.
There are two possible extensions of this projection to all vector fields f . The first,
', has the reversing symmetry group \Gamma, regardless of whether the vector field does. In
particular, it is not an integrator of f even if ' is. Thus we prefer the following extension:
The conceptual advantage is all flows are in fix(IRKR ). This is exactly the group and
projection used in (33).
A drawback of the method Q IRKR ' is that it is not time-symmetric, and that making
it so by a second projection would destroy the desired reversibility: Q IRQ IRK R
There are three ways out.
The first is to use the composition property (12) to increase the order, ignoring time-
symmetry completely. For example, if ' - is reversible and of order 1, then one can check
that
is reversible and of order 2.
The second is to argue that time-symmetry does not affect the dynamics of the method
as much as reversibility. So we could first construct a non-reversible high-order method
(which may well be time-symmetric), and then form Q IRKR ', preserving the order but
not time-symmetry.
The third way is the most interesting. Abstractly, the problem is that we have two
antiautomorphisms A \Gamma and B \Gamma and would like to construct an element invariant under
them both. For the projection to work, the initial element ' must be invariant under all
automorphisms generated by A \Gamma and B \Gamma . Here we have ' 2
not ' 2
does not map to the fixed set of the full group of anti- and
automorphisms. However, if also ' 2 is easy to see that ' is fixed by
all generated automorphisms, and that projection with any generated antiautomorphism
gives the same Q'.
In the case at hand, A . Thus we are interested
in maps covariant under the reversing symmetry R as well, instead of just the ordinary
symmetry R 2 as previously:
Note that if f is R-reversible, its flow is R-reversible, R-covariant, and time-symmetric.
If a map satisfies any two of time-symmetry, R-reversibility, and R-covariance, then it also
satisfies the third. This gives a very flexible approach for constructing integrators, for
R-covariance-invariance under an automorphism-is preserved under arbitrary composi-
tions. At any stage one may project using Q IR to gain all three properties.
The case of a reversing symmetry group \Gamma is similar. We would now require full \Gamma-
covariance: reversing symmetries R in the
group. (In practice, one only checks covariance under a set of generators of \Gamma, such as the
reversing symmetries.) For example, affinely-covariant methods such as Runge-Kutta are
covariant with respect to any affine reversing symmetry group. So by asking a little more
of ' - (f ), we can directly apply the symmetric compositions of Yoshida [33] to increase the
order while still preserving reversibility.
Thus there are many routes to structure-preserving integrators. The preferred path will
depend on what additional structure (e.g. volume-preservation, integrals) the ODE has,
and on whether the symmetries and/or the reversing symmetries are linear. First find a
first-order method ' - (f ). If it is \Gamma-covariant, we are done. If \Gamma is linear, (see (15)) is
\Gamma-covariant, although this may break some structure in '. (For example, there is no known
linearly-covariant volume-preserving integrator.) The case when ' is covariant only with
respect to the symmetries is handled by the generalized Scovel projection, (38). Finally,
splitting methods in which the constituent vector fields are not reversible introduce some
new possibilities, discussed in the example below.
Example 1: Affine method.
One possible way of integrating an ODE whose reversing symmetry group is affine is
well known. It is not difficult to show that the implicit midpoint rule
is covariant with respect to the group of affine transformations (as indeed all Runge-Kutta
methods are). It follows that if the vector field f has an affine reversing symmetry group
\Gamma, then so does ' - (f ). Since the midpoint rule also has time symmetry (27), ' - (f) has
reversing symmetry group \Gamma. (The midpoint rule is also symplectic when the ODE (17)
is Hamiltonian ([23]), but not all Runge-Kutta methods with the time-symmetry property
are symplectic, nor vice versa.)
Example 2:
Consider the divergence-free vector fields in R 4 with parameter ff
@x
(\Gammay 3
@y
@
@z
@
which possess the following reversing symmetry group
the cyclic group generated by R 1 ). We want a method preserving the divergence-free
structure, the symmetries, and the reversing symmetries. The midpoint rule would
preserve the reversing symmetry group but is not, in general, volume preserving. It is shown
in [17] that the midpoint rule is volume preserving if and only if P
In R 4 , this requires the coefficient of - in P (-) to be zero. It can be
checked that this coefficient is not zero here.
Instead, we shall preserve volume by using a splitting method. Suppose f = f_1 + . . . + f_k, where
each f_i is divergence free and possesses all (nonreversing) symmetries (but not necessarily
all reversing symmetries), and the solution of each x' = f_i(x) can be approximated by a map ' - (f_i)
preserving both volume and the symmetries and any reversing symmetries which f_i
may have. Then we only need to combine the approximations so as to recover the reversing
symmetries. In general, there are three cases.
Firstly, if each f i is reversible, then the ' - (f i ) are reversible by assumption, so the
composition
is reversible-and, if ' - (f) is time-symmetric, (44) is also time-symmetric.
Secondly, if there is no structure when the f i are reversed, the best we can do is let
and apply the generalized Scovel projection, giving / - R/ \Gamma1
is irrelevant here. (Volume is preserved because the reversing symmetry group preserves
volume in this case.)
Thirdly, and perhaps most efficiently, suppose we can split into three pieces, f 1 , f 2 , and
with the following structure:
(The reversible piece f 2 may be split further if desired.) Let ' - (f 2 ) be reversible, and
let ' - (f 1 ) be any integrator. Then the composition
(R^{-1} ' - (f 1 )^{-1} R) composed with ' - (f 2 ) composed with ' - (f 1 )
is reversible. This situation occurs frequently when there is a reversing symmetry which is
linear and satisfies R^2 = \Sigma 1. For then let f 1 be any vector field, define f 3 := -R f_1 R^{-1},
and set f 2 := f - f_1 - f_3, which is then reversible (because f is reversible by
assumption.) We only need to find an f 1 such that this splitting is useful.
In the present case we split into the planar vector fields
@
@x
@
@x
@y
@y
@
@z
so that each f i is divergence-free and hence Hamiltonian in two dimensions, and has symmetries
(because their coefficients are odd functions). The f 's also satisfy (46)
where the R i are given in (43). We let ' - be the midpoint rule,
which is volume-preserving for these Hamiltonian systems, and get the reversible method
This highlights the point that reversible integrators need not be time-symmetric, nor need
the pieces in a splitting method possess the reversing symmetries.
we have the nice situation of a splitting into two pieces, with
the direct composition of their flows (or time-symmetric, \Gamma-covariant approximations of
being reversible. Exactly the same constructions apply to a system with integrals,
once basic maps preserving the integrals and symmetries have been found.
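The reversible composition used in the third case above is easy to test in a linear toy setting, where exact matrix exponentials play the role of the basic integrators and R is a linear reversing symmetry with R^2 = I (all matrices below are arbitrary stand-ins chosen only to exercise the algebra):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
R = np.diag([1.0, 1.0, -1.0, -1.0])        # linear reversing symmetry, R^2 = I

A1 = rng.standard_normal((4, 4))           # arbitrary piece f1(x) = A1 x
A3 = -R @ A1 @ R                           # f3 := -R f1 R, as in the text
B = rng.standard_normal((4, 4))
A2 = (B - R @ B @ R) / 2.0                 # R A2 R = -A2, so f2(x) = A2 x is reversible

h = 0.1
phi1, phi2 = expm(h * A1), expm(h * A2)    # exact flows used as the basic integrators
psi = (R @ np.linalg.inv(phi1) @ R) @ phi2 @ phi1   # composition of the third case

# reversibility: R psi R should equal psi^{-1}
print(np.linalg.norm(R @ psi @ R - np.linalg.inv(psi)))   # ~ 1e-15
```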
Acknowledgement
. G.S.T. is grateful to the Australian Research Council for partial support
during the time this paper was written.
--R
Elementary construction of higher order Lie-Poisson integrators
Dynamical systems and geometric construction of algorithms
Equivariant symplectic difference schemes and generating functions
Solving Ordinary Differential Equations I
Chaotic Numerics
Reversing symmetries in dynamical systems
Symmetries and reversing symmetries in kicked systems
On the numerical integration of ordinary differential equations by symmetric composition methods
"Poisson schemes for Hamiltonian systems on Poisson mani- folds"
The accuracy of symplectic integrators
A survey of open problems in symplectic integration
Lectures in Mechanics
Convergent series expansions for quasi-periodic motions
Coexistence of conservative and dissipative behavior in reversible dynamical systems
Solving ODE's numerically while preserving a first integral
Solving ODE's numerically while preserving all first integrals
Chaos and time-reversal symmetry: order and chaos in reversible dynamical systems
Symplectic numerical integration of Hamiltonian systems
Reversible Systems
Symmetric two-step algorithms for ordinary differential equations
Variable steps for reversible integration methods
Numerical analysis of dynamical systems
Fractal decomposition of exponential operators with applications to many-body theories and Monte-Carlo simulations
Representation of volume-preserving maps induced by solenoidal vector fields
Construction of higher order symplectic integrators
| symmetries;automorphisms;numerical integrators
276485 | Analysis of Algorithms Generalizing B-Spline Subdivision. | A new set of tools for verifying smoothness of surfaces generated by stationary subdivision algorithms is presented. The main challenge here is the verification of injectivity of the characteristic map. The tools are sufficiently versatile and easy to wield to allow, as an application, a full analysis of algorithms generalizing biquadratic and bicubic B-spline subdivision. In the case of generalized biquadratic subdivision the analysis yields a hitherto unknown sharp bound strictly less than 1 on the second largest eigenvalue of any smoothly converging subdivision. | Introduction
The idea of generating smooth free-form surfaces of arbitrary topology by iterated mesh
refinement dates back to 1978, when two papers [CC78], [DS78] appeared back to back in
the same issue of Computer Aided Design. Named after their inventors, the Doo-Sabin
and the Catmull-Clark algorithm represent generalizations of the subdivision schemes for
biquadratic and bicubic B-splines, respectively. By combining a construction principle
of striking simplicity with high fairness of the generated surfaces, both algorithms have
since become standard tools in Computer Aided Geometric Design. However, despite
a number of attempts [DS78], [BS86], [BS88], the convergence to smooth limit surfaces
could not be proven rigorously so far.
NSF National Young Investigator Award 9457806-CCR
y BMBF Projekt 03-HO7STU-2
The proof techniques and actual proofs to be presented here are based on the concept
of the characteristic map as introduced in [Rei95a]. The characteristic map is a smooth
map from some compact domain U to R 2 which can be assigned to stationary linear
subdivision schemes. It depends only on the structure of the algorithm and not on the
data. If this map is both regular and injective, then the corresponding algorithm generates
surfaces. It is shown in this paper that on the other hand non-injectivity
at an interior point of the map implies non-smoothness of the limit surfaces. Further, we
establish two sufficient conditions for regularity and injectivity of the characteristic map
which allow a straightforward verification. The stronger one, however still applicable
in many cases, only requires the sign of one partial derivative of one segment of the
characteristic map to be positive.
A careful analysis of the Doo-Sabin and the Catmull Clark algorithm yields the
following results:
ffl The Doo-Sabin algorithm in its general form uses weights
for computing a new n-gon from an old one. Affine invariance and symmetry,
i.e. α_0 + · · · + α_{n−1} = 1 and α_j = α_{n−j}, imply that the discrete Fourier
transform α̂ of α is real and satisfies α̂_k = α̂_{n−k}. If λ := α̂_1 is
greater in modulus than the other entries except for 1, and if 1/4 < λ < λ_max(n)
for certain values λ_max(n), then the limit surface is smooth. The bound λ_max(n)
can be computed explicitly, see Table 1. If 1 > λ ≥ λ_max(n), then the limit is a
continuous, yet non-smooth surface.
ffl In particular, the Doo-Sabin algorithm in its original form (5.1) complies with the
conditions, hence generates smooth limit surfaces.
ffl The Catmull-Clark algorithm in its general form uses three weights ff; fi; fl summing
up to one for computing the new location of an extraordinary vertex from
its predecessor and the centers of its neighbors. Iffi fi fi fi
with c n := cos(2-=n), then the limit surface is smooth. If one of the two values
on the left hand side exceeds the right hand side, then the limit surface is not
smooth.
ffl In particular, the Catmull-Clark algorithm in its original form (6.2) complies with
the conditions and generates smooth limit surfaces.
Generalized subdivision and the characteristic map
In this section we briefly outline the results of subdivision analysis as developed in
[Rei95a], and establish a new necessary condition for C 1 -subdivision schemes.
Generalized B-spline subdivision generates a sequence Cm of finer and finer control
polyhedra converging to some limit surface y. On the regular part of the mesh, standard
B-spline subdivision is used for refinement, whereas special rules apply near extraordinary
mesh points. Since all subdivision masks considered here are of fixed finite size,
we can restrict ourselves to analyzing meshes with a single extraordinary mesh point of
valence 4. The regular parts of the control polygons Cm correspond to B-spline
surfaces ym which form an ascending sequence
converging to the limit surface,
With the prolongation of ym defined by
xm := closure (ym+1nym
the limit surface is the essentially disjoint 1 union
The xm are ring-shaped surface layers which can be parametrized conveniently over a
common domain U \Theta Z n ; Z n := Z mod n, consisting of n copies of the compact set
see
Figure
1. Each surface layer xm can be parametrized in terms of control points
polynomial functions N ' according to
Without loss of generality, we may assume that the functions N ' are linearly inde-
pendent. Otherwise, the setup can be reduced without altering the properties of the
scheme. The n parts x 0
forming xm are referred to as segments. Collecting
1 The intersection consists exclusively of points on the common boundary curve, which are identified.
uvU
x
Figure
1: Domain U (left) and structure of surface layers xm (right).
the functions N ' in the columns of a row matrix N and the control points in the rows
of a column matrix Bm yields the vector notation
xm (u; v;
The schemes to be considered here are linear and stationary, i.e. there exists a square
subdivision matrix A with
Definition 2.1 Let the eigenvalues - of A be ordered by modulus,
and denote by / the corresponding generalized real eigenvectors. If
then the characteristic map of the subdivision algorithm is defined by
or in complex form by
Following (2.7), the segments \Psi j and \Psi j
of the characteristic map are the restriction of
\Psi and \Psi to U \Theta j, respectively.
Figure
2: Injective (left) and non-injective (right) characteristic map
Remark is a (L+ 1) \Theta 2-matrix. Its rows play the role of 2D control points.
ii) Throughout, the subscript will indicate that we refer to the complexification of
a two-dimensional real variable or function. We will switch between complex and real
representation without further notice.
On the left hand side, Figure 2 shows a typical example of a characteristic map, as
obtained for example by the Doo-Sabin algorithm. In order to guarantee affine
invariance of the algorithm, the rows of A must sum to 1. Thus, e := [1, . . . , 1]^T is
an eigenvector of A to the eigenvalue 1. The following theorem establishes a sufficient
condition for subdivision algorithms to generate smooth limit surfaces.
Theorem 2.1 If 1 > λ := λ_1 = λ_2 > |λ_3| is a real eigenvalue with geometric
multiplicity 2, and if the characteristic map is regular and injective, then the limit surface
y is a regular C^1-manifold for almost every choice of initial data B_0.
A proof of this theorem can be found in [Rei95a]. Generalizations, though not required
here, are provided in [Rei95b] and [PR97]. Subsequently, it will be assumed that the
eigenvalues of A satisfy the assumptions of Theorem 2.1, and - will be referred to as
the subdominant eigenvalue.
The following theorem states a necessary condition for the convergence of a subdivision
scheme to smooth limit surfaces.
Theorem 2.2 If the characteristic map of a subdivision scheme is non-injective, i.e.
there exist (u; v;
and if \Psi(u; v; j) is an interior point of \Psi(U; Z n ), then the limit surface y is not a regular
-manifold for almost every choice of initial data B 0 .
Proof Choose an "-neighborhood
Then there exist neighborhoods V and V 0 of (u; v; respectively, with
\Psi is a continuous map sufficiently close to \Psi, i.e.
\Psi is also not injective. Now, express Bm
in terms of the generalized eigenvectors / ' ,
Then for almost every choice of initial data B 0 , the coefficients b 1 and b 2 are linearly
independent, and we can choose coordinates such that b is the origin and b
are the first two unit vectors. A rescaling of the surface layers yields
Now, assume that y is a regular C 1 -manifold. Then the latter equation implies that the
tangent plane at the origin is the xy-plane. The projection ~
/m of ~ xm on the xy-plane
is converging to \Psi, so ~
/m is non-injective for m sufficiently large. Consequently, the
projection of the layers xm to the xy-plane are non-injective near the origin for almost
all m. This contradicts the assumption since the projection of a regular C 1 -manifold on
its tangent plane is locally injective. 2
Finally, we state two basic properties of characteristic maps. The first one is derived
from the fact that \Psi and smoothly,
The second one expresses continuity between segments,
3 Symmetry and Fourier Analysis
This section examines the special structure of the characteristic map for subdivision
schemes obeying generic symmetry assumptions, namely that subdivision is independent
of the particular labeling of control points used for refining the control mesh. According
to the split of xm into n segments, the vectors Bm of control points can be divided into
equally structured blocks,
and A is partitioned into n \Theta n square blocks A j;j 0
Definition 3.1 A subdivision algorithm is called symmetric if it is invariant both under
a shift S and a reflection R of the labeling of the vector Bm of control points. S and R
are permutation matrices characterized by
Symmetry of the subdivision algorithm means that the subdivision matrix A commutes
both with R and S, i.e.
the segments of S and E the identity matrix of the same size as S j;j 0
, the
shift matrix S is given by
Comparison of SA and AS shows that the subdivision matrix of a symmetric scheme is
block-cyclic, i.e.
Thus, (2.8) becomes
With ω_n := exp(2πi/n) and p = (p_0, . . . , p_{n−1}),
define the discrete Fourier transform by p̂_k := Σ_{j=0}^{n−1} ω_n^{−jk} p_j , k = 0, . . . , n − 1.
Here p is an n-vector in the generalized sense that its entries p_j can
be either scalars, or vectors, or matrices. It will always become clear from the context
what is meant. Applying the discrete Fourier transform to (3.7) yields the decoupled blocks Â_k := Σ_{j=0}^{n−1} ω_n^{−jk} A_{0,j} ;
see [Lip81] for a comprehensive introduction to the Fourier analysis of cyclic systems.
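The practical consequence of this block-cyclic structure is that the spectrum of the large subdivision matrix A can be read off from the n small Fourier blocks. A short self-contained check with random blocks (block sizes and data below are arbitrary; only the block-circulant structure matters):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3                                # n segments, m x m blocks
blocks = [rng.standard_normal((m, m)) for _ in range(n)]   # A_{0,0}, ..., A_{0,n-1}

# Block-cyclic matrix: A_{i,j} = A_{0, (j-i) mod n}
A = np.block([[blocks[(j - i) % n] for j in range(n)] for i in range(n)])

w = np.exp(2j * np.pi / n)
A_hat = [sum(w ** (-j * k) * blocks[j] for j in range(n)) for k in range(n)]

ev_direct = np.sort_complex(np.linalg.eigvals(A))
ev_fourier = np.sort_complex(np.concatenate([np.linalg.eigvals(Ak) for Ak in A_hat]))
# The two spectra agree up to rounding (and ordering of nearly equal values).
print(np.max(np.abs(ev_direct - ev_fourier)))
```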
Theorem 3.1 The characteristic map for a symmetric scheme is non-injective unless
the subdominant eigenvalue - is an eigenvalue of -
A 1 and -
A
Proof - is an eigenvalue of A if and only if it is an eigenvalue of -
A k for some k 2
1g. If - is an eigenvalue of -
A k then it is also an eigenvalue of -
A n\Gammak since A
is real and -
A k . Let -
/, then
is a complex eigenvector of A. Consequently, the segments \Psi j
of the complex characteristic
map satisfy
Now, the winding number of the closed curve
is either k or the curve ff has self-intersections implying
that \Psi is not injective. 2
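The winding-number argument also yields a cheap numerical diagnostic for a concrete scheme: sample the closed boundary curve of one segment of the characteristic map and compute its winding number about the origin; together with the rotational relation between segments, a winding number different from 1 signals the wrong Fourier component and hence non-injectivity, as in the proof above. A sketch (the sample curve below is synthetic):

```python
import numpy as np

def winding_number(z, z0=0.0 + 0.0j):
    """Winding number of a closed polyline of complex samples z about z0."""
    r = z - z0
    turns = np.angle(r[1:] / r[:-1])      # turning increments in (-pi, pi]
    return int(np.round(turns.sum() / (2 * np.pi)))

# Toy closed curve that wraps k times around the origin.
k = 2
t = np.linspace(0.0, 1.0, 400)
alpha = np.exp(2j * np.pi * k * t)
print(winding_number(np.append(alpha, alpha[0])))   # -> 2, i.e. not injective
```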
The effect of the subdominant eigenvalue - stemming from the wrong Fourier component
is depicted in Figure 2 on the right hand side. It shows the non-injective characteristic
map for the Doo-Sabin algorithm for weights chosen such that - is an
eigenvalue of -
A 2 and -
A 4 . As a consequence of Theorem 3.1, it will be assumed that - is
an eigenvalue of -
A 1 and -
A n\Gamma1 from now on. So, (3.12) becomes
The following lemma is the key to reducing the analysis of the characteristic map \Psi to
the examination of a single segment, say \Psi . \Psi is called normalized if -
/ is scaled such
that \Psi 0 (2; that normalization is always possible if \Psi is
injective since then \Psi 0 (2;
Lemma 3.1 If \Psi is a normalized characteristic map of a symmetric scheme, then
and in particular
Proof (3.2) and (3.3) yield N(u; v; j)S implying that
S
since the functions forming N are assumed to be linearly independent. From (3.4) one
concludes that R/ is an eigenvector of A to the eigenvalue -, i.e. it can be written as
R/
On using S/ one obtains from the latter two equations
Since / and / are linearly independent, this implies a = 0, hence R/ . In order
to determine b, consider \Psi
(2; 2). By (3.3),
thus Finally, we obtain
Conditions for regularity and injectivity
In this section we derive a sequence of lemmas resulting in two sufficient conditions for
the regularity and injectivity of the characteristic map that can be verified efficiently.
Throughout, it will be assumed that \Psi is a normalized characteristic map of a symmetric
subdivision scheme.
The first lemma states that for regular functions injectivity is equivalent to injectivity
at the boundary.
Lemma 4.1 Denote by @U the boundary of U and by \Psi 0
@ the restriction of \Psi 0 to @U .
If
then \Psi 0 is injective if and only if \Psi 0
@ is injective.
Proof Assume that \Psi 0 is regular and \Psi 0
@ is injective. By the Inverse Function Theorem
(IFT), points in the interior
U of U are mapped to points in the interior of \Psi 0 (U ), i.e.
Define the function - assigning the number of pre-images to the points in \Psi 0 (U ),
Injectivity of \Psi 0
@ and (4.2) imply -(@ \Psi 0 1. - is upper semi-continuous by the
IFT, hence -(\Psi 0
The second lemma gives a sufficient condition for \Psi 0
(U) being located in a sector of
angle 2-=n in the complex plane.
Lemma 4.2 If \Psi 0 is regular and
for all t 2 [0; 1], then
for all (u; v) 2 U .
Proof By Lemma 3.1, we have
and in particular \Psi 0
then p 2 is monotonically increasing because \Psi 0
we obtain for t 2 [1; 2]
This implies either arg \Psi 0
-=n. The second case
contradicts (4.7), thus
-=n
Figure 3: Curves p_1 , . . . , p_6 .
This means that \Psi 0 (t; 0) is a part of the straight half line
monotonically increasing in both real and imaginary part, it has no
intersections with h except for p (0), hence
Using the scaling property (2.15) and symmetry with respect to the real axis, the latter
two inequations imply that (4.5) holds for all (u; v) 2 @U . By the IFT, we have
is compact, this
implies \Gamma-=n - arg \Psi 0 (U) -=n as asserted. 2
The third lemma provides a condition on the partial derivatives of \Psi 0 that implies
injectivity.
Lemma 4.3 If \Psi 0 is regular and \Psi 0
Proof By Lemma 4.1 it suffices to show that the restriction \Psi 0
@ of \Psi 0 to the boundary
of U is injective. Let
(2t;
for t 2 [0; 1], see Figure 3. Then
with defined as in the proof of Lemma 4.2. p 2
and p 5
do not intersect since
by (4.9). Both curves also do not have self-intersections since they are parametrized
regularly and parts of straight lines. Next, we show that arg p (t) is monotonically
increasing in t. By (4.7) and (4.9),
By assumption, p 1 and p 2 are monotone increasing, thus This implies
d
dt
as claimed. Monotonicity of arg p 6 has the following
consequences: First, it guarantees that p 1
do not have self-intersections.
Second, it excludes intersections of p 1
and p 3
since
Analogously, p 4
and
are disjoint. Third, the only intersections of p 1
and p 3
with
are
and analogously for p 4
. Fourth, the only intersections of p 1
and p 4
are
and the proof is complete. 2
The following theorem establishes a sufficient condition on the partial derivatives of \Psi 0
that guarantees regularity and injectivity of the characteristic map. Its usefulness is due
to the fact that it requires only estimates for the partial derivatives of the single segment
\Psi 0 . Since for generalized B-spline subdivision schemes the functions in questions are
piecewise polynomial, the condition can be verified numerically or even analytically
using B-spline representations and the convex hull property.
Theorem 4.1 If \Psi 0 is regular and \Psi 0
then the characteristic
map \Psi is regular and injective.
Figure
4: Mesh refinement by the Doo-Sabin algorithm.
Proof By Lemma 4.3, \Psi 0 is regular and injective. (3.14) says that \Psi j is obtained from
\Psi 0 by a 2-j=n-rotation about the origin. So, each \Psi is regular and injective.
Further, the segments \Psi j do not overlap since Lemma 4.2 yields
(4.22)The assumptions of the following Corollary are stronger than those of Theorem 4.1, but
can be verified with less effort since no products of partial derivatives are involved in
verifying that \Psi 0 is regular.
Corollary 4.1 If \Psi 0
then the characteristic
map \Psi is regular and injective.
Proof The symmetry relation (4.6) yields
so the determinant J 0 is positive if \Psi 0
5 The Doo-Sabin algorithm
5.1 Algorithm
The Doo-Sabin algorithm is a generalization of the subdivision scheme for biquadratic
tensor product B-splines. For each n-gon of the original mesh, a new, smaller n-gon
is created and connected suitably with its neighbors, see Figure 4. Figure 5 shows the
mask for generating a new n-gon from an old one for the regular case (left) and for
the general case (right).
Figure 5: Masks for the Doo-Sabin algorithm.
The weights suggested by Doo and Sabin in [DS78] are
α_0 = 1/4 + 5/(4n),   α_j = (3 + 2 cos(2πj/n)) / (4n),   j = 1, . . . , n − 1.   (5.1)
Below we analyze more general schemes assuming beforehand nothing but affine invariance
and symmetry, i.e. α_0 + · · · + α_{n−1} = 1 and α_j = α_{n−j} .
5.2 Characteristic map
Each of the n segments x j
of the surface layers generated by the Doo-Sabin
algorithm consists of 3 biquadratic B-spline patches. Accordingly, the n blocks
forming the vector of control points Bm consist of 9 elements, each. The labeling is
shown in Figure 6. The 9 \Theta 9-matrices -
introduced in (3.10) have the
following structure,
A k =B @
Figure
Labeling of control points for the Doo-Sabin algorithm
1=16, the sub-matrices are given by
r
r q p
The matrix A k
1;1 has eigenvalues 1=4; 1=8; 1=16, hence each of them is an n-fold eigenvalue
of the subdivision matrix A. Further, A has a 5n-fold eigenvalue 0 stemming from the
5 \Theta 5-zero submatrix of -
A k . Due to their high multiplicity, these eigenvalues cannot
be playing the role of the subdominant eigenvalue -. The only eigenvalues left are the
upper left entries -
A k obtained by applying the discrete Fourier transform
to the vector (ff of weights for the n-gon. Since the ff j sum up to 1, we have
1. Due to symmetry, the remaining eigenvalues are real and occur in pairs
according to -
ff n\Gammak
. From the theory developed in the preceding sections we know
that
ff
must satisfy
ff n\Gamma2
The eigenvector of the matrix -
A 1 corresponding to - is
Note that the characteristic map depends only on - and n. That is, all masks ff with
identical first Fourier component yield the same characteristic map.
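For the classical weights (5.1) this first Fourier component can be computed directly. The following sketch (using the weight formula α_0 = 1/4 + 5/(4n), α_j = (3 + 2cos(2πj/n))/(4n) recalled above) confirms that α̂_0 = 1, α̂_1 = α̂_{n−1} = 1/2, and that the remaining coefficients equal 1/4, so the subdominant eigenvalue λ = 1/2 indeed comes from the first Fourier block:

```python
import numpy as np

def doo_sabin_weights(n):
    # alpha_0 = 1/4 + 5/(4n), alpha_j = (3 + 2*cos(2*pi*j/n))/(4n), j = 1..n-1
    j = np.arange(n)
    a = (3.0 + 2.0 * np.cos(2.0 * np.pi * j / n)) / (4.0 * n)
    a[0] = 0.25 + 5.0 / (4.0 * n)
    return a

for n in (3, 5, 8, 12):
    a_hat = np.fft.fft(doo_sabin_weights(n))   # alpha_hat_k = sum_j alpha_j w^(-jk)
    print(n, np.round(a_hat.real, 12))
    # expected pattern: [1, 1/2, 1/4, ..., 1/4, 1/2]
```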
5.3 Verification
We start with briefly discussing the case λ = 1/2, as obtained in particular for the
weights in (5.1). Rearranging the entries of the eigenvector ψ in the more convenient
matrix form for tensor product B-spline coefficients, see Figure 6, yields
The segment \Psi 0
of the characteristic map consists of three bi-quadratic patches, which
can be expressed in Bernstein-B'ezier form with the following coefficients,2
28 s n
28
\Gamma7 s n
\Gamma7 s n
\Gamma14 s n
\Gamma7 s n
28
\Gamma14 s n
\Gamma14 s n
28 28 c n
\Gamma28 s n (5.9)
Computing the partial derivative \Psi 0
2;v with respect to v yields three
quadratic-linear patches with coefficients2
Both the real and the imaginary part of the coefficients are positive. So, by the convex
hull property and Corollary 4.1 the algorithm is verified to generate smooth limit
surfaces.
The situation for general λ is more subtle, in particular as λ → 1. First, Corollary 4.1
turns out to be insufficient. Second, there exists a limit value λ_max(n) < 1 depending
on n such that even the assumptions of Theorem 4.1 are not fulfilled for λ_max(n) ≤ λ < 1.
It will be shown in the next subsection that this is due to an actual loss of
smoothness as - passes the bound. All formulas required here were derived using a
computer algebra system. They are partially rather lengthy and will not be stated
explicitly unless necessary. Rather, we depict the crucial results graphically.
In order to apply Theorem 4.1, we have to compute J 0 , i.e. the determinant of the
Jacobian of \Psi 0 . J 0 is a continuous, piecewise bi-cubic function over U which can be
expressed in Bernstein-B'ezier form with 3 \Theta 16 coefficients J 0
depending
on n and -. Explicit calculation shows that all coefficients J 0
- are of the form
with polynomials of degree - 6 in -. We give the coefficient corresponding to
The polynomials P 1 and Q 1 are
In order to apply analytic tools, it is convenient to consider c n as a free variable varying
in the interval c n 2 [\Gamma1=2; 1], which covers all possible values obtained for n - 3. For
fixed there is at most one value ~ c n where J 0
- changes sign,
Figure
7: Feasible set and functions R -).
Figure
7 shows a plot of all these functions as well as a magnification of the significant
region. From the analysis of the case we know that J 0
Thus, J 0 is positive as long as (-; c n ) lies in the shaded region, which is bounded by
precisely, the feasible set for (-; c n ) providing positivity of J 0 is
For the verification of the assumptions of Theorem 4.1 it remains to show that \Psi 0
. Note that both functions are linear in t. So,
it suffices to check positivity for t 2 f0; 1g, which follows immediately from
Finally, we summarize the results derived in this section.
Theorem 5.1 Let ff 0
n be symmetric weights for the Doo-Sabin algorithm. If
ff
ff n\Gamma2
then the limit surface y is a regular C 1 -manifold for almost every choice of initial data
Figure
8: Characteristic map for
5.4 Failure beyond the bound
In contrast to the lower bound - ? 1=4, which appears naturally, the existence of an
upper bound for - may come as a surprise. It is not an artifact of the particular type of
sufficient conditions in Theorem 4.1, but a sharp bound beyond which the Doo-Sabin
algorithm provably fails. If J 0
Consider the curve [g 1 (t); g 2 (t)] := \Psi 0 (t; t). Symmetry with respect to the x-axis implies
For the first component we obtain
hence for each sufficiently small " ? 0 there exists an "
This implies the non-injectivity of the characteristic map \Psi,
Moreover, for " sufficiently small, J 0 (1+ "; 1+ ") ! 0 by continuity. So, \Psi 0 (1+ "; 1+ ") is
an interior point of \Psi(U; Z n ) by the IFT, and the assumptions of Theorem 2.2 are fulfilled
proving sharpness of the bound. Figure 8 shows a magnification of the characteristic map
in the vicinity of \Psi 0 (1; 1) for (right). The latter
case corresponds to weights ff layers of a subdivision
Figure
9: Non-smooth surface generated by the Doo-Sabin algorithm with
surface generated by these weights are shown in Figure 9. The magnification on the right
hand side is non-proportional, i.e. the 'height' of the surface has been expanded in order
to depict its wavy shape. We conclude the discussion of the Doo-Sabin algorithm with
a brief description of the qualitative and quantitative behavior of - max (n). As n !1,
increasing monotonically towards 1. The asymptotic behavior for large n is
The lowest bound occurs for namely
cos
arctan
Table
1 lists the values of - max for
6 The Catmull-Clark algorithm
6.1 Algorithm
The Catmull-Clark algorithm is a generalization of the subdivision scheme for bicubic
tensor product B-splines. Each n-gon of the original mesh is subdivided into n quadrilaterals
thus generating a purely quadrilateral mesh after the first step. There are three
masks for subdividing such a mesh, namely one for computing a new centroid, one for
Table 1: Values of the bound λ_max(n); e.g. λ_max(9) = 0.9829902941.
Figure
10: Mesh refinement by the Catmull-Clark algorithm.
1=4
1=4
1=4
1=4
1=32
fi=n
fl=n
fi=n
fl=n
fi=n
fl=n
fi=n
fl=n
fi=n
fl=n
Figure
11: Masks for the Catmull-Clark algorithm.
Figure
12: Labeling of control points for the Catmull-Clark algorithm.
a new edge point, and one for the new location of a former vertex, see Figure 11. So,
the variables at our disposal are the weights
In [CC78], Catmull and Clark suggest
6.2 Characteristic map
Each of the n segments x j
of the surface layers generated by the Catmull-Clark
algorithm consists of 3 bicubic B-spline patches. Accordingly, the n blocks
forming
the vector of control points Bm consist of 13 elements, each. The labeling used here is
shown in Figure 12. Note that the centroid
is replaced by n identical copies in order to achieve the desired periodic structure. For
all masks involving Mm we substitute
Mm =n
The 13 \Theta 13-matrices -
turn out to have the following structure,
With
the sub-matrices are given by
The eigenvalues 1=8; 1=16; 1=32; 1=64 of the sub-matrix -
are n-fold eigenvalues of A.
Other non-zero eigenvalues come only from -
0;0 . For we obtain the obligatory
eigenvalue letting
which might be either both real or complex conjugate. For k 6= 0, the non-zero eigen-values
of -
are
where c n;k := cos(2-k=n). Let
then straightforward calculus shows that for all n
Consequently, - is subdominant if ff; fi; fl are chosen such that
In particular, this inequality holds for the original weights of Catmull-Clark (6.2), as
can be verified by inspection. A characterization of feasible positive weights can be
found in [BS88] 2 . For computing the characteristic map, the eigenvector -
/ of -
A 1 is
partitioned into three blocks, -
according to the special structure of -
A 1 .
/ is equivalent to
/ can be computed conveniently starting from
which solves the first eigenvector equation. Note that the characteristic map depends
only on n, and not on the particular choice of weights ff; fi; fl provided that (6.11) holds.
6.3 Verification
Corollary 4.1 is sufficient for verifying the algorithm. One proceeds as follows:
1. For given n - 3, compute the subdominant eigenvalue - according to (6.11) and
the corresponding eigenvector -
/ according to (6.15).
2. Express the three patches of the segment \Psi 0 of the characteristic map in Bernstein-
B'ezier form.
3. Compute the forward differences of B'ezier coefficients corresponding
to the partial derivative with respect to v.
4. If all are positive in both components, then by the convex hull
property of the Bernstein-B'ezier form the assumptions of Corollary 4.1 are fulfilled
and the characteristic map is regular and injective.
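Steps 3 and 4 amount to checking the signs of finitely many numbers, which is easy to automate. A generic helper of the kind one might use is sketched below; the actual Bezier coefficients of the Catmull-Clark characteristic map are not reproduced here, and the demo data are placeholders:

```python
import numpy as np

def check_corollary_4_1(patches):
    """'patches' is a list of square arrays of complex Bezier points, one per
    patch of Psi^0.  Forward differences along the direction playing the role
    of v must have positive real and imaginary parts (convex hull property)."""
    ok = True
    for b in patches:
        d = np.diff(np.asarray(b, dtype=complex), axis=1)
        ok &= bool(np.all(d.real > 0) and np.all(d.imag > 0))
    return ok

# Usage sketch with placeholder coefficients:
demo = [np.array([[0 + 0j, 1 + 1j], [1 + 0j, 2 + 2j]])]
print(check_corollary_4_1(demo))   # True for these placeholder data
```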
2 The result in the reference is incorrect for yielding complex eigenvalues - 0
Figure
13: B'ezier coefficients of the partial derivatives \Delta - .
This procedure can be run on a computer algebra system, but the resulting expressions
are rather lengthy, and discussing them is not very instructive. A numerical treatment is
more convenient and yields equally reliable results since only a finite number of quantities
has to be checked for sign. The findings are summarized in Figure 13. The left and
right hand side correspond to the two components of \Delta - . The top row shows the values
of all \Delta - for 20. The bottom row shows the minimum of the \Delta - on a doubly-
logarithmic scale for should cover most cases of practical
relevance. The positivity of all differences is evident. By Corollary 4.1, this proves
smooth convergence of the Catmull-Clark algorithm provided that the inequality (6.14)
holds.
--R
A matrix approach to the analysis of recursively generated b-spline surfaces
Conditions for tangent plane continuity over recursively generated b-spline surfaces
Recursively generated B-spline surfaces on arbitrary topological meshes
Behaviour of recursive subdivision surfaces near extraordinary points
Elements of algebra and algebraic computing
Necessary conditions for subdivision surfaces
A unified approach to sudivision algorithms near extraordinary ver- tices
Some new results on subdivision algorithms for meshes of arbitrary topology
| arbitrary topology;b-spline;doo-sabin algorithm;characteristic map;catmull-clark algorithm;subdivision
276487 | Discrete Shocks for Finite Difference Approximations to Scalar Conservation Laws. | Numerical simulations often provide strong evidences for the existence and stability of discrete shocks for certain finite difference schemes approximating conservation laws. This paper presents a framework for converting such numerical observations to mathematical proofs. The framework is applicable to conservative schemes approximating stationary shocks of one-dimensional scalar conservation laws. The numerical flux function of the scheme is assumed to be twice differentiable but the scheme can be nonlinear and of any order of accuracy. To prove existence and stability, we show that it would suffice to verify some simple inequalities, which can usually be done using computers. As examples, we use the framework to give an unified proof of the existence of continuous discrete shock profiles for a modified first-order Lax--Friedrichs scheme and the second-order Lax--Wendroff scheme. We also show the existence and stability of discrete shocks for a third-order weighted essentially nonoscillatory (ENO) scheme. | Introduction
In this paper, we provide a general framework for proving the existence and stability of continuous
discrete shock profiles for conservative finite difference schemes which approximate
scalar conservation laws
Research supported by ARPA/GNR grant N00014-92-J-1890.
We consider schemes of conservative form:
u_j^{n+1} = u_j^n − λ ( g_{j+1/2}^n − g_{j−1/2}^n ),   j ∈ Z,   (1.2)
with
g_{j+1/2}^n := g(u_{j−p+1}^n , . . . , u_{j+p}^n ),   λ := Δt/Δx.   (1.3)
Here g(·, . . . , ·) is the numerical flux of the scheme, which satisfies g(u, u, . . . , u) = f(u)
(consistency) and is twice continuously differentiable w.r.t. its arguments; u_j^n is an approximation
to u(jΔx, nΔt); and p ≥ 1 is a constant integer such that 2p + 1 is
the stencil width of the scheme. Schemes with such flux functions include the first order
Lax-Friedrichs scheme and some of its modified versions, the second order Lax-Wendroff
scheme and a class of high resolution weighted ENO schemes[7].
Let u_− and u_+ be two constants such that Eq. (1.1) with the initial data
u(x, 0) = u_− for x < 0,   u(x, 0) = u_+ for x > 0,
admits an entropy satisfying shock given by
u(x, t) = u_− for x < st,   u(x, t) = u_+ for x > st,
where s is the shock speed and, by the Rankine-Hugoniot condition, s = (f(u_+) − f(u_−))/(u_+ − u_−).
In this paper, we will assume that the shock is stationary, i.e. s = 0 and hence f(u_−) = f(u_+).
Let us clarify the concepts we will use frequently in the paper:
Definition 1.1: For
1. sup
2. lim
3.
we call ' an approximate stationary discrete shock with parameter q. If, furthermore, ' also
satisfies
4.
we call it an exact stationary discrete shock for scheme (1.2) with parameter q.
Definition 1.2: If a function '(x)(x 2 R) is bounded, uniformly Lipschitz continuous in R
and for any q 2 [0; 1], is an exact stationary discrete shock for scheme
(1.2) with parameter q, then ' is called an continuous stationary discrete shock profile for
scheme (1.2).
Remark: An approximate discrete shock is not related to the scheme (1.2). However, we
will only be interested in approximate discrete shocks which are so accurate that condition
4 in Definition 1.1 is almost satisfied. In the following discussions, we will omit "stationary"
when referring to stationary discrete shocks. We will often refer to exact discrete shocks
plainly by "discrete shocks".
Existence and stability of discrete shocks are essential for the error analysis of difference
schemes approximating (1.1). It is well known that solutions to (1.1) generally contain
shocks and numerical schemes unavoidably commit O(1) error around the shocks. It is
important to understand whether this O(1) error will destroy the accuracy of the scheme in
smooth parts of the solution. For conservation laws whose solutions are sufficiently smooth
away from isolated shocks, Engquist and Yu [4] proved that, the O(1) error committed by a
finite difference scheme around shocks will not pollute the accuracy of the scheme in smooth
regions up to a certain time, provided that, (i) the scheme is linearly stable and (ii) the
scheme possesses stable discrete shocks.
This paper was motivated by the work of Liu and Yu [9]. Their approach was to linearize
the scheme around some constructed approximate discrete shocks. Existence and stability
of exact discrete shocks were then obtained by proving that this linearized scheme defines
a contractive mapping for small perturbations on the approximated discrete shock and the
orginal scheme behaves closely like the linearized scheme.
Our main observation is that, using computers, one can easily obtain approximate discrete
shocks and most importantly, these approximate discrete shocks can be made as accurate
as the machine limit allows. As we know, if a scheme possesses a discrete shock, it can be
often observed from numerical experiments that the scheme converges quickly to a numerical
discrete shock after a number of time iterations over an initial guess (this is often equivalently
stated as "the residue quickly settles down to machine zero"). When it is linearized around
such accurate approximate discrete shocks, the finite difference scheme can be expected
to behave very closely like the linearized scheme (e.g. in terms of contractiveness of the
induced mapping). The obvious advantage of using numerically computed approximate
discrete shocks is that it can be applied to almost all schemes and all conservation laws with
little efforts.
A brief review on discrete shocks for finite difference schemes is as follows: The existence
of a discrete shock was first studied by Jennings [6] for a monotone scheme by an L 1 -
contraction mapping and the Brower's fixed point theorem. For a first order system, Majda
and Ralston [10] used a center manifold theory and proved the existence of a discrete shock.
Yu [15] and Michaelson [11] followed the center manifold approach and showed the existence
and stability of a discrete shock for the Lax-Wendroff scheme and a third order scheme,
resp. In [14], Smylris and Yu studied the continuous dependence of the discrete shocks by
extending the functional space for finite difference schemes from L 1 (Z) to L 1 (R) and a
fixed point theorem. All existence theorems above require an artificial assumption on the
shock speed. In [8], Liu and Xin proved the existence and stability of a stationary discrete
shock for a modified Lax-Friedrichs scheme. For a first order system, Liu and Yu [9] showed
both the existence and stability of a discrete shock using a pointwise estimate and a fixed
point theorem as well as the continuous dependence of the discrete shock on the end states
Our paper is organized as follows:
First we prove a basic fixed point theorem in Section 2. Then we show in Section 3 how
the existence and stability problem can be formulated into a fixed point problem once an
approximate discrete shock is available. In Section 4, we derive sufficient conditions for the
scheme (1.2) to possess a single stable discrete shock and in Section 5, we derive sufficient
conditions for scheme (1.2) to possess a continuous discrete shock profile. In Section 6, we
discuss how to obtain approximate discrete shocks and how to verify the conditions derived
in earlier sections by computers. In Section 7, we apply our framework to give a unified proof
of the existence of continuous discrete shock profiles for a first order modified Lax-Friedrichs
scheme and the second order Lax-Wendroff scheme. We will also show the proof of existence
and stability of discrete shocks for a third order weighted ENO scheme. Some remarks will
be given in Section 8.
A Basic Fixed Point Theorem
In this section, we prove a basic fixed point theorem. First let us define a weighted l² norm.
Suppose that α > 1 and β > 1 are two constants. For any infinite dimensional vector
v = {v_j}_{j∈Z}, the weighted norm ||v||_{α,β} is defined with weights that grow geometrically
as |j| → ∞ (powers of α on one side of j = 0 and powers of β on the other); when α = β = 1 it
becomes the regular l² norm. We denote the corresponding space by l²_{α,β}.
Because α > 1 and β > 1, it is easy to see that any vector v ∈ l²_{α,β} satisfies
lim_{|j|→∞} v_j = 0.
We denote a closed ball with radius r around a vector v_0 in l²_{α,β} by B_r(v_0).
Let F be a mapping from l 2
ff;fi to l 2
ff;fi and have the form
where
1. L is mapping which is linear, i.e. L[av
ff;fi , and contractive, i.e.
2. N is generally a nonlinear mapping and N[- where - is the null vector. Moreover,
v;w2Br (-)
3. E is a constant vector in l 2
independent of v.
We have the following basic fixed point theorem:
Theorem 2.1 fixed point theorem) If there exists oe ? 0 such that
Then the mapping F
1. is contractive in B oe (-) under the norm jj \Delta jj ff;fi ;
2. has a unique fixed point, i.e. there exists only one - (-) such that -
Moreover,
Proof: For any v; w 2 B oe (-), we have
Therefore
Moreover, for any v 2 B oe (-), since
From the condition (2.5), we obtain
Condition (2.5) implies that jjLjj ff;fi
ff;fi is strictly less than 1. Therefore (2.7) and (2.8)
together imply that the mapping F maps B oe (-) into itself and is a contractive mapping.
From Banach fixed point theorem, there exists a unique -
(-) such that F In
addition, we have
from which (2.6) follows. Q.E.D.
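The mechanism of the theorem is easy to observe in a small finite-dimensional experiment: with a strictly contractive linear part, a nonlinear part vanishing at the null vector, and a small constant term, the iteration v <- F[v] started at the null vector settles on a fixed point of size comparable to the constant term. A toy sketch (all dimensions and constants below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 40
L = rng.standard_normal((m, m))
L *= 0.6 / np.linalg.norm(L, 2)                 # linear contraction, ||L|| = 0.6
E = 1e-8 * rng.standard_normal(m)               # small constant vector
N = lambda v: 0.05 * v * np.roll(v, 1)          # quadratic nonlinearity, N(0) = 0

v = np.zeros(m)
for _ in range(100):
    v = L @ v + N(v) + E

print(np.linalg.norm(v), np.linalg.norm(L @ v + N(v) + E - v))  # small fixed point, tiny residual
```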
3 Formulation of a Fixed Point Problem
For a fixed q 2 [0; 1], assume -
' is an approximate stationary discrete shock with parameter
q and is given. By Definition 1:1, we know
lim
Assume that there exists an exact discrete shock for scheme (1.2) with parameter q
denoted as '. If we set
then condition 2 in Definition 1.1 and Eq. (3.3), resp. imply
lim
Condition 4 in Definition 1.1 gives
Here -
u means a vector sum of -
' and -
u, i.e. ( -
we have -
lim
Using (1.3) to write (3.5) in terms of -
v, we have
io
where we have used
lim
which is implied by (3.2), (3.6) and the consistency and continuity of the flux function g.
Let us define the right-hand-side (RHS) of (3.7) as a mapping in some subspace of the
space of infinite dimensional vectors, namely, let
with v be any vector satisfying (3.6), i.e. lim
0: Then Eq. (3.7) gives
which means that -
v is a fixed point of F .
If we reverse the above arguments, namely, assume that there exists an infinite dimensional
vector - v such that it satisfies (3.6) and is a fixed point of the mapping F defined in
(3.8), then it is easy to see that ' j -
is an exact stationary discrete shock for scheme
(1.2) with parameter q. So to prove the existence of a stationary discrete shock is equivalent
to prove the existence of a fixed point for the mapping F . We will restrict the space for the
search of fixed points of F to l 2
ff;fi in our study, where ff and fi are some suitable constants.
We can rewrite the mapping F in (3.8) in the form of (2.2) with
Here,
are the first order partial
derivatives of the numerical flux function g.
It is easy to see that the mapping L is linear and the mapping N is generally nonlinear
and satisfies N[-. In addition, E is a vector depending on -
but independent of v. The
linear mapping L is just the linearization of the mapping F around the approximate discrete
shock -
'. If -
' is accurate enough, we hope L becomes contractive under the weighted l 2 norm
To see the mapping N would satisfy (2.4), we write, for
any v; w
Z 1Z 1g 00
j+l djd-
where
and g 00
are the second order
derivatives of the flux function g. Notice that the summation on the RHS of (3.13) is a
double summation over k; Introducing the shifting operator (or a mapping)
which is defined as E k [v] vector v, we can rewrite (3.13) as
Z 1Z 1g 00
j+l djd-
If we apply the norm jj \Delta jj ff;fi on both sides of (3.15), due to the fact that v is O(r), the mapping
would satisfy (2.4) provided that the second order derivatives of g are bounded. We will
give precise estimates of jjN jj r
ff;fi in the next two sections for different choices of approximate
discrete shocks -
' and slightly different forms of the mapping N . Then we derive sufficient
conditions on -
' and the first and second derivatives of g which will guarantee the existence
and stability of exact discrete shocks.
Our estimates will be based upon the following two bounds on the first two derivatives
of g:
The functions can be obtained analytically from the given flux function g.
Without loss of generality, we assume that both \Gamma 1 (r) and \Gamma 2 (r) are non-decreasing function
for r - 0.
4 A Single Discrete Shock Profile
For any fixed q 2 [0; 1], we estimate the upper bounds of jjLjj ff;fi and jjN jj r
assuming
that an approximate discrete shock -
' with parameter q is known. We then give a sufficient
condition which ensures the existence and stability of an exact discrete shock for scheme (1.2)
with parameter q. The sense of stability will be made precise at the end of this section.
First, we estimate the upper bound of jjLjj ff;fi . Let us write the linear mapping L in the
form of matrix vector product:
thinking v as a column vector with the j th row entry being v j (j 2 Z). According to (3.10),
the infinite dimensional matrix A is given by
refers to the entry of A on i th row and j th column.
otherwise. We define D to be an infinite dimensional diagonal matrix with the i th diagonal
entry being
Use jj \Delta jj 2 and (\Delta; \Delta) 2 to stand for the norm and the inner product in l 2 . It is easy to see that,
for any v 2 l 2
ff;fi , we have jjvjj
(D \Gamma1 is the inverse of D), we
have
A T ~
A T ~
A T ~
Since ~
A T ~
A is symmetric, its l 2 norm is just its spectral radius, ae( ~
A T ~
A). Using Gerschgorin
Circle Theorem from matrix theory, we have
ae( ~
A T ~
A T ~
Notice that A is banded with bandwidth 2p + 1. Therefore ~
A T ~
A is also banded with band-width
not more than 4p + 1. We have the following upper bound for jjLjj
Lemma 4.1
||L||_{α,β} ≤ ( max_i Σ_j |(Ã^T Ã)_{i,j}| )^{1/2} .
For later use, we define
δ := 1 − ( max_i Σ_j |(Ã^T Ã)_{i,j}| )^{1/2} .   (4.4)
Here Ã := D A D^{−1}, with A and D given by (4.1) and (4.2).
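Since the approximate discrete shock is constant outside a bounded range of indices, the quantity in (4.4) can be evaluated from a finite section of A. A sketch of this computation, assuming δ is one minus the Gerschgorin-type bound on ||L||_{α,β} as written above (the matrix and weights in the demo are toy data, not the linearization of a particular scheme):

```python
import numpy as np

def delta_bound(A, weights):
    """delta = 1 - sqrt(max_i sum_j |(A_tilde^T A_tilde)_{ij}|), A_tilde = D A D^{-1}."""
    D = np.diag(weights)
    At = D @ A @ np.linalg.inv(D)
    M = At.T @ At
    return 1.0 - np.sqrt(np.max(np.sum(np.abs(M), axis=1)))

# Sanity check on a banded Toeplitz-like example with unit weights:
A = 0.4 * np.eye(30) + 0.2 * np.eye(30, k=1) + 0.2 * np.eye(30, k=-1)
print(delta_bound(A, np.ones(30)))   # positive, since the spectral norm is at most 0.8
```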
Next we estimate the upper bound of jjN jj r
We start with a simple lemma on the norm of the shifting operator Z). Due to
the non-unitary weight in the norm jj \Delta jj ff;fi , the shifting operator is not unitary.
Lemma 4.2
Proof: Assume k ? 0, for any v 2 l 2
last equality implies
Similarly, for k ! 0, we have
and (4.5) follows by combining the above two inequalities. Q.E.D.
Recall (3.14), we have for any v; w
sup
k is defined in (3.1). Thus we have
j+l
is the function defined in (3.17). From (3.15) and Lemma 4.2, we have
k=\Gammap
Thus we obtain the following upper bound for jjN jj r
Lemma 4.3
are defined in (3.1) and (3.17), resp.
Combining Theorem 2.1 and Lemma 4.1 and 4.3, we obtain:
Theorem 4.1 (Existence and stability of a single discrete shock) If there exist α > 1, β > 1,
σ > 0 and an approximate discrete shock φ̄ with parameter q such that the verification
inequality (4.7) holds, where δ is defined in (4.4), then
1. the finite difference scheme (1.2) possesses an exact stationary discrete shock φ with
parameter q; moreover, sup_j |φ_j − φ̄_j| ≤ σ;
2. φ is stable in B_σ(φ̄) in the following sense: for any v ∈ B_σ(φ̄), lim_{n→∞} L^n[v] = φ
under the maximum norm. Here L^n[·] means iterating the finite difference scheme (1.2) n
times using its argument as the initial vector.
Proof:
1. Based on the discussions in Section 3, the existence of an exact discrete shock for
scheme (1.2) is equivalent to the existence of a fixed point for the mapping F in (3.8).
This mapping can be put in the form F[v] = L[v] + N[v] + E, with L, N and E given by (3.10)-
(3.12). By Lemma 4.1 and 4.3, condition (4.7) implies condition (2.5) in Theorem 2.1
(with an extra factor of 1/2); therefore the mapping (3.8) is contractive in B_σ(θ),
θ the null vector, and, as a result of this, possesses a fixed point v̄. It is easy
to see that φ := φ̄ + v̄ is an exact stationary discrete shock for scheme (1.2) with parameter q.
Moreover, due to the extra factor 1/2 in (4.7), by the second conclusion in Theorem 2.1, namely
(2.6), we have ||v̄||_{α,β} ≤ σ.
2. For any v 2 B oe(-), we write
oe and the fact that the mapping F is contractive in B oe (-), we
know that, after applying the mapping F infinitely many times on -
v, the mapping
will converge to the fixed point - v. By the equivalence between the application of the
mapping F on iteration of the scheme (1.2) with initial vector
established in Section 3, we conclude that ' is stable in B oe(-).
5 Continuous Discrete Shock Profiles
In this section, we derive sufficient conditions for the existence of a continuous discrete shock
profile for the scheme (1.2).
For any fixed q 0 2 [0; 1], assume now the conditions in Theorem 4.1 are satisfied. Namely,
there exist ff ? and an approximate discrete shock -
' q0 with parameter q 0
such that condition (4.7) is true. Then by Theorem 4.1, there exists an exact discrete shock
Here we have added superscripts and subscripts q 0 to indicate the dependence on parameter
. The constants ff and fi, however, will be chosen to be independent of q 0 .
For proper choices of ff and fi, we are interested in finding the conditions on the approximate
discrete shock -
and the first two derivatives of the numerical flux function g
such that an exact discrete shock ' q for the scheme (1.2) is guaranteed to exist for any q
in a small neighborhood of q 0 , say [q
Once such
conditions are found and satisfied for a finite number of values of q 0 , e.g.
it becomes clear that for any q 2 [0; 1], there exists an exact discrete shock for scheme (1.2).
A continuous discrete shock profile is then obtained by properly arranging the family of exact
discrete shocks which are parameterized by q 2 [0; 1].
Let us take an approximate discrete shock with parameter q 2 [q
2M ] to be
Zg. It is easy to check that conditions 1,2,3 in Definition 1.1 are
satisfied. We have
sup
Define a mapping based on the approximate discrete shock -
' q in (5.1):
where v is any vector in l 2
ff;fi . We can rewrite the mapping F in the form of (2.2) with
It is easy to see L is a linear mapping and its norm can be estimated similarly as in Lemma 4.1,
namely, we have
A T
~
A q0
and similar to (4.4), we define
A T
~
A q 0
Here ~
is defined in (4.2); A q0 is given in (4.1) with -
' replaced by -
The mapping N satisfies N[- and its norm jjN jj r
ff;fi can be estimated as follows: For
any v; w same as (3.15), we have
Z 1Z 1g 00
j+l djd-
but with (different from (5.11))
Using (5.2) and (5.3), we have
sup
sup
Thus similar to Lemma 4.3, we have
Now we estimate jjE jj ff;fi . Since ' q0 is an exact discrete shock profile, we have '
which implies
We can write
In the third equality above, we have used the definition of -
' q in (5.1) and in the second
j has the bound
sup
Recall the function \Gamma 1 (\Delta) defined in (3.16), we get
Using the fact that jq \Gamma q
2M , we get a larger upper for jjE jj ff;fi which does not depend
on q,
We want to note that the factor jq \Gamma q 0 j on the RHS of (5) will enable us to prove uniform
Lipschitz continuity of a family of exact discrete shocks when they exist.
Based on the bounds found in (5.8), (5.11) and (5.13), we obtain the following sufficient
condition for the existence of a continuous discrete shock profile for scheme (1.2):
Theorem 5.1 (Existence of a continuous discrete shock profile) If there exist ff ?
(an integer) such that for each q
there exist oe q0 ? 0
and an approximate discrete shock profile -
' q0 for which the following inequality is true:
oe q0(
Here E q0 is given by
is given in are two functions defined in (3.16) and
(3.17). Then
1. for any q 2 [0; 1], there exists an exact discrete shock ' q for scheme (1.2);
2. if for any x 2 R, we define '(x)
are uniquely determined by
is a continuous discrete shock
profile for scheme (1.2).
Schematic proof:
1. Condition (5.14) clearly implies condition (4.7) in Theorem 4.1 for Therefore
there exists an exact discrete shock for scheme (1.2) with parameter q 0 . For any
], we can define an approximate discrete shock -
' q as in (5.1).
According to the estimates (5.8), (5.11) and (5.13), condition (5.14) implies (2.5) for
the mapping F in (5.4). By the same logic used in Theorem 4.1, we see that there
exists an exact discrete shock ' q for scheme (1.2) for any q 2 [q
condition (5.14) is true for all q
2. We only need to check that '(x) is bounded and uniformly Lipschitz continuous. For
all
are uniformly bounded by the definition of approximate
discrete shocks and the finiteness of M . Each ' q0 differs from -
' q0 by a vector whose
maximum can be bounded by oe q 0
, so does ' q differ from -
. Due to
choice of -
' q in (5.1), it is easy to see that ' q is uniformly bounded for q 2 [0; 1]. To
prove that '(x) is uniformly Lipschitz continuous, we first give two observations which
can be shown easily:
Observation #1: For any q 2 [0; 1], there exists q
M for some integer i between 0
and M , such that
sup
where the constant C are independent of q and q 0 . This is a result of the estimate
(5.12), the bound (2.6) in Theorem 2.1 and condition (5.14).
Observation #2:
This is due to the way we parameterize the family of discrete shocks, namely the
parameter q in Definition 1.1.
An easy generalization of Observation #1 is that for any q 1
or
for any j 2 Z. Let x
only need to consider the case which we have
Because each term on the RHS of the last equality is of the form of (5.16), the uniform
Lipschitz continuity of '(x) follows.
6 Algorithms for Computer Verification
In this section, we discuss how to use a computer to verify condition (4.7) in Theorem 4.1 to
prove the existence and stability of an exact discrete shock or condition (5.14) in Theorem 5.1
to prove the existence of a continuous discrete shock profile, for scheme (1.2).
6.1 Computing an approximate discrete shock
We start with providing a method of obtaining an accurate approximate discrete shock -
for any fixed q 2 [0; 1] using scheme (1.2). Let J be an integer. Set
where - j is given by (1.4) and f- are chosen such that
For example, we can take
We then apply the finite difference scheme (1.2) to the initial data u 0 repeatedly for sufficiently
many times. Note that when the scheme is applied to u n
need
values of u n
in order to compute values of u n+1
j for all jjj - J . We can
simply set u p. Although this makes
the scheme non-conservative in the bounded region (i.e. jjj - J), it actually does not make
an error much bigger than the machine accuracy if J is taken to be large enough. This is
because an exact discrete shock is generally believed to converge to the two end states
exponentially fast. For the sake of rigor, we can modify the value of u n 0 such that
to make the procedure conservative in the bounded region.
Here n 0 is assumed to be the number of applications of the scheme to u 0 . To determine how
large n 0 should be, we can monitor sup
is defined in (3.12) with -
' replaced by u n ) to
see if it is small enough, say close to machine accuracy, for the purpose of our verification of
the conditions in Theorem 4.1 or Theorem 5.1. Finally, we can set the approximate discrete
shock to be
It is easy to check that -
' q satisfies the conditions 1 to 3 in Definition 1.1.
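As an illustration of the procedure just described, the following Python sketch (all names are hypothetical) marches a three-point conservative scheme u_j^{n+1} = u_j^n - lam*(g(u_j, u_{j+1}) - g(u_{j-1}, u_j)) on |j| <= J with the boundary values frozen at the end states, restores conservation inside the bounded region as suggested above, and stops when successive iterates no longer change (the text instead monitors the residual quantity of (3.12), which is not reproduced here). It assumes stencil width p = 1, that the two-point flux g acts elementwise on numpy arrays, and, since (1.4) is not reproduced above, it simply initializes the value at j = 0 by a convex combination determined by the parameter q.

import numpy as np

def approximate_discrete_shock(g, lam, u_minus, u_plus, q, J=40, tol=1e-13, max_steps=200000):
    # Grid indices j = -J, ..., J; the array position J corresponds to j = 0.
    j = np.arange(-J, J + 1)
    u = np.where(j < 0, float(u_minus), float(u_plus))
    u[J] = (1.0 - q) * u_minus + q * u_plus   # illustrative choice of the value at j = 0
    for n in range(max_steps):
        flux = g(u[:-1], u[1:])               # numerical fluxes at j + 1/2, j = -J, ..., J-1
        unew = u.copy()                       # boundary values at j = -J, J stay frozen
        unew[1:-1] = u[1:-1] - lam * (flux[1:] - flux[:-1])
        unew[J] += u.sum() - unew.sum()       # restore conservation inside |j| <= J
        if np.max(np.abs(unew - u)) < tol:    # stop when the update is negligible
            return unew
        u = unew
    return u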
Remark: Theoretically the larger J is, the more accurate -
' q one can get. However, larger
J means longer computer time to verify the conditions. If
to 80 is believed to be good enough. If very small, J needs to be very
large and it may even exceed the available computing power. In the latter case, the framework in this
paper may not be appropriate.
6.2 Choosing the constants ff and fi
Once we have an approximate discrete shock -
' q , we can decide what to choose for ff and
fi. The criterion for this is to make the norm jjLjj ff;fi as far below 1 as possible, or in our
estimates, to maximize ffi in (4.4). The range of possible values of ff can be obtained by
studying the linearized scheme of (1.2) around u Similarly, fi can be obtained by studying
the linearized scheme of (1.2) around u \Gamma . See Smyrlis [13] for details. We can use one
approximate discrete shock or a few such approximate discrete shocks (corresponding to
different values of q in [0; 1]) to choose the constants ff and fi. In the latter case, we should
make the minimum of ffi (over the different values of q) as far above 0 as possible. Note that the
matrix A in (4.4), which is given by (4.1) with -
' replaced by -
essentially finite due to
constancy of -
we have only finitely many row sums over which to take the maximum
in (4.4).
6.3 Strategy for verification
We suggest the following strategy for verifying the condition (4.7) in Theorem 4.1. For a
given q 2 [0; 1],
1. Find the function \Gamma 2 (r). Make sure it is non-decreasing in r.
2. Compute an approximate discrete shock following the method described in subsection
6.1. Make it as accurate as possible if condition (4.7) seems very demanding.
3. Find the range of the constants ff and fi from the linear analysis of the scheme and
choose ff ? in (4.4) is positive and maximized.
4. Find the largest possible value for oe for which condition (4.7) is true. Mostly, we can
take
If the above four steps go through, we can conclude that scheme (1.2) does possess an exact
discrete shock with parameter q and it is stable in the sense stated in the second part of
Theorem 4.1.
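When the explicit choice mentioned in step 4 is not available, finding the largest admissible oe amounts to a one-dimensional search. A minimal sketch, assuming the truth of condition (4.7) for a given sigma is available as a black-box predicate holds(sigma) (a hypothetical name; the actual inequality involves the quantities of Theorem 4.1 and is not reproduced here) that is true exactly on an interval of the form (0, sigma*]:

def largest_sigma(holds, sigma_max=1.0, tol=1e-12):
    # Bisection for (approximately) the largest sigma in (0, sigma_max] with holds(sigma)
    # True, assuming holds is True precisely on an interval (0, sigma*].
    if holds(sigma_max):
        return sigma_max
    lo, hi = 0.0, sigma_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if holds(mid):
            lo = mid
        else:
            hi = mid
    return lo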
To verify condition (5.14), we suggest the following steps:
1. Find the functions
2. For several q values, compute an approximate discrete shock -
' q0 for each q 0 . Find the
proper constants ff ? 1 and fi ? 1 such that the minimum of the values of ffi q 0
based on
each -
positive and maximized. We then set M - oe \Gamma1
q0 and find oe q 0
such that the
RHS of (5.14) is maximized. Using this oe q0 and replacing - k q0 by
can obtain an estimate of the size of M by requiring the second term on the LHS of
(5.14) to be less than the RHS of (5.14). One can even replace M inside the functions
q0 as long as we eventually use an M - oe \Gamma1
q0 for all values of q 0
sampled (this is due to the monotonicity of \Gamma 1 (\Delta) and \Gamma 2 (\Delta)). We suggest one always
use a bigger M than necessary to attain a bigger margin of the RHS of (5.14) over the
LHS. Usually, one can take
3. For each q
we compute an approximate discrete shock -
parameter q 0 and check if for this -
q0 , there exists oe q0 ? 0 such that (5.14) is true.
If (5.14) is true for every q
conclude that the finite difference
scheme does have a continuous discrete shock profile.
7 Some Examples
In this section, we apply Theorem 5.1 to give a unified proof of the existence of a continuous
discrete shock profile for a modified Lax-Friedrichs scheme and the Lax-Wendroff scheme.
For a third order WENO scheme, we apply Theorem 4.1 to prove the existence and stability
of an exact discrete shock for some sample values of q in [0; 1].
As an example, we take the conservation law to be the Burgers' Eq.:
x
and take the end states \Gamma1. It is well known that an entropy satisfying
stationary shock exists for these two end states.
7.1 A modified Lax-Friedrichs scheme
The flux function for a modified Lax-Friedrichs scheme is
It is clear that the stencil width constant p equals 1. We take the upper bound of the first and
second derivatives of this flux function to be \Gamma 1
Appendix
A.1).
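Since the formula (7.1) itself is not reproduced above, the sketch below uses one commonly quoted form of the modified Lax-Friedrichs two-point flux, specialized to Burgers' equation; the numerical-viscosity coefficient 1/(4 lam) is an assumption and may differ from the one actually used in (7.1).

def g_modified_lax_friedrichs(a, b, lam):
    # One common form of the modified Lax-Friedrichs two-point flux for Burgers' equation
    # (f(u) = u^2 / 2): central average of the fluxes plus a numerical viscosity term.
    # The coefficient 1/4 is an assumption; the exact coefficient in (7.1) may differ.
    f = lambda u: 0.5 * u * u
    return 0.5 * (f(a) + f(b)) - (b - a) / (4.0 * lam)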
For 0:5, we use 41 points (i.e. to compute approximate discrete shocks and
choose to define the weighted l 2 norm. Roughly oe q 0
12ff maximizes the RHS of
and M can be estimated by 64ff 2
. We take 10000 and are able to verify condition
. Thus this modified Lax-Friedrichs scheme possesses a
continuous stationary discrete shock profile for Burgers' Eq. with end states
\Gamma1. The constant oe 0 , which represents the size of the stability region for the discrete
shocks (see conclusion 2 in Theorem 4.1), is approximately 7 \Theta 10 \Gamma3 . We plot values of the
LHS and RHS of (5.14) for 100 evenly spaced samples of q 0 in [0; 1] in Figure 1a. The discrete
shock profile is plotted in Figure 1b.
7.2 The Lax-Wendroff scheme
The flux function for the Lax-Wendroff scheme is
The stencil width constant
Appendix
A.2). For 0:5, we use 61 points (or to compute approximate discrete shocks.
For the weighted l 2 norm, we take Roughly oe q 0
24ff maximizes the RHS of
and M can be estimated by 96ff 2
. We take and are able to verify (5.14)
for
. Thus the Lax-Wendroff scheme possesses a continuous stationary
discrete shock profile for Burgers' Eq. with end states \Gamma1. The constant
for the size of the stability region, namely, oe 0 , is approximately 2:3 \Theta 10 \Gamma3 . We plot the LHS
and RHS of (5.14) for 100 evenly spaced samples of q 0 2 [0; 1] in Figure 1c. The discrete shock
profile is plotted in Figure 1d.
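For reference, the classical two-point Lax-Wendroff numerical flux for a scalar conservation law, specialized to Burgers' equation, can be sketched as follows; (7.2) is not reproduced above, so this is the standard textbook form and details may differ from the exact formula used in the computations.

def g_lax_wendroff(a, b, lam):
    # Standard two-point Lax-Wendroff numerical flux for a scalar law, written here for
    # Burgers' flux f(u) = u^2/2 with f'(u) = u.
    f = lambda u: 0.5 * u * u
    fprime = lambda u: u
    return (0.5 * (f(a) + f(b))
            - 0.5 * lam * fprime(0.5 * (a + b)) * (f(b) - f(a)))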
7.3 The third order WENO scheme
The WENO (weighted essentially non-oscillatory) schemes [7] are variations of the ENO
(essentially non-oscillatory) schemes [12]. They both achieve the essentially non-oscillatory property
by favoring information from the smoother part of the stencil over that from the less smooth
Figure
1: (a) The modified Lax-Friedrichs scheme. Solid line: the value of the LHS of (5.14);
Dashed line: the value of the RHS of (5.14). (b) A discrete shock profile for the modified
Lax-Friedrichs scheme. (c) Same as (a) but for the Lax-Wendroff scheme. (d) Same as (b)
but for the Lax-Wendroff scheme.
or discontinuous part. However, the numerical flux function of the ENO schemes is at most
Lipschitz continuous, while the numerical flux function of the WENO schemes is infinitely
smooth (if one takes ffl w appearing below to be nonzero).
The numerical flux function for the third order WENO scheme with global Lax-Friedrichs
flux splitting is
and
f \Sigma (z) =2 (f(z) \Sigma \Lambdaz)
Here, ffl w is a small constant introduced to keep the denominator from being zero and is taken as ffl
is a constant which is the maximum of jf 0 (u)j over all possible values of u. In our case, we
take it to be slightly above the maximum of the modulus of the two end states. Namely, we
set it to 1:1. The first and second order partial derivatives of the numerical flux g are shown
in the
Appendix
.
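Because (7.3) is not reproduced above, the following sketch shows the standard third order WENO flux with global Lax-Friedrichs splitting, in the spirit of [7]: the split fluxes f_pm(u) = (f(u) +/- Lambda*u)/2 are reconstructed at the cell interfaces from two two-point substencils with the usual linear weights 1/3 and 2/3 and the usual smoothness indicators. The value eps = 1e-6 used for ffl_w is an assumption, and the code is written for Burgers' flux; details may differ from (7.3).

import numpy as np

def weno3_reconstruct(vm1, v0, vp1, eps=1e-6):
    # Third order WENO reconstruction of the interface value to the right of cell 0,
    # from the three cell values v_{-1}, v_0, v_{+1} (left-biased stencil).
    p0 = -0.5 * vm1 + 1.5 * v0          # candidate value from stencil {-1, 0}
    p1 = 0.5 * v0 + 0.5 * vp1           # candidate value from stencil {0, +1}
    b0 = (v0 - vm1) ** 2                # smoothness indicators
    b1 = (vp1 - v0) ** 2
    a0 = (1.0 / 3.0) / (eps + b0) ** 2  # nonlinear weights (linear weights 1/3, 2/3)
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    return (a0 * p0 + a1 * p1) / (a0 + a1)

def weno3_lf_flux(u, lam_max):
    # Numerical fluxes g_{i+1/2} for Burgers' flux with global Lax-Friedrichs splitting
    # f_pm(u) = (f(u) +/- Lambda*u)/2, Lambda = lam_max (Lambda = 1.1 in the text).
    # u is a 1-D array of cell values; the returned array contains the fluxes at the
    # interfaces i + 1/2 for i = 1, ..., len(u) - 3 (stencil width p = 2).
    f = 0.5 * u * u
    fp = 0.5 * (f + lam_max * u)        # positive part, reconstructed left-biased
    fm = 0.5 * (f - lam_max * u)        # negative part, reconstructed by mirror symmetry
    gp = weno3_reconstruct(fp[:-3], fp[1:-2], fp[2:-1])
    gm = weno3_reconstruct(fm[3:], fm[2:-1], fm[1:-2])
    return gp + gm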
This scheme is third order accurate in space where the solution is monotone and smooth.
It degenerates to second order at smooth extrema. See [7] for details. If we use Euler Forward
in time, the scheme has the form
However, this scheme is linearly unstable for any constant - ? 0. Abbreviating the RHS of
(7.4) as E[u], we express the scheme with the third order Runge-Kutta scheme [12] in
time as
We abbreviate (7.6) as
We have two observations: (i) if f'
as in (3.8), if this mapping is contractive
under jj \Delta jj ff;fi for some ff ? 1 and fi ? 1, then the mapping derived from
contractive under the same norm.
The first observation is obvious. The second one is due to the fact that each stage in the
third order Runge-Kutta scheme (7.6) is a convex combination of u n and E[u] where u is u n
in stage one, u (1) in stage two and u (2) in the final stage. See [12] for details.
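A sketch of one step of this third order Runge-Kutta scheme, written (as noted above) so that every stage is a convex combination of u^n and an Euler-forward step E[.]; E is passed in as a callable and u may be a scalar or a numpy array.

def rk3_step(u, E):
    # One step of the third order TVD Runge-Kutta method of [12]: each stage is a convex
    # combination of u^n and an Euler-forward step E[.].
    u1 = E(u)
    u2 = 0.75 * u + 0.25 * E(u1)
    return u / 3.0 + 2.0 * E(u2) / 3.0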
Therefore, in order to prove the existence and stability of exact discrete shocks for (7.6),
it suffices to prove the existence and stability of exact discrete shocks for (7.4). We have
attempted to apply Theorem 5.1 to prove the existence of a continuous discrete shock profile
for (7.4) and found that we would need to sample roughly 10 19 evenly spaced values of q 0 in [0; 1], which
is far beyond the computer power. Nevertheless, we are able to use Theorem 4.1 to prove the
existence and stability of exact discrete shocks for (7.4) for many sample values of q 0 2 [0; 1].
Our computer verification strongly indicates that a continuous discrete shock profile does
exist for this scheme.
Here are the details of the computer verification:
It is clear that the stencil width constant p equals 2. We take the upper bound for the
second derivatives of the numerical flux function to be
Its derivation is detailed in Appendix A.3. We have used 161 points (or
to compute the approximate discrete shocks. ff and fi are both taken to be 1:8. The
condition in Theorem 4.1 is verified for 1000 evenly spaced sample values of q 0 in [0; 1]. This verification is done on a
CRAY C-90 using double precision.
We plot the LHS and RHS of (4.7) for the 1000 evenly spaced samples of q 0 in Figure 2a. It
can be seen that the curve for the LHS is properly below the curve for the RHS. We believe
it is true for all q 0 2 [0; 1]. The discrete shock profile with 40 evenly spaced samples of q 0 is
plotted in Figure 2b.
The discrete shock profile appears to be monotone. In Figures 2c and 2d, we show the profile
zoomed in between the grid points indexed from \Gamma10 to 0 and between the points
indexed from 0 to 10, respectively. Notice that the profile contains very small
oscillations of magnitude around 10 \Gamma4 on both sides of the shock. However the profile looks
very smooth, which leads us to believe that a continuous discrete shock profile does exist
for this third order WENO scheme. We have yet been able to find a less stringent condition
than that in Theorem 5.1 in order to prove this by computer.
Concluding Remarks
We have provided sufficient conditions for a conservative scheme (1.2) to have a single discrete
shock (Theorem 4.1) or a continuous discrete shock profile (Theorem 5.1) for a scalar
conservation law in one dimension. These conditions can usually be verified by computers as
Figure
2: The third order WENO scheme. (a) Solid line: the value of LHS of (4.7); Dashed
line: the value of the RHS of (4.7). (b) The discrete shock profile. (c) Zoom of (b) around
Zoom of (b) around
demonstrated in the last section. The key idea here was to linearize the scheme around accurate
numerical approximations of the discrete shocks and find suitable weighted l 2 norms
for this linear part to define a contractive mapping. If we can find sufficiently accurate
approximate discrete shocks, the original scheme behaves very much like the linearized scheme
around this approximate discrete shock in terms of contractiveness of the induced mapping.
Several generalizations or implications of Theorem 4.1 or Theorem 5.1 are immediate.
For example, we can find sufficient conditions, in the form of a single inequality, which ensure
the existence of discrete shocks or a continuous discrete shock profile for a range of the
time-space ratio - and the end states. Using the result in [4], we can generalize the
theorems to non-stationary shocks with sufficiently small shock speed. Pointwise convergence
rate estimates can also be obtained for schemes that possess stable discrete shocks and are
linearly stable, when used to approximate scalar conservation laws whose solution is smooth
except for some isolated shocks. However, we will not elaborate on such extensions.
Acknowledgments
We would like to thank Björn Engquist and Stanley Osher for their support in this research.
The first author wants to thank Chi-Wang Shu for valuable suggestions.
Appendix A: Functions \Gamma 1 (r) and \Gamma 2 (r)
Let
jzj-r
where f is the flux function of the conservation law (1.1). We derive the functions \Gamma 1 (r) and
for the numerical flux functions of the schemes discussed in Section 7.
For simplicity, we use g 0
k to denote the first order derivative of g(z
and use g 00
k;l to denote its second order derivative w.r.t z k and z l , where k;
A.1 The modified Lax-Friedrichs scheme
The numerical flux function is given in (7.1). Its first order derivatives are
Its second order derivatives are
Therefore, we have
A.2 Lax-Wendroff scheme
The numerical flux function is given in (7.2). Its first order derivatives are
Its second order derivatives are
1). Therefore, we have
A.3 The third order WENO scheme
The flux function for the third order WENO scheme is given in (7.3). To obtain \Gamma 1 (r) and
first find the first two derivatives of the flux function.
The derivatives of the function
are:
r 00
r 00
r 00
The derivatives of the function
a /
a
a
aa /)
a
a r 0
ab /)
Let
2. We define
Similarly we define by / \Sigma
aa / \Sigma
ab
bb the first and second order derivatives of / with
arguments, resp. same as / \Sigma . We drop 0 and 00 in the notation for derivatives of / for
simplicity.
The first order derivatives of the numerical flux function (7.3) are
a
a
2. The second order derivatives are
aa
ab
ab
ab
bb
ab
ab
bb
ab
aa
Other second order derivatives are known by symmetry, i.e. g 00
l;k .
We have the following simple observations:
rr 00
where r 0 can be r 0
a or r 0
can be r 00
aa or r 00
bb or r 00
ab and similarly, / 0 can be / 0
a or / 0
be / 00
aa or / 00
bb or / 00
ab .
Based on the above observations, we have
--R
Convergence of the Lax-Friedrichs scheme for isentropic gas dynamics III
Convergence of difference scheme with high resolution to conservation laws.
Courant and Friedrichs
Convergence of a finite difference scheme for a piece-wise smooth solution
Viscous limits for piecewise smooth solution to system of conservation laws
Gray Jennings Discrete shocks
Efficient implementation of Weighted ENO schemes
Construction and nonlinear stability of shocks for discrete conservation laws
Discrete shock profiles for systems of conservation laws
Discrete shocks for difference approximations to system of conservation laws
Efficient Implementation of essentially non-oscillatory shock-capturing schemes
Existence and Stability of Stationary profiles of the LW Scheme
On the existence and stability of a discrete shock profile for a finite difference scheme
--TR
--CTR
Hailiang Liu, The l1 global decay to discrete shocks for scalar monotone schemes, Mathematics of Computation, v.72 n.241, p.227-245, 01 January | conservation law;weighted ENO;discrete shock |
276491 | A New Spectral Boundary Integral Collocation Method for Three-Dimensional Potential Problems. | In this work we propose and analyze a new global collocation method for classical second-kind boundary integral equations of potential theory on smooth simple closed surfaces $\Gamma \subset {\Bbb R}^3$. Under the assumption that $\Gamma$ is diffeomorphic to the unit sphere $\partial B$, the original equation is transferred to an equivalent one on $\partial B$ which is solved using collocation onto a nonstandard set of basis functions. The collocation points are situated on lines of constant latitude and longitude. The interpolation operator used in the collocation method is equivalent to a certain discrete orthogonal (pseudospectral) projection, and this equivalence allows us to establish the fundamental properties of the interpolation process and subsequently to prove that our collocation method is stable and super-algebraically convergent. In addition, we describe a fast method for computing the weakly singular collocation integrals and present some numerical experiments illustrating the use of the method. These show that at least for model problems the method attains an exponential rate of convergence and exhibits a good accuracy for very small numbers of degrees of freedom. | Introduction
In this work we present a new spectrally convergent collocation method for second-kind
boundary integral equations of potential theory. Our method is the result of a search
for a three-dimensional analogue of well-known discrete global Galerkin or "pseudospec-
tral" schemes for integral equations on planar contours (see, e.g., [15], [19], [4] or [14]).
In these two-dimensional methods the contour is parametrized by a periodic univariate
function and the basis functions used to approximate the solution are constructed from
trigonometric polynomials with respect to this parameter. With an appropriate choice of
quadrature rule the discrete orthogonal projection is then also an interpolation operator,
the discrete Galerkin method is equivalent to a collocation scheme and hence is relatively
cheap to implement. Stability and spectral orders of convergence are then proved using
reasonably standard arguments.
It is interesting to consider the extension of such ideas to three dimensions. In that
case a spectral method may be quite attractive since it has the potential to achieve an
acceptable accuracy with a significantly smaller linear algebra overhead than that required
by conventional (piecewise) methods. In line with spectral methods in general, we consider
here only problems with smooth solutions and hence (in common with the two-dimensional
case) we assume that our boundary integral equation is posed on a smooth surface.
Although most boundaries of interest in structural or mechanical engineering are not
smooth, there are many applications of smooth boundaries in other branches of science
- such as biology and medicine, where problems with corners and edges are rare.
We make the further assumption that our boundary is diffeomorphic to the unit sphere,
which means that the sphere may be used as a parametrization of the boundary. Although
this does restrict the range of problems which can be treated, it is in some sense a
natural generalisation of the fact that in two dimensions all simple closed contours may
be parametrized by the unit circle. Moreover there exists quite a lot of interest already
in the literature on problems of this form.
Applications of boundary value problems on spherical or nearly spherical geometries
are found, for example, in global weather prediction and in geodesy; see, e.g., [21], [11].
Numerical analyses of various methods for such problems and further applications can be
found in [2], [1], [5], [7, x3.6], [8], [9], [12], [16], [23], [24], [25] and the associated references.
For a recent review of spectral methods in general see [10].
For boundary integral equations of the type considered in this paper, nondiscrete (and
therefore not fully practical) Galerkin methods using spherical harmonic basis functions
are the only global approximation schemes whose convergence has been proved ([2], [1],
[16]).
Two questions then naturally arise:
(a) whether these (or related) Galerkin methods still converge when quadrature rules
are used to compute the Galerkin integrals, and
(b) whether any such discrete Galerkin method is equivalent to a collocation scheme.
A discrete method using spherical harmonics and closely related to those used for implementation
(but not covered by the proofs) in [2], [3], [1] and [16] is analysed completely
in [12] for boundaries which are exactly spherical.
However the convergence proof used there fails when the boundary is merely diffeomorphic
to the sphere. This is essentially because the corresponding discrete Galerkin
projection has a bound which grows so fast that the usual stability proof fails. Thus the
answer to (a) is still unknown in this case. With regard to (b), it is known from [22] that
the answer is "no" when the underlying basis functions are spherical harmonics.
The very interesting unpublished Ph.D. thesis of Wienert [25] (see also [7]) suggested
some fully discrete variants of Galerkin's method with both spherical harmonics and other
basis sets and proved some new approximation theory (yielding exponential convergence
rates) for the discrete integral operators. However Wienert did not give a convergence
analysis for the corresponding approximate integral equations and thus does not provide
an answer to (a). Nevertheless one of his methods suggests the use of a discrete orthogonal
projection, with basis functions chosen as Fourier modes on R 2 (mapped to the unit sphere
with polar coordinates). This happens also to be an interpolatory projection and hence
satisfies (b).
Our method takes Wienert's idea as starting point. We construct a basis set by
choosing a distinguished set of Fourier modes on the plane and mapping them to the unit
sphere, again with spherical polar coordinates. However unlike Wienert, we only choose
those Fourier modes which map to continuous functions on the sphere and which have
a certain reflectional symmetry. (Similar ideas are in [21].) Our choice ensures that the
spherical polar coordinate transformation forms a bijection between our basis set and the
distinguished set of Fourier modes and that the usual discrete orthogonal projection onto
these Fourier modes corresponds to a discrete orthogonal projection on the sphere. Both
are interpolatory and have norm growing with O(log 2 (N)), when the number of degrees
of freedom is O(N 2 ). This growth is slow enough to allow stability of the corresponding
discrete Galerkin method to be proved. So the answer to both (a) and (b) is "yes" for
this new method. These results and the implementation of the method are the principal
aim of this paper.
The layout of the paper is as follows. In Section 2 we describe our basis set on
the sphere and prove the required properties of the corresponding discrete orthogonal
projection. The space spanned by our basis functions has dimension 2.
It contains, as a proper subset, spherical polynomials (which are smooth on the sphere)
as well as extra functions which are continuous on the sphere but lack smoothness at
the poles. Once Section 2 is done the convergence of the collocation method is obtained
very easily in Section 3. The method is proved to be super-algebraically convergent. It
should be possible also to show exponential convergence in an appropriate analytic setting
(similar to that in [25]), but we do not attempt that here.
Computation of the collocation integrals is considered in Section 4. Here we are
again influenced by [25]. If the basis functions were smooth on the sphere there is a very
nice way of evaluating the required weakly singular integrals using a rotational change
of variables and then polar coordinates, which eliminates the weak singularity in the
integrand (see also [7, x3.6]). In Section 4 we consider this transformation applied to
our collocation integrals and we show that it yields integrands on polar coordinate space
('; OE) 2 [0; -] \Theta [0; 2-] with a mild singularity in the second derivative at certain points.
Fortunately this singularity seems not to affect the empirical convergence of standard
quadrature rules for the collocation integrals. These perform as if the kernel were
smooth, as is seen in Section 5 where we use the tensor product Gauss-Legendre rule for
the integrals (after having transformed the singularities to the edges of the domain of in-
tegration) and observe exponential convergence. This would be difficult to prove because
of the mild singularity in the integrands. We also observe exponential convergence of the
solutions to the integral equation and to the corresponding solution to the underlying
harmonic boundary value problem, provided the number of quadrature points increases
appropriately with the number of degrees of freedom. For regular problems the method is
extremely accurate: relative errors of about 10 \Gamma3 are obtained by inverting linear systems
of dimension about 40. Because of the relatively small linear algebra requirements of this
method we are able to code it using MATLAB ([18]). On modern machines this has been
found to yield accurate solutions of potential problems quickly enough to be usable as an
interactive tool.
We finish this section by describing in more detail the problem to be solved. Let
\Omega ae R 3 be a bounded domain with simple smooth closed boundary \Gamma. For any j 2 \Gamma let
denote the outward unit normal to \Gamma and let d\Sigma(j) denote surface measure on \Gamma.
denote the Euclidean norm on R 3 . We are primarily concerned in this paper with
fast (spectrally accurate) solutions of the boundary integral equation
Z
\Gamma2-
This is the classical second kind equation which arises when the indirect boundary integral
method with a double layer potential ansatz is used to solve the Dirichlet problem for
Laplace's equation on \Omega . The method of this paper can also be used to solve the Neumann
problem
on \Omega or the corresponding (exterior) problems
Our method for solving (1.1) is based upon expanding the solution u in terms of a
certain set of global basis functions, with the coefficients in the expansion determined by
collocation. We make the following assumptions which are fundamental to the construction
of this collocation method.
There exists a smooth map q : D ae R 3 (where D is a domain containing the
unit sphere
bijective and its Jacobian dq is assumed to satisfy
det dq(x) 6= 0; x
Since @B ae D is compact it follows from (A.1) that:
All derivatives of q are bounded on @B : (1.2)
With the change of variable the directed measures on \Gamma and @B are
related by Nanson's formula ([20, p.88]):
where doe(y) denotes surface measure on @B at y 2 @B, y is the unit normal to
@B at y and Adj(M) denotes the adjugate of the 3 \Theta 3 matrix M (i.e. the transposed
matrix of its cofactors).
Now, with the substitution using (1.3), we can transplant
onto @B to obtain
Z
where
We will be concerned with the solution of (1.4), which is equivalent to (1.1). Thus we
rewrite u ffi q as u and f ffi q as f and we abbreviate (1.4) as
where K is the integral operator on @B induced by the kernel k. It is well known that
(1.6) has a unique solution u which is smooth when f is smooth.
series and interpolation on the
unit sphere
2.1 Functions on the unit sphere
Let C r (@B); r - 0, denote the usual space of r-times continuously differentiable functions
on the unit sphere @B with norm jj:jj 1;r;@B . Let
denote the space of r-times continuously differentiable functions on R 2 which are 2-
periodic in each argument with norm jj:jj 1;r;D . In addition, for r
we let C r;ff (@B) denote the functions in C r (@B) whose rth partial derivatives (defined with
respect to a suitable chart) satisfy a H-older condition of order ff. (C r;0 (@B) j C r (@B).)
Also, let C r;ff (D) denote the analogous space on the Euclidean domain D. Let k:k 1;r;ff;@B
and k:k 1;r;ff;D denote the corresponding norms. When we drop the index r in the
above notations.
The standard mapping from R 2 to @B is the spherical polar coordinate transformation
This is invertible when considered as a map from (0; -) \Theta [0; 2-) onto the punctured sphere
@Bnfn; sg, where
(for any OE 2 R) are the north and south poles, respectively. On @Bnfn; sg, p has a
continuous inverse given by p is the solution
of the equations
The function p is smooth and 2-periodic in ('; OE) and any v 2 C(@B) thus induces a
function J v 2 C(D) given by
Analogously, for all r
C r;ff (D) and
It is clear that J is an isometry from C(@B) into C(D), i.e.
Since p has reflectional symmetry:
and is independent of OE 2 R at it is natural to introduce the
proper subspaces of C(D):
are independent of OE 2 Rg:
Then it is easy to show that J : C(@B) ! S(D) is an isometric isomorphism with inverse
given by J
w(0;
(w
w(-; s:
We shall use (2.6) to define our approximation procedure in C(@B). We do this by
first recalling the discrete Fourier projection on C(D). Provided this is carefully defined
it turns out also to be a projection operator in S(D). Using the isometry (2.6) this gives
us an analogue of discrete Fourier approximation in C(@B). The details are explored in
the following subsections.
2.2 Discrete Fourier series in C(D)
The Fourier modes
are orthonormal with respect to the usual inner product
We shall use a corresponding discrete inner product defined using the quadrature points
Throughout the paper we assume that N is odd, so that -=2 is never a quadrature point.
The commonly used discrete analogue of (2.8) is then obtained by applying the tensor
product trapezoidal rule based at the nodes (2.9) to obtain
for Throughout the paper
means the usual summation with only half
the first and last terms included and C denotes a generic constant which is independent
of N .
The truncated discrete Fourier series of a function w 2 C(D) is then defined by
m=\GammaN
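A computational sketch of the discrete inner product and the resulting discrete Fourier coefficients follows. The quadrature points (2.9) are not reproduced above; the code assumes the equispaced nodes j*pi/N, j = 0, ..., 2N, which is consistent with the requirement that pi/2 is never a node when N is odd, and it does not reproduce the normalisation of the modes in (2.7) or the special treatment of the extreme modes |m| = N, |n| = N discussed in the lemmas below.

import numpy as np

def discrete_inner_product(v, w, N):
    # Tensor-product trapezoidal-rule discrete inner product of two 2*pi-biperiodic
    # functions, based at the nodes (j*pi/N, k*pi/N), j, k = 0, ..., 2N, with half weights
    # at the endpoints (the double-primed sums used in the text). v and w are assumed
    # to act elementwise on numpy arrays.
    t = np.arange(2 * N + 1) * np.pi / N
    wt = np.ones(2 * N + 1)
    wt[0] = wt[-1] = 0.5
    T, P = np.meshgrid(t, t, indexing="ij")
    W2 = np.outer(wt, wt)
    return np.sum(W2 * v(T, P) * np.conj(w(T, P))) / (2 * N) ** 2

def fourier_mode(n, m):
    return lambda theta, phi: np.exp(1j * (n * theta + m * phi))

# discrete Fourier coefficient of w for the mode exp(i(n*theta + m*phi)):
#   c_nm = discrete_inner_product(w, fourier_mode(n, m), N)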
The following lemma discusses various properties of EN . Before we prove it we recall the
following useful formula:
Lemma 2.1
(iii)
log 2 N
Proof Let EN denote the set defined on the right hand side of (2.12) and let w 2 C(D).
Note first that
Hence, for m;
\GammaN thus it follows that hw; e m
\GammaN
Hence from the
definition of EN , we get
Moreover, using the identity
he m
we obtain, for
together with
EN (e N
Hence a short calculation shows that
Thus (i) and (ii) follow from (2.15) and (2.16). Also observe that for
m=\GammaN
Then (iii) follows on using (2.12). For (iv), let w 2 C(D) and ('; OE) 2 D. Then
where D
N is the modified Dirichlet kernel [26, Vol I, p.50] given by
Hence
sup
We show in the Appendix that sup
proof is obtained by modifying the argument in [26, Vol.II, p.19]). Hence (iv) follows from
(2.19).
Finally, to obtain (v), let w 2 C r;ff (D); r - 0; ff 2 (0; 1]: Then recall that for each
there exists a trigonometric polynomial T n ('; OE) of degree n in each variable such
that
[17, p.89, Theorem 7]. Since EN TN
Using (iv) and (2.20) in (2.21), we get (v).
2.3 Discrete Fourier series in R(D) and C(D)
The next result shows that the discrete orthogonal projection EN maps R(D) into itself
with range a subspace of EN (C(D)). To show this, it is convenient to introduce functions
\Gamman
Observe that r m
Lemma 2.2
m=\GammaN
and hence EN
Proof Let RN denote the set defined by the right hand side of (2.23). We prove first
that EN (R(D)) ' RN , using essentially the method of [25, p.26] but in a slightly different
setting.
Let w 2 R(D). We write
where
m=\GammaN
and
w (2)
m=\GammaN
m=\GammaN
Using the fact that w 2 R(D), and the 2- periodicity of w and e m
n we have
w (2)
m=\GammaN
m=\GammaN
Using this together with (2.24) and (2.25), we have
m=\GammaN
n , this yields
m=\GammaNN
where
By easily that r N
Z. Hence we have
. Also for 0 - jmj - N with m odd we have
which implies ff m
Thus we have shown
The fact that RN ' EN (R(D)) follows from the following four identities, which can
be proved using standard arguments.
EN r q
EN (r \GammaN
EN r q
r q
EN (r \GammaN
r \GammaN
The next result shows that in fact EN leaves S(D) invariant.
Lemma 2.3 For all w 2 S(D), and all OE 2 R, we have
Proof Let w 2 S(D) and set w(0;
(2.12) we have, for all OE 2 R,
m=\GammaN
m=\GammaN
m=\GammaN
Similarly Lemma 2.2, we have
The previous lemmas show that EN is an interpolatory projection operator on S(D). Its
image EN (S(D)) constitutes a distinguished subspace of the space spanned by the Fourier
modes. It is from this subspace that a basis of interpolating trigonometric polynomials in
S(D) can be constructed. Using J \Gamma1 given by (2.6) this leads directly to an interpolatory
basis in C(@B).
To define the basis in S(D), define a function
\Theta R! R by
f0g. Moreover the sets fs m
are orthogonal with respect to the inner product
(2.8). The spaces
are mutually orthogonal and have dimension
Hence the space
is a subspace of S(D) with dimension 2N 2. We shall show in Theorem 2.6
below that for each w 2 S(D), EN w is the unique element of N which interpolates w at
a certain set of 2N points. Thus EN is an interpolation operator. The proof is
obtained with the help of the following two lemmas.
Lemma 2.4
where
Proof Observe first that, since the basis functions for N are independent of those for
f0g. Now, by examining each of the basis functions for N \Phi \Delta N
in turn and using the characterisation (2.13) and the definition of R(D) we can show that
Hence if wN 2 N \Phi \Delta N then by Lemma 2.1 (ii),
Conversely, suppose wN 2 EN (R(D)). Then from Lemma 2.2 there exist constants ff m
with ff \GammaN
n and ff m
when jmj is odd such that
m=\GammaN
Thus, recalling (2.22),
wN ('; OE) =-
m=\GammaN
even
m=\GammaN
sin n' exp imOE:
where
and
w (2)
Since w (1)
it remains to show
w (2)
To obtain (2.35) let m 2 f0; :::; Ng be even. Then observe that for n 2 f1; :::; Ng with n
odd,
cos n' cos
sin j' sin ' cos mOE
In addition if n 2 f0; :::; Ng with n even then
cos n' cos
sin j' sin ' cos mOE
even and n 2 f0; :::; Ng
cos n' sin mOE 2 N \Phi
Now (2.36)-(2.38) imply (2.35) and wN 2 N \Phi \Delta N . Hence
and, together with (2.34) this proves the result.
Lemma 2.5 EN
Proof Let wN 2 N . It is easily checked that wN 2 S(D). Since by (2.33), wN 2
EN (C(D)), Lemma 2.1 (ii) yields wN 2 EN (S(D)) and so
On the other hand suppose wN 2 EN (S(D)). Then also wN 2 EN (R(D)) and by
Lemma 2.4 there exist unique w (1)
N . Also,
by Lemma 2.3, wN 2 S(D) and, since N ' S(D), we have w (2)
Hence there exist C
w (2)
for all OE 2 R. From the definition of \Delta N it follows easily that w (2)
Hence EN (S(D)) ' N and the result follows from this and (2.39).
These lemmas then lead to the following theorem describing the interpolatory properties
of EN .
Theorem 2.6 For all w 2 S(D), EN w is the unique element of N which satisfies
Proof Let w 2 S(D). From Lemmas 2.1 and 2.5, EN w 2 N has the interpolation
property (2.40) -(2.42). To establish uniqueness, suppose wN 2 N satisfies wN (0;
Then since N ae S(D) we have in fact wN thus
we have (using Lemma 2.1 (ii)),
as required.
2.4 A discrete orthogonal projection on C(@B)
Finally in this section we study the interpolatory operator on C(@B) induced by the
operator EN on S(D). That is we set
is the isometric isomorphism introduced in (2.3). We introduce
the interpolation points on the sphere
Together with the north pole n and the south pole s these form a set of 2N
points on @B. Similarly we introduce the interpolation space
Theorem 2.7
(ii) For v 2 C(@B); EN v is the unique element of SN with the property
(iv) For
log 2 N
Proof This is immediate from Lemma 2.1, Theorem 2.6 and the properties of J .
3 The Collocation (Pseudospectral) Method
In this section we introduce our pseudospectral method for (1.6) and prove its convergence.
First we give an algorithmic description of the method. Let
the dimension of the space introduced in (2.30) and (2.43). A basis for this
space is
For convenience we denote this basis f/ dg. It is natural to choose the
ordering implicit in (3.1), i.e. to choose the first 2N basis functions 2N to be
are
and so on with / being the last two basis functions J
N+1 .
The collocation points are the set
These have a corresponding natural ordering x 1k
and so on, with n and s being the last two collocation points.
Our method for (1.6) then consists of seeking a numerical solution
d
a
where the fa p g are scalars chosen so that
or equivalently
d
a
We discuss the calculation of the matrix entries
in the next section.
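In matrix terms, (3.4) is a dense d x d linear system for the coefficients a_p. A schematic assembly-and-solve routine is sketched below; the helper names psi, Kpsi, points and f are hypothetical placeholders, and the entries Kpsi(p, x_q) stand for the weakly singular integrals whose computation is discussed in Section 4.

import numpy as np

def solve_collocation(psi, Kpsi, points, f):
    # Assemble and solve the dense collocation system (3.4),
    #   sum_p a_p [ psi_p(x_q) - (K psi_p)(x_q) ] = f(x_q),  q = 1, ..., d,
    # and return the approximate solution u_N = sum_p a_p psi_p.
    # psi:    list of the d basis-function callables
    # Kpsi:   callable Kpsi(p, x) giving the (quadrature) value of (K psi_p)(x)
    # points: the d collocation points
    # f:      right-hand-side callable
    d = len(psi)
    A = np.zeros((d, d))
    b = np.zeros(d)
    for q, x in enumerate(points):
        b[q] = f(x)
        for p in range(d):
            A[q, p] = psi[p](x) - Kpsi(p, x)
    a = np.linalg.solve(A, b)
    return lambda x: sum(a[p] * psi[p](x) for p in range(d))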
The proof of convergence of (3.3) is straightforward from the results of x2.
Theorem 3.1 Let f 2 C(@B). For N sufficiently large (3.3) has a unique solution
. If, in addition, f 2 C r (@B) with r - 1 then, for N sufficiently large,
log 2 N
Proof Observe that by Theorem 2.7, uN satisfies (3.3) if and only if
It is well known, e.g. [13, x6.4], that (I \Gamma K) \Gamma1 exists and is bounded on C(@B). Moreover
[6, is bounded for ff 2 (0; 1). Hence, for all u 2 C(@B),
Theorem 2.7 (iv) yields
log 2 N
log 2 N
for any ff 2 (0; 1). Hence perturbation theory shows
that exists for N sufficiently large and is uniformly bounded. Then, since
then since I \Gamma K is also invertible on C r (@B); u 2 C r (@B) and the result
follows from Theorem 2.7 (iv).
4 Collocation Integrals
To implement the collocation method we must calculate integrals of the form
Z
where v is one of the basis functions for SN introduced in Section 2 and k is given by
(1.5).
To compute the weakly singular integrals (4.1) we follow the approach of Wienert
([25]) and introduce the Householder matrix defined for x 2 @B by
where
and
and
Moreover, since H(x) has eigenvalues \Gamma1 and 1 in directions u(x) and fu(x)g ? , it follows
that
geometrically, multiplication by H(x) represents reflection
in the plane perpendicular to the line joining x to (0; 0; - (x)) T , passing through the mid-point
of that line. Since surface measure on @B is unaffected by an orthogonal change of
variable, we may rewrite (4.1) as
Z
By virtue of (4.5) the integrand in (4.7) now has a singularity at We
handle this by taking polar coordinates to obtain
Z -Z 2--(x; '; OE)v(H(x)p('; OE))dOEd' (4.8)
with
We shall approximate the integrals (4.8) by quadrature. As we shall see in Lemmas 4.1-
4.3 below, -(x; '; OE) is smooth on ('; OE) 2 D with all its derivatives uniformly bounded
in x 2 @B. This is obtained using the arguments of Wienert [25] (see also [7]) but in the
context of C 1 rather than analytic function spaces.
We shall also examine below the behaviour of v(H(x)p('; OE)). As a function of ('; OE) 2
D this may have a very mild singularity when p('; We characterise this behaviour
in results from Lemma 4.4 below onwards. Together these results show that the integrand
in (4.8) is smooth except for a mild singularity at a single point. We devise a suitable
method for approximating these integrals in the context of a model example in Section 5.
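A small sketch of the reflection H(x) follows. The defining formulas (4.2)-(4.4) are not reproduced above, so the choice of the pole e = (0, 0, sign(x_3)) is an assumption; with this choice H(x) is the orthogonal reflection described above, with eigenvalue -1 in the direction u(x) = x - e and eigenvalue +1 on its orthogonal complement, and it interchanges x with the pole.

import numpy as np

def householder(x):
    # Reflection in the plane perpendicular to the segment joining x to the nearer pole
    # e = (0, 0, sign(x_3)), through its midpoint; H is orthogonal, symmetric, H^2 = I,
    # and H swaps x with e.
    e = np.array([0.0, 0.0, 1.0 if x[2] >= 0.0 else -1.0])
    u = np.asarray(x, dtype=float) - e
    nu2 = np.dot(u, u)
    if nu2 < 1e-30:        # x is (numerically) the pole itself: the reflection is the identity
        return np.eye(3)
    return np.eye(3) - 2.0 * np.outer(u, u) / nu2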
When examining the properties of - it is useful to define the function
We can then write
with (for x 3 -
sin '
and
then we replace 0 by - in (4.11) and (4.12).)
Before we analyse - 1 and - 2 we point out a useful simpler expression for the last factor
on the right-hand side of (4.12) This is obtained by recalling the formula (valid for any
n \Theta n matrix A and any two vectors v; w 2 R n
Then (as is easily verified),
and using (4.4), (4.6) we have
Hence (4.4) and (4.13)-(4.15) yield
sin '
('; OE)
Using (4.13) again together with the chain rule, we have
sin '
('; OE)
To examine the smoothness of the first factor in the integrand of (4.8) it is convenient
to make the following definition.
Definition Given v : @B \Theta D ! R, we say v(x; '; OE) is C 1 (D) uniformly on @B if the
function ('; OE) 7! v(x; '; OE) is in C 1 (D) and all its derivatives are uniformly bounded in
Our first lemma is the following.
Lemma 4.1 Q(x; '; OE) is C 1 (D) uniformly on @B.
Proof From (1.2) it is clear that the function ('; OE) 7! Q(x; '; OE) is in C 1 (D). By the
chain rule, its ('; OE) derivatives are products of the Euclidean derivatives of q evaluated
at H(x)p('; OE) 2 @B and ('; OE) derivatives of H(x)p('; OE). The former are bounded
independently of x (by (1.2)). For the latter, observe that for all x 2 @B, we have, using
(4.4),
d k+l
d k+l p
('; OE)
d k+l p
('; OE)
and the results follow.
The next two lemmas treat the two factors on the right-hand side of (4.10).
Lemma 4.2 - 1 (x; '; OE) is C 1 (D) uniformly on @B.
Proof Let x 2 @B and without loss of generality assume x 3 - 0. Then using (4.11),
we have
The result then follows from the following properties of -
Q which we shall establish
below
uniformly in x ; (4.18)
and proceed analogously.)
To obtain (4.18), observe that
Z 1d
dt
Z 1@Q
Hence
Z 1@Q
Z 1@Q
and (4.18) follows from Lemma 4.1.
To obtain (4.19) observe that -
Since D is compact we simply have to show that
lim
However by (4.21), -
Q(x; 0; OE) is well-defined and non-negative. Thus, if (4.22) is not true,
then (4.21) implies
for some OE 2 [0; 2-]. By (4.9) and (4.5) this implies
cos OE
sin OE3
cos OE
sin OE3
Hence dq(x) has a zero eigenvalue. However this is ruled out by assumption (A.2) of the
Introduction and so (4.19) (and hence the result) follows.
Lemma 4.3 - 2 (x; '; OE) is C 1 (D) uniformly on @B.
Proof As in Lemma 4.2, without loss of generality, let x 2 @B with x 3 - 0. By (4.20),
we have
Z 1(
Z 1Z 1d
ds
ds
ds
Thus, in view of (4.16),
ds
\Theta
Hence dividing (4.12) top and bottom by ' 2 we have a numerator which is C 1 (D) uniformly
on @B and a denominator which is precisely -
complete the proof.
Our next set of results concerns the second factor in the integrand of (4.8), namely
the function
where v is one of the basis functions for the space SN defined in (2.43), i.e.
where w is one of the basis functions for N listed in (2.29). It turns out to be convenient
to study the function v(x; '; OE) as ('; OE) ranges over all of R 2 . Then, recalling (2.6)
motivates us to introduce the set
('; OE) 2 P(x) if and only if p('; OE) 2 f\Gammax; xg. If, in particular,
then it is easy to see that
Then by (2.43) and (2.6), we have, for ('; OE) 2 R 2 nP(x),
Moreover if ('; OE) 2 P(x), then either H(x)p('; or s in which case
It follows easily that ('; OE) 7! v(x; '; OE) defines a function in S(D) for each fixed x 2 @B.
It is convenient to write (4.27) as
where
The first step in characterising the smoothness of v is to examine the properties of
G. Here some care must be taken, since (as well as being undefined at n and s), the
second component of p \Gamma1 suffers a jump discontinuity of amplitude 2- along the line
(0; -)g on @B, This motivates the introduction of the set
Then G(x; '; OE) is well-defined for ('; OE) 2 R 2 nZ(x), where
and in fact is smooth there, as is shown in the following lemma.
Lemma 4.4 Let x 2 @B. Then the function ('; OE) 7! G(x; '; OE) is in (C 1 (R 2 nZ(x))) 2 .
Proof If ('; OE) a solution of the parameter-dependent
problem:
F is infinitely continuously differentiable in y and ('; OE). The Jacobian of F with respect
to y 2 R 2 is the 3 \Theta 2 matrix
which may easily be shown to have rank 2 when
evaluated at G(x; '; OE) for all ('; OE) 2 R 2 nZ(x). Hence the result follows from the implicit
function theorem.
Our next result obtains formulae for the first partial derivatives of G.
Notation If a; b are vectors in R n then [a; b] denotes the n \Theta 2 matrix which has a as
its first column and b as its second column.
Lemma 4.5 Let x 2 @B. Then, for ('; OE) 2 R 2 nZ(x), we have
sin
@' sin g 1
cos
sin
('; OE)
where the components of G and their derivatives are evaluated at (x; '; OE). Moreover
extend as continuous functions to ('; OE) 2 R 2 nP(x).
Proof Since p(G(x; '; we can differentiate this with respect to '
first and then OE to obtainB @
cos
cos
sin
('; OE)
Then, multiplying both sides of (4.32) by the matrix
cos
\Gammasin
yields (4.31). Since the right-hand side of (4.31) is continuous across L(x) the result
follows.
Our next result concerns the smoothness of the function v(x; '; OE) given by (4.29).
We use the following standard multi-index notation : Given
denotes the partial derivative @ jffj /=@' ff 1 @OE ff 2 where
Theorem 4.6 (i) For fixed x 2 @B the function ('; OE) 7! v(x; '; OE) is infinitely continuously
differentiable on R 2 nZ(x).
(ii) If x 2 fn; sg all the partial derivatives of v(x; \Delta) on R 2 nZ(x) have a continuous
extension to R 2 nP(x) and are bounded on R 2 nP(x).
(iii) If x 2 @Bnfn; sg then for each ff with jffj - 1, D ff v(x; \Delta) has a continuous
extension to R 2 nP(x) and there exists a function Rx;ff 2 C 1 (R 4 ) which is 2-periodic in
each argument such that
where v; g 1 and g 2 are evaluated at (x; '; OE).
Proof By (4.29), Lemma 4.4 and the fact that w 2 C 1 (D), part (i) follows directly.
To obtain (ii), observe that if x 2 fn; sg then, for ('; OE) 2 (0; -) \Theta [0; 2-), H(x)p(';
Hence, we have v(x; '; which is
smooth and 2-periodic on (0; -) \Theta R. Also, since v(x; \Delta) 2 S(D), if ('; OE) 2 (\Gamma-; 0) \Theta R,
then
Then v(x; '; OE) is defined by extending these formulae 2-periodically and part (ii) follows.
Part (iii) is proved by induction on first examine the case
simple analysis of (2.29) shows that any basis function w 2 N satisfies either
or
where functions which are 2-periodic in each argument. We shall consider
only the case (4.34). The case (4.35) is similar but slightly simpler. Substituting (4.34)
into (4.29) and differentiating with respect to ' gives
cos
where v; g 1 and g 2 are evaluated at (x; '; OE). Now substituting the expressions for @g 1
and
sin
found in (4.31) yields an expression of the form (4.33) in the case
identical argument can be used to obtain an analogous expression for @v
. This completes
the proof of (4.33) when
Now suppose (4.33) holds for all
By the inductive hypothesis we have
Differentiating with respect to ' yields
sin
sin
sin
with Rx;~ ff evaluated at (g 1 substituting the expressions for @g 1 =@' and
sin found in (4.31) yields an expression for D -
ff v of the form (4.33).
If -
then we set ~
proceed analogously with ' replaced by OE.
Hence (4.33) holds when and the assertion follows.
Theorem 4.6 (iii) allows the higher derivatives of v(x; \Delta) to blow up as ('; OE) approaches
any point in P(x). In the following corollary we give more detail of the blow-up behaviour
which may arise. Recall that P(x) is characterised by (4.26) when x satisfies (4.25).
Corollary 4.7 Let x 2 @Bnfn; sg. For any multi-index ff, D ff v(x; '; OE) has a continuous
extension to ('; OE) 2 R 2 nP(x). Moreover there exists a constant M ff such that, if
for all ('; OE) 2 R 2 sufficiently close (but not equal) to ( -
OE). (Here k \Delta k denotes the
Euclidean norm in R 2 .)
Proof Since Rx;ff defined in Theorem 4.6(iii) is 2- periodic in its second argument and
2 has a jump discontinuity of amplitude 2- along the curves in L(x) the first assertion
follows directly from (4.33). This yields the estimate
for ('; OE) sufficiently close to ( -
OE), where
To obtain the second assertion from the bound (4.37), we observe that when ('; OE) is
near to but not equal to ( -
OE) then (';
2 P(x) and g 1 2 (0; -) is uniquely defined by
cos
Suppose x is given by (4.25) and recall (4.26). It is then a simple calculation to show
that
A Taylor expansion about ( -
for some constant C and all ('; OE) sufficiently close to ( -
OE). Hence from (4.38)
sin
which, together with (4.37), yields (4.36).
Remark 4.8 It is not difficult to find an illustration of the sharpness of the estimates
in Corollary 4.7. For example
OE) with ( ~
sin ' sin OE
sin ' cos
When examining the smoothness of G and v we can consider the limit ('; OE) ! ( ~ '; ~
along infinitely many paths. Two such paths are
Then a simple calculation shows that G(x; ';
(ffi; -=2) on Path 2. Making use of this and (4.31) shows that as ('; OE) ! ( ~
OE) on Path 1,
OE) on Path 2 then
Now let us consider, for example, the basis function
Using (4.39), (4.40), it is then straightforward to check that v(x; ';
OE) on Path 1
whereas
OE) on Path 2:
So @v=@' is discontinuous at (';
OE). In this case the limiting values of @ 2 v=@' 2 are
actually equal along each of Paths 1 and 2. However if we consider instead v(x; ';
then @v
has the same limiting value along Paths 1 and 2 but an elementary calculation
shows
OE) along Path 1
and
OE) along Path 2:
So the estimate O((sin g 1 ) 1\Gammajffj ) of Corollary 4.7 cannot, in general, be strengthened.
5 Numerical Experiments
As a model case we solve (1.1) where \Gamma is the ellipsoid
and\Omega is the interior of \Gamma. This was converted to (1.6) using the mapping
We used the method in x3 to solve (1.6) yielding the solution
uN on @B which is identified with a function on \Gamma (also called uN ), defined using the
inverse of q. Having found this we can compute the potential in \Omega ,
Z
which approximates the solution U of the boundary value problem
on \Gamma. We consider two cases
2.
In Case A, (1.1) has the exact solution u and the solution to the boundary value
problem is U j 1. Because of the quasioptimal estimate (3.5) and since the constant
functions lie in the basis space for the collocation method we have uN j 1 for all N
provided the collocation integrals are computed exactly. Thus Case A can be used as
a test of the accuracy of the quadrature method which we shall use for the collocation
integrals.
In Case B the solution of (1.1) is not known analytically but
which can be used for comparison with the computed value UN (-). In all the
tables of results given below kuN \Gamma 1k1 is computed by finding the maximum of
at the collocation points on @B, and - is the (randomly chosen)
point x on the unit sphere. For any table of results containing
a sequence (a_N) which tends to 0 as N \to \infty, exponential convergence is tested by
conjecturing that a_N \approx C e^{-\alpha N} and then computing \alpha from the results for two
different values N_1 < N_2 of N by
\alpha = \log(a_{N_1} / a_{N_2}) / (N_2 - N_1).
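A one-line version of this two-point fit (the numbers in the usage comment are illustrative only):

import numpy as np

def estimated_rate(a1, N1, a2, N2):
    # alpha such that a_N ~ C * exp(-alpha * N) fits the two measured values exactly.
    return np.log(a1 / a2) / (N2 - N1)

# e.g. estimated_rate(1e-4, 5, 1e-7, 9) gives roughly 1.7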
It remains to explain how we use the information derived in x4 to devise an accurate
method of computing the collocation integrals. Since the collocation method has a spectral
convergence rate we do not want to use any quadrature rule which would destroy that
fast rate and so this question is vitally important.
Recall that the collocation integrals are given by (4.8). The integrands have singularities
(described by Corollary 4.7) if x 6= n or s. So suppose
OE) with ~
and ~
Then we are concerned with singularities in the integrand of (4.8) at any
point in the region [0; -] \Theta [0; 2-]. By Corollary 4.7 these can only happen at points
and at their translates through 2- in the OE direction.
Since the integrand in (4.8) is 2-periodic in OE we can write
where the integrand now has singularities at three points in the domain of integration:
We split (5.2) into the sum of integrals over [0; -] \Theta [0; -] and [0; -] \Theta [-; 2-] and apply
the tensor product Gauss-Legendre rule with M points in each coordinate direction to
each integral. Since the singularities lie on the boundaries of the integration domains we
expect this to converge reasonably well.
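A sketch of this quadrature, with F standing for the (suitably phase-shifted) integrand of (5.2), assumed to act elementwise on numpy arrays:

import numpy as np

def gauss_on_rectangle(F, a, b, c, d, M):
    # M-point tensor-product Gauss-Legendre approximation of the integral of F over [a,b] x [c,d].
    t, w = np.polynomial.legendre.leggauss(M)
    x = 0.5 * (b - a) * t + 0.5 * (b + a)
    y = 0.5 * (d - c) * t + 0.5 * (d + c)
    X, Y = np.meshgrid(x, y, indexing="ij")
    W = np.outer(w, w) * 0.25 * (b - a) * (d - c)
    return np.sum(W * F(X, Y))

def collocation_integral(F, M):
    # Approximation of (5.2): split [0,pi] x [0,2*pi] at phi = pi so that the (mild)
    # singularities lie on the edges of the two subrectangles, then apply the rule to each.
    return (gauss_on_rectangle(F, 0.0, np.pi, 0.0, np.pi, M)
            + gauss_on_rectangle(F, 0.0, np.pi, np.pi, 2.0 * np.pi, M))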
We solved (1.1) with above. The results are given in
Table
1 and they indicate that in fact the quadrature rule yields solutions to the integral
equation and approximate potentials which converge with order roughly O(exp(\Gamma1:2M)).
The convergence is less regular for the potential at the arbitrarily chosen point - . These
results are surprisingly good (we have no proof of the exponential convergence), and they
indicate that we have no problems from the weak singularities in (4.8).
Table 1: Case A.
Next, to check the performance of the collocation method in the absence of quadrature
errors we solved (1.1) with From Table 1, we
expect quadrature error to pollute at worst (about) the 11th decimal place of our solution.
The results are in Table 2 and show a convergence rate for approximate potentials which
is roughly of order O(exp(\Gamma2:5N)).
Table 2.
Table
2 shows that good solutions to the boundary value problem are obtained by
solving a system of very small dimension (e.g. when are 86 degrees of
freedom). A similar accuracy is reported in [2]. However since the method used there is
a Galerkin method more quadrature is needed to assemble the stiffness matrix (for the
same number of degrees of freedom). The program used to compute these solutions is
very simple and compact, was written in MATLAB ([18]) and runs interactively on a Sun
4 workstation with many concurrent users and also on a stand-alone PC-486.
Finally it is interesting to try to reduce the overall cost of the method by balancing the
quadrature error with the underlying error in the collocation method. To do this let u e
denote the (theoretical) collocation solution (if the collocation integrals are done exactly)
and let uN denote the corresponding solution in the presence of quadrature. Then
We know that the first term on the right-hand side of (5.3) converges exponentially. We
want to ensure the second does also. Recall that from x3
suppose that denotes the quadrature approximation
to ENK (using the M point Gauss rule described above). Then a simple manipulation
shows
Thus, assuming stability for using the fact that u e
u, we expect that
where SN is as in x3. The right-hand side of (5.4) is the uniform norm of a matrix with
dimension O(N 2 ), each entry of which is converging exponentially. Thus it is reasonable
to conjecture
for some fi. We checked the sharpness of the N-dependence in this estimate by solving
Case A. The results are in Table 3 and confirm the
growth conjectured in (5.5). (Note u e
u in this example.)
Table 3: Case A.
To ensure exponential convergence for u e
for some ff 0 ? 0) it is then clear from (5.5) that we must choose
In principle we would choose ff 0 equal to ff where
and then M by (5.6), but in practice ff, and fi are unknown in general, and ff 0 , and fi in
would have to be chosen by experiment. Here we can verify this analysis since Table
In (5.6) we put 2. The results
are in
Table
4 and exhibit a very similar convergence and accuracy to Table 2, but much
less quadrature is needed.
9 23 7.917(-10) 2.7
Table 4.
Finally, we tried the method on the long slender ellipsoid 10. The results
are in
Table
5 and exhibit similar convergence to Table 4 but with a somewhat larger
asymptotic constant. The accuracy of 3 \Theta 10 \Gamma6 with 86 degrees of freedom is still very
good.
9 23 1.521(-7) 1.6
Table 5.
--R
The numerical solution of Laplace's equation in three dimensions II.
The numerical solution of Laplace's equation in three dimensions.
Algorithm 629: An integral equation program for Laplace's equation in Three dimensions.
A discrete Galerkin method for first kind integral equations with a logarithmic kernel.
Galerkin methods for solving single layer integral equations in three dimen- sions
Integral Equation Methods in Scattering Theory.
Inverse acoustic and electromagnetic scattering theory.
Spectral algorithms for vector elliptic equations in a spherical gap.
A pseudospectral approach for polar and spherical geometries.
A review of pseudospectral methods for solving partial differential equations.
Geophysical data inversion methods and applications.
A pseudospectral 3D boundary integral method applied to a nonlinear model problem from finite elasticity.
Linear Integral Equations.
On the numerical solution of a logarithmic integral equation of the first kind for the Helmholtz equation.
The fast Fourier transform and the numerical solution of one-dimensional boundary integral equations
The numerical solution of Helmholtz's equation for the exterior Dirichlet problem in three dimensions.
Approximation of Functions.
Mathworks Inc.
A spectral Galerkin method for a boundary integral equation.
Nonlinear Elastic Deformations.
Fourier series on spheres.
Polynomial interpolation and hyperinterpolation over general regions.
On the spectral approximation of discrete scalar and vector functions on the sphere.
Cubature for the sphere and the discrete spherical harmonic transform.
Die numerische approximation von randintegraloperatoren f?
Trigonometric Series.
--TR
--CTR
M. Ganesh , I. G. Graham, A high-order algorithm for obstacle scattering in three dimensions, Journal of Computational Physics, v.198 n.1, p.211-242, 20 July 2004
Lexing Ying , George Biros , Denis Zorin, A high-order 3D boundary integral equation solver for elliptic PDEs in smooth domains, Journal of Computational Physics, v.219 n.1, p.247-275, 20 November 2006
Piotr Boronski, Spectral method for matching exterior and interior elliptic problems, Journal of Computational Physics, v.225 n.1, p.449-463, July, 2007 | fourier approximation;pseudospectral method;boundary integral equation;three-dimensional potential problems;collocation |
277067 | Incorporating speculative execution into scheduling of control-flow intensive behavioral descriptions. | Speculative execution refers to the execution of parts of a computation before the execution of the conditional operations that decide whether it needs to be executed. It has been shown to be a promising technique for eliminating performance bottlenecks imposed by control flow in hardware and software implementations alike. In this paper, we present techniques to incorporate speculative execution in a fine-grained manner into scheduling of control-flow intensive behavioral descriptions. We demonstrate that failing to take into account information such as resource constraints and branch probabilities can lead to significantly sub-optimal performance. We also demonstrate that it may be necessary to speculate simultaneously along multiple paths, subject to resource constraints, in order to minimize the delay overheads incurred when prediction errors occur. Experimental results on several benchmarks show that our speculative scheduling algorithm can result in significant (upto seven-fold) improvements in performance (measured in terms of the average number of clock cycles) as compared to scheduling without speculative execution. Also, the best and worst case execution times for the speculatively performed schedules are the same as or better than the corresponding values for the schedules obtained without speculative execution. | Introduction
Speculative execution refers to the execution of a part of a computation
before it is known if the control path to which it belongs
will be executed (for example, execution of the code after a branch
statement before the branch condition itself is evaluated). It has
been used to overcome, to some extent, the scheduling bottlenecks
imposed by control-flow. There has been previous work on speculative
execution in the areas of high-level synthesis [1, 2, 3] as well
as high-performance compilation [4, 5].
Previous work [1, 2, 3] in high-level synthesis has attempted
to locate single or multiple paths for speculation prior to schedul-
ing. This paper presents techniques to integrate speculative execution
into scheduling during high-level synthesis of control-flow
intensive designs. (This work was supported in part by NSF under Grant No. 9319269 and
in part by Alternative System Concepts, Inc. under an SBIR contract from Air Force Rome
Laboratories.) In that context, we demonstrate that not using
information such as resource constraints and branch probabil-
ities while deciding when to speculate can lead to significantly sub-optimal
performance. We also demonstrate that it is necessary to
perform speculative execution along multiple paths at a fine-grain
level during the course of scheduling, in order to obtain maximal
benefits. In addition, we present techniques to automatically manage
the additional speculative results that are generated by speculatively
executed operations. We show how to incorporate speculative
execution into a generic scheduling methodology, and in particular
present the results of its integration into an efficient scheduler
Wavesched [6]. Experimental results for various benchmarks
and examples are presented that indicate up to seven-fold improvement
in performance (average number of clock cycles required to
perform the computation).
Background and Motivation
Scheduling tools typically work using one or more intermediate
representations of the behavioral description, such as a data flow
graph (DFG), control flow graph (CFG), or control-data flow graph
(CDFG). In this paper, we use the CDFG as the intermediate representation
of a behavioral description, and state transition graphs
(STGs) to represent the scheduled behavioral description, as explained
in later sections. In addition to the behavioral description,
our scheduler also accepts the following information:
• A constraint on the number of resources of each type available
(resource allocation constraints).
• The target clock period for the implementation, or constraints
that limit the extent of data and control chaining allowed.
• Profiling information that indicates the branch probabilities
for the various conditional constructs present in the behavioral
description.
We now present some motivational examples to illustrate the use
of speculative execution during scheduling.
Example 1: Consider a part of a behavioral description and the
corresponding CDFG fragment shown in Figure 1, that contains a
while loop. The CDFG contains vertices corresponding to operations
of the behavioral description, where solid lines indicate data
dependencies, and dotted lines indicate control dependencies. Control
edges in the CDFG are annotated with a variable that represents
the result of the conditional operation that generates them. For ex-
ample, the control edges fed by operation > 1 are marked c in Figure
1. The initial values of variables i and t 4 used in the loop body
are indicated in parentheses beside the corresponding CDFG data
edges.
Let us now consider the task of scheduling the CDFG shown
in
Figure
1. Suppose we have the following constraints to be used
during scheduling.
Figure 1: A CDFG to illustrate speculative execution
Figure 2: (a) Non-speculative schedule for the CDFG of Figure 1,
and (b) schedule incorporating speculative execution
• The target clock period allows the execution of +, ++, >, and
memory access operations in one clock cycle, while the * operation
requires two clock cycles. In addition, we assume that
the * operation will be implemented using a 2-stage pipelined
multiplier.
• No chaining is allowed, since it leads to a violation
of the target clock period constraint (in general, however, our
algorithm can handle chaining).
• The aim is to optimize the performance of the design as much
as possible. Hence, no resource constraints are specified for
the purposes of illustration for this example. This is not a
limitation of our scheduling algorithm, which does handle resource
constraints as described in later sections.
A schedule for the CDFG that does not incorporate speculative
execution is shown in Figure 2(a). This schedule can be obtained
by applying either the loop-directed scheduling [7] technique or the
Wavesched [6] technique to the CDFG. Vertices in the STG represent
schedule states, that directly correspond to states in the controller
of the RTL implementation. Each state is annotated with
the names of the CDFG operations that are performed in that state,
including a suffix that represents a symbolic iteration index of the
CDFG loop that the operation belongs to. For example, consider
operation > 1 of the CDFG. When > 1 is encountered the first time
during scheduling, it is assigned a subscript 0, resulting in operation
> 1 0 in the STG of Figure 2(a). In general, multiple copies of
an operation may be generated during scheduling, corresponding to
different conditional paths, or different iterations of a loop. For ex-
ample, operation > 1 1 in the STG of Figure 2(a) corresponds to
the execution of the first unrolled instance of CDFG operation > 1.
An edge in the STG represents a controller state transition, and is
annotated with the conditions that activate the transition.
Each iteration of the loop in the scheduled CDFG requires eight
clock cycles. For this example, the data dependencies among the
operations within the loop require them to be performed serially.
In addition, the control dependencies between the comparison operation
> 1 and the operations in the loop body, together with the inter-iteration
data dependency (footnote 1) from +1 to > 1, prevent the parallel
computation of multiple loop iterations, even when loop unrolling
is employed.
A schedule for the CDFG of Figure 1 that incorporates speculative
execution is shown in the STG of Figure 2(b). This schedule
was derived by techniques we present in later sections. Speculatively
executed operations are annotated with the conditional operations
whose results they depend upon, using the following nota-
tion: op/cond represents an operation op that is executed assuming
that the speculation condition cond will evaluate to true. The
speculation condition cond could, in general, be an expression that
is a conjunction of the results of various conditional operations in
the STG. For example, consider operation ++1 1/c 1 in state S1 of
Figure
2(b). This is a speculatively executed operation, that corresponds
to the second instance of CDFG operation ++1 in the
schedule, and assumes that the result of conditional operation > 1 1,
which is executed only in state S7, is going to be true. States S7 and
S8 represent the steady state of the schedule. Note that, when in the
steady state, a new iteration is initiated every cycle, as opposed to
once in eight cycles.
The following example illustrates the impact of branch probabilities
and resource constraints on the performance of speculatively
derived schedules and makes a case for the integration of
speculation into the scheduling process.
Example 2: Consider the example CDFG shown in Figure 3. The
select operation Sel1 selects the data operand at its l (r) port if
the value at its s port is 1 (0). Figure 4 shows three different
schedules that use speculative execution, that were generated using
different resource constraints and branch probabilities. The STG
of
Figure
4(a) was generated assuming the following information.
Available resources consist of one incrementer (++), one adder (+),
one comparator (>), one shifter (>>), and one multiplier (*), all of
which require one cycle.
(Footnote 1: An intra-iteration data or control dependency is between operations that
correspond to the same iteration of a loop, while an inter-iteration dependency
is between operations in different (e.g., consecutive) iterations. We
refer to intra-iteration data and control dependencies simply as data and
control dependencies.)
Figure 3: CDFG demonstrating the effect of resource constraints
and branch probabilities on speculative execution
Figure 4: Three speculative schedules (a), (b), and (c), derived using different resource
constraints or branch probabilities
Also, the probability of comparison > 1
evaluating to false is higher than it evaluating to true. Since the
result of > 1 evaluates to false more often, the schedule of Figure
4(a) gives preference to executing operations from the corresponding
control path (e.g., +2). As a result, +2 is scheduled to
be performed on the sole adder in state S0, as opposed to +1, even
though the data operands for both operations are available. The average
number of clock cycles, CC a , required for the STG in Figure
4(a) can be calculated as follows.
In the above equation, P(c1) represents the probability that the result
of comparison > 1 evaluates to true.
The STG of Figure 4(b) was derived with the same information
above, except that it was assumed that comparison > 1 evaluates
to true more often than it evaluates to false. Hence, operation +1
is given preference over operation +2 and is scheduled in S0. The
average number of clock cycles, CC b , required for the STG in Figure
4(b) is given by the following expression
Suppose the resource constraints were relaxed to allow two adders.
The speculative schedule that results is shown in Figure 4(c). The
average number of clock cycles, CC c , required for the STG in Figure
4(c) is given by the following expression.
Figure 5: Comparison of the speculative schedules (expected number of cycles versus P(c1))
The values of CC a , CC b , and CC c for various values of P ranging
from 0 to 1 are plotted in Figure 5. As expected, the schedule
of
Figure
4(a) outperforms the schedule of Figure 4(b) when
P(c1) < 0.5, and the schedule of Figure 4(b) performs better when
P(c1) > 0.5. Moreover, the schedule of Figure 4(c), which was derived
using one extra adder, outperforms the other two schedules
for all values of P(c1). Thus, we can conclude that branch probabilities
and resource constraints do influence the trade-offs involved
in deciding which conditional paths to speculate upon, making the
case for the integration of speculative execution into the scheduling
step where such information is available.
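To make the kind of trade-off illustrated by Example 2 concrete, the following small Python sketch (not from the paper; the per-branch cycle counts are hypothetical placeholders standing in for the expressions CC a, CC b, and CC c) evaluates the expected number of clock cycles of each candidate schedule as a function of P(c1) and picks the best one:

def expected_cycles(cycles_if_true, cycles_if_false, p_c1):
    # Expected clock cycles of one schedule, given the branch probability P(c1).
    return p_c1 * cycles_if_true + (1.0 - p_c1) * cycles_if_false

# Hypothetical (cycles if c1 true, cycles if c1 false) pairs for each schedule.
schedules = {
    "4(a)": (4, 3),   # favors the c1 == false path
    "4(b)": (3, 4),   # favors the c1 == true path
    "4(c)": (3, 3),   # two adders: both paths speculated upon
}
for p in (0.1, 0.5, 0.9):
    costs = {name: expected_cycles(t, f, p) for name, (t, f) in schedules.items()}
    best = min(costs, key=costs.get)
    print(p, costs, "->", best)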
The following example illustrates that it is necessary to perform
speculative execution along multiple paths, in a fine-grained man-
ner, in order to obtain maximal performance improvements.
Example 3: The schedules shown in Figure 4 were all generated
Figure 6: Speculation along a single path
by speculatively executing operations from both the conditional
paths of the CDFG in a fine-grained manner, as allowed by the resource
constraints. For the purpose of comparison, we scheduled
the CDFG shown in Figure 3, assuming the same scheduling information
that was assumed to derive the schedule of Figure 4(b).
However, in this case, we restricted the scheduler to allow speculative
execution along only one path. The resulting schedule is shown
in
Figure
6. The average number of clock cycles, CC d , required for
the STG in Figure 6 is given by the following expression.
Comparing the expression for CC d to the expression for CC b from
the previous example indicates that CC d ≥ CC b for all feasible values
of P(c1). Thus, in this example, simultaneously speculating
along multiple paths according to resource availability results in
a schedule that is provably better than one derived by speculating
along only the most probable path. Our scheduling algorithm automatically
decides the best paths to speculate upon for the given
resource constraints and branch probabilities.
3 The Algorithm
In this section, we present the changes that need to be made to a
generic scheduling algorithm to support speculative execution.
3.1 A generic scheduling algorithm
Figure
7 shows the pseudocode for a generic scheduling algo-
rithm. The inputs to the scheduler are a CDFG, G, to be sched-
Generic_scheduler(CDFG G, ALLOCATION_CONSTRAINT K,
MODULE_SELECTION_INFO M_inf, CLOCK_PERIOD clk) {
SET Unscheduled_operations;
SET Schedulable_operations;
1 while (|Unscheduled_operations| > 0) {
2   op = Select_schedulable_operation
        (Schedulable_operations, K, M_inf, clk);
    // Select an operation for scheduling. The selected
    // operation must honor allocation and clock cycle constraints
3   Schedule op in the current state;
4   Unscheduled_operations.remove_operation(op);
5   Schedulable_operations.remove_operation(op);
6   SET schedulable_successors = Compute_schedulable_successors(op);
    // Find the set of operations in op's fanout which
    // become schedulable when op is scheduled
7   Schedulable_operations.append(schedulable_successors);
    // Augment Schedulable_operations by addition of
    // operations in schedulable_successors
} }
Figure
7: Pseudocode for a generic scheduling algorithm
uled, the target clock period of the design, allocation constraints,
which specify the numbers and types of functional units available,
and module selection information, which gives the type of functional
unit an operation is mapped to. The output of the scheduler
is an STG which describes the schedule. At any point, a
generic scheduler maintains (a) the set of unscheduled operations
whose data and control dependencies have been satisfied, and can
therefore be scheduled (Schedulable operations), and (b) the set
of operations which are unscheduled (Unscheduled operations).
The scheduling process proceeds as follows: an operation from
Schedulable operations is selected for scheduling in a given state
(statement 2). The selection should honor allocation and clock cycle
constraints. The manner in which the selection is done varies
from one scheduling algorithm to another. The selected operation,
op, is scheduled in the state. Since op no longer belongs to either
Schedulable operations or Unscheduled operations, it is removed
from these sets (statements 4 and 5). Also, the scheduling
of op might render some of the operations in its fanout schedu-
lable. The routine Compute schedulable successors (statement 6)
identifies such operations, and these operations are subsequently included
in the set Schedulable operations (statement 7).
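The loop of Figure 7 can be made executable; the Python sketch below mirrors statements 1-7 but is not the authors' implementation: the selection heuristic of statement 2 is replaced by a trivial deterministic pick, all operations take one cycle, and the dependence graph is assumed acyclic.

from collections import defaultdict

def generic_scheduler(ops, fanins, resource_of, allocation):
    # ops: operation names; fanins: op -> list of predecessor ops;
    # resource_of: op -> functional-unit type; allocation: type -> available units.
    unscheduled = set(ops)
    scheduled = set()
    schedulable = {op for op in ops if not fanins[op]}   # all fanins already available
    states = []
    while unscheduled:                                   # statement 1
        state, used, newly_ready = [], defaultdict(int), set()
        for op in sorted(schedulable):                   # statement 2 (heuristic omitted)
            r = resource_of[op]
            if used[r] < allocation[r]:                  # honor the allocation constraint
                state.append(op)                         # statement 3: schedule op in this state
                used[r] += 1
                unscheduled.discard(op)                  # statement 4
                scheduled.add(op)
                newly_ready |= {s for s in unscheduled   # statement 6
                                if op in fanins[s]
                                and all(p in scheduled for p in fanins[s])}
        if not state:
            raise ValueError("no schedulable operation; check dependencies/allocation")
        schedulable -= set(state)                        # statement 5
        schedulable |= newly_ready                       # statement 7 (ready from the next state)
        states.append(state)
    return states

# Tiny chain a -> b -> c with one adder and one multiplier.
fanins = {"a": [], "b": ["a"], "c": ["b"]}
print(generic_scheduler(["a", "b", "c"], fanins,
                        {"a": "add", "b": "add", "c": "mul"}, {"add": 1, "mul": 1}))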
3.2 Incorporating speculative execution into a
generic scheduler: An overview
We now provide an overview of the changes that need to be
made to incorporate speculative execution into the framework of
the generic scheduler shown in Figure 7.
To support speculative execution, the generic scheduler shown
in
Figure
7 needs to be modified as follows (the details of these
steps are provided in Section 3.3).
1. When an operation is scheduled, one needs to recognize all
its schedulable successors, including the ones which can be
Figure 8: A CDFG fragment illustrating speculative execution
speculatively scheduled. In addition, speculatively executed
operations and their successors need to be specially marked.
Clearly, procedure Compute schedulable successors needs to
be augmented to consider such cases. Note that, at any stage,
every speculatively schedulable operation is added to the list
of schedulable operations. However, few of them are actually
scheduled. Operations which are not worth being speculated
on are ignored, and eventually removed from the list
of schedulable operations, using procedures described later in
this section.
Example 4: Consider the CDFG fragment shown in Figure
8. We assume that operation op0 is scheduled, operation
op2 has just been scheduled, and operations op1, op3,
Sel1, and op4 are unscheduled. The output of the routine
Compute schedulable successors(op2) must include operation
op4, which can now be speculatively executed, i.e., its
operands can be assumed to be the results of operations op2
and op0.
2. When operations are scheduled, control and data dependencies
of speculatively executed operations are resolved. This
would potentially validate or invalidate speculatively performed
operations. Operations which are validated should
be considered "normal", i.e., they need not be specially
marked any longer. Operations in Unscheduled operations
and Schedulable operations which are invalidated need no
longer be considered for scheduling. They can, therefore, be
removed from these sets. In general, the resolution of the control
or data dependencies of a speculatively performed operation
creates two separate threads of execution, which correspond
to the success and failure of the speculation.
Example 5: Consider again, the CDFG fragment shown in
Figure
8. Suppose operations op0, op2 and op4 have been
scheduled, and operation op3 is unscheduled. Operation op4
uses as its operands, the results of operations op2 and op0.
Assume that operation op1 has just been scheduled. If op1
evaluates to true, then the execution of op4 can be considered
fruitful, because the operands chosen for its computation
are correct. Therefore, op4, and its scheduled and schedulable
successors need not be considered conditional on the result of
op1 anymore, and the data structures can be modified to reflect
this fact. If, however, op1 evaluates to false, then op4
should use as its operands, the results of operations op3 and
op0, thus invalidating the result of our speculation. There-
fore, schedulable operations, whose computations are influenced
by the result computed by op4 are invalid, and can be
removed from the set Schedulable operations.
3. The set, Schedulable operations, from which an operation
is selected for scheduling, contains operations whose execution
is speculative, i.e., whose results are not always use-
ful. The selection procedure, represented by the routine Select
schedulable operation() (statement 2), needs to be modified
to account for this fact. For example, operations, whose
execution is extremely improbable, would make poor selection
candidates, as the resources consumed by them might be
better utilized by operations whose execution is more proba-
ble. Also, operations, which fall on critical paths, would be
better candidates for selection than those on off-critical paths.
3.3 Incorporating speculative execution into a
generic scheduler: A closer look
In this section, we fill in the details of the changes outlined in
Section 3.2. This is preceded by a formal treatment of concepts
related to speculative execution.
A scheduler which supports speculative execution works with
conditioned operations as its atomic schedulable units, just as a
normal scheduler uses operations. Therefore, the fanin-fanout relationships
between operations, captured by the CDFG, need to
be defined for conditioned operations. Since all speculatively performed
operations are conditioned on some event, the adjective
"speculatively performed" when applied to an operation, implies
that it is conditioned on some event or combination of events.
As mentioned in Section 3.2, when an operation is scheduled,
its schedulable successors need to be computed.
Figure 9: Illustrating the scheduling of successors of speculatively performed operations
Consider the CDFG fragment shown in Figure 9. Assume
that operations op5 and op6 have been scheduled, operations
op1, op3, and op4 are unscheduled, and op2 has just been sched-
uled. It is now possible to schedule two versions of operation op7,
with the first version, op7', using op2 and op5 as its operands,
and the second, op7'', using op2 and op6. op7' is conditioned on
c(op1) ∧ c(op4), and op7'' is conditioned on c(op1) ∧ ¬c(op4). The
following analysis presents a structured means of identifying such
relationships.
We now present a result which helps derive fanin-fanout relationships
among speculatively performed operations.
Lemma 1: Consider an operation, op, whose fanins are op1, op2,
. . ., opn. If the fanins of op have been speculatively scheduled, so
can op. In particular, if the ith fanin, opi, is conditioned on C i, then
op would be conditioned on the conjunction C 1 ∧ C 2 ∧ . . . ∧ C n.
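A minimal Python sketch of the bookkeeping implied by Lemma 1 (the representation of a speculation condition as a set of (conditional-operation, value) literals is our own illustrative choice, not prescribed by the paper): the condition of an operation is the conjunction of its fanins' conditions, and a contradictory conjunction means the fanins can never be simultaneously valid.

def conjoin(*conditions):
    # Each condition is a frozenset of (cond_op, value) literals with value in {True, False}.
    # Returns the conjunction, or None if it is contradictory.
    merged = set()
    for cond in conditions:
        merged |= cond
    for op, value in merged:
        if (op, not value) in merged:
            return None                     # would require op to be both true and false
    return frozenset(merged)

c_a = frozenset({("op1", True)})                       # fanin conditioned on c(op1)
c_b = frozenset({("op1", True), ("op4", False)})       # fanin conditioned on c(op1) and not c(op4)
print(conjoin(c_a, c_b))                               # condition of the speculative operation
print(conjoin(c_a, frozenset({("op1", False)})))       # None: this version can never be valid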
We now present details of Steps 1, 2, and 3, outlined in Section
3.2.
Step 1: This step addresses the issue of deriving all schedulable
successors of a scheduled operation, op0. The result of Lemma 1 is
used for this procedure.
Observation 1: A set, S, of scheduled op-
erations which satisfies the following condition sources a schedulable
operation.
Condition: There exists an operation, fanout, in the CDFG, all of
whose fanins are reachable from the outputs of the operations in S
through paths which consist exclusively (if at all) of select opera-
tions. The path connecting the output of an operation opj in S to
an input of fanout is denoted by Pj, and the operations on Pj are
the select operations on that path; their number, aj, can equal 0. c(Pj) represents the
condition that path Pj is selected, i.e., the result of operation op j is
propagated through path Pj to the appropriate input of fanout. Operation
fanout is conditioned on the conjunction of c(Pj) over all paths Pj and of C k over all operations opk in S, where C k
represents the expression opk is conditioned on.
Observation 1 can be used to infer the schedulable successors
of an operation. The procedure Compute schedulable successors,
which is called in statement 6 of the pseudocode shown in Figure 7,
is appropriately augmented.
So far, we have described the technique used to identify all
schedulable successors of an operation. This was accomplished
by tagging operations with the conditions under which their results
would be valid. Note that our procedure allows us to speculate
on all possible outcomes of a branch, and arbitrarily deeply
into nested branches. If integrated with a scheduler which supports
loop unrolling, the speculation could also cross loop boundaries.
We now present the technique used to validate or invalidate speculatively
performed operations whose dependencies have just been
resolved.
Step 2: Suppose operation op s, which resolves a condition c, has
just been scheduled. The resolution of c results in the creation of
two different threads of execution, where (i) c = true, and (ii)
c = false. The following procedure is carried out for every operation, op,
which belongs either to the set, Schedulable operations, or the
set of scheduled operations. Let op be conditioned on the expression C.
In the true (false) branch, C is evaluated assuming a value of 1 (0)
for c, and the resultant expression is the new expression that op is
conditioned on.
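Step 2 can be sketched with the same literal-set representation used above (again an illustrative data structure of our own): resolving a conditional operation partially evaluates every tagged condition; an empty remainder means the operation is validated and becomes "normal", a contradiction means it is invalidated and can be dropped from the sets.

def resolve(condition, cond_op, value):
    # condition: frozenset of (op, val) literals; cond_op has just resolved to value.
    if (cond_op, not value) in condition:
        return "invalid"                                   # the speculation failed on this thread
    remaining = frozenset(l for l in condition if l[0] != cond_op)
    return "valid" if not remaining else remaining         # 'valid' = no speculation left

cond_op4 = frozenset({("op1", True)})   # op4 was speculated assuming c(op1) = true
print(resolve(cond_op4, "op1", True))   # 'valid'  -> treat op4 as a normal operation
print(resolve(cond_op4, "op1", False))  # 'invalid'-> drop op4's dependents from Schedulable_operations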
Step 3: We now describe the procedure employed by the scheduler
to select an operation to schedule, from a pool of schedulable oper-
ations, Schedulable operations. Schedulable operations can contain
operations which are conditioned on different sets of events,
i.e., we can choose different paths to speculate upon. We need to
decide the "best" candidate to map to a given resource, where, by
best, we mean the operation whose mapping on the given resource
would minimize the expected number of cycles for the schedule.
Formally, the problem can be stated as follows: given (i) a partial
schedule, (ii) a functional unit, fu, (iii) a set of operations, S (some
of which may be speculative), which can execute on the functional
unit, and (iv) typical input traces, select the operation, which, if
mapped to fu, would minimize the expected number of cycles.
The above problem has been proven to be NP-complete, even
for conditional- and loop-free behavioral descriptions [8]. We,
therefore, use the following heuristic, whose guiding principle has
been successfully employed by several scheduling algorithms [9].
The heuristic is based on the following premise: operations in the
CDFG which feed primary outputs through long paths are more
critical than operations which feed primary outputs through short
paths and, therefore, need to be scheduled earlier. The rationale behind
this heuristic is that operations which belong to short paths are
more mobile than those on long paths, i.e., the total schedule length
is less sensitive to variations in their schedules. The length of a path
is measured as the sum of the delays of its constituent operations.
In data-dominated descriptions, with no loops and conditional
operations, the longest path between any pair of operations is fixed.
In control-flow intensive descriptions, some paths could be input-
dependent. Therefore, the longest path between a pair of operations
must be defined with respect to a given input. For example, for the
CDFG shown in Figure 3, the longest path connecting primary input
c with output out depends upon the value taken by operation
> 1. Since our scheduling algorithm is geared towards minimizing
the average execution time, we use the expected length of the
longest path from an operation to a primary output as a metric to
rank different operations. We use the notation l(op) to denote this
quantity for operation op.
Speculation adds a new dimension to this problem: the result
computed by an operation is not guaranteed to be useful.
Table 1: Expected number of cycles, number of states, best- and
worst-case number of cycles results (columns: Circuit, E.N.C., #states, bc, wc;
minor columns WS and SP under each metric)
Table 2: Allocation constraints for the examples in Table 1 (columns: Circuit,
add1, sub1, mult1, comp1, eqc1, inc1)
For an operation, op, we account for this effect by multiplying the probability
that an operation's output is utilized with l(op) to derive a
metric of an operation's criticality. This is expressed by means of
the following equation:
criticality(op) = p(op) · l(op),
where criticality(op) measures the desirability of scheduling op, p(op)
is the product of the probabilities of the events that op is
conditioned on, and l(op) is as defined above.
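The selection rule of Step 3 then amounts to the following Python sketch (the probabilities and path lengths below are hypothetical inputs; in the scheduler they come from profiling data and the expected longest-path computation defining l(op)):

def criticality(event_probs, path_length):
    # event_probs: probabilities of the events the operation is conditioned on
    # (empty for a non-speculative operation); path_length: l(op).
    p = 1.0
    for q in event_probs:
        p *= q
    return p * path_length

candidates = {                    # hypothetical candidates competing for one adder
    "+1": ([0.7], 5.0),           # speculative, likely, long remaining path
    "+2": ([0.3], 6.0),           # speculative, unlikely
    "+3": ([],    2.0),           # non-speculative, short remaining path
}
scores = {op: criticality(p, l) for op, (p, l) in candidates.items()}
print(scores, "-> schedule", max(scores, key=scores.get))   # +1 wins (3.5 vs 1.8 vs 2.0)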
4 Experimental Results
The techniques described in this paper were implemented in a
program called Wavesched-spec, written in C++. We evaluated this
program by using it to produce schedules for several commonly
available benchmarks. These schedules were compared against
those produced by the scheduling algorithm, Wavesched [6], without
the use of speculative execution, with respect to the following
metrics: (a) expected number of cycles, (b) number of states in
the STG produced, (c) the smallest number of cycles taken to execute
the behavioral description, and (d) the largest number of cycles
taken to execute the behavioral description. In general, finding
the largest number of cycles taken to execute a behavioral description
is a hard problem. However, for the examples considered in
this paper, static analysis of the description was sufficient to find
the number.
Table 1 summarizes the results obtained. The columns labeled
E.N.C., #states, bc, and wc represent, respectively, the expected
number of cycles, the number of states in the STG produced, the smallest
number of cycles taken to execute the STG, and the largest number
of cycles taken to execute the STG. Minor columns WS and SP give the results
produced by Wavesched and Wavesched-
spec, respectively. We used a library of functional units which
consisted of (a) an adder, add1, (b) a subtracter, sub1, (c) a mul-
tiplier, mult1, (d) a less-than comparator, comp1, (e) an equality
comparator, eqc1, and (f) an incrementer, inc1. Unlimited numbers
of single-input logic gates (OR, AND, and NOT) were assumed to
be available. All functional units except mult1, which executes in
two cycles, take one cycle to execute. The allocation constraints for
an example can be found by looking up the entry corresponding to
the example in Table 2. For example, the allocation constraints for
GCD are two sub1, one comp1, and two eqc1.
The expected number of cycles for the final design was measured
by simulating a VHDL description of the schedule using the
SynopsysVSS simulator. The input traces used for simulation were
obtained as zero-mean Gaussian sequences.
Of our examples, Barcode, GCD, TLC, and Findmin are borrowed
from the literature. Test1 is the example shown in Figure 1.
Barcode represents a barcode reader, GCD computes the greatest
common divisor of its inputs, TLC represents a traffic light con-
troller, and Findmin returns the index of the minimum element in
an array.
The results obtained indicate that Wavesched-spec produced an
average expected schedule length speedup of 2.8 over schedules
obtained using Wavesched. Note that Wavesched [6] was reported
to have achieved an average speedup of 2 over schedules produced
by existing scheduling algorithms, such as path-based scheduling
[10], and loop-directed scheduling [7]. To get an idea of the
area overhead of this technique, we obtained a 16-bit RTL implementation
for the GCD example using an in-house high-level synthesis
system, for the schedules produced by Wavesched-spec and
Wavesched. These RTL circuits were technology-mapped using the
MSU library, and the area of the gate-level circuits were obtained.
The area overhead for the circuit produced from Wavesched-spec
was found to be only 3.1%. We also note that for Wavesched-spec,
the number of cycles in the shortest and longest paths is smaller
than or equal to the corresponding number for Wavesched.
Conclusions
In this paper, we presented a technique for incorporating speculative
execution into scheduling of control-flow intensive designs.
We demonstrated that in order to fully exploit the power of speculative
execution, one needs to integrate it with scheduling. We introduced
a node-tagging scheme for the identification of operations
which can be speculatively scheduled in a given state, and a heuristic
to select the "best" operation to schedule. Our techniques were
fully integrated into an existing scheduling algorithm which can
support implicit unrolling of loops, functional pipelining of control-flow
intensive behaviors, and can parallelize the execution of independent
loops whose bodies share resources. Experimental results
demonstrate that the presented techniques can improve the performance
of the generated schedule significantly. Schedules produced
using speculative execution were, on an average, 2.8 times faster
than schedules produced without its benefit.
--R
"Experiments with low-level speculative computation based on multiple branch prediction,"
"Global scheduling independent of control dependenciesbased on condition vectors,"
"Combining MBP-speculative computation and loop pipelining in high-level synthesis,"
"Trace scheduling: A technique for global microcode compaction,"
"Sentinel scheduling: A model for compiler-controlled speculative execution,"
"Wavesched: A novel scheduling technique for control-flow intensive behavioral de- scriptions,"
"Performance analysis and optimization of schedules for conditional and loop-intensive specifica- tions,"
Computers and Intractibility.
"Empirical evaluation of some high-level synthesis scheduling heuristics,"
"Path-based scheduling for synthesis,"
--TR
Global scheduling independent of control dependencies based on condition vectors
Empirical evaluation of some high-level synthesis scheduling heuristics
Sentinel scheduling
Performance analysis and optimization of schedules for conditional and loop-intensive specifications
Wavesched
Computers and Intractability
Combining MBP-speculative computation and loop pipelining in high-level synthesis
--CTR
Sumit Gupta , Nick Savoiu , Sunwoo Kim , Nikil Dutt , Rajesh Gupta , Alex Nicolau, Speculation techniques for high level synthesis of control intensive designs, Proceedings of the 38th conference on Design automation, p.269-272, June 2001, Las Vegas, Nevada, United States
Sumit Gupta , Nikil Dutt , Rajesh Gupta , Alex Nicolau, Dynamic Conditional Branch Balancing during the High-Level Synthesis of Control-Intensive Designs, Proceedings of the conference on Design, Automation and Test in Europe, p.10270, March 03-07,
Sumit Gupta , Nick Savoiu , Nikil Dutt , Rajesh Gupta , Alex Nicolau , Timothy Kam , Michael Kishinevsky , Shai Rotem, Coordinated transformations for high-level synthesis of high performance microprocessor blocks, Proceedings of the 39th conference on Design automation, June 10-14, 2002, New Orleans, Louisiana, USA
Satish Pillai , Margarida F. Jacome, Compiler-Directed ILP Extraction for Clustered VLIW/EPIC Machines: Predication, Speculation and Modulo Scheduling, Proceedings of the conference on Design, Automation and Test in Europe, p.10422, March 03-07,
Sumit Gupta , Nick Savoiu , Nikil Dutt , Rajesh Gupta , Alex Nicolau, Conditional speculation and its effects on performance and area for high-level snthesis, Proceedings of the 14th international symposium on Systems synthesis, September 30-October 03, 2001, Montral, P.Q., Canada
Soha Hassoun, Fine grain incremental rescheduling via architectural retiming, Proceedings of the 11th international symposium on System synthesis, p.158-163, December 02-04, 1998, Hsinchu, Taiwan, China
Luiz C. V. dos Santos , Jochen A. G. Jess, Exploiting state equivalence on the fly while applying code motion and speculation, Proceedings of the conference on Design, automation and test in Europe, p.120-es, January 1999, Munich, Germany
Darko Kirovski , Miodrag Potkonjak, Engineering change: methodology and applications to behavioral and system synthesis, Proceedings of the 36th ACM/IEEE conference on Design automation, p.604-609, June 21-25, 1999, New Orleans, Louisiana, United States
Srivaths Ravi , Ganesh Lakshminarayana , Niraj K. Jha, Removal of memory access bottlenecks for scheduling control-flow intensive behavioral descriptions, Proceedings of the 1998 IEEE/ACM international conference on Computer-aided design, p.577-584, November 08-12, 1998, San Jose, California, United States
Sumit Gupta , Nikil Dutt , Rajesh Gupta , Alexandru Nicolau, Loop Shifting and Compaction for the High-Level Synthesis of Designs with Complex Control Flow, Proceedings of the conference on Design, automation and test in Europe, p.10114, February 16-20, 2004 | high-level synthesis;telecommunication |
277563 | Minimization of Communication Cost Through Caching in Mobile Environments. | AbstractUsers of mobile computers will soon have online access to a large number of databases via wireless networks. Because of limited bandwidth, wireless communication is more expensive than wire communication. In this paper, we present and analyze various static and dynamic data allocation methods. The objective is to optimize the communication cost between a mobile computer and the stationary computer that stores the online database. Analysis is performed in two cost models. One is connection (or time) based, as in cellular telephones, where the user is charged per minute of connection. The other is message based, as in packet radio networks, where the user is charged per message. Our analysis addresses both the average case and the worst case for determining the best allocation method. | Introduction
Users of mobile computers, such as palmtops, notebook computers and personal communication
systems, will soon have online access to a large number of databases via wireless networks. The
potential market for this activity is estimated to be billions of dollars annually, in access and
communication charges. For example, while on the road, passengers will access airline and other
carriers schedules, and weather information. Investors will access prices of financial instruments,
salespeople will access inventory data, callers will access location dependent data (e.g. where
is the nearest taxi-cab, see [10, 24]) and route-planning computers in cars will access traffic
information.
Because of limited bandwidth, wireless communication is more expensive than wire commu-
nication. For example, a cellular telephone call costs about 35 cents per minute. As another
example, RAM Mobile Data Corp. charges on average $0.08 per data message to or from the
mobile computer (the actual charge depends on the length of the message). It is clear that for
users that perform hundreds of accesses each day, wireless communication can become very ex-
pensive. Therefore, it is important that mobile computers access online databases in a way that
minimizes communication.
We assume that an online database is a collection of data items, where a data item is, for
example, a web-page or a file. Users access these data items by a unique id, such as a key, one at
a time. We minimize communication using an appropriate data-allocation scheme. For example,
if a user frequently reads a data-item x, and x is updated infrequently, then it is beneficial for
the user to allocate a copy of x to her/his mobile computer. In other words, the mobile user
subscribes to receive all the updates of x. This way the reads access the local copy, and do
not require communication. The infrequent updates are transmitted from the online database
to the mobile computer. In contrast, if the user reads x infrequently compared to the update
rate, then a copy of x should not be allocated to the mobile computer. Instead, access should be
on-demand; every read request should be sent to the stationary computer that stores the online
database.
Thus, one-copy and two-copies are the two possible allocation schemes of the data item x to
a mobile computer. In the first scheme, only the stationary computer has a copy of x, whereas in
the second scheme both, the stationary and the mobile computer have a copy of x. An allocation
method determines whether or not the allocation scheme changes over time. In a static allocation
method the allocation scheme does not change over time, whereas in a dynamic one it does. The
following is an example of a dynamic allocation method. The allocation scheme changes from
two-copies to one-copy as a result of a larger number of writes than reads in a window of four
minutes.
In mobile computing the geographical area is usually divided into cells, each of which has
a stationary controller. Our stationary computer should not be confused with the stationary
controller. The stationary computer is some node in the stationary network that is fixed for a
given data item, and it does not change when the mobile computer moves from cell to cell.
In this paper we analyze two static allocation methods, namely the one that uses the one-copy
scheme and the one that uses the two-copies scheme; and a family of dynamic data allocation
methods. These methods are suggested by the need to select the allocation scheme according to
the read/write ratio: if the reads are more frequent then the methods use the two-copies allocation
scheme, otherwise they use the one-copy scheme. The family consists of all the methods that
allocate and deallocate a copy of a data item to the mobile computer based on a sliding window of
k requests. For every read or update (we often refer to updates as writes) the latest k requests are
examined. If the number of reads is higher than the number of writes and the mobile computer
does not have a copy, then such a copy is allocated to the mobile computer; if the number of
writes is higher than the number of reads and the mobile computer does have a copy, then the
copy is deallocated. Thus, the allocation scheme is dynamically adjusted according to the relative
frequencies of reads and writes.
The algorithms in this family are distributed, and they are implemented by software residing
on both, the mobile and the stationary computers. The different algorithms in this family differ
on the size of the window, k.
Our analysis of the static and dynamic algorithms addresses both worst-case, and the expected
case for reads and writes that are Poisson-distributed. Furthermore, this analysis is done in two
cost models. The first is connection (or time) based, where the user is charged per minute of
cellular telephone connection. In this model, if the mobile computer reads the item from the
stationary database computer, then the read-request as well as the response are executed within
one connection of minimum length (say one minute). If writes are propagated to the mobile
computer, then this propagation is also executed within one minimum-length connection.
The second cost model is message based. In this model the user is charged per message,
and the exact charge depends on the length of the message. Therefore, in this model we distinguish
between data-messages that are longer, and control-messages that are shorter. Data-
messages carry the data-item, and control messages only carry control information, specifically
read-requests (from the mobile computer to the stationary computer) and delete-requests (the
delete-request is a message that deallocates the copy at the mobile computer). Thus a remote
read-request necessitates one control message, and the response necessitates a data message. A
write propagated to the mobile computer necessitates a data-message.
The rest of the paper is organized as follows. In the next section we present a summary
of the results of this paper. In section 3 we formally present the model, and in section 4 we
precisely present the sliding-window family of dynamic allocation algorithms. In section 5 we
develop the results in the connection cost model, and in section 6 we develop the results in the
message model. In section 7 we discuss some other dynamic allocation methods, and extensions
to handle read, write operations on multiple data items. In section 8 we compare our work to
relevant literature. In section 9 we discuss the conclusions of our analysis.
2 Summary of the results
We consider a single data item x and a single mobile computer, and we analyze the static
allocation methods ST 1 (mobile computer does not have a copy of x) and ST 2 (mobile computer
does have a copy of x), and the dynamic allocation methods SW k (sliding-window with window size
k).
We assume that reads at the mobile computer are issued according to the Poisson distribution
with the parameter λ_r, namely in each time unit the expected number of reads is λ_r. The writes
at the stationary computer are issued independently according to the Poisson distribution with
the parameter λ_w. Other requests are ignored in this paper since their cost is not affected by the
allocation scheme. We let θ denote λ_w/(λ_r + λ_w).
Our analysis of each one of the algorithms uses three measures. The first, called expected
cost and denoted EXP, gives the expected cost of a read/write request in the case that θ is
known and fixed. The second, called average expected cost and denoted AVG, is important for
the case where θ is unknown or varies over time with equal probability of having any value between
0 and 1. It gives the average expected cost of a request over all possible values of θ.
Our third measure is for the worst case, and it is based on the notion of competitiveness 1
(see [9, 18, 23, 29, 32]) of an on-line algorithm. Intuitively, a data allocation algorithm A is said
to be c-competitive if for any sequence s of read-write requests, the cost of A on s is at most c
times as much as the minimum cost, namely the cost incurred by an ideal offline algorithm that
knows the whole sequence of requests in advance (in contrast our algorithms are online, in the
sense that they service the current request without knowing the next request).
In the remainder of this section we summarize the results for each one of the two cost models
discussed in the introduction. These results will be interpreted and discussed at the intuitive
level in the conclusion section.
2.1 Summary of results in the connection model
In the connection model our results are as follows. For ST 1 the expected cost (i.e. expected
number of connections) per request is 1 − θ; for ST 2 the expected number of connections
per request is θ. For SW k the expected cost per request is θ · α_k + (1 − θ) · (1 − α_k), where
α_k is the probability that the majority of k consecutive requests are reads (the formula for this
probability is in equation 5). Furthermore, we show that for any fixed k, the expected cost of SW k is not lower than
min{1 − θ, θ}. Thus, if θ ≥ 1/2, then the static allocation method ST 1 has the best expected cost
per request, and if θ ≤ 1/2, then the static allocation method ST 2 has the best expected cost per
request.
Next consider the average expected cost. SW k has the best average (over the possible values
of θ) expected cost per request. This cost is 1/4 + 1/(4(k + 2)), and it decreases as k increases, coming
within 6% of the optimum for k = 15. In contrast, ST 1 and ST 2 both have an average expected
cost of 1/2.
For the worst case, we show that ST 1 and ST 2 are not competitive, i.e., the ratio between their
performance and the performance of the optimal, perfect-knowledge algorithm is unbounded. In
contrast, we show that SW k is (k + 1)-competitive, and this competitiveness factor is tight.
In summary, in the worst case the cost of the SW k family of allocation algorithms increases
as k increases, whereas the average expected cost decreases as k increases. The window size k
should be chosen to strike a balance between these two conflicting requirements. For example, a
moderate value of k may provide a reasonable compromise.
1 The traditional worst case complexity as a function of the size of the input is inappropriate since all the
algorithms discussed in this paper have the same complexity under this measure. For example, in the connection
model, for each algorithm there is a sequence of requests of size m on which the algorithm incurs cost m.
2.2 Summary of results in the message passing model
In this model our results are as follows. Let the cost of a data message be 1 and the cost of a
control message be ω, where 0 ≤ ω ≤ 1. For ST 1, the expected cost per request is (1 + ω)(1 − θ),
and for ST 2 the expected cost is θ. For SW 1, the expected cost is θ(1 − θ)(1 + 2ω);
for SW k with k > 1 we derived the expected cost as a function of ω and θ as shown in equation
15 of section 6.3 (footnote 2). From these formulae of the expected costs, we conclude the following. If
θ > (1 + ω)/(1 + 2ω), then ST 1 has the best expected cost; if θ < 2ω/(1 + 2ω), then ST 2 has the best expected
cost; otherwise, namely if 2ω/(1 + 2ω) ≤ θ ≤ (1 + ω)/(1 + 2ω), the SW 1 algorithm has the best expected cost. The
dominance graph of these three strategies is shown in the following figure 1. It indicates the
superior algorithm for each value of θ and ω.
Figure 1: Superiority coverage in message model
Next we consider the average expected cost, and we obtain the following results. ST 1 has
an average expected cost of (1 + ω)/2; ST 2 has an average expected cost of 1/2; SW 1 has an average
expected cost of (1 + 2ω)/6; and the average expected cost of SW k (for k ≠ 1) is given by an equation
of section 6.3, and it has a lower bound of (2 + ω)/8. Then we conclude that, if ω ≤ 0.4, then SW 1
has the best average expected cost; if ω > 0.4, then the average expected cost decreases as the
window size k increases (see corollary 2 in section 6.3).
For the worst case we show that, as in the connection cost model, neither ST 1 nor ST 2 are
competitive. Similarly, we show that the sliding-window algorithm SW 1 is (1 + 2ω)-competitive,
and SW k (for k > 1) is competitive with a factor that increases with the window size k.
In summary, the trade-off between the average expected cost and the worst case is similar
to the connection model. Namely, a dynamic allocation algorithm is superior to the static ones,
with the worst case improving with a decreasing window size; whereas the average expected cost
decreases as the window size increases.
2 The SW 1 algorithm is not a special case of the SW k algorithms, as pointed out at the end of section 4
3 The Model
A mobile computer system consists of a mobile computer MC and a stationary computer SC that
stores the online database. We consider a data item x that is stored at the stationary computer
at all times. Reads and writes are issued at the mobile or stationary computers. Actually, the
reads and writes at the stationary computer may have originated at other computers, but the
origin is irrelevant in our model. Furthermore, we ignore the reads issued by the stationary
computer and the writes issued by the mobile computer, since the cost of each such request is
fixed (zero and one respectively), regardless of whether or not MC has a copy of the data item.
Thus, the relevant requests are writes that are issued by the stationary computer, and reads that
are issued by the mobile computer. A schedule is a finite sequence of relevant requests to the
data item x. For example, w; w is a schedule. When each request is issued, either the
MC has a copy of the data item, or it does not. For the purpose of analysis we assume that
the relevant requests are sequential. In practice they may occur concurrently, but then some
concurrency control mechanism will serialize them, therefore our analysis still holds. We assume
that messages between the stationary computer and each mobile computer are delivered in a
first-in-first-out order.
We consider the following two cost models. The first is called the connection model. In this
model, for each algorithm (static or dynamic) the cost of requests is as follows. If there does not
exist a copy of the data item at the MC when a read request is issued, then the read costs one
connection (since the data item must be sent from the SC). Otherwise the read costs zero. For
a write at the SC, if the MC has a copy of the data item, then the write costs one connection;
otherwise the write costs zero. The total cost of a schedule ψ, denoted by COST(ψ), is the sum
of the costs for all requests in ψ.
The second model is called the message cost model. In this model, we assume that a data
message cost is 1, and a control message cost is ω. Since the length of a control message is not
higher than the length of a data message, 0 ≤ ω ≤ 1. In this model the cost of requests is as
follows. For a read request, if there exists a copy at the MC, then the read does not require
communication; otherwise, it necessitates a control message (which forwards the request to the
SC) and a data message (which transfers the data to the MC), with a total cost of 1 + ω.
For a write request, if the MC does not have a copy of the data item, then the write costs 0.
Otherwise the write costs 1, ω, or 1 + ω, depending on the algorithm and on the result of
the comparison of reads and writes executed by the MC in response to the write request. If the
write is propagated to the MC and the MC does not deallocate its copy in response, then the
cost is 1; if the MC deallocates its copy in response, then the cost is 1 + ω (the ω accounts for the
deallocate request). Finally, as will be explained in the next section, SW 1 does not propagate
writes to the MC; it simply deallocates the copy at the MC at each write request. Then the cost
of the write is ω.
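The following Python sketch tallies message-model costs for a request sequence under a pluggable allocation policy. It is an illustration of the rules above, not the paper's analysis; in particular it charges every deallocation as one extra control message of cost ω, which matches SW k for k > 1 but not the SW 1 optimization just mentioned.

def message_cost(schedule, policy, omega):
    # schedule: sequence of 'r' (read at the MC) and 'w' (write at the SC).
    # policy(history, has_copy) -> True if the MC should hold a copy after this request.
    cost, has_copy, history = 0.0, False, []
    for req in schedule:
        history.append(req)
        if req == "r":
            if not has_copy:
                cost += 1 + omega               # read-request (omega) + data message (1)
            has_copy = policy(history, has_copy)
        else:                                   # write issued at the SC
            if has_copy:
                keep = policy(history, True)
                cost += 1 if keep else 1 + omega  # propagate the write; omega more to deallocate
                has_copy = keep
            # no copy at the MC: the write costs nothing
    return cost

sched = list("rrwrrw")
print(message_cost(sched, lambda h, c: True, 0.2))    # ST2-like: 3.2 (first read plus two writes)
print(message_cost(sched, lambda h, c: False, 0.2))   # ST1-like: 4.8 (1 + omega per read)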
We assume that the reads issued from the MC are Poisson distributed with parameter λ_r,
and the writes issued from the SC are Poisson distributed with parameter λ_w. Denote
λ_w/(λ_w + λ_r) by θ. Observe that, since the Poisson distribution is memoryless, at any point in time θ is the
probability that the next request is a write, and λ_r/(λ_w + λ_r) = 1 − θ is the probability that the next
request is a read.
Suppose that A is a data allocation algorithm, and λ_r and λ_w are the read and write distribution
parameters, respectively. We denote by EXP_A(θ) the expected cost of a relevant request.
Suppose now that θ varies over time with equal probability of having any value between 0 and
1. Then we define the average expected cost per request, denoted AVG_A, to be the mean value
of EXP_A(θ) for θ ranging from 0 to 1, namely
AVG_A = ∫_0^1 EXP_A(θ) dθ.   (1)
The average expected cost should be interpreted as follows. Suppose that time is subdivided
into sufficiently large periods, where in the first period the reads and writes are distributed
with parameters λ¹_r and λ¹_w, and θ¹ = λ¹_w/(λ¹_r + λ¹_w); in the second period the reads and writes are
distributed with parameters λ²_r and λ²_w, and θ² = λ²_w/(λ²_r + λ²_w); etc. Suppose further that each θ_i has
equal probability of having any value between 0 and 1 (i.e. the probability density function
of θ has value 1 everywhere between 0 and 1, and is 0 everywhere else). In other words, each
θ_i is a random number between 0 and 1. Then, when using the algorithm A, the expected
cost of a relevant request over all the periods of time is the integral denoted AVG_A. In other
words, AVG_A is the expected value of the expected cost. One can also argue that AVG_A is the
appropriate objective cost function when θ is unknown and it has equal probability of having
any value between 0 and 1.
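Equation (1) is easy to evaluate numerically; the short Python sketch below approximates AVG_A with a midpoint rule and checks the static expected-cost functions EXP_ST1(θ) = 1 − θ and EXP_ST2(θ) = θ from section 5.1 (both integrate to 1/2).

def avg_expected_cost(exp_cost, steps=100000):
    # Midpoint-rule approximation of the integral of EXP_A(theta) over [0, 1].
    h = 1.0 / steps
    return sum(exp_cost((i + 0.5) * h) for i in range(steps)) * h

print(avg_expected_cost(lambda t: 1 - t))   # ST1: approximately 0.5
print(avg_expected_cost(lambda t: t))       # ST2: approximately 0.5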
For the worst-case study, we take competitiveness as a measure of the performance of an on-line
data allocation algorithm. Formally, a c-competitive data allocation algorithm A is defined
as follows. Suppose that M is the perfect data allocation algorithm that has complete knowledge
of all the past and future requests. Data allocation algorithm A is c-competitive if there exist
two numbers c (≥ 1) and b (≥ 0), such that for any schedule ψ,
COST_A(ψ) ≤ c · COST_M(ψ) + b.
We call c the competitiveness factor of the algorithm A. A competitive algorithm bounds the
worst-case cost of the algorithm to be within a constant factor of the minimum cost.
We say an algorithm A is tightly c-competitive if A is c-competitive, and for any number d < c,
A is not d-competitive.
4 Sliding-window algorithms
The Sliding-Window(k) algorithm allocates and deallocates a copy of the data item x at the
mobile computer. It does so by examining a window of the latest relevant read and write
requests. The window is of size k, and for ease of analysis we assume that k is odd. Recall, the
reads are issued at the mobile computer, and the writes are issued at the stationary computer.
Observe that at any point in time, whether or not the mobile computer has a copy of x, either
the mobile computer or the stationary computer is aware of all the relevant requests. If the mobile
computer has a copy of x, then all the reads issued at the mobile computer are satisfied locally,
and all the writes issued at the stationary computer are propagated to the mobile computer; thus
the mobile computer receives all the relevant requests. Else, i.e. if the mobile computer does not
have a copy, then all reads issued at the mobile computer are sent to the stationary computer;
thus the stationary computer receives all the relevant requests.
Thus, either the mobile computer or the stationary computer (but not both) is in charge of
maintaining the window of k requests. The window is tracked as a sequence of k bits (e.g. 0
represents a read and 1 represents a write). At the receipt of any relevant request, the computer
in charge drops the last bit in the sequence and adds a bit representing the current operation.
Then it compares the number of reads and the number of writes in the window.
If the number of reads is bigger than the number of writes and there is a copy of x at the
mobile computer, then the SW k algorithm simply waits for the next operation. If the number
of reads is bigger than the number of writes and there is no copy at the mobile computer (i.e.
the stationary computer is in charge), then such a copy is allocated as follows. Observe that the
last request must have been a read. The stationary computer responds to the read request by
sending a copy of x to the mobile computer. The SW k algorithm piggybacks on this message (1)
an indication to save the copy in MC's local database, in which the SC also commits to propagate
further writes to the MC, and (2) the current window of requests. From this point onwards, the
MC is in charge.
If the number of writes is bigger than the number of reads and there is no copy of x at the
MC, then the SW k algorithm waits for the next request. If the number of writes is bigger than
the number of reads and there is a copy of x at the MC (i.e. the MC is in charge), then the
copy is deallocated as follows. The SW k algorithm sends to the SC (1) an indication that the SC
should not propagate further writes to the MC, and (2) the current window of requests. From
this point onwards the SC is in charge.
This concludes the description of the algorithm, and at this point we make two remarks. First,
when the window size is 1 and the MC has a copy of x, then a write at the SC will deallocate
the copy (since the window will consist of only this write). Therefore, instead of sending to the
MC a copy of x, the SC simply sends the delete-request that deallocates the copy at the MC.
Thus, SW 1 denotes the algorithm so optimized. Observe that SW 1 is the classic write-invalidate
protocol.
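The decision logic of the sliding-window algorithm can be summarized in code. The sketch below is an illustrative simulation of our own (class and variable names, and the initial window contents, are assumptions, not taken from the paper): it keeps the window of the last k relevant requests and switches between the one-copy and two-copies schemes according to the read/write majority.

```python
from collections import deque

class SlidingWindow:
    """Simulation sketch of SW(k): 'r' = read at the MC, 'w' = write at the SC.
    has_copy == True means the two-copies scheme is in effect (copy at the MC)."""

    def __init__(self, k, has_copy=False):
        assert k % 2 == 1, "k is assumed odd for ease of analysis"
        self.k = k
        self.window = deque(['w'] * k, maxlen=k)  # assumed initial window contents
        self.has_copy = has_copy

    def request(self, op):
        # The computer in charge drops the oldest entry and records the new request.
        self.window.append(op)
        reads = sum(1 for x in self.window if x == 'r')
        writes = self.k - reads
        if reads > writes and not self.has_copy:
            # allocate: the SC answers the read with the data item and hands over the window
            self.has_copy = True
        elif writes > reads and self.has_copy:
            # deallocate: the MC tells the SC to stop propagating writes
            self.has_copy = False

if __name__ == "__main__":
    sw = SlidingWindow(k=3)
    for op in "rrrwwwrr":
        sw.request(op)
        print(op, "copy at MC:", sw.has_copy)
```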
5 Connection cost model
In this section we analyze the algorithms in the connection cost model. The section is divided
into 3 subsections. In the first subsection, we probabilistically study the static data allocation
algorithms, and in the second we study the family of sliding window algorithms. In each of
these subsections we derive the expected cost first, then the average expected cost, and then we
compare the algorithms based on these measures. Finally, in section 5.3 we analyze the worst
case performance of all the algorithms.
5.1 Probabilistic analysis of the static algorithms
For the ST 1 algorithm, a write request costs 0 connections, and a read request costs 1 connection. For the
ST 2 algorithm, every write costs 1 connection, and every read costs 0. Hence, EXP_ST_1(θ) and EXP_ST_2(θ)
are simply equal to the probabilities that a request is a read and a write, respectively. Thus,
  EXP_ST_1(θ) = 1 − θ  and  EXP_ST_2(θ) = θ.   (2)
Concerning the average expected cost, by equation 1 and equation 2 we obtain
  AVG_ST_1 = 1/2  and  AVG_ST_2 = 1/2.   (3)
5.2 Probabilistic analysis of the SW k algorithms
In this section we derive the expected cost of the SW k algorithms, and we show that for each k
and for each ', the SW k algorithm has a higher expected cost than one of the static algorithms.
Then we derive the average expected cost of the SW k algorithms, and we show that for any k
the SW k algorithm has a lower average expected cost than both static algorithms. Also, we show
that the average expected cost of the SW k algorithms decreases when k increases.
Recall that we are assuming that the size of the window, k = 2n + 1, is an odd number.
At any point in time, the probability that there exists a copy at the MC (which we denote by α_k )
is the probability that the majority among the preceding k requests are reads, and this is
the same as the probability that the number of writes in the preceding k requests is less than or
equal to n, namely
  α_k = Σ_{i=0}^{n} C(k, i) · θ^i · (1 − θ)^(k−i),   (4)
where C(k, i) denotes the binomial coefficient.
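The binomial sum in equation 4 is easy to evaluate directly; the helper below is our own illustration, not code from the paper.

```python
from math import comb

def alpha(k, theta):
    """Probability that the majority of the preceding k = 2n + 1 requests are reads,
    i.e. that the number of writes is at most n, when each request is independently
    a write with probability theta."""
    n = (k - 1) // 2
    return sum(comb(k, i) * theta**i * (1 - theta)**(k - i) for i in range(n + 1))

if __name__ == "__main__":
    print(alpha(3, 0.3))   # mostly reads: a copy is usually kept at the MC
    print(alpha(3, 0.7))   # mostly writes: a copy is usually absent
```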
Theorem 1 For every k and for every θ, the expected cost of the SW k algorithm is
  EXP_SW_k(θ) = α_k · θ + (1 − α_k) · (1 − θ).   (5)
Proof: Let us consider a single request, q. When there is a copy at the MC, then the expected
cost of q is equal to the probability that q is a write operation, and it equals θ. When there is no
copy at the MC, the expected cost of q is the probability that q is a read operation, namely 1 − θ. The expected cost of q is the probability that
there is a copy at the MC times the expected cost of q when there is a copy at the MC, plus the
probability that there is no copy at the MC times the expected cost of q when there is no copy
at the MC. Thus, we conclude the theorem. 2
The next theorem compares the expected costs of the SW k and the static algorithms.
Theorem 2 For every k and every θ, EXP_SW_k(θ) ≥ min{EXP_ST_1(θ), EXP_ST_2(θ)}.
Proof: From equations 2 and 5 it follows that EXP_SW_k(θ) is a weighted average of EXP_ST_1(θ) and
EXP_ST_2(θ), with weights α_k and 1 − α_k . The theorem follows due to the fact that the weighted average of two values is not smaller than the
minimum of the two values. 2
Now let us consider the average expected costs.
Theorem 3 For the sliding-window algorithm with window size k, SW k , the average expected
cost per request is
  AVG_SW_k = ∫_0^1 EXP_SW_k(θ) dθ = (k + 3) / (4 · (k + 2)).   (6)
Proof: Our derivation of equation 6 uses the following identity for positive integers a and b,
  ∫_0^1 x^a · (1 − x)^b dx = a! · b! / (a + b + 1)!.   (7)
Using equation 5, it is straightforward to show that
  ∫_0^1 EXP_SW_k(θ) dθ = 1/2 + 2 · ∫_0^1 α_k(θ) · θ dθ − ∫_0^1 α_k(θ) dθ.   (8)
Using equation 4 and the identity given by equation 7 and after some algebraic simplifications,
it can be shown that
  ∫_0^1 α_k(θ) · θ dθ = (n + 2) / (4 · (2n + 3))   (9)
and
  ∫_0^1 α_k(θ) dθ = 1/2.   (10)
Substituting for these two integrals in equation 8, and after some simplification, we get the
result given by equation 6. 2
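As a sanity check, the closed form in equation 6 can be compared against a direct numerical integration of the expected cost of equation 5; the snippet below is our own illustration.

```python
from math import comb

def alpha(k, theta):
    n = (k - 1) // 2
    return sum(comb(k, i) * theta**i * (1 - theta)**(k - i) for i in range(n + 1))

def exp_sw(k, theta):
    # connection model: a write costs 1 when a copy exists, a read costs 1 when it does not
    a = alpha(k, theta)
    return a * theta + (1 - a) * (1 - theta)

def avg_sw(k, steps=20000):
    h = 1.0 / steps
    return sum(exp_sw(k, (i + 0.5) * h) for i in range(steps)) * h

if __name__ == "__main__":
    for k in (1, 3, 5, 9):
        print(k, round(avg_sw(k), 4), round((k + 3) / (4 * (k + 2)), 4))
```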
Corollary 1 The average expected cost of the SW k algorithms decreases when the window size
k increases, and AVG_SW_k < AVG_ST_1 = AVG_ST_2 for any k ≥ 1.
Proof: From theorem 3, it is easy to see that AVG_SW_k decreases when k increases, and
AVG_SW_1 = 1/3 < 1/2. From equations 3 in section 5.1, we conclude the corollary. 2
5.3 Worst case analysis in connection model
In this section we show that the static algorithms, ST 1 and ST 2 , are not competitive. Then
we show that the SW k algorithm is (k + 1)-competitive. Therefore, our competitiveness study
suggests that for optimizing the worst case, one has to choose the sliding window algorithm with
a small window size k.
First, let's consider the two static strategies. For the ST 1 algorithm, we can pick a long
schedule which consists of only reads. Then the cost of the ST 1 algorithm is un-boundedly
higher than the cost of the optimal algorithm on this schedule (which is 0 if we keep a copy at
the MC). For the ST 2 algorithm, we can also pick a long schedule which consists of only writes.
Then the cost of the ST 2 algorithm on this schedule is also un-boundedly higher than the optimal
cost (which is 0 if we do not keep a copy at the MC). Therefore, the static algorithms, ST 1 and
are not competitive.
Theorem 4 The sliding-window algorithm SW k is tightly (k + 1)-competitive in the connection cost model.
Proof: We prove this by showing that for any schedule σ of requests, COST_SW_k(σ) ≤ (k + 1) · (N_σ + 1),
where N_σ is the number of read requests in σ that occur immediately after a write
request. We will also exhibit a schedule σ_0 for which COST_SW_k(σ_0) ≥ (k + 1) · N_σ_0 .
Since it can be shown that the cost of an optimal off-line algorithm on a schedule σ is N_σ , it
follows that SW k is tightly (k + 1)-competitive. As before, we assume throughout the proof that k = 2n + 1.
First we prove that COST_SW_k(σ) ≤ (k + 1) · (N_σ + 1). Let σ be a schedule consisting of
read and write requests. Let N_σ be the number of read requests in σ that occur immediately after
a write request. We divide the schedule σ into maximal blocks consisting of similar requests.
Formally, let B_1, B_2, ..., B_r be the division of σ into blocks such that the requests in any block
are all reads or they are all writes, and successive blocks have different requests.
It should be easy to see that the total number of read blocks in σ, i.e. blocks that only
contain read requests, is less than or equal to (N_σ + 1). Similarly, the total number of write
blocks in σ is less than or equal to (N_σ + 1). Now, we analyze the cost of read and write requests
separately. Consider any read block B_i . It should be easy to see that only the first n + 1 reads in
B_i may each incur a connection. After the first n + 1 reads the window will definitely have more
reads than writes, and the algorithm will maintain two copies (consequently further reads in the
block do not cost any connections). Thus the cost of executing all the reads in B_i is bounded
by (n + 1). Hence the cost of all the reads in σ is bounded above by (n + 1) · (N_σ + 1). By a
similar argument, it can be shown that the cost of all the writes in a write block is bounded by
(n + 1). As a consequence, the cost of all the writes in σ is bounded by (n + 1) · (N_σ + 1).
Hence, COST_SW_k(σ) ≤ 2 · (n + 1) · (N_σ + 1) = (k + 1) · (N_σ + 1); rearranging the terms, we
get COST_SW_k(σ) ≤ (k + 1) · N_σ + (k + 1).
To show that the above bound is tight, assume that initially there is a single copy of the
data item. Consider a schedule σ_0 that starts with a block of read requests, ends with a block
of write requests, and in each block there are exactly k requests. It should be easy to see that
COST_SW_k(σ_0) ≥ (k + 1) · N_σ_0 , which establishes that the bound is tight. 2
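The worst-case behaviour can also be explored experimentally. The simulation below is our own illustration: it runs SW k on the adversarial schedule described above (alternating blocks of k reads and k writes, assuming an all-write initial window) and compares its connection cost with (k + 1) times the off-line optimum N_σ.

```python
def sw_cost_connection(k, schedule, has_copy=False):
    """Connection cost of SW(k) on a schedule of 'r' (read at MC) / 'w' (write at SC)."""
    window, cost = ['w'] * k, 0          # assumed initial window contents
    for op in schedule:
        if op == 'r' and not has_copy:
            cost += 1                    # remote read is one connection
        if op == 'w' and has_copy:
            cost += 1                    # propagating the write is one connection
        window = window[1:] + [op]
        reads = window.count('r')
        has_copy = reads > k - reads
    return cost

def offline_optimum(schedule):
    # N_sigma: read requests that occur immediately after a write request
    return sum(1 for prev, cur in zip(schedule, schedule[1:]) if prev == 'w' and cur == 'r')

if __name__ == "__main__":
    k, blocks = 5, 50
    sigma = ('r' * k + 'w' * k) * blocks
    print(sw_cost_connection(k, sigma), (k + 1) * offline_optimum(sigma))
```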
6 Message cost model
This section is divided into 4 subsections. In the first subsection we probabilistically analyze
the static algorithms, in the second we analyze SW 1 , and in the third we analyze the family of
sliding window algorithms SW k for k > 1. 3 In each one of the first three subsections we study
the algorithm's expected cost first, then the average expected cost. We also study the relation
among the expected costs of all the static and dynamic algorithms; and the relation among the
average expected costs. In subsection 6.4, we study the worst case of all the algorithms.
Recall that in this model we assume that a data message costs 1, and a control message
costs ω, where ω ranges from 0 to 1.
6.1 Probabilistic analysis of the static algorithms
For the ST 1 algorithm, the write does not require any communication, whereas the read costs
(1 + ω); for the ST 2 algorithm, every write costs 1, the read costs 0. So,
  EXP_ST_1(θ) = (1 − θ) · (1 + ω)  and  EXP_ST_2(θ) = θ,   (11)
and consequently AVG_ST_1 = (1 + ω)/2 and AVG_ST_2 = 1/2.   (12)
3 As mentioned at the end of section 4, SW 1 is not simply SW k with k = 1. In this cost model this difference
in the algorithms results in a different analysis, thus the need for a separate subsection dedicated to the analysis
of SW 1 .
6.2 Probabilistic analysis of the SW 1 algorithm
First we derive the expected cost of a relevant request.
Theorem 5 The expected cost of the SW 1 algorithm is
  EXP_SW_1(θ) = (1 + 2ω) · θ · (1 − θ).   (13)
Proof: For the SW 1 algorithm, a read that immediately follows a write costs (1 + ω) (ω for
the control message that conveys the read request, and 1 for the data message); a write that
immediately follows a read costs ω (the cost of a control message deallocating the copy at
the MC). No other relevant requests cost any communication. Therefore, the expected cost of a
request q is the expected cost of a read that immediately follows a write times the probability of
q being such a read, plus the expected cost of a write that immediately follows a read times the
probability of q being such a write, namely, EXP_SW_1(θ) = (1 + ω) · θ · (1 − θ) + ω · θ · (1 − θ) = (1 + 2ω) · θ · (1 − θ). 2
In the next theorem we study the relation of the expected costs of three algorithms, i.e.,
EXP_ST_1(θ), EXP_ST_2(θ), and EXP_SW_1(θ). The results of this theorem are graphically illustrated
in Figure 1.
Theorem 6 The expected costs EXP_SW_1(θ), EXP_ST_1(θ), and EXP_ST_2(θ) are related as follows,
depending on θ and ω.
1. If θ ≥ (1 + ω)/(1 + 2ω), then EXP_ST_1(θ) ≤ min{EXP_SW_1(θ), EXP_ST_2(θ)}.
2. If 2ω/(1 + 2ω) ≤ θ ≤ (1 + ω)/(1 + 2ω), then EXP_SW_1(θ) ≤ min{EXP_ST_1(θ), EXP_ST_2(θ)}.
3. If θ ≤ 2ω/(1 + 2ω), then EXP_ST_2(θ) ≤ min{EXP_ST_1(θ), EXP_SW_1(θ)}.
Proof: This is a straightforward algebraic derivation that uses equations 11 and 13, and the fact
that 0 ≤ ω ≤ 1. 2
Now we are ready to consider the average expected cost.
Theorem 7 The average expected cost of the SW 1 algorithm is
  AVG_SW_1 = (1 + 2ω)/6,   (14)
and AVG_SW_1 ≤ min{AVG_ST_1 , AVG_ST_2}.
Proof: Equation 14 can be easily obtained from equation 13, based on the definition of the
average expected cost (equation 1). Since 0 ≤ ω ≤ 1, we obtain (1 + 2ω)/6 ≤ 1/2 ≤ (1 + ω)/2. From the
equations 12 in section 6.1, we conclude the theorem. 2
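To make the three-way comparison of Theorem 6 concrete, the sketch below (our own illustration) evaluates the message-model expected costs of ST 1 , ST 2 and SW 1 given by equations 11 and 13 for a chosen θ and ω, and reports the cheapest one.

```python
def exp_st1(theta, omega):
    return (1 - theta) * (1 + omega)   # every read needs a control + data message

def exp_st2(theta, omega):
    return theta                        # every write propagates one data message

def exp_sw1(theta, omega):
    return (1 + 2 * omega) * theta * (1 - theta)

def best_of_three(theta, omega):
    costs = {"ST1": exp_st1(theta, omega),
             "ST2": exp_st2(theta, omega),
             "SW1": exp_sw1(theta, omega)}
    return min(costs, key=costs.get), costs

if __name__ == "__main__":
    for theta in (0.1, 0.5, 0.9):
        print(theta, best_of_three(theta, omega=0.25))
```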
6.3 Probabilistic analysis of SW k
In this section we consider the SW k algorithms, for k > 1. First, we derive the formula
of the expected cost for SW k . Then we show that for any k and for any θ the expected cost of
SW k is higher than the minimum of the expected costs of SW 1 , ST 1 , and ST 2 . Thus we conclude
that for a known fixed θ, SW k is inferior to the other algorithms. Then we derive the formula
of the average expected cost for SW k . Then we show that SW k has the best average expected
cost for some k ≥ 1, and we determine this optimal k as a function of ω, the cost of a control
message.
Theorem 8 For every k > 1, the expected cost of the SW k algorithm is
  EXP_SW_k(θ) = α_k · θ + (1 + ω) · (1 − α_k) · (1 − θ) + ω · C(k − 1, n) · θ^(n+1) · (1 − θ)^(n+1).   (15)
Proof: Consider a write request w. It costs a data message if there exists a copy at the MC
when the request is issued. The probability of having a copy at the MC is α_k . Additionally, if
the MC deallocates its copy as a result of this write, the write will necessitate a delete message
sent from the MC to the SC. It can be argued (and we omit the details) that this occurs if and
only if the sequence of k requests immediately preceding w starts with a read and has exactly
n writes. Therefore, the expected cost of w is α_k + ω · C(k − 1, n) · θ^n · (1 − θ)^(n+1).
Now consider a read request r. It does not require communication if there is a copy at the
MC when the request is issued. Otherwise, it costs a control message for the request, and a data
message for the response. Thus, the expected cost of r is (1 − α_k) · (1 + ω).
Therefore, the expected cost of a request is the expected cost of a write times the probability
that the request is a write, plus the expected cost of a read times the probability that the request
is a read, namely, EXP_SW_k(θ) = θ · [α_k + ω · C(k − 1, n) · θ^n · (1 − θ)^(n+1)] + (1 − θ) · (1 − α_k) · (1 + ω).
A simple algebraic manipulation of the above expression leads to equation 15. 2
Theorem 9 For any θ and for any k > 1, the expected cost of algorithm SW k is higher than
the expected cost of at least one of the algorithms SW 1 , ST 1 , and ST 2 . Namely, EXP_SW_k(θ) ≥
min{EXP_SW_1(θ), EXP_ST_1(θ), EXP_ST_2(θ)}.
In order to prove this theorem, we need the following three lemmas.
Lemma 1 For any k > 1, if θ ≤ 0.5, then EXP_SW_k(θ) ≥ EXP_ST_2(θ).
Proof: From equations 11 and 15 we derive that EXP_SW_k(θ) − EXP_ST_2(θ) ≥ (1 − α_k) · (1 − 2θ),
which is non-negative when θ ≤ 0.5. 2
Lemma 2 For any θ > 0.5, α_k decreases when k increases, and α_k < α_1 = 1 − θ for any k > 1.
Proof: From the definition of α_k (see equation 4), we can derive an expression for α_{k+2} − α_k (the
details are omitted). From this formula, we see that α_{k+2} − α_k is
negative for all k ≥ 1. Hence α_k decreases with k. As a consequence α_k < α_1
for any k > 1. 2
Lemma 3 For any k > 1 and any θ > 0.5,
1. if ω is at most a certain threshold, then EXP_SW_k(θ) ≥ EXP_ST_1(θ);
2. if ω is at least a certain (smaller) threshold, then EXP_SW_k(θ) ≥ EXP_SW_1(θ).
Proof: EXP_SW_k(θ) − EXP_ST_1(θ) can be bounded from below (see equations 11 and 15) by an
expression in α_k , θ and ω. Based on the above inequality, it is easy to show that if ω is at most the
first threshold, then EXP_SW_k(θ) ≥ EXP_ST_1(θ).
Thus, we have proved the first claim of the lemma.
Similarly, EXP_SW_k(θ) − EXP_SW_1(θ) can be bounded from below.
Based on the above inequality and lemma 2, it is easy to show that if ω is at least the second
threshold, then EXP_SW_k(θ) ≥ EXP_SW_1(θ). 2
Proof of theorem 9: If θ ≤ 0.5, then lemma 1 indicates that EXP_SW_k(θ) ≥ EXP_ST_2(θ) ≥
min{EXP_SW_1(θ), EXP_ST_1(θ), EXP_ST_2(θ)}. If θ > 0.5, then lemma 3 indicates that EXP_SW_k(θ) ≥
min{EXP_ST_1(θ), EXP_SW_1(θ)} ≥ min{EXP_SW_1(θ), EXP_ST_1(θ), EXP_ST_2(θ)}. The
theorem follows. 2
Now, let's consider the average expected cost of the SW k algorithms for k > 1.
Theorem 10 For the SW k algorithm with window size k > 1, the average expected cost is
  AVG_SW_k = ∫_0^1 EXP_SW_k(θ) dθ = (2 + ω) · (k + 3) / (8 · (k + 2)) + ω · (k + 1) / (4 · k · (k + 2)).   (16)
Proof: From the definition of α_k in section 5.2, equation 7 and equation 15, we can derive
equation 16. The tedious intermediate derivation steps are omitted. 2
Corollary 2 For k > 1, AVG_SW_k decreases when k increases, and AVG_SW_k > (2 + ω)/8, its limit as k grows.
Proof: This corollary is straightforward from equation 16. 2
In theorem 7 we have shown that the average expected cost of the SW 1 algorithm is better
(i.e. lower) than that of the static algorithms. In the following corollaries, we analyze when the
average expected cost of SW k (for k > 1) is lower than the average expected cost of SW 1 based
on the two formulae 14 and 16. In corollary 3 below, we show that when ω ≤ 0.4, the average
expected cost of SW k is always higher than that of SW 1 .
Corollary 3 If ω ≤ 0.4, then AVG_SW_k > AVG_SW_1 for any k > 1.
Proof: If ω ≤ 0.4, then (2 + ω)/8 ≥ (1 + 2ω)/6 = AVG_SW_1 . Thus, by theorem 7 and corollary 2, we conclude this
corollary. 2
In the next corollary, we study the case where ω > 0.4. We show that for a given ω > 0.4,
there is some k 0 , such that if k ≥ k 0 , then the average expected cost of SW k is lower than that
of SW 1 . The following figure illustrates the results of the corollaries 3 and 4. For example, for
one value of ω shown in the figure, only when k ≥ 39 does the SW k algorithm have a lower average expected cost than that of
SW 1 ; for another, larger value of ω, only when k ≥ 7 does the SW k algorithm have a lower average expected cost than that
of SW 1 .
Corollary 4 If ω > 0.4, then AVG_SW_k < AVG_SW_1 for any k which satisfies a threshold condition
determined by ω (obtained by comparing equations 16 and 14).
Proof: The corollary follows by algebraic manipulation using equations 14 and 16. 2
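The trade-off described in corollaries 3 and 4 can be examined numerically. The sketch below (our own construction) builds the per-request expected cost of SW k from the case analysis in the proof of theorem 8, integrates it over θ, and reports the smallest window size for which SW k beats SW 1 on average for a given ω.

```python
from math import comb

def alpha(k, theta):
    n = (k - 1) // 2
    return sum(comb(k, i) * theta**i * (1 - theta)**(k - i) for i in range(n + 1))

def exp_swk_msg(k, theta, omega):
    # write: a data message if a copy exists, plus a control message if it deallocates the copy
    n = (k - 1) // 2
    p_dealloc = comb(k - 1, n) * theta**n * (1 - theta)**(n + 1)
    write_cost = alpha(k, theta) + omega * p_dealloc
    # read: a control + data message when no copy exists at the MC
    read_cost = (1 - alpha(k, theta)) * (1 + omega)
    return theta * write_cost + (1 - theta) * read_cost

def avg_msg(cost, steps=2000):
    h = 1.0 / steps
    return sum(cost((i + 0.5) * h) for i in range(steps)) * h

def smallest_k_beating_sw1(omega, k_max=61):
    avg_sw1 = (1 + 2 * omega) / 6
    for k in range(3, k_max + 1, 2):
        if avg_msg(lambda t: exp_swk_msg(k, t, omega)) < avg_sw1:
            return k
    return None   # no k up to k_max beats SW1 (expected whenever omega <= 0.4)

if __name__ == "__main__":
    for omega in (0.3, 0.5, 0.8):
        print(omega, smallest_k_beating_sw1(omega))
```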
6.4 Worst case in message model
In this section, we study the competitiveness of the algorithms ST 1 , ST 2 , and SW k for k ≥ 1,
in the message cost model. The result for SW 1 is stated separately, since it is a special case
(see section 4). We conclude that the static algorithms are not competitive, as is the case in the
connection model. Then we show that SW 1 is more competitive than SW k for k > 1, and we
show that the competitiveness factor of the SW k algorithms deteriorates when k increases; thus
SW 1 performs the best in the worst case.
As in the connection model, we can easily derive that the static algorithms are not competitive
in the message model.
Theorem 11 The algorithm SW 1 is tightly (1 + 2ω)-competitive in the message cost model,
where ω (ω ≤ 1) is the ratio of control message cost to data message cost.
Proof: Similarly to the proof of theorem 4, we let N_σ be the number of reads in σ that occur
immediately after a write, where σ is an arbitrary schedule of requests. It is easy to see that
N_σ is the minimum cost to satisfy all the requests in σ. Let B_1, B_2, ..., B_r be the division of σ
into blocks such that the requests in any block are all reads or they are all writes, and successive
blocks have different requests.
It should be easy to see that the total number of read blocks in σ is less than or equal to
(N_σ + 1), and a read block costs at most (1 + ω) since after the first read the mobile computer will
keep a copy of the data item. The total cost of reads is bounded by (N_σ + 1) · (1 + ω). Similarly,
the total number of write blocks in σ is less than or equal to (N_σ + 1), and a write block costs
only ω since the first write in the block will invalidate the copy at the MC. Thus, the total cost
of writes in σ is bounded by (N_σ + 1) · ω. Hence, COST_SW_1(σ) ≤ (1 + 2ω) · N_σ + (1 + 2ω).
To show that the above bound is tight, assume that initially there is a single copy of the
data item. Consider a schedule σ_0 that starts with a read request, ends with a write request,
and in each block there is exactly 1 request. It should be easy to see that COST_SW_1(σ_0) = (1 + 2ω) · N_σ_0 up to an additive constant, which establishes tightness. 2
Theorem 12 The algorithm SW k (for k > 1) is tightly [(1 + ω) · (n + 1) + (n + 1) + ω]-competitive in the
message cost model, where ω (ω ≤ 1) is the ratio of control message cost to data message cost and k = 2n + 1.
Proof: Similarly to the proofs of theorems 4 and 11, we prove that for any schedule σ of
requests, COST_SW_k(σ) ≤ [(1 + ω) · (n + 1) + (n + 1) + ω] · (N_σ + 1), where N_σ is the
number of read requests in σ that occur immediately after a write request. We will also exhibit
a schedule σ_0 for which COST_SW_k(σ_0) ≥ [(1 + ω) · (n + 1) + (n + 1) + ω] · N_σ_0 − c, where c is a constant.
Since it can be shown that the cost of an optimal off-line algorithm on σ is N_σ , it follows that
SW k is tightly [(1 + ω) · (n + 1) + (n + 1) + ω]-competitive. As before, we assume throughout the proof
that k = 2n + 1.
Let σ be a schedule consisting of read and write requests. We prove that COST_SW_k(σ) ≤
[(1 + ω) · (n + 1) + (n + 1) + ω] · (N_σ + 1). We divide the schedule σ into maximal
blocks consisting of similar requests. Formally, let B_1, B_2, ..., B_r be the division of σ into blocks
such that the requests in any block are all reads or they are all writes, and successive blocks have
different requests.
It should be easy to see that the total number of read blocks in σ, i.e. blocks that only
contain read requests, is less than or equal to (N_σ + 1). Similarly, the total number of write
blocks in σ is less than or equal to (N_σ + 1). Now, we analyze the cost of read and write
requests separately. Consider any read block B_i . It should be easy to see that only the first
n + 1 reads in B_i may each cost (1 + ω). After the first n + 1 reads the window will definitely
have more reads than writes, and the algorithm will maintain two copies and further reads
in the block do not cost any communication. Thus the cost of executing all the reads in B_i
is bounded by (n + 1) · (1 + ω). Hence the cost of all the reads in σ is bounded above by
(n + 1) · (1 + ω) · (N_σ + 1). Now consider a write block B_j . It is easy to see that B_j will cost
at most (n + 1) data messages, since after the first n + 1 writes the window will definitely have
more writes than reads and the copy at the MC will be deallocated, and this deallocation may
cost this block an additional control message. Thus, the cost of a write block is bounded by
(n + 1 + ω). As a consequence, the cost of all the writes in σ is bounded by (n + 1 + ω) · (N_σ + 1).
Hence, COST_SW_k(σ) ≤ [(n + 1) · (1 + ω) + (n + 1 + ω)] · (N_σ + 1);
rearranging the terms, we get COST_SW_k(σ) ≤ [(1 + ω) · (n + 1) + (n + 1) + ω] · N_σ + [(1 + ω) · (n + 1) + (n + 1) + ω].
To show that the above bound is tight, assume that initially there is a single copy of the
data item. Consider a schedule σ_0 that starts with a block of read requests, ends with a block
of write requests, and in each block there are exactly k requests. It should be easy to see that
the cost of SW k on σ_0 matches the above bound up to an additive constant, which establishes tightness. 2
7 Extensions
In this section we discuss various extensions to the previous methods. In particular, in the first
subsection we show how to modify the static algorithms to make them competitive, and in the
second subsection we discuss extensions of the algorithm to optimize the case where multiple
data items can be read and written in a single operation.
7.1 Modifications to the Static Methods
We have presented two simple static methods that use the one-copy and two-copies schemes.
The static methods can be chosen if the value of θ is known in advance. For example, in the
connection model, the static method using a single copy at the stationary computer has the
best expected cost if θ > 0.5. Similarly, the static method using the two-copy scheme has the
best expected cost when θ ≤ 0.5. However, the static methods do not have a good worst case
behavior, i.e. they are not competitive. For example, a static method using a single copy will
incur a high cost on a sequence of requests consisting of only reads from the mobile computer.
This cost can be arbitrarily large, depending on the length of the sequence. Even though such a
sequence is highly improbable, it can occur with nonzero probability.
We can overcome this problem by simple modifications to the static methods, that actually
make them dynamic. For example, we can modify the one-copy static method as follows. It
will normally use the one-copy scheme until m consecutive reads occur; then it changes to the
two-copies scheme and uses this scheme until the next write. Then it reverts back to one-copy
scheme and repeats this process. We refer to this algorithm as T1m . It can be shown that T1m
is (m + 1)-competitive and that its expected cost in the connection model is the sum of EXP_ST_1(θ) and a small additional term.
Note that the second term is the additional expected cost over the static method (it can
be shown that for each θ > 0.5 T1m has a lower expected cost than SWm and they are both
equally competitive). This is the price of competitiveness. Thus, if we know that θ > 0.5 then
we can choose the T1m algorithm instead of ST 1 , for an appropriate value of m.
Similarly, we can modify the ST 2 algorithm to obtain the T2m algorithm that has almost the
same expected cost as ST 2 , and is (m + 1)-competitive.
7.2 Multiple Data Items
In this paper we have addressed the problem of choosing an allocation method for a single data
items. These results can be extended to the case where multiple data items can be read and
written in a single operation.
We will sketch an algorithm that gives an optimal static allocation method, in the connection
model, for multiple data items, when the frequencies of operations on the data items are known
in advance. Assume that multiple data items can be remotely read in one connection; similarly
for the remote writes. We present the algorithm for the case when we have only two data items x
and y. This can be generalized to more than two data items. Also, we discuss how this approach
can be extended to the dynamic window based algorithms.
Assume that we have two data items x and y. We classify the read operations into three
classes. reads of x only, reads of y only, and reads that access both x and y. We assume that
these three different reads occur according to independent Poisson distributions with frequencies
λ_r,x , λ_r,y and λ_r,xy , respectively. We classify the writes similarly and assume that these writes occur
with frequencies λ_w,x , λ_w,y and λ_w,xy , respectively. It is to be noted that λ_r,xy and λ_w,xy denote the
frequencies of joint reads and writes respectively. Now, we have four possible allocation methods
for x and y: ST 1 (both x and y have only one copy), ST 2 (both x and y have two copies), ST 1,2
(x has one copy and y has two copies) and ST 2,1 (x has two copies and y has only one). For each of
these allocation methods we can obtain the expected cost of a single operation using the above
frequencies, and then choose the one with the lowest expected cost. For example, the expected
cost for ST 1 is (λ_r,x + λ_r,y + λ_r,xy ) divided by the sum of all the read and write frequencies, and the
expected cost of ST 1,2 is obtained analogously. The above method can be generalized to any finite
set of data items. We need the frequencies of various joint operations on these data items.
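A small sketch of this selection procedure for two data items follows (our own illustration, with hypothetical frequencies): enumerate the four static allocations, compute the expected connection cost of each from the joint read/write frequencies, and keep the cheapest.

```python
from itertools import product

def expected_cost(alloc, freq):
    """alloc maps item -> number of copies (1 = SC only, 2 = SC and MC).
    A remote read (some requested item has no MC copy) or a propagated write
    (some written item has an MC copy) costs one connection."""
    total = sum(freq.values())
    cost = 0.0
    for (op, items), f in freq.items():
        if op == 'r' and any(alloc[i] == 1 for i in items):
            cost += f
        if op == 'w' and any(alloc[i] == 2 for i in items):
            cost += f
    return cost / total

def best_allocation(freq, items=('x', 'y')):
    allocs = [dict(zip(items, c)) for c in product((1, 2), repeat=len(items))]
    return min(allocs, key=lambda a: expected_cost(a, freq))

if __name__ == "__main__":
    # hypothetical frequencies for reads/writes of x only, y only, and both jointly
    freq = {('r', ('x',)): 5, ('r', ('y',)): 1, ('r', ('x', 'y')): 2,
            ('w', ('x',)): 1, ('w', ('y',)): 4, ('w', ('x', 'y')): 1}
    print(best_allocation(freq))
```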
To use the method given above we need to know the frequencies of various operations in
advance. If these frequencies are not known in advance, then we can use the window based
approach that dynamically calculates these frequencies. In this case, we need to keep track of
the number of operations of different kind (i.e. the joint/exclusive read/write operations for
multiple data items) in the window. From these numbers, we can calculate the frequencies of
these operations, compute their expected costs (similar to the static methods given in the previous
using these frequencies, and choose an appropriate future allocation method. To avoid
excessive overhead, this recomputation can be done periodically instead of after each operation.
Future work will address the performance analysis of this method.
8 Comparison with Relevant Literature
As far as we know this is the first paper to study the communication cost of static and dynamic
allocation in distributed systems using both, average case and worst case analysis. There are two
bodies of relevant work, each of which is discussed in one of the following two subsections. In the
first subsection we compare this paper with database literature on data allocation in distributed
systems. In the second subsection we compare this paper to literature on caching and distributed
virtual memory.
8.1 Data allocation in distributed systems
Data allocation in distributed systems is either static or dynamic. In [35] and [36], we considered
dynamic data allocation algorithms, and we analyzed them using the notion of convergence,
which is different than the measures used in this paper, namely expected case and worst case.
Additionally, the algorithms in those works are different than the ones discussed here. Further-
more, in [35] and [36] we did not consider static allocation algorithms, and we did not consider
the connection cost model.
Other dynamic data allocation algorithms were introduced in [22] and [23]. Both works
analyze dynamic data allocation in the worst case only. Actually, the SW 1 algorithm was first
analyzed in [23]. However, the model there requires a minimum of two copies in the system,
for availability purposes. Thus even for the worst case the results are different. In contrast, in
this paper we assume that availability constraints are handled exclusively within the stationary
system, independently of the mobile computers.
There has also been work addressing dynamic data allocation algorithms in [9]. This work also
addresses the worst case only. Additionally, the model there does not allow concurrent requests,
and it requires centralized decision making by a processor that is aware of all the requests in the
network. In contrast, our algorithms are distributed, and allow concurrent read-write requests.
Static allocation was studied in [37, 14]. These works address the following file-allocation
problem. They assume that the read-write pattern at each processor is known a priori or it
can be estimated, and they find the optimal static allocation scheme. However, works on the
file-allocation problem do not compare static and dynamic allocation, and they do not quantify
the cost penalty if the read-write pattern deviates from the estimate.
Many works on the data replication problem (such as [4, 6, 11, 13, 17, 21, 34]) and on file
systems (such as CODA [30, 33]) address solely the availability aspect, namely how to ensure
availability of a data item in the presence of failures. In contrast, in this paper we addressed the
communication cost issue.
The works done by Alonso and Ganguly in [5, 20] are also related to the present paper in
the sense that they also address the optimization issue for mobile computers. However, their
optimization objective is energy, whereas ours is communication.
The work on broadcast disks ([2]) also addresses performance issues related to push/pull of
data in a mobile computing environment. However, this work assumes read-only data items and
it does not perform the type of analytical performance evaluation present in this paper.
8.2 Caching and virtual memory
In the computer architecture and operating systems literature there are studies of two subjects
related to dynamic data allocation, namely caching and distributed virtual memory (see, among others, [1, 3, 7]).
However, there are several important differences between Caching and Distributed Virtual
Memory (CDVM) on one hand, and replicated data in distributed systems on the other. There-
fore, our results have not been obtained previously. First, many of the CDVM methods do
not focus on the communication cost alone, but consider the collection of factors that determine
performance; the complexity of the resulting problem dictates that their analysis is either experimental
or it uses simulation. In contrast, in this paper we assumed that since in mobile computing
wireless communication involves an immediate out-of-pocket expense, the optimization of wireless
communication is the sole caching objective; and we performed a thorough analytical cost
evaluation.
Second, in CDVM the size of the cache is assumed to be limited. Thus, the important issues
in CDVM literature are cache utilization, and the page replacement strategy (e.g. LRU, MRU,
etc.), namely which page to replace in the cache when the cache is full and a new page has to
be brought in. In other words, in contrast to replicated data in distributed systems, which may
reside on secondary and even tertiary storage, in CDVM a page may be deleted from the cache
as a results of limited storage. One may argue whether or not limited storage is a major issue in
distributed databases, however, in this paper we assumed that storage at the mobile computer
is abundant.
There have been some CDVM methods which consider communication cost as one of the
optimization criteria (e.g. TreadMarks [26]). However, they do not use dynamic allocation
schemes.
9 Conclusions
In this paper we have considered several data allocation algorithms for mobile computers. In
particular, we have considered the one-copy and the two-copies allocation schemes. We have
investigated static and dynamic allocation methods using the above schemes. In a static method
the allocation scheme remains unchanged throughout the execution. In a dynamic method the
allocation scheme changes dynamically based on a window consisting of the last k requests;
if in the window there are more reads at the mobile computer than writes at the stationary
computer, then we use the two-copy scheme, otherwise we use the one-copy scheme. We get
different dynamic methods for different values of k. For k = 1 the dynamic method is simply the
classic write-invalidate protocol.
We have considered two cost models - the connection model and the message model. In the
connection model, the cost is measured in terms of the number of (wireless telephone) connec-
tions, where as in the message model the cost is measured in terms of the number of control and
data messages.
We have considered three different measures- expected cost, average expected cost, and the
worst case cost which uses the notion of competitiveness. Roughly speaking, an algorithm A
is said to be k-competitive if for every sequence s of read-write requests the cost of A on the
sequence s is at most k times the cost of an ideal off-line algorithm on s which knows s in advance.
An algorithm A is said to be competitive, if for some k > 0, A is k-competitive. The expected
cost is the standard expected cost per request assuming fixed probabilistic distributions for reads
and writes. We believe that an allocation method should be chosen based on the expected cost
as well as the worst case cost. Specifically, we think that an allocation method should be chosen
to minimize the expected cost, provided that it has some bound on the worst case behavior.
Now we explain the difference between the expected cost and the average expected cost. We
have assumed that both, reads at the mobile computer and writes at the stationary computer,
occur according to independent Poisson distributions with frequencies λ_r and λ_w respectively.
When the values of λ_r and λ_w are known (more specifically, when the value of
θ = λ_w / (λ_r + λ_w)
is known), then an allocation method should be chosen based on the expected cost and the
competitiveness. However, when θ varies and it is equally likely to have any value between 0 and
1, then an allocation method should be chosen based on the average expected cost (in addition
to competitiveness). The average expected cost is the integral of the expected cost over θ from
0 to 1. An allocation method with a lower average expected cost will have a lower average cost
per request, in a sequence of requests in which the frequencies of reads and writes vary over
time. Furthermore, the average expected cost can also provide an insight and/or a measure for
selecting an allocation method in the case when θ is unknown, but it is equally likely to have
any value between 0 and 1 4 .
In the connection model, if θ is greater than 0.5, i.e., the read frequency is lower than the write
frequency, then the static allocation method using only one copy at the stationary computer has
4 If θ is not uniformly distributed, then the average expected cost should be defined as the integral of the
expected cost multiplied by the density function for θ.
the best expected cost. Similarly, if θ is smaller than 0.5, then the static allocation method using
one copy at the stationary computer and one at the mobile computer has the best expected cost.
When λ_r and λ_w change over time (i.e. θ changes over time), then one of the dynamic
methods SW k for an appropriate value of k should be chosen. This is due to the fact that the
average expected cost of the SW k algorithms is lower than either one of the static methods. The
value of the window size k should be chosen to strike a balance between the average expected
cost (which decreases as k increases, see equation 6) and competitiveness (the SW k algorithm
is (k + 1)-competitive, thus competitiveness becomes worse as k increases). For example, for
k = 9 the sliding-window algorithm will have an average expected cost that is within 10% of
the optimum, and in the worst case will be at most 10 times worse than the optimum offline
algorithm.
In the message model, the static allocation methods are still not competitive; and the dynamic
allocation methods SW k are again competitive, although with a different competitiveness factor.
For a given θ, the expected cost of one of the three methods ST 1 , ST 2 and SW 1 is lowest; the
particular one depends on the values of θ and ω (the ratio between the control message cost and
the data message cost). The lowest expected-cost algorithm as a function of θ and ω is given in
figure 1.
If θ is unknown or it varies over time, then one of the sliding window methods provides the
optimal average expected cost. The particular one depends on the value of ω. If ω ≤ 0.4 then
the SW 1 algorithm should be chosen as it has the least average expected cost; for other values
of ω, the higher the value of k the lower the average expected cost of the SW k algorithm (see
figure 2). Again, the appropriate value of k should be chosen to strike a balance between average
expected cost and competitiveness.
The data allocation methods and the results of this paper pertain to applications where the
data items accessed by the various mobile computers are mostly disjoint, and the read requests
must be satisfied by the most-up-to-date version of the data item. For applications that do not
satisfy these assumptions, techniques that use data broadcasting and batching of updates may
be appropriate, and our results need to be extended. This is the subject of future work.
ACKNOWLEDGEMENT
We wish to thank the referees for their insightful comments.
--R
"An Evaluation of Cache Coherence Solutions in Shared-Bus Multiprocessors"
"Balancing Push and Pull for Data Broadcast"
"An Evaluation of Directory Schemes for Cache Coherence"
"Multidimensional Voting"
"Query Optimization for Energy Efficiency in Mobile Enviro- ments"
"The Tree Quorum Protocol: An Efficient Approach for Managing Replicated Data"
"Adaptive Software Cache Management for Distributed Shared Memory Architectures"
shared memory based on type-specific memory coherence"
"Competitive Algorithms for Distributed Data Management"
"Replication and Mobility"
"The Grid Protocol: A High Performance Scheme for Maintaining Replicated Data"
"Data Caching Tradeoffs in Client-Server BDMS Architectures"
"Distributed concurrency control performance: A study of algorithms, distribution and replication"
"A Characterization of Sharing in Parallel Programs and Its Application to Coherency Protocol Evaluation"
"Evaluating the Performance of Four Snooping Cache Coherency Protocols"
"Achieving Robustness in Distributed Database Systems"
"Competitive paging algorithms"
"Client data Caching: A foundation for high performance object database systems"
"Query Optimization in Mobile Enviroments"
"Weighted Voting for Replicated Data"
"A Competitive Dynamic Data Replication Algorithm"
"Dynamic Allocation in Distributed System and Mobile Com- puters"
"Querying in highly mobile distributed environments"
"Shared Virtual Memory on Loosely Coupled Multiprocessors"
"TreadMarks: Distributed Shared Memory on Standard Workstations and Operating Systems"
"Memory Coherence in Shared Virtual memory systems"
"Synchronization with Multiprocessor Caches"
"Competitive Snoopy Caching"
"Disconnected Operation in the Coda File System"
"Disk Cache Performance for Distributed Systems"
"Competitive algorithms for online problems"
"Coda: A Highly Available File System for a Distributed Workstation Environment"
"A Majority Consensus Approach to Concurrency Control for Multiple Copy Database"
"Distributed Algorithms for Dynamic Replication of Data"
"An Algorithm for Dynamic Data Distribution"
"The Multicast Policy and Its Relationship to Replicated Data Placement"
"Cache Consistency and Concurrency Control in a Client/Server DBMS Architecture"
--TR
--CTR
Sandeep K. S. Gupta , Pradip K. Srimani, Data management in wireless mobile environments, Handbook of wireless networks and mobile computing, John Wiley & Sons, Inc., New York, NY, 2002
Wang , Ing-Ray Chen , Chih-Ping Chu , I-Ling Yen, Replicated Object Management with Periodic Maintenance in Mobile Wireless Systems, Wireless Personal Communications: An International Journal, v.28 n.1, p.17-33, January 2004
Ycel Saygin , zgr Ulusoy, Exploiting Data Mining Techniques for Broadcasting Data in Mobile Computing Environments, IEEE Transactions on Knowledge and Data Engineering, v.14 n.6, p.1387-1399, November 2002
James Jayaputera , David Taniar, Data retrieval for location-dependent queries in a multi-cell wireless environment, Mobile Information Systems, v.1 n.2, p.91-108, April 2005
Isabel Cruz , Ashfaq Khokhar , Bing Liu , Prasad Sistla , Ouri Wolfson , Clement Yu, Research activities in database management and information retrieval at University of Illinois at Chicago, ACM SIGMOD Record, v.31 n.3, September 2002
W. J. Lin , B. Veeravalli, A dynamic object allocation and replication algorithm for distributed systems with centralized control, International Journal of Computers and Applications, v.28 n.1, p.26-34, January 2006
Bharadwaj Veeravalli, Network Caching Strategies for a Shared Data Distribution for a Predefined Service Demand Sequence, IEEE Transactions on Knowledge and Data Engineering, v.15 n.6, p.1487-1497, November
Bharadwaj Veeravalli, Network Caching Strategies for a Shared Data Distribution for a Predefined Service Demand Sequence, IEEE Transactions on Knowledge and Data Engineering, v.15 n.6, p.1487-1497, November
Anurag Kahol , Sumit Khurana , Sandeep K.S. Gupta , Pradip K. Srimani, A Strategy to Manage Cache Consistency in a Disconnected Distributed Environment, IEEE Transactions on Parallel and Distributed Systems, v.12 n.7, p.686-700, July 2001
Xiao-Hui Lin , Yu-Kwong Kwok , Vincent K. N. Lau, A Quantitative Comparison of Ad Hoc Routing Protocols with and without Channel Adaptation, IEEE Transactions on Mobile Computing, v.4 n.2, p.111-128, March 2005
Sandeep K. S. Gupta , Goran Konjevod , Georgios Varsamopoulos, A theoretical study of optimization techniques used in registration area based location management: models and online algorithms, Proceedings of the 6th international workshop on Discrete algorithms and methods for mobile computing and communications, September 28-28, 2002, Atlanta, Georgia, USA | probabilistic analysis;mobile computing;communication cost;wireless communication;dynamic data allocation;caching |
277606 | Concurrency Control and View Notification Algorithms for Collaborative Replicated Objects. | AbstractThis paper describes algorithms for implementing a high-level programming model for synchronous distributed groupware applications. In this model, several application data objects may be atomically updated, and these objects automatically maintain consistency with their replicas using an optimistic algorithm. Changes to these objects may be optimistically or pessimistically observed by view objects by taking consistent snapshots. The algorithms for both update propagation and view notification are based upon optimistic guess propagation principles adapted for fast commit by using primary copy replication techniques. The main contribution of the paper is the synthesis of these two algorithmic techniquesguess propagation and primary copy replicationfor implementing a framework that is easy to program to and is well suited for the needs of groupware applications. | Introduction
Synchronous distributed groupware applications
are finding larger audiences and increased interest
with the popularity of the World Wide Web. Major
browsers include loosely integrated groupware applications
like chat and whiteboards. With browser functionality
extensible through programmability (Java
applets, plug-ins, ActiveX), additional groupware applications
can be easily introduced to a large community
of potential users. These applications may vary
from simple collaborative form filling to collaborative
visualization applications to group navigation tools.
Synchronous collaborative applications can be built
using either a non-replicated application architecture
or a replicated application architecture. In a non-replicated
architecture, only one instance of the application
executes, and GUI events are multicast to all
the clients, via systems such as shared X servers [1].
In a replicated architecture, each user runs an appli-
cation; the applications are usually identical, and the
state or the GUI is "shared" by synchronously mirroring
changes to the state of one copy to each of the
others [6, 13].
1 IBM T.J. Watson Research Center,
Hawthorne, NY 10532, USA. E-mail: {strom, banavar, klm,
mjw}@watson.ibm.com
2 Department of Electrical Engineering and Computer Sci-
ence, University of Michigan, Ann Arbor, MI, USA. E-mail:
aprakash@umich.edu
In this paper, we assume that replicated architectures
are used because they generally have the potential
to provide better interactive responsiveness and
fault-tolerance, as users join and leave collaborative
sessions. However, the domain of synchronous collaborative
applications is broader than those supported
by a fully replicated application architecture. For example
ffl the applications may have different GUIs and
even different functionality, sharing only the replicated
state,
ffl the shared state may not be the entire application
state, and
ffl an application may engage in several independent
collaborations, e.g., one with a financial planner,
another with an accountant, and each collaboration
may involve replication of a different subset
of the application state.
In order to support the development of such a large
variety of applications, it is clearly beneficial to build a
general application development framework. We have
identified the following requirements for such a frame-work
From the end-user's perspective, collaborative applications
built using the framework must be highly
responsive. That is, the GUI must be as responsive as
a single user GUI at sites that initiate updates, and
the response latency at remote sites must be minimal.
Second, collaborative applications must provide sufficient
awareness of ongoing collaborations.
From the perspective of the developer of collaborative
applications, the framework must be application-independent
and high-level. That is, it must be capable
of expressing a wide variety of collaborative appli-
cations. Second, the developer should not be required
to be proficient in distributed communication proto-
cols, thread synchronization, contention, and other
complexities of concurrent distributed programming.
We have implemented a framework called Decaf
(Distributed, Extensible Collaborative Application
Framework) that meets the above requirements.
Our framework extends the well-known Model-View-
Controller paradigm of object-based application development
[10]. In the MVC paradigm, used in GUI-
oriented systems such as Smalltalk and InterViews
[12], view objects can be attached to model objects
in order to track changes to model objects. Views
are typically GUI components (e.g., a graph or a win-
dow) that display the state of their attached model
objects, which contain the actual application data.
Controllers, which are typically event handlers, receive
input events from GUI components and, in response,
invoke operations to read and write the state of model
objects. Updated model objects then notify their attached
views of the change, so that the view may re-compute
itself based on the new values. The MVC
paradigm has several beneficial properties such as (1)
modular separation of application state components
from presentation components and (2) the ability to
incrementally track dependencies between such components
To support groupware applications, Decaf extends
the MVC paradigm as indicated in Figure 1. First,
the framework supplies generic collaborative model
objects, such as Integers, Strings, Vectors, etc. to application
developers. These model objects can have
replica relations with model objects across applica-
tions, so that replicated groupware applications can be
easily built. Second, it provides atomicity guarantees
to updates on model objects, even if multiple objects
are modified as part of an update. The framework
automatically and atomically propagates all updates
to replicas of model objects and their attached views.
Third, writers can choose whether views see updates
to model objects as they occur (optimistic) or only
after commit (pessimistic). Fourth, applications can
dynamically establish collaborations between selected
model objects at the local site and model objects in
remote applications. Finally, users may also code authorization
monitors to restrict access to sensitive objects
In this paper, we first introduce the basic concepts
of the Decaf framework (Section 2). Next, we describe
the distributed algorithms that implement consistent
update propagation (Section 3) and view notification
(Section 4). Then, we present comparison
with related work, our experience with using Decaf
(Section 5) and finally, some concluding remarks (Sec-
tion 6).
2 The DECAF Framework
As mentioned earlier, Decaf extends the Model-
View-Controller paradigm [10]. Decaf model object
classes are supplied by the framework; the application
programmer simply instantiates them (model objects
Figure 1: Typical structure of Decaf applications.
are shown below the horizontal line that separates the
framework from the application in Figure 1). The
application programmer writes views and controllers,
which initiate transactions (these are shown above the
horizontal line in Figure 1). In the following subsec-
tions, we describe the key concepts in the framework
and the atomicity guarantees on access to model objects
provided by the Decaf infrastructure.
2.1 Model objects
Model objects hold application state. All model
objects allow reading, writing, and attaching views.
There are three kinds of model objects: (1) Scalar
model objects, which currently are of types integer,
real, and string; (2) Composite model objects, which
support operations to embed and to remove other
model objects called children, and which may be either
lists (linearly indexed sequences of children) or
tuples (collections of children indexed by a key); and (3)
Association model objects, which are used to track
membership in collaborations.
Model objects can join and leave replica relationships
with other model objects. The value of an association
object is a set of replica relationships which
are bundled together for some application purpose.
For each replica relationship, the association object
contains the set of model objects which have joined,
together with their sites and object descriptions.
The operations on association objects relevant to
this paper are join and leave, by which a model object
joins or leaves a particular replica relationship as
described in Section 2.7.
2.2 Replica Relationships
A replica relationship is a collection of model ob-
jects, usually spanning multiple applications, which
are required to mirror one another's value. Replica
relationships are symmetric and transitive.
2.3 Controllers
A controller is an object which responds to end-user
initiated actions, such as typing on the keyboard,
clicking or dragging the mouse, etc. A controller may
initiate transactions to update collaborative model ob-
jects. A controller may also perform other external
interactions with the end user.
2.4 Transactions
Transactions on model objects are executed by invoking
an execute method on a transaction object. Application
programmers may define transaction objects,
with their associated execute method, for actions that
need to execute atomically with respect to updates
from other users. The execute method may contain
arbitrary code to read and write model objects within
the application. Any changes to model objects will be
automatically propagated to their replicas.
The execution of a transaction is an atomic ac-
tion. That is, it behaves as if all its operations -
those of the execute method and those which propagate
changed values to replicas - take place at a
single instant of time, totally ordered with respect to
the times of all the other atomic actions in the system.
Atomicity is implemented optimistically in Decaf.
Transactions may abort, e.g., if two transactions originated
at different sites and each transaction guessed
that it read and updated a certain value of an object
before the other transaction did, then one of the
transactions will abort. Aborted transactions are re-executed
at the originating site. The effects of aborted
transactions will be invisible to pessimistic views, and
automatically undone as seen by optimistic views.
2.5 View Objects
A view object is a user-defined object which can be
dynamically attached to one or more model objects.
When a view is attached to a model object, that view
object will be able to track changes to the model object
by receiving update notifications, as calls to its update
method. If a view object is attached to a composite
model object, it will receive notifications for changes
to the composite as well as any of its children. The
purpose of a view object is to compute some function,
e.g., a graphical rendering, of some or all of the model
objects it is attached to. When the view object receives
an update notification, its update method may
take a state snapshot by reading any of the model objects
that it is attached to. State snapshots are guaranteed
by the infrastructure to be atomic actions -
behaving as if they are instantaneous. Besides taking
a state snapshot, the update method may initiate
new transactions and perform arbitrary external in-
teractions, such as rendering on the display, printing,
playing audio data, etc.
Each update notification contains a list of all ob-
jects, and only such objects, that have changed value
since the last notification. Objects not on this list
may be assumed not to have changed value. This information
allows view objects to recompute its function
more efficiently when only a part of a large value
has changed. For example, when large composite objects
are changed, the update notification will not only
specify which composite object has changed, but also
which parts have changed, to allow incremental recalculation.
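As an illustration of the notification interface (the sketch is ours; Decaf itself is not a Python system, and all names other than update and commit are invented), a view can cache the values of its attached model objects and refresh only the entries named in an update notification:

```python
class IntModel:
    """Stand-in for a scalar model object holding a single value."""
    def __init__(self, value):
        self.value = value

class LabelView:
    """Illustrative view attached to several model objects.
    update() is called with the objects changed since the last notification;
    commit() is called when the latest snapshot is known to be correct."""

    def __init__(self, models):
        self.models = models                                   # name -> model object
        self.cache = {name: m.value for name, m in models.items()}

    def update(self, changed_names):
        # take a snapshot of only the objects reported as changed
        for name in changed_names:
            self.cache[name] = self.models[name].value
        self.render()

    def commit(self):
        print("committed:", dict(self.cache))

    def render(self):
        print("render:", dict(self.cache))

if __name__ == "__main__":
    models = {"x": IntModel(1), "y": IntModel(2)}
    view = LabelView(models)
    models["x"].value = 10
    view.update(["x"])     # optimistic notification
    view.commit()          # later, when the updating transaction commits
```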
2.6 Optimistic and Pessimistic Views
View objects can be either optimistic or pessimistic.
Optimistic and pessimistic views differ in the protocols
for delivery of update notifications.
An optimistic view will receive an update notification
as soon as possible after any of its attached
model objects has changed. However, the state snap-shot
may be inaccurate or inconsistent because messages
have arrived out of order or because transactions
will abort. If an optimistic view ever takes an incorrect
snapshot, the infrastructure will eventually execute a
superseding update notification. Therefore, so long
as the system eventually quiesces, the final snapshot
taken before the system quiesces will be correct. An
optimistic view will receive a commit notification (as
a call to its commit method) whenever its most recent
update notification is known to be correct. Committed
state snapshots are always correct and always occur in
monotonic order. An optimistic view therefore trades
off accuracy and the risk of wasted work in exchange
for responsiveness.
Pessimistic views receive update notifications only
when transactions updating attached model objects
have committed and when the view will be able to
see consistent values. The system makes two guarantees
to a pessimistic view: (1) never to show any
uncommitted or inconsistent values, and (2) to show
all committed values in monotonic order of applied
updates.
2.7 Collaboration Establishment
Users may create replica relationships and cause
model objects to join or leave replica relationships dy-
namically. In order for an object A at one site to join
a replica relationship involving an object B at another
site, the following steps must occur:
ffl B's owner must create an association object BAs-
soc containing at least one replica relationship
joined by B.
ffl B's owner must then publicize the right to collaborate
with B by creating an external token called
an invitation including a reference to BAssoc and
export it somewhere where A's owner can access
it (e.g., on a bulletin board).
ffl A's owner must then import this invitation and
use it to instantiate his own association object
AAssoc. Object AAssoc must then be authorized
to reveal BAssoc's replica relationships.
ffl A's owner can then read AAssoc, discover the existence
of a replica relationship involving B that
it wishes to join, and issue a join of A to that
relationship.
Since association objects are also model objects,
and can have views attached to them, changes in membership
in associations are signalled in exactly the
same way as changes in values of data objects.
3 Concurrency control
This section describes the optimistic concurrency
control algorithms for propagating updates among
model objects in replica relationships.
Each transaction is started at some originating site,
where it is assigned a unique virtual time (V T ) prior
to execution. The V T is computed as a Lamport time
[11], including a site identifier to guarantee uniqueness.
When a transaction is initiated, a transaction implementation
object is created at the originating site.
When updates are propagated to remote replicas,
transaction implementation objects are created at
those sites. Each transaction implementation object
at a site contains: the V T of the transaction, references
to all model objects updated by the transaction
at that site, and additional state information.
Each model object holds:
ffl Value History: The value history is a set of pairs
of values and V T 's, sorted by V T . The value with
the latest V T is called the current value.
ffl Replication Graph History: This is a similarly
indexed set of replication graphs. A replication
graph is a connected multigraph whose nodes are
references to model objects, and whose multi-
edges are the replication relations built by the
users. It includes the current model object, and
all other model objects which are directly or indirectly
required to be replicas of the current model
object as a result of replication relations. Since
replication graphs change very infrequently, in
practice this history will most frequently contain
only a single graph.
Histories are garbage-collected as transactions com-
mit. Committal makes old values no longer needed for
view snapshots or for rollback after abort, thus they
are discarded.
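The value-history structure lends itself to a small illustration. The Python sketch below (ours, not Decaf code) keeps (V T, value) pairs sorted by V T, supports snapshot reads at a given V T, and discards entries that a commit point makes unnecessary.

```python
import bisect

class ValueHistory:
    """Simplified value history: (VT, value) pairs kept sorted by VT."""

    def __init__(self):
        self.vts, self.values = [], []

    def write(self, vt, value):
        i = bisect.bisect_left(self.vts, vt)
        self.vts.insert(i, vt)
        self.values.insert(i, value)

    def current(self):
        return self.values[-1] if self.values else None

    def read_at(self, vt):
        """Value with the largest VT not exceeding vt (used for snapshots)."""
        i = bisect.bisect_right(self.vts, vt) - 1
        return self.values[i] if i >= 0 else None

    def garbage_collect(self, commit_vt):
        """Drop entries older than the latest one at or before commit_vt."""
        i = bisect.bisect_right(self.vts, commit_vt) - 1
        if i > 0:
            self.vts, self.values = self.vts[i:], self.values[i:]

if __name__ == "__main__":
    h = ValueHistory()
    h.write(70, 3); h.write(80, 4); h.write(100, 5)
    print(h.read_at(90), h.current())   # 4 5
    h.garbage_collect(80)
    print(h.vts)                        # [80, 100]
```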
There is a function which maps replication graphs
to a selected node in that graph. The node is called
the primary copy and the site of that node is called the
primary site, adapting a replication technique used by
Chu and Hellerstein [4] and others. The correctness of
the concurrency control algorithm is based upon the
fact that the order of all reads and updates of replicas
is guaranteed to match the order of the corresponding
reads and updates at the primary copy. Whenever the
originating site of a transaction is not the primary site
of some of the objects read or written by a transac-
tion, the transaction executes optimistically, guessing
that the reads and updates performed at the originating
site will execute the same way when re-executed
at the primary sites. If these guesses are eventually
confirmed, the transaction is said to commit. If not,
the effects of the transaction are undone at all sites
and the transaction is retried at the originating site.
3.1 Concurrency Control for Scalar
Model Objects
When a transaction T is first executed, it is assigned
a V T which we call t T . As it executes, the
transaction reads and/or modifies one or more model
objects at the originating site. Each model object
records each operation in the transaction implementation
object. For each model object M read, the transaction implementation object records the read time t^M_R, where t^M_R is defined as the VT at which the current value was written. For "blind writes" (writes into objects not read by the transaction), t^M_R is defined as equal to t_T. The transaction object additionally records the graph time t^M_G, defined as the VT at which M's replication graph was last changed.
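The bookkeeping just described fits in a few lines. The sketch below is our own illustration (hypothetical class and method names, long VTs), not the Decaf implementation: it records, per touched object, the read time t_R and the graph time t_G, with t_R defaulting to t_T for blind writes.

import java.util.ArrayList;
import java.util.List;

// Sketch of a transaction implementation object at one site (hypothetical names).
class TransactionImpl {
    static final class Access {
        final String object;    // which model object was touched
        final long tR;          // VT at which the value read was written (tT for blind writes)
        final long tG;          // VT at which the object's replication graph last changed
        final boolean isWrite;
        Access(String object, long tR, long tG, boolean isWrite) {
            this.object = object; this.tR = tR; this.tG = tG; this.isWrite = isWrite;
        }
    }

    final long tT;                                  // virtual time assigned at the originating site
    final List<Access> accesses = new ArrayList<>();

    TransactionImpl(long tT) { this.tT = tT; }

    void recordRead(String obj, long currentValueVT, long graphVT) {
        accesses.add(new Access(obj, currentValueVT, graphVT, false));
    }

    void recordWrite(String obj, Long readValueVT, long graphVT) {
        // Blind write: the object was not read, so tR is defined as tT.
        long tR = (readValueVT == null) ? tT : readValueVT;
        accesses.add(new Access(obj, tR, graphVT, true));
    }
}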
Consider a transaction T, given in Figure 2, that is originated at some site. T is assigned a VT t_T = 100. The current values of W, X, Y, and Z at t_T were: W = 4, written at VT 80; X, written at VT 60; Y = 3, written at VT 70; and Z, written at VT 40. Assume all replication graphs were initialized at a single earlier VT (this is not shown in the figure). At the end of the transaction execution, the transaction implementation object records the following:
- Read object W, t^W_R = 80
- Read object X, t^X_R = 60
- Update object Y, t^Y_R = 100
- Update object Z, t^Z_R = 40
[Figure 2: Example of transaction execution. Transaction T reads W and X, assigns Y := X, and updates Z.]
Observe that the update to Y is a "blind write," since Y was not read in this transaction; hence t^Y_R = t_T.
The transaction implementation object next distributes
the modifications to all replicas of the above
model objects. The transaction requests each primary copy to "reserve" a region of time between t^M_R and t_T as write-free. Since replica graphs can also change (albeit slowly), the transaction must also reserve a region of time between t^M_G and t_T as free of graph updates.
As mentioned earlier, the originating site of the
transaction has executed optimistically, guessing that
its reads and updates will be conflict-free at each primary
copy. Specifically, the validity of the transaction
depends upon the following types of guesses:
ffl "Read committed" (RC) guesses: That each
model object value (or graph) read by the trans-action
was written by a committed transaction.
ffl "Read latest" (RL) guesses: That for each
value (or graph) of model object M read by trans-action
T , no write of M by another transaction
occurred (or will occur) at the primary copy between
R (or t M
G ) and the transaction's t T . This
guess implies that the primary site would have
read the same version of the object had the trans-action
executed pessimistically.
ffl "No conflict" (NC) guesses: That for each
model object value (or graph) written by the
transaction, no other transaction reserved at the
primary copy a write-free region of time containing
the transaction's V T . This guess implies that
the primary site would not invalidate previous
reads by confirming this write.
[Figure 3: Example of update propagation. The figure shows the CONFIRM-READ, CONFIRM, and COMMIT messages exchanged for the updates to Y and Z.]
The execution of the transaction takes place op-
timistically, using strategies derived from the optimistic
guess propagation principles defined in Strom
and Yemini [15], and applied to a number of distributed
systems (e.g. optimistic call streaming [2] and
HOPE [5]). However, our algorithm makes certain
specializations to reduce message traffic.
For RC guesses, the originating site simply records
the V T of the transaction which wrote the uncommitted
value that was read. The originating site will
not commit its transaction until the transaction at the
recorded VT commits. For each uncommitted transaction T at a site, a list of other transactions at that site which have guessed that T will commit is maintained.
The RL and NC guesses are all checked at the site
of the primary copy of an object M . The RL guess
checks that no value (or graph) update has occurred between t^M_R (or t^M_G) and t_T, and if this check succeeds,
creates a write-free reservation for this interval so that
no conflicting write will be made in the future; the NC
guess checks that no write-free reservation has been
made for an interval including t T .
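The RL and NC checks at a primary copy amount to interval bookkeeping over virtual time. The following is a small, hypothetical Java sketch of that bookkeeping (representation and names are ours, not Decaf's): applied writes are kept in a sorted set of VTs, and confirmed reads reserve write-free intervals against which later writes are checked.

import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Sketch of per-object bookkeeping at the primary copy (hypothetical names).
class PrimaryCopy {
    private final TreeSet<Long> writeVTs = new TreeSet<>();   // VTs of applied writes
    private final List<long[]> writeFree = new ArrayList<>(); // reserved (from, to) intervals

    // RL check: no write occurred between tR and tT; if so, reserve the
    // interval as write-free so that no conflicting write is accepted later.
    boolean confirmRead(long tR, long tT) {
        if (!writeVTs.subSet(tR, false, tT, false).isEmpty()) return false;
        writeFree.add(new long[] { tR, tT });
        return true;
    }

    // NC check for a write at VT tT: it must not fall inside any reserved
    // write-free interval; if it passes, the write is recorded.
    boolean confirmWrite(long tT) {
        for (long[] iv : writeFree)
            if (iv[0] < tT && tT < iv[1]) return false;
        writeVTs.add(tT);
        return true;
    }
}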
For each object M read but not written, a message
is sent to the primary copy (if it is at a remote
site). This message contains t^M_R, t^M_G, and t_T. Each
primary copy object then verifies the RL guesses for
values and graphs. A confirmation message is then is-
sued, confirming or denying the guess. In the general
approach, or in Hope, this confirmation
message would be broadcast to all sites. But in the
Decaf implementation, this confirmation is sent only
to the originating site. It is a property of our Decaf
implementation that the originating site always knows
the totality of sites affected by its transaction by com-
mit/abort time. Therefore, the originating site is in
a position to wait for all confirmations to arrive and
then to forward a summary commit or abort of the
transaction as a whole to all other sites. This avoids
the need for each primary copy to communicate with
all non-primary sites, and it avoids the need for non-primary remote sites to be aware of guesses other than the summary guess that "the transaction at virtual time t_T will commit."
For each object M modified by T we send a message to all relevant sites, containing t^M_R, t^M_G, t_T, and the new value. However, while all sites other than the primary site simply apply the update at the appropriate VT, the primary site additionally performs the RL and NC guess checks and then sends a confirmation message to the originating site.
The originating site waits for confirmations of
guesses from remote primary sites. If all guesses for
a transaction are confirmed, the originating site commits
the transaction and sends a commit message to all
remote sites which received update messages. If any
guess is denied, the originating site aborts the trans-action
and sends an abort message to all remote sites.
The originating site then re-executes the transaction.
If a site detects that a transaction at V T has com-
mitted, the modified model objects at that site are in-
formed. This notification can be used to schedule view
notifications and eventually to garbage-collect histo-
ries. The site retains the fact that the transaction
has committed so that if any future update messages
arrive, the updates are considered committed.
If a transaction is aborted, the modified model objects
are informed so that the value at V T can be
purged from the history. The site retains the fact that
the transaction has aborted so that if any future up-date
messages arrive, the updates are ignored.
Let us examine how these algorithms would apply
in our example, shown in Figure 3. Suppose there are
four sites, and that W and X are replicated at sites 1,
2, and 3, while Y and Z are replicated at sites 2, 3,
and 4. Suppose that T is initiated at site 2. Suppose
further that the primary site of W and X is 1, and of
Y and Z is 4. Ignoring graph times and graph updates
for now, and assuming that the three current values
read by the transaction were committed (hence there
are no RC guesses), the following messages are sent
from site 2 after the transaction is applied locally (we
perform the obvious optimization of sending messages
only to relevant sites):
1. To site 1: CONFIRM-READ messages for W and X, carrying t^W_R = 80, t^X_R = 60, and t_T = 100.
2. To sites 3 and 4: update messages for Y and Z, carrying their new values and t_T = 100.
Site 1 checks that W is write-free for the VT range from 80 to 100 (RL guess check) and that X is write-free for the VT range from 60 to 100 (RL guess check). If so, it reserves those times as write-free and sends a CONFIRM to site 2.
Site 3 simply applies the updates to its Y and Z
replica objects.
Site 4 checks that Z is write-free for the V T range
from 40 to 100 (RL guess check). If so, it reserves
those times as write-free. It also checks that writing
Y or Z at V T 100 does not conflict with any previously
made read reservations (NC guess checks). If all the
above checks succeed, it applies the updates and sends
a CONFIRM to site 2.
Site 2 waits for the responses. If both are confirmations, it sends COMMIT 100 to all other sites involved.
3.2 Concurrency Control for Composite
Model Objects
Although the concurrency control algorithm is the
same for composite objects as for scalar objects, it
is desirable to save space by not keeping a separate
replication graph for each object inside a composite.
That is, if composite A is a replica of composites A' and A'' (see Figure 4), we wish to avoid encoding inside object A[103] that it is a replica of objects A'[103] and A''[103].
Our approach is that by default, an object embedded
within a composite inherits the replication graph
of its root; e.g. A[103]'s replicas would be at the same
sites as A's replicas, at the corresponding index (103).
Similarly, if A[103] is itself a composite object, its embedded objects, e.g. A[103]["John"][12], would be replicated at the same sites.
[Figure 4: Replicas of composite model objects (A, A', A'').]
The set of indices between the root and a given
object is called its path; when an object such as
A[103]["John"][12] is modified, the change and the
path to A are then sent by A to its replicas A 0 and A 00 ,
which then use the same path name, [103]["John"][12],
to propagate the update to their corresponding components
A 0 [103]["John"][12] and A 00 [103]["John"][12].
We call this technique indirect propagation of updates,
in contrast to the direct propagation technique discussed
earlier, in which each object holds its own replication
graph and communicates directly to its replicas.
In addition to saving space, indirect replication
avoids the problem in direct replication that small
changes to the embedding structure could end up
changing a large number of objects. For example, if indirect
replication were not used, adding a new replica A''' to the set {A, A', A''} would entail updating the replication graph for every object embedded within A
and its replicas. Similarly, removing A[103] from A
would entail updating the replication graph for every
object embedded within A[103].
3.2.1 Adjustments to support indirect propa-
gation
There are two adjustments which have to be made
to ensure the correctness of the concurrency control
algorithm in the presence of indirect propagation.
The first has to do with the relative order of
list items. A transaction at VT 100 may modify A[103] without having seen that an earlier transaction at VT 80 deleted A[5], so that what the originating site thinks of as A[103] may appear to some other sites to be A[102]. This is not a concurrency
control conflict - it is simply a consequence
of the fact that path names like [103]["John"][12] are
fragile. To overcome this, rather than using the actual
list index in a path name, the propagation algorithm
uses the transaction V T as a unique identifier - e.g.
if A[103] was embedded in A at V T 40, then 40 is used
as an index. If several embeds were performed at V T
40, they are distinguished by a "subtime", e.g. 40.1
or 40.8. A composite object receiving such an indirect
propagation message can always propagate it down the
tree regardless of the order in which it has received
other structure-changing operations. During such a
propagation, if it is determined (using the transaction VTs in the path) that an earlier path-changing update has not yet been received, the propagation will block until the earlier update is received.
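One way to picture the fragile-index problem is to key embedded children by the (VT, subtime) at which they were embedded rather than by their list position. The sketch below is our own Java illustration (hypothetical names, not Decaf code): positional indices shift when earlier elements are deleted, while embedding-time keys remain stable.

import java.util.TreeMap;

// Illustration: children of a composite keyed by the virtual time at which
// they were embedded, so path names survive concurrent inserts and deletes.
class CompositeNode {
    // Key = embedding time; e.g. 40.1 and 40.8 distinguish two embeds at VT 40.
    private final TreeMap<Double, Object> children = new TreeMap<>();

    void embed(double embedVT, Object child) { children.put(embedVT, child); }
    void remove(double embedVT)              { children.remove(embedVT); }
    Object childAt(double embedVT)           { return children.get(embedVT); }

    // The positional index of a child is derived, not stored: deleting an
    // earlier child changes positions but not the keys used in path names.
    int positionOf(double embedVT)           { return children.headMap(embedVT).size(); }
}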
The second adjustment has to do with guesses associated
with the paths for indirect object propagation.
The updated model objects must make RC guesses to
ensure that transactions that created their paths have
committed and RL guesses that no straggling transactions
have removed any component of their paths.
3.2.2 When indirect propagation is not possi-
ble
Indirect propagation is the default mode of propagating
value updates to objects within composites. However
indirect propagation is not always possible. Consider
the configuration in Figure 5. In this case, node
C can indirectly propagate changes to C', but node B cannot because it has a different set of replicas than the rest of the tree. We therefore use direct replication for objects B, B', and B''.
[Figure 5: Indirect propagation not possible for this case.]
3.3 Dynamic Collaboration Establish-
ment
The set of replica relations between objects remains
relatively static. Most transactions change the values
of objects rather than the replication graphs. But
replication graphs do change, as users join and leave
collaborations. Direct propagation graphs for embedded
objects inside composites can also change as a
result of deleting objects from composites and embedding
them elsewhere. Dynamic collaboration establishment
transactions need not be especially fast, but
they must work correctly in conjunction with all the
other transactions.
We have already seen some of the effects of dynamic
collaboration establishment in the algorithms
described above. Replication multi-graphs are time-stamped
with the transaction V T which changed
them. There is no "negotiation" for primary copy;
each node is able to map a given multi-graph to the
identity of the primary site for that configuration. A
primary copy always confirms the RL guess that the
graph hasn't changed as well as confirming whatever
else it is being asked to check: this guards against
the possibility that the originating site is propagating
to the wrong set of sites or that it is asking the
wrong primary copy because of a graph change that it
hasn't seen yet. A primary copy always reserves the
graph against changes during a region of time that a
previously confirmed transaction has assumed to be
change-free.
4 View Notification
This section describes the algorithms for implementing
the view notification semantics given in Section
2.5.
When a transaction implementation object completes
executing at a site, the Decaf infrastructure
initiates view notifications to be sent to all the view
objects attached to the model objects updated in the
transaction. View object attachments are always lo-
cal, i.e., views are always attached to model objects
at the same site. Thus, a view notification is simply
a method call to the update method implemented by
the view object. The update method can contain arbitrary
code that takes a state snapshot by reading the
view's attached model objects and recomputes its dis-
play. The infrastructure guarantees that such a state
snapshot is implicitly a consistent atomic action.
For every view notification initiated, a snapshot object
is created internal to the Decaf infrastructure.
All the snapshots associated with a particular user
level view object are managed internally by a view
proxy object. Each model object contains the set of
view proxies corresponding to its views, which it notifies
upon receiving an update or a commit.
Since a snapshot is an atomic action, it is assigned
a virtual time t S . Each snapshot is assumed to read
all the model objects attached to the view at V T t S .
These reads may be optimistic; hence, as described in
Section 3.1, their validity depends upon the read values
being the latest (RL guess) and the read values being
committed (RC guess). Confirming RL guesses involves
remote communication with the primary copies
of the objects read in the snapshot. If the RL and RC
are confirmed, the snapshot is said to commit.
Optimistic and pessimistic views differ in two re-
spects. First, they differ in the time at which view notifications
are scheduled. Optimistic notifications are
scheduled as early as possible, i.e., as soon as a model
is updated and a snapshot thus initiated, whereas pessimistic
notifications are scheduled after it is known
that the snapshot is valid, i.e., that the view will read
consistent committed values. Second, they differ in
the lossiness of notifications. Pessimistic views are
notified losslessly of every single update in monotonic
order of updates whereas optimistic views are notified
only of the latest update. Subsections 4.1 and 4.2
describe these behaviors in more detail.
As described in Section 2.5, view notifications are
incremental, i.e., each notification provides only that
part of the attached model object state that has
changed since the last notification. However, for the
sake of simplicity, the algorithms presented in this section
do not incorporate incrementality; each snapshot
is assumed to read the set of attached model objects
in its entirety. Furthermore, notifications may be bundled
to enhance performance, i.e., a single view notification
may be delivered for multiple model objects
that were updated in a single transaction.
4.1 Optimistic Views
Figure
6 shows an optimistic view V attached to
model objects A and B. The view proxy object V P
represents V internally. A and B have committed current
values (i.e., values with the latest V T ) at V T 's
100 and 80, respectively. A transaction T runs at VT
110 and updates A, which notifies its view proxy V P .
The primary requirement of optimistic views is fast
response. Consequently, as soon as V P is notified, it
performs the following:
1. It creates a snapshot object and assigns it a V T
equal to the greatest of the V T 's of the current
values of all attached model objects. In this case, t_S = 110.
2. It schedules a view notification, i.e., calls the
view's update method.
[Figure 6: View notification.]
At the end of the snapshot, the snapshot object
records that all attached model objects were read at
t S . In order for the snapshot in this example to com-
mit, two guesses must be confirmed (as before, we ignore
guesses related to the graph):
1. An RC guess that the update by transaction T at
VT 110 has committed. This requires receiving a
COMMIT message from the site that originated
transaction T .
2. An RL guess that the V T interval from 80 to
110 is update free for B. This requires sending a
CONFIRM-READ message to B's primary copy
and waiting for the response.
Eventually, if these guesses are confirmed, then the
snapshot commits, and a commit notification is sent
to V , i.e., its commit method is called.
If, on the other hand, an RC guess turns out to
be false, the view proxy re-runs the snapshot with a
new t S . In the example of Figure 6, if the RC guess
was denied as a result of transaction T at V T 110
aborting, a new snapshot is run. This snapshot will
have t_S = 100, since that is now the greatest VT of the
current values of all attached model objects. Notice
from this example that optimistic view notifications
are not necessarily in monotonic V T order.
In the case that an RL guess is denied by the primary
copy, that means that the requested interval is
not update free, and thus a straggler update is yet to
arrive at the guessing site. In this case, the straggler
itself will eventually arrive and cause a rerun of the
view notification. In the example in Figure 6, if the
RL guess was denied as a result of a straggler update
to B at V T 105, the update at V T 105 will trigger a
new view notification at t_S = 110.
This algorithm implements the liveness rule for optimistic
views that an update notification is followed
either by a commit notification or, in the case of an
invalid optimistic guess or a subsequent update, a new
update notification.
An optimistic view proxy maintains at most one
uncommitted snapshot - the one with the latest t S
- at any given time. If a new update arrives before
the current snapshot has committed, then we're
obliged to notify the new update to the view due to
the responsiveness requirement. The system may as
well discard the old snapshot since there is no way to
notify the view of its commit (as we don't expose V T 's
to views). As a result, an optimistic view gets a commit
notification only when the system quiesces, that
is, when no new updates are initiated in the system
before existing updates are committed.
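The optimistic proxy behavior just described can be summarized in a few lines. The sketch below is our own hypothetical Java rendering (names are ours; RC bookkeeping, graph guesses, and the actual guess checking are elided): it keeps at most one pending snapshot, notifies immediately, re-runs when a guess fails or a newer update arrives, and delivers a commit notification only when the pending snapshot commits.

// Sketch of an optimistic view proxy (hypothetical names).
class OptimisticViewProxy {
    interface View { void update(long snapshotVT); void commit(); }

    private final View view;
    private Long pendingSnapshotVT = null;   // at most one uncommitted snapshot

    OptimisticViewProxy(View view) { this.view = view; }

    // Called as soon as any attached model object is updated.
    void onUpdate(long greatestCurrentVT) {
        pendingSnapshotVT = greatestCurrentVT;  // discard any older pending snapshot
        view.update(greatestCurrentVT);         // notify immediately (optimistic)
    }

    // Called when the RC/RL guesses of the snapshot at vt are confirmed.
    void onSnapshotCommitted(long vt) {
        if (pendingSnapshotVT != null && pendingSnapshotVT == vt) {
            pendingSnapshotVT = null;
            view.commit();                      // only reached when the system quiesces
        }
    }

    // Called when a guess is denied (abort or straggler): re-run the snapshot.
    void onSnapshotInvalidated(long newGreatestCurrentVT) {
        onUpdate(newGreatestCurrentVT);
    }
}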
4.2 Pessimistic Views
Recall that the system makes two guarantees to a
pessimistic view: (1) never to show any uncommitted
values, and (2) to show all committed values in
monotonic order of applied updates.
A pessimistic view proxy initiates a snapshot at
every V T that one or more of its attached model objects
receive a committed update. However, it doesn't schedule a view notification for the snapshot until
the snapshot commits. Snapshot committal depends
on (1) the validity of model object reads at the snap-
shot's t S , and (2) whether its preceding snapshots have
already committed (this is due to the monotonicity
requirement). When one or more snapshots commit,
the view is notified, once for each committed snapshot,
in V T sequence. Thus, unlike an optimistic proxy, a
pessimistic proxy manages several uncommitted snapshots.
A pessimistic view proxy thus contains a list of
snapshot objects sorted by V T . It also contains a
field lastNotifiedVT which is the V T of the last up-date
notification.
To illustrate pessimistic view notification, let us say
that the view V in the example of Figure 6 is a pessimistic
view. Suppose that lastNotifiedVT = 80. Suppose
further that the snapshot at V T 100 is as yet uncommitted
and thus A's committed update at V T 100
is not yet notified. When the transaction at V T 110
commits, it informs the model object A, which in turn
informs the pessimistic view proxy V P . V P creates
a snapshot object, assigns it a t_S = 110, and records
the following guesses:
1. A "snapshot committed" guess (SC guess) that
the preceding snapshot at V T 100 will commit.
This stems from the monotonicity requirement.
2. An RL guess that the VT interval from 100 to 110 is free of committed updates for A. This also
stems from the monotonicity requirement. This
requires sending a CONFIRM-READ message to
A's primary copy and waiting for a response.
3. An RL guess that the V T interval from 100 to 110
is free of committed updates for B. This requires
a CONFIRM-READ message as above.
Eventually, if all the guesses made by a particular
snapshot object are confirmed, it commits, and it
can confirm the SC guess of its successor snapshot.
When any snapshot commits, all contiguous committed
snapshots after lastNotifiedVT are notified, and
lastNotifiedVT is updated.
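The queue discipline of a pessimistic proxy is simple once the guesses are tracked elsewhere. A minimal sketch in Java (our own hypothetical names; SC/RL guess tracking elided): snapshots are kept in VT order, and notifications are released only for a contiguous committed prefix.

import java.util.TreeMap;

// Sketch of a pessimistic view proxy (hypothetical names): notifications are
// delivered losslessly, in monotonic VT order, and only for committed snapshots.
class PessimisticViewProxy {
    interface View { void update(long snapshotVT); }

    private final View view;
    private final TreeMap<Long, Boolean> snapshots = new TreeMap<>(); // VT -> committed?
    private long lastNotifiedVT = Long.MIN_VALUE;

    PessimisticViewProxy(View view) { this.view = view; }

    // A committed update at vt initiates a snapshot; it is not yet notified.
    void onCommittedUpdate(long vt) { snapshots.putIfAbsent(vt, false); }

    // Called once the snapshot's RL guesses and its SC guess are confirmed.
    void onSnapshotCommitted(long vt) {
        snapshots.put(vt, true);
        // Notify all contiguous committed snapshots after lastNotifiedVT, in VT order.
        while (!snapshots.isEmpty() && snapshots.firstEntry().getValue()) {
            long next = snapshots.firstKey();
            snapshots.remove(next);
            view.update(next);
            lastNotifiedVT = next;
        }
    }
}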
A straggling committed update, say at VT 105
for B in the example, may cause an RL guess to be
negated. In this case, when the straggling committed
update is notified to the proxy, a new snapshot is
created at V T 105 as given above. Additionally, the
RL guess made by the succeeding snapshot at V T 110
(guess (3) above) is revised to be for the V T interval
from 105 to 110 for B.
This algorithm implements the consistency and monotonicity
requirements for pessimistic views.
5.1 Related Work
The Decaf framework is designed for collaborative
work among a small and possibly widely distributed
collection of users. Consistency, responsiveness, and
ease of programming are important objectives.
ISIS [3] provides programming primitives for consistent
replication, although its implementation strategies
are pessimistic.
Interactive groupware systems have different performance
requirements and usage characteristics from
databases, leading to different choices for concurrency
control algorithms. First, almost all databases use
pessimistic concurrency control because it gives much
better throughput, a major goal of databases. In interactive
groupware systems, on the other hand, pessimistic
concurrency control strategies are not always
suitable because of impact on response times to user
actions - ensuring interactive response time is often
more important than throughput. Second, the possibility of conflicts among transactions is lower in groupware
systems because people typically use social protocols
to avoid most of the conflicts in parallel work.
Optimistic protocols based on Jefferson's Time
Warp [8] were originally designed for distributed simulation
environments. They have been successfully applied
in other application areas as well[7]. However,
one important characteristic of distributed simulation
is that there is usually an urgency to compute the final
result, but not necessarily to commit the intermediate
steps. In these protocols, the primary purpose
of "committing" is to free up space in the logs, not to
make the system state accessible to view. But in a co-operative
work environment such as ours, fast commit
is essential. The delay associated with waiting for at
most a single primary site per model object in Decaf
is typically considerably less than a Time Warp style
global sweep of the system would be.
The ORESTE [9] implementation provides a useful
model in which programmers define high-level operations
and specify their commutativity and masking re-
lations. One drawback is that there are no high-level
operations on multiple objects, nor are there ways of
combining multiple high-level operations into transac-
tions. To get the effect of transactions, one must combine
what are normally thought of as multiple objects
into single objects and then define new single-operand
operations whose effects are equivalent to the effects
of the transaction. One must then explicitly specify
the interactions between the new operations and all
the other operations.
There is also a subtle difference between the correctness
requirements in Decaf and in ORESTE.
This difference results from the fact that ORESTE
only considers quiescent state - the analysis does not
consider "read transactions" (e.g., snapshots) which
can coexist with "update transactions". For instance,
in the ORESTE model, a transaction which changes
an object's color can reasonably be said to commute
with a transaction which moves an object from container
A to container B, since for example, starting
with a red object at A and applying both "change
to blue" and "move to B" yields a blue object at B
regardless of the order in which the operations are ap-
plied. But once viewers or read-only transactions or
system state in non-quiescent conditions is taken into
account, some sites might see a transition in which a
blue object was at A and others a transition in which
a red object was at B.
Finally, in ORESTE a state cannot be committed
to an external viewer until it is known that there is
no straggler; this involves a global sweep analogous to
Jefferson's Global Virtual Time algorithm. In a system
with a single group of collaborating applications,
this may not be too severe a problem. In a world-wide
web in which sites A, B, and C are collaborating, and
independently sites C, D, and E are collaborating, and
and F are collaborating, etc., it is preferable not to
have commit depend on the global state of the net-
work, but rather on a small number of objects.
A recent system, COAST [14], also attempts to use
optimistic execution of transactions with the MVC
paradigm for supporting groupware applications. Key
differences with our system are the following. First,
COAST only supports optimistic views. Second, concurrency
algorithms used in COAST assume that all
model objects in the application are shared among all
participants. Furthermore, the optimistic algorithm
implemented in COAST is based on a variation of the
algorithm discussed above.
5.2 Status and Experience
A substantial implementation of the Decaf frame-work
has been completed in the Java programming
language. The framework currently supports scalar
model objects, transactions, and optimistic and pessimistic
views. The implementations of these objects
use the algorithms described in this paper. Several
optimizations are forthcoming, including commit del-
egation, faster commit of snapshots, and incremental
propagation. We are currently implementing composite
model objects.
Several collaborative applications have been successfully
built using the current prototype implemen-
tation. These include several groupware applications
that allow an insurance agent to help clients understand
insurance products via data visualization and
fill out insurance forms, a multi-user chat program,
and simple games. Our preliminary experience is that
it is easy to write new applications or to modify existing
programs to use our MVC programming para-
digm. Optimistic views have been very useful due to
their fast response, and also due to the low conflict
rate in typical use. Pessimistic views have also been
useful for viewers that want to track all changes to the
values of model objects.
6 Conclusions
The Decaf framework's major objectives are ease
of programming, and responsiveness in the context of
systems of collaborating applications.
The ease of programming is achieved primarily
through hiding all concerns about distribution, multi-
threading, and concurrency from users. Programmers
write at a high level, using the Model-View-Controller
paradigm, and our implementation transparently converts
operations on model objects to operations on
distributed replicated model objects. The View Notification
algorithm automatically schedules view snap-shots
at appropriate times and also allows viewers
to respond efficiently to small changes to large ob-
jects. Model objects for standard data types (Inte-
gers, Strings, etc.) and collections (e.g., Vectors) are
provided as part of the Decaf infrastructure.
The responsiveness results from the use of optimism
combined with the fast commit protocol of the primary
copy algorithm. If a transaction updates objects
A and B, then a viewer of B', a replica of B, sees the commit as soon as A's primary site and B's primary site have each notified the originating site that the updates are non-conflicting, and the originating site has notified the site of B' that the transaction has com-
mitted. This is a small delay, even for a pessimistic
view. Users can get even more rapid response time
using optimistic views, and most of the time their optimistic
view will later be committed with the same
speed as the pessimistic view.
Our experience with using Decaf has shown the
architecture and algorithms to be well suited for a variety
of groupware applications.
Acknowledgements
We gratefully acknowledge Gary Anderson's input to
the design of our framework. He has also built several
collaborative applications and components on top of
our framework.
--R
XTV: A framework for sharing X window clients in remote synchronous collaboration
Optimistic parallelization of communicating sequential processes.
Pat Stephenson
The exclusive-writer approach to updating replicated files in distributed processing systems
Concurrency control in groupware systems.
The time warp mechanism for database concurrency control.
Virtual time.
An algorithm for distributed groupware applications.
A Cookbook for Using the Model-View-Controller User Interface Paradigm in Smalltalk-80
Support for building efficient collaborative applications using replicated objects.
Jan Schummer
Synthesizing distributed and parallel programs through optimistic transformations.
--TR
--CTR
Sumeer Bhola , Mustaque Ahamad, 1/k phase stamping for continuous shared data (extended abstract), Proceedings of the nineteenth annual ACM symposium on Principles of distributed computing, p.181-190, July 16-19, 2000, Portland, Oregon, United States
Guruduth Banavar , Sri Doddapaneni , Kevan Miller , Bodhi Mukherjee, Rapidly building synchronous collaborative applications by direct manipulation, Proceedings of the 1998 ACM conference on Computer supported cooperative work, p.139-148, November 14-18, 1998, Seattle, Washington, United States
Christian Schuckmann , Jan Schmmer , Peter Seitz, Modeling collaboration using shared objects, Proceedings of the international ACM SIGGROUP conference on Supporting group work, p.189-198, November 14-17, 1999, Phoenix, Arizona, United States
James Begole , Randall B. Smith , Craig A. Struble , Clifford A. Shaffer, Resource sharing for replicated synchronous groupware, IEEE/ACM Transactions on Networking (TON), v.9 n.6, p.833-843, December 2001
Wanlei Zhou , Li Wang , Weijia Jia, An analysis of update ordering in distributed replication systems, Future Generation Computer Systems, v.20 n.4, p.565-590, May 2004 | optimistic concurrency control;pessimistic views;optimistic views;groupware;replicated objects;model-view-controller programming paradigm |
277653 | Complete removal of redundant expressions. | Partial redundancy elimination (PRE), the most important component of global optimizers, generalizes the removal of common subexpressions and loop-invariant computations. Because existing PRE implementations are based on code motion, they fail to completely remove the redundancies. In fact, we observed that 73% of loop-invariant statements cannot be eliminated from loops by code motion alone. In dynamic terms, traditional PRE eliminates only half of redundancies that are strictly partial. To achieve a complete PRE, control flow restructuring must be applied. However, the resulting code duplication may cause code size explosion.This paper focuses on achieving a complete PRE while incurring an acceptable code growth. First, we present an algorithm for complete removal of partial redundancies, based on the integration of code motion and control flow restructuring. In contrast to existing complete techniques, we resort to restructuring merely to remove obstacles to code motion, rather than to carry out the actual optimization.Guiding the optimization with a profile enables additional code growth reduction through selecting those duplications whose cost is justified by sufficient execution-time gains. The paper develops two methods for determining the optimization benefit of restructuring a program region, one based on path-profiles and the other on data-flow frequency analysis. Furthermore, the abstraction underlying the new PRE algorithm enables a simple formulation of speculative code motion guaranteed to have positive dynamic improvements. Finally, we show how to balance the three transformations (code motion, restructuring, and speculation) to achieve a near-complete PRE with very little code growth.We also present algorithms for efficiently computing dynamic benefits. In particular, using an elimination-style data-flow framework, we derive a demand-driven frequency analyzer whose cost can be controlled by permitting a bounded degree of conservative imprecision in the solution. | Introduction
Partial redundancy elimination (PRE) is a widely used and
effective optimization aimed at removing program statements
that are redundant due to recomputing previously
produced values [27]. PRE is attractive because by targeting
statements that are redundant only along some execution
paths, it subsumes and generalizes two important
value-reuse techniques: global common subexpression elimination
and loop-invariant code motion. Consequently, PRE
serves as a unified value-reuse optimizer.
Most PRE algorithms employ code motion [11, 12, 14,
15, 16, 17, 25, 27], a program transformation that reorders
instructions without changing the shape of the control flow
graph. Unfortunately, code-motion alone fails to remove
routine redundancies. In practice, one half of computations
that are strictly partially redundant (not redundant
along some paths) are left unoptimized due to code-motion
obstacles. In theory, even the optimal code-motion algorithm
[25] breaks down on loop invariants in while-loops,
unless supported by explicit do-until conversion. Recently,
Steffen demonstrated that control flow restructuring can remove
from the program all redundant computations, including
conditional branches [31]. While his property-oriented
expansion algorithm (Poe) is complete, it causes unnecessary
code duplication.
As the first step towards a complete PRE with affordable
code growth, this paper presents a new PRE algorithm
based on the integration of code motion and control flow
restructuring, which allows a complete removal of redundant
expressions while minimizing code duplication. No
prior work systematically treated combining the two trans-
formations. We control code duplication by restricting its
scope to a code-motion preventing (CMP) region, which localizes
adverse effects of control flow on the desired value
reuse. Whereas the Poe algorithm applied to expression
elimination (denoted PoePRE) uses restructuring to carry
out the entire transformation, we apply the more economical
code-motion transformation to its full extent, resorting
to restructuring merely to enable the necessary code motion.
The resulting code growth is provably not greater than that
of PoePRE; on spec95, we found it to be three times smaller.
Second, to answer the overriding question of how complete
a feasible PRE algorithm is allowed to be, we move
from theory to practice by considering profile information.
Using the dynamic amount of eliminated computations as
the measure of optimization benefit, we develop a profile-guided
PRE algorithm that limits the code growth cost
[Figure 1: Complete PRE through integration of code motion and control flow restructuring. a) source program: a loop whose body conditionally computes a+b and c+d (guarded by conditions such as O and R) and exits when P holds; b) PoePRE of [a+b], with nodes duplicated to make [a+b] fully redundant; c) our optimization of [a+b], with nodes duplicated only to allow code motion of [a+b]; d) our optimization of [c+d], with nodes duplicated for complete optimization of [c+d]; e) trade-off variant of d), with nodes duplicated for partial optimization of [c+d].]
by sacrificing those value-reuse opportunities that are infrequent
but require significant duplication. Third, we describe
how and when speculative code motion can be used instead
of restructuring, and how to guarantee that speculative PRE
is profitable. Finally, we demonstrate that a near-complete
PRE with very little code growth can be achieved by integrating
the three PRE transformations: pure code motion,
restructuring, and speculative code motion.
All algorithms in this paper rely in a specific way on
the notion of the CMP region which is used to reduce both
code duplication and the program analysis cost. Thus, we
make the PRE optimization more usable not only by increasing
its effectiveness (power) through cost-sensitive restruc-
turing, but also by improving its efficiency (implementa-
tion). We develop compile-time techniques for determining
the impact of restructuring a program region on the dynamic
amount of eliminated computations. The run-time
benefit corresponds to the cumulative execution frequency
of control flow paths that will permit value reuse after the
restructuring. We describe how this benefit can be obtained
either using edge profiles, path-profiles [7], or through data-flow
frequency analysis [28].
As another contribution, we reduce the cost of frequency
analysis by presenting a frequency analyzer derived from a
new demand-driven data-flow analysis framework. Based on
interval analysis, the framework enables formulation of analyzers
whose time complexity is independent of the lattice
size. This is a requirement of frequency analysis whose lattice
is of infinite-height. Due to this requirement, existing
demand frameworks are unable to produce a frequency analyzer
[18, 23, 30]. Furthermore, we introduce the notion
of approximate data-flow frequency information, which conservatively
underestimates the meet-over-all-paths solution,
keeping the imprecision within a given degree. Approximation
permits the analyzer to avoid exploring program paths
guaranteed to provide insignificant contribution (frequency-
wise) to the overall solution. Besides PRE, the demand-driven
approximate frequency analysis is applicable in interprocedural
branch correlation analysis [10] and dynamic
optimizations [5].
Let us illustrate our PRE algorithms on the loop in Figure
1(a). Assume no statement in the loop defines variables a, b, c, or d. Although the computations [a+b] and [c+d] are
loop-invariant, removing them from the loop with code motion
is not possible. Consider first the optimization of [a+b].
This computation cannot be moved out of the loop because
it would be executed on the path En; O;P;Ex, which does
not execute [a + b] in the original program. Because this
could slow down the program and create spurious excep-
tions, PRE disallows such unsafe code motion [25].
The desired optimization is only possible if the CFG is
restructured. The PoePRE algorithm [31] would produce
the program in Figure 1(b), which was created by duplicating
each node on which the value of [a+b] was available only
on a subset of incoming paths. While [a + b] is fully opti-
mized, the scope of restructuring is unnecessarily large. Our
complete optimization (ComPRE) produces the program in
Figure
1(c), where code duplication is applied merely to enable
the necessary code motion. In this example, to move
[a out of the loop, it is sufficient to separate out the
offending path En;O; P;Ex which is encapsulated in the
CMP region highlighted in the figure. As no opportunities
for value reuse remain, the resulting optimization of [a
is complete. Because restructuring may generate irreducible
programs, as in Figure 1(c), we also present a restructuring
transformation that maintains reducibility.
Hoisting the loop invariant [a+b] out of the loop was
prevented by the shape of control flow. Our experiments
show that the problem of removing loop invariant code (LI)
has not been sufficiently solved: a complete LI is prevented
for 73% of loop-invariant expressions. In some cases, a simple
transformation may help. For example, [a+b] can be optimized by peeling off one loop iteration and performing the traditional LI [1], producing the program in Figure
1(b). In while-loops, LI can often be enabled
with more economical do-until conversion. The example presented
does not allow this transformation because the loop
exit does not post-dominate the loop entry. In effect, our
restructuring PRE is always able to perform the smallest
necessary do-until conversion for an arbitrary loop.
Next, we optimize the computation [c+d] in Figure 1(c).
Our optimization performs a complete PRE of [c +d] by duplicating
the shaded CMP region and subsequently performing
the code motion (Figure 1(d)). The resulting program
may cause too much code growth, depending on the sizes
of duplicated basic blocks. Assume the size of block S outweighs
the run-time gains of eliminating the upper [c+d].
In such a case, we select a smaller set of nodes to duplicate,
as shown in Figure 1(e). When only block Q is duplicated,
the optimization is no longer complete; however, the optimization
cost measured as code growth is justified with
the corresponding run-time gain. In Section 3.2, speculative
code motion is used to further reduce code duplication.
In summary, this paper makes the following contributions:
- We present an approach for integrating two widely used code transformation techniques, code motion and code restructuring. The result is an algorithm for PRE that is complete (i.e., it exploits all opportunities for value reuse) and minimizes the code growth necessary to achieve the code motion.
- We show that restricting the algorithm to code motion produces the traditional code-motion PRE [17, 25].
- Profile-guided techniques for limiting the code growth through integration of selective duplication and speculative code motion are developed.
- We develop a demand-driven frequency analyzer based on a new elimination data-flow analysis framework.
- The notion of approximate data-flow information is defined and used to improve analyzer efficiency.
- Our experiments compare the power of code-motion PRE, speculative PRE, and complete PRE.
Section 2 presents the complete PRE algorithm. Section 3
describes profile-guided versions of the algorithm and Section
4 presents the experiments. Section 5 develops the
demand-driven frequency analyzer. The paper concludes
with a discussion of related work.
2 Complete PRE
In this section, we develop an algorithm for complete removal
of partial redundancies (ComPRE) based on the integration
of code motion and control flow restructuring. Code
motion is the primary transformation behind ComPRE. To
reduce code growth, restructuring is used only to enable
hoisting through regions that prevent the necessary code
motion. The smallest set of motion-blocking nodes is identified
by solving the problems of availability and anticipabil-
ity on an expressive lattice. We also show that when control
flow restructuring is disabled, ComPRE becomes equivalent
to the optimal code-motion PRE algorithm [25].
An expression is partially redundant if its value is computed
on some incoming control flow path by a previous
expression. Code-motion PRE eliminates the redundancy
by hoisting the redundant computation along all paths until
it reaches an edge where the reused value is available along
either all paths or no paths. In the former case, the computation
is removed; in the latter, it is inserted to make the
original computation fully redundant. Unfortunately, code
motion may be blocked before such edges are reached. Nodes
that prevent the desired code motion are characterized by
the following set of conditions:
1. hoisting of expression e across node n is necessary when
a) an optimization candidate follows n: there is a computation
of e downstream from n on some path, and
b) there is a value-reuse opportunity for e at node n: a
computation of e precedes n on some path.
2. hoisting of e across n is disabled when
c) any path going through n does not compute e in the
source program: such path would be impaired by the
computation of e.
All three conditions are characterizable via solutions to the
data-flow problems of anticipability and availability, which
are defined as follows.
Let p be any path from the start node to a node n. The expression e is available at n along p iff e is computed on p without subsequent redefinition of its operands. Let r be any path from n to the end node. The expression e is anticipated at n along r iff e is computed on r before any of its operands are defined. The availability of e at the entry of n w.r.t. the incoming paths is defined as:
AVAIL_in[n, e] = Must if e is available along all incoming paths,
                 No   if e is available along no incoming paths,
                 May  if e is available along some but not all incoming paths.
Anticipability (ANTIC) is defined analogously.
Given this refined value-reuse definition, code motion is necessary when a) and b) defined above hold mutually. Hence,
ANTIC_in[n, e] ≠ No ∧ AVAIL_in[n, e] ≠ No.
Code motion is disabled when the condition c) holds:
ANTIC_in[n, e] ≠ Must ∧ AVAIL_in[n, e] ≠ Must.
A node n prevents the necessary code motion for e when the motion is necessary but disabled at the same time. By way of conjunction, we get the code motion-preventing condition:
Prevented(n, e) ⟺ ANTIC_in[n, e] = May ∧ AVAIL_in[n, e] = May.
The predicate Prevented characterizes the smallest set of
nodes that must be removed for code motion to be enabled.
[Figure 2: Removing obstacles to code motion via restructuring. a) code motion prevented by the CMP region; b) CMP region diluted via code duplication (code motion becomes possible); c) complete PRE of [a+b]. Nodes are annotated with their AVAIL and ANTIC solutions (No, May, Must).]
Definition 2 (CMP region). The Code-Motion Preventing region, denoted CMP[e], is the set of nodes that prevent hoisting of a computation e: CMP[e] = {n | ANTIC_in[n, e] = May ∧ AVAIL_in[n, e] = May}.
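Given maximal-fixed-point solutions for ANTIC_in and AVAIL_in over this lattice, membership in CMP[e] is a purely local test. The following is a small illustrative Java sketch, with our own types and names (it is not code from the paper):

// Sketch: local test for membership in the code-motion preventing region CMP[e].
enum Val { BOT, NO, MUST, MAY }   // the four lattice values

final class CmpTest {
    // Hoisting e across n is necessary but disabled exactly when both
    // solutions at n's entry are May (Definition 2).
    static boolean inCMP(Val anticIn, Val availIn) {
        return anticIn == Val.MAY && availIn == Val.MAY;
    }

    public static void main(String[] args) {
        System.out.println(inCMP(Val.MAY, Val.MAY));   // true: node prevents code motion
        System.out.println(inCMP(Val.MUST, Val.MAY));  // false: insertion above n is safe
    }
}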
To enable code motion, ComPRE removes obstacles presented
by the CMP region by duplicating the entire region,
as illustrated in Figure 2. The central idea is to factor the
May-availability that holds in the entire region into Must-
and No-availability, to hold respectively in each region copy.
An alternative view is that we separate within the region the
paths with Must- and No-availability. To achieve this, we
can observe that a) no region entry edge is May-available,
and b) the solution of availability within the region depends
solely on solutions at entry edges (the expression is neither
computed nor killed within the region). Hence, the desired
factoring can be carried out by attaching to each region
copy the subset of either Must or No entry edges, as shown
in
Figure
2(c).
After the CMP is duplicated, the condition Prevented is
false on each node, enabling code motion. The ComPRE
algorithm, shown in Figure 3, has the following three steps:
1. Compute anticipability and availability. The problems use the lattice L = {⊥, No, Must, May}, the powerset lattice of {No, Must} (with ⊥ = ∅ and May = {No, Must}); the flow functions are distributive under the merge operator that combines the solutions arriving along different paths, under which merging No with Must yields May and ⊥ is the identity. The distributivity property implies that data-flow facts are not approximated at control flow merge points. Intuitively, this is because No and Must are the only facts that may hold along an individual path. (A small sketch of this lattice and its merge operator is given after the three steps.)
2. Remove CMP regions via control flow restructuring.
Given an expression e, the CMP region is identified by
examining the data-flow solutions locally at each node.
Line 2 in Figure 3 duplicates each CMP node and line 3
adjusts the control flow edges, so that the new copy of
the region hosts the Must solution. Restructuring necessitates
updating data-flow solutions within the CMP
region (lines 4-12). While the ANTIC solution is not
altered, the previously computed AVAIL solution is invalidated
because some paths flowing into the region
were eliminated when region entry edges were discon-
nected. For the expression e, AVAIL becomes either
Must or No in the entire region. For other expressions,
the solution may become (conservatively) imprecise. In
other words, splitting a May path into Must/No paths
for e might have also split a May path for some other
expression. Therefore, line 6 resets the initial guess and
lines 10-12 recompute the solution within the CMP.
3. Optimize the program. The code motion transformation
is carried out by replacing each original computation
e with a temporary variable te . The temporary
is initialized with a computation inserted into each
No-available edge that sinks either into a May/Must-
availability path or into an original computation. The
insertion edge must also be Must-anticipated, to verify
hoisting of the original computation to the edge.
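For concreteness, the merge used in Step 1 can be written as a small table over the four lattice values. The following Java sketch is our own encoding (names are ours, not the paper's): it merges per-path facts so that combining No with Must yields May, with ⊥ as the identity.

// Sketch of the four-point lattice used by the availability/anticipability
// analyses and of the merge applied at control-flow merge points.
enum Fact { BOT, NO, MUST, MAY }

final class Lattice {
    // Merge of facts arriving along different paths: BOT is the identity,
    // equal facts are preserved, and No merged with Must gives May.
    static Fact merge(Fact a, Fact b) {
        if (a == Fact.BOT) return b;
        if (b == Fact.BOT) return a;
        if (a == b) return a;
        return Fact.MAY;
    }

    public static void main(String[] args) {
        System.out.println(merge(Fact.NO, Fact.MUST));   // MAY
        System.out.println(merge(Fact.MUST, Fact.MUST)); // MUST
    }
}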
Theorem 1 (Completeness). ComPRE is optimal in
that it minimizes the number of computations on each path.
Proof. First, each original computation is replaced with
a temporary. Second, no computation is inserted where its
value is available along any incoming path. Hence, no additional
computations can be removed.
Within the domain of the Morel and Renviose code-
motion transformation, where PRE is accomplished by
hoisting optimization candidates (but not other statements)
[27], ComPRE achieves minimum code growth. 1 This follows
from the fact that after CMP restructuring, no program
node can be removed or merged with some other node without
destroying any value reuse, as shown by the following
observations. Prior to Step 2, each node n may belong to
CMP regions of multiple offending expressions. Duplication
of n during restructuring can be viewed as partitioning of
control flow paths going through n: each resulting copy of
n is a path partition that does not contain both a Must-
and a No-available path, for any offending expression.
(Footnote 1: Outside this domain, further code growth reduction is possible by moving instructions out of the CMP before its duplication.)
Step 1: Data-flow analysis: anticipability, availability.
- Input: a control flow graph G; each node contains a single assignment x := e.
- Comp(n, e): node n computes an expression e.
- Transp(n, e): node n does not assign any variable in e.
- Boundary conditions: for each expression e, ANTIC_out[end, e] := AVAIL_in[start, e] := No.
- Initial guess: set all vectors to ⊥^S, where S is the number of candidate expressions. Solve iteratively:
  ANTIC_in[n, e]  := Must if Comp(n, e); No if ¬Transp(n, e); ANTIC_out[n, e] otherwise.
  ANTIC_out[n, e] := merge of ANTIC_in[m, e] over all successors m of n.
  AVAIL_in[n, e]  := merge of AVAIL_out[m, e] over all predecessors m of n.
  AVAIL_out[n, e] := f^e_n(AVAIL_in[n, e]), where
    f^e_n(x) = Must if Comp(n, e) ∧ Transp(n, e); No if ¬Transp(n, e); x otherwise.

Step 2: Remove CMP regions: control flow restructuring.
- Modify G so that no CMP node exists for any expression e.
1  for each expression e do
2    duplicate all CMP[e] nodes to create a copy of the CMP; n_Must is the copy of node n hosting Must
3    attach the new nodes to perform the restructuring: redirect the Must-available entry edges to the n_Must copies, mirror the edges internal to the CMP between the copies, and give the copies the exit edges of their originals
     (update data-flow solutions within the CMP and its copy)
4    for each node n in CMP[e] do
5      ANTIC_in[n_Must] := ANTIC_in[n];  ANTIC_out[n_Must] := ANTIC_out[n]
6      AVAIL_in[n_Must] := AVAIL_in[n] := ⊥^S;  AVAIL_out[n_Must] := AVAIL_out[n] := ⊥^S
7      AVAIL_in[n_Must, e] := AVAIL_out[n_Must, e] := Must
8      AVAIL_in[n, e] := AVAIL_out[n, e] := No
9    end for
     (reanalyze availability inside both CMP copies)
10   for each expression e' not yet processed do
11     recompute AVAIL[., e'] within CMP[e] and its copy
12   end for
13 end for

Step 3: Optimize: code motion.
Insert[(n, m), e] ⟺ ANTIC_in[m, e] = Must ∧ AVAIL_out[n, e] = No ∧ (AVAIL_in[m, e] ≠ No ∨ Comp(m, e))
Replace[n, e] ⟺ Comp(n, e)

Figure 3: ComPRE: the algorithm for complete PRE.
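Step 3 of Figure 3 is a local test per edge and per node. The sketch below is our own compact Java rendering of those two predicates (names are ours, and the predicate forms follow the reconstruction given above; the data-flow solutions are assumed to be available):

// Sketch of ComPRE's transformation predicates (Step 3 of Figure 3).
final class Transform {
    enum DF { NO, MUST, MAY }

    // Insert[(n,m), e]: initialize the temporary t_e on edge (n, m).
    static boolean insertOnEdge(DF anticInM, DF availOutN, DF availInM, boolean compM) {
        return anticInM == DF.MUST            // hoisting to the edge is verified
            && availOutN == DF.NO             // the edge itself carries no reusable value
            && (availInM != DF.NO || compM);  // it sinks into a reuse path or a computation
    }

    // Replace[n, e]: rewrite the original computation in node n to use t_e.
    static boolean replaceInNode(boolean compN) { return compN; }
}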
The following properties of Step 2 can be verified: 1) the number
of path partitions (node copies) created at a given node is
independent of the order in which expressions are considered
(in line 1), 2) each node copy is reachable from the start
node, and 3) for any two copies of n there is an expression e
such that remerging the two copies and their incoming paths
will prevent code motion of e across the resulting node.
To compare ComPRE with a restructuring-only PRE,
we consider PoePRE, a version of Steffen's complete algorithm
[31] that includes minimization of duplicated nodes
but is restricted in that only expressions are eliminated (as
is the case in ComPRE). Elimination is carried out using a
temporary, as in Step 3.
Theorem 2 ComPRE does not create more new nodes
than PoePRE.
Proof outline. The proof is based on showing that the
PoePRE-optimized program after minimization has no less
nodes than the same program after CMP restructuring. It
can be shown that, given an original node n, for any two
copies of n created by CMP restructuring, there are two
distinct copies of n created by PoePRE such that the minimization
cannot merge them without destroying some value
reuse opportunity.
In fact, PoePRE can be expressed as a form of Com-
PRE on a (non-strictly) larger region: for each computation
e, PoePRE duplicates {n | ANTIC_in[n, e] ∈ {Must, May} ∧ AVAIL_in[n, e] = May}, which is a superset of CMP[e].
Algorithm complexity. Data-flow analysis in Step 1 and
in lines 10-12 requires O(NS) steps, where N is the flow
graph size and S the number of expressions. The restructuring
in Step 2, however, may cause N to grow exponentially,
as each node may need to be split for multiple expressions.
Because in practice a constant-factor code-growth budget is
likely to be defined, the asymptotic program size will not
change. Therefore, the running time of Step 2, which dominates
the entire algorithm, is O(NS 2 ).
2.1 Optimal Code-Motion PRE
Besides supporting a complete PRE, the notion of the CMP
region also facilitates an efficient formulation of code-motion
PRE, called CM-PRE. In this section, we show that our complete
algorithm can be naturally constrained by prohibiting
the restructuring, and that such modification results in the
same optimization as the optimal motion-only PRE [17, 25].
In comparison to ComPRE, the constrained CM-PRE
algorithm bypasses the CMP removal; the last step (trans-
formation) is unchanged (Figure 3). The first step (data-
flow analysis) is modified with the goal to prevent hoisting
across a node n when such motion would subsequently be
blocked by a CMP region on each path flowing into node
n. First, anticipability is computed as in ComPRE. Second,
availability is modified to include detection of CMP nodes.
When a CMP node is found, instead of propagating forward
May-availability, the solution is adjusted to No. Such adjustment
masks those value reuse opportunities that cannot
be exploited without restructuring. The result of masking is
that code motion is prevented from entering paths that cross
a CMP region (see predicate Insert in Step 3 of Figure 3).
The modified flow function for the AVAIL problem fol-
lows. The third line detects a CMP node. No-availability
is now extended to mean that the value might be available
along some path but value reuse is blocked by a CMP region
along that path.
f^e_n(x) := Must if Comp(n, e) ∧ Transp(n, e),
            No if ¬Transp(n, e),
            No if x = May ∧ ANTIC_in[n, e] ≠ Must (n is a CMP node),
            x otherwise.
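In code, the redefinition only adds one case to the availability flow function: at a CMP node, the incoming May is turned into No, so that insertion points never commit to a reuse that only restructuring could realize. The following is a hedged Java sketch (our own names; the third case matches the reconstruction above):

// Sketch of the modified availability flow function used by CM-PRE.
final class CmPreFlow {
    enum DF { NO, MUST, MAY }

    static DF availOut(DF availIn, DF anticIn, boolean comp, boolean transp) {
        if (comp && transp) return DF.MUST;   // value generated in this node
        if (!transp) return DF.NO;            // operands redefined: killed
        if (availIn == DF.MAY && anticIn != DF.MUST) return DF.NO;  // CMP node: mask the reuse
        return availIn;                       // otherwise pass through
    }
}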
Given a maximal fixed point solution to redefined AVAIL,
CM-PRE performs the unchanged transformation phase (Figure 3, Step 3). It is easy to show that the resulting
optimization is complete under the immutable shape of the
control flow graph. The proof is analogous to that of Theorem
1: all original computations are removed and no computation
has been inserted where an optimization opportunity
not blocked by a CMP exists.
Besides exploiting all opportunities, a PRE algorithm
should guarantee that the live ranges of inserted temporary
variables are minimal, in order to moderate the register pres-
sure. The live range is minimal when the insertion point
specified by the predicate Insert cannot be delayed, that is,
moved further in the direction of control flow.
Theorem 3 (Shortest live ranges). Given the CMP-
restructured (or original) control flow graph, ComPRE
(CM-PRE) is optimal in that it minimizes the live range
lengths of inserted temporary variables.
Proof. An initialization point Insert cannot be delayed either
because it would become partially redundant, destroying
completeness, or because its temporary variable is used
in the immediate successor.
Existing PRE algorithms find the live-range optimal
placement in two stages. First, computations are hoisted as
high as possible, maximizing the removal of redundancies.
Later, the placement is corrected through the computation
of delayability [25]. Our formulation specifies the optimal
placement directly, as we never hoist into paths where a
blocking CMP will be subsequently encountered.
However, note that after the above redefinition, f_n^e is no
longer monotone: given ANTIC_in[n; e] = May, x1 = May, and x2 =
Must, we have x1 ⊑ x2 but f_n^e(x1) = No while f_n^e(x2) =
Must. Although a direct approach to solving such a system
of equations may produce a conservatively imprecise solution,
the desired maximal fixed point is easily obtained using bit-vector
GEN/KILL operations as follows.
First, compute ANTIC as in Figure 3. Second, solve the
well-known availability property, denoted AV_all, which holds
when the expression is computed along all incoming paths:
AV_all ≡ (AVAIL = Must). Finally, we compute AV_some, which
characterizes some-paths availability and also encapsulates
CMP detection: AV_some ≡ (AVAIL ≠ No). The pair of solutions
(AV_all, AV_some) can be directly mapped to the desired
solution of AVAIL. The GEN and KILL sets [1] for the
AV_some problem are given below. The value of the initial
guess is false; the meet operator is the bit-wise or.
GEN[n; e] = Comp(n; e), KILL[n; e] = ¬Transp(n; e) ∨ (AVAIL ≠ Must ∧ ANTIC ≠ Must)
The condition (AVAIL ≠ Must ∧ ANTIC ≠ Must) detects
the CMP node. While it is less strict than that in Definition
2, it is equivalent for our purpose, as it is safe to kill
availability when there is no reuse (AVAIL = No) or when there is no
hoisting (ANTIC = No).
Figure 4: Reducible restructuring: (a) source program, a single loop; (b) reducible ComPRE of [a+b], with a node copied for reducibility. (See Figure 1(c).)
The less strict condition is beneficial
because computing and testing Must requires one bit
per expression, while two bits are required for May. Consequently,
we can substitute ANTIC ≠ Must with ¬AN_all,
where AN_all is defined analogously to AV_all. As a result,
we obtain the same implementation complexity as the algorithms
in [17, 25]: three data-flow problems must be solved,
each requiring one bit of solution per expression.
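The bit-vector encoding can be sketched as follows; the mapping is ours and the KILL condition reflects the reconstruction above, so treat it as illustrative rather than as the paper's exact sets.

def map_to_avail(av_all, av_some):
    # Recover the three-valued AVAIL fact from the two one-bit solutions.
    if av_all:
        return "Must"
    return "May" if av_some else "No"

def kill_av_some(n, e, Transp, AV_all, AN_all):
    # Kill some-paths availability when e is killed outright, or when the node
    # is (conservatively) a CMP node: neither Must-available nor Must-anticipated,
    # tested with the one-bit AV_all and AN_all solutions.
    return (not Transp(n, e)) or (not AV_all[n, e] and not AN_all[n, e])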
In conclusion, the CMP region is a convenient abstraction
for terminating hoisting when it would unnecessarily
extend the live ranges. It also provides an intuitive way of
explaining the shortest-live-range solution without applying
the corrective step based on delayability [25]. Furthermore,
the CMP-based, motion-only solution can be implemented
as efficiently as existing shortest-live-range algorithms.
2.2 Reducible Restructuring
Duplicating a CMP region may destroy reducibility of the
control flow graph. In Figure 1(c), for example, ComPRE
resulted in a loop with two distinct entry nodes. Even
though PoePRE preserves reducibility on the same loop
Figure
1(b)), like other restructuring-based optimizations
[4, 10, 31], it is also plagued by introducing irreducibility.
One way to deal with the problem is to perform all optimizations
that presuppose single-entry loops prior to PRE.
However, many algorithms for scheduling (which should follow
PRE) rely on reducibility.
After ComPRE, a reducible graph can be obtained with
additional code duplication. An effective algorithm for normalizing
irreducible programs is given in [24]. To suppress
an unnecessary invocation of the algorithm, we can employ
a simple test of whether irreducibility may be created after
a region duplication. The test is based upon examining
only the CMP entry and exit edges, rather than the entire
program. Assuming we start from a reducible graph, re-structuring
will make a loop L irreducible only if multiple
CMP exit edges sink into L, and at least one region entry
is outside L (i.e., is not dominated by L's header node). If
such a region is duplicated, target nodes of region exit edges
may become the (multiple) loop entry nodes. Consider the
loop in Figure 4(a). Two of the three exits of CMP[a+b]
fall into the loop. After restructuring, they will become loop
entries, as shown in Figure 1(c).
Rather than applying a global algorithm like [24], a
straightforward approach to make the affected loop reducible
is to peel off a part of its body. The goal is to extend
the replication scope so that the region exits sink onto a single
loop node, which will then become the new loop entry.
Such a node is the closest common postdominator (within
the loop) of all the offending region exits and the original
loop entry. Figure 4(a) highlights node c+d whose duplication
after CMP restructuring will restore reducibility of the
loop. The postdominator of the offending exits is node Q,
which becomes the new loop header.
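The loop-by-loop test sketched above could look as follows; the loop representation and the dominance query are assumptions of ours, not structures from the paper.

def loops_made_irreducible(loops, region_entry_edges, region_exit_edges, dominates):
    """Return headers of loops that may become irreducible if the region is duplicated.
    loops: iterable of (header, body_node_set); edges are (src, dst) pairs."""
    affected = []
    for header, body in loops:
        exits_into_loop = [(s, d) for (s, d) in region_exit_edges if d in body]
        entry_outside = any(not dominates(header, d) for (s, d) in region_entry_edges)
        # Multiple exit edges sink into the loop and some region entry lies outside
        # it: after duplication the exit targets can become extra loop entries.
        if len(exits_into_loop) > 1 and entry_outside:
            affected.append(header)
    return affected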
3 Profile-Guided PRE
While the CMP region is the smallest set of nodes whose
duplication enables the desired code motion, its size is often
prohibitive in practice. In this section, relying on the profile
to estimate optimization benefit, complete PRE is made
more practical by avoiding unprofitable code replication.
First, we extend ComPRE by inhibiting restructuring in
response to code duplication cost and the expected dynamic
benefit. The resulting profile-guided algorithm duplicates a
CMP region only when the incurred code growth is justified
by a corresponding run-time gain from eliminating the
redundancies. Second, the notion of the CMP region is combined
with profiling to formulate a speculative code-motion
PRE that is guaranteed to have a positive dynamic effect,
despite impairing certain paths. The third algorithm integrates
both restructuring and speculation and selects a
profitable subgraph of the CMP for each. While profitably
balancing the cost and benefit under a given profile is NP-
hard, the empirically small number of hot program paths
promises an efficient algorithm [4, 19]. Finally, to support
profile guiding, we show how an estimate of the run-time
gain thwarted by a CMP region can be obtained using edge
profiles, frequency analysis [28], or path profiles [7].
3.1 Selective Restructuring
We model the profitability of duplicating a CMP region R
with a cost-benefit threshold predicate T (R), which holds
if the region optimization benefit exceeds a constant multiple
of the region size. Our metric of benefit is the dynamic
amount of computations whose elimination will be
enabled after R is duplicated, denoted Rem(R). That is,
true for each
region R, the algorithm is equivalent to the complete Com-
PRE. When T (R) = false for each region, the algorithm
reduces to the code-motion-only CM-PRE. Obviously, predicate
determines only a sub-optimal tradeoff between exploiting
PRE opportunities and limiting the code growth.
In particular, it does not explicitly consider the instruction
cache size and the increase in register pressure due to introduced
temporary variables. We have chosen this form of T
in order to avoid modeling complex interactions among compiler
stages. In the implementation, T is supplemented with
a code growth budget (for example, in [6], code is allowed
to grow by about 20%).
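For concreteness, the threshold test might be coded as below; the constant c and the budget handling are illustrative assumptions rather than values from the paper.

def threshold_T(region_nodes, rem, c=1.0, budget_left=None):
    # T(R): duplicate R only if the dynamic benefit Rem(R) exceeds a constant
    # multiple of the region size, and the remaining code-growth budget allows it.
    if budget_left is not None and len(region_nodes) > budget_left:
        return False
    return rem > c * len(region_nodes)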
First, we present an algorithm for computing the optimization
benefit Rem(R). The method is based on the fact
Step 1: compute anticipability and availability. (unchanged)
Step 2: Partial restructuring: remove profitable CMP regions.
1   for each computation e do
2     for each disconnected subregion R_i of CMP[e] do
        build the largest connected subregion:
3       select a node from R_i and collect all connected CMP nodes
        determine optimization benefit Rem(R_i):
4       carry out frequency analysis of AVAIL on R_i
5       if profitable, duplicate R_i (lines 2-12 of Fig. 3)
6     end for
7   end for
8   recompute the AVAIL solution, using f_n^e from Section 2.1
Step 3: Optimize: code motion. (unchanged)
Figure 5: PgPRE: profile-guided version of ComPRE.
that the CMP scope localizes the entire benefit thwarted by
the region: to compute the benefit, it suffices to examine
only the paths within the region. Consider an expression
e and its CMP region R = CMP[e]. For each region exit
edge a = (n; m), with n ∈ CMP[e] and m ∉ CMP[e], the value
of ANTIC_in[m; e] is either Must or No, otherwise m would
be in CMP[e]. Let Exits_Must(R) be the set of the Must exit
edges. The dynamic benefit is derived from the observation
that each time such an edge is executed, any outgoing path
contains exactly one computation of e that can be eliminated
if: i) R is duplicated and ii) the value of e is available at the
exit edge. Let ex(a) be the execution frequency of edge a
and p(AVAIL_out[n; e]) the probability that the value of
e is available when n is executed. After the region is duplicated,
the expected benefit connected with the exit edge a = (n; m)
is ex(a) · p(AVAIL_out[n; e]), which corresponds to the
number of computations removed on all paths starting at a.
The benefit of duplicating the region R is thus the sum of
all exit edge benefits:
Rem(R) = Σ_{a=(n;m) ∈ Exits_Must(R)} ex(a) · p(AVAIL_out[n; e]).
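In code, the benefit estimate is a single weighted sum over the Must exit edges (a sketch; ex and p_avail_out stand for the edge profile and the frequency-analysis result discussed next):

def region_benefit(exits_must, ex, p_avail_out):
    # Rem(R): expected number of computations of e removed once R is duplicated.
    # exits_must holds the Must-anticipable exit edges a = (n, m) of the region.
    return sum(ex[(n, m)] * p_avail_out[n] for (n, m) in exits_must)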
The probability p is computed from an edge profile using
frequency analysis [28]. In the frequency domain, the probability
of each data-flow fact occurring, rather than the
mere boolean meet-over-all-paths existence, is computed
by incorporating the execution probabilities of control
flow edges into the data-flow system. Because the frequency
analyzer cannot exploit bit-vector parallelism, but instead
computes data-flow solutions on floating point numbers, it
is desirable to reduce the cost of calculating the probabili-
ties. The CMP region lends itself to effectively restricting
the scope of the program that needs to be analyzed. Because
all region entry edges are either Must- or No-available, the
probability of e being available on these edges is 1 and 0,
respectively. Therefore, the probability p at any exit edge
can only be influenced by the paths within the region. As
a result, it is sufficient to perform the frequency analysis
for expression e on CMP [e], using entry edges as a precise
boundary condition for the CMP data-flow equation system.
In Section 5 we reduce the cost of frequency analysis through
a demand-driven approach.
The algorithm (PgPRE) that duplicates only profitable
CMP regions is given in Figure 5. It is structured as its
complete counterpart, ComPRE: after data-flow analysis,
we proceed to eliminate CMP regions, separately for each
expression. While in ComPRE it was sufficient to treat all
nodes from a single CMP together, selective duplication benefits
from dividing the CMP into disconnected subregions,
if any exist. Intuitively, hoisting of a particular expression
may be prevented by multiple groups of nodes, each in a
different part of the procedure. Therefore, line 3 groups
nodes from a connected subregion and frequency analysis
determines the benefit of the group (line 4). After all profitable
regions are eliminated, the motion-blocking effect of
CMP regions remaining in the program must be captured.
All that is needed is to apply the CM-PRE algorithm from
Section 2.1 on the improved control flow graph. Blocked
hoisting is avoided by recomputing availability (line 8) using
the re-defined flow function f_n^e from Section 2.1, which
asserts No-availability whenever a CMP is detected.
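The driver of Step 2 can be pictured as two nested loops over expressions and connected subregions (a sketch; every callable is a placeholder for the analyses described above, not an interface from the paper):

def pgpre_restructure(expressions, cmp_region, connected_subregions,
                      region_benefit, threshold_T, duplicate):
    for e in expressions:
        for sub in connected_subregions(cmp_region(e)):
            rem = region_benefit(sub, e)      # frequency analysis restricted to sub
            if threshold_T(sub, rem):
                duplicate(sub, e)             # the ComPRE restructuring step
    # afterwards, AVAIL is recomputed with the CMP-masking flow function (Sec. 2.1)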
3.2 Speculative Code-Motion PRE
In code-motion PRE, hoisting of a computation e is blocked
whenever e would need to be placed on a control flow path p
that does not compute e in the original program. Such speculative
code motion is prevented because executing e along
path p could a) raise spurious exceptions in e (e.g., over-
flow, wrong address), and b) outweigh the dynamic benefit
of removing the original computation of e. The former restriction
can be relaxed for instructions that cannot raise exceptions,
leading to safe speculation. New processor generations will
support control-speculative instructions which will suppress
raising the exception until the generated value is eventually
used, allowing unsafe speculation [26]. The latter problem
is solved in [20], where an aggressive code-motion PRE navigated
by path profiles is developed. The goal is to allow
speculative hoisting, but only into such paths on which dynamic
impairment would not outweigh the benefit of eliminating
the computation from its original position.
Next, we utilize the CMP region to determine i) the profitability
of speculative code motion and ii) the positions of
speculative insertion points that minimize live ranges of temporary
variables. Figure 6 illustrates the principle of speculative
PRE [20]. Instead of duplicating the CMP region, we
hoist the expression into all No-available entry edges. This
makes all exits fully available, enabling complete removal of
original computations along the Must exits. In our example,
[a+b] is moved into the No-available region entry edge e2.
This hoisting is speculative because [a+b] is now executed on
each path going through e2 and e3 , which previously did not
contain the expression. The benefit is computed as follows.
The dynamic amount of computations is decreased by the
execution frequency ex(e4) of the Must-anticipable exit edge
(following which a computation was removed), and increased
by the frequency ex(e2) of the No-available entry edge (into
which the computation was inserted). Since speculation is
always associated with a CMP region, we are able to obtain
a simple (but precise) profitability test: speculative PRE of
an expression is profitable if the total execution frequency
of Must-anticipable exit edges exceeds that of No-available
entry edges. Note that the benefit is calculated locally by
examining only entry/exit edges, and not the paths within
the region, which was necessary in selective restructuring.
Hence, the speculative benefit is independent from branch
correlation and edge profiles are as precise as path profiles
in the case of speculative-motion PRE. As far as temporary
live ranges are concerned, insertion into entry edges results
in a shortest-live-range solution, and Theorem 3 still holds.
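The profitability test for speculation then needs only the entry and exit edges of the region and their profiled frequencies (sketch; the edge sets are assumed to come from the CMP construction):

def speculation_profitable(exits_must_antic, entries_no_avail, ex):
    # Benefit: computations removed at Must-anticipable exits; cost: computations
    # newly executed on paths through the No-available entries.
    removed = sum(ex[a] for a in exits_must_antic)
    inserted = sum(ex[a] for a in entries_no_avail)
    return removed > inserted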
Figure 6: Speculative code-motion PRE. (The figure shows the CMP region with its AVAIL = Must/No entry edges and ANTIC = Must/No exit edges, the speculative insertion of a+b on a No-available entry, the removal at a Must-anticipable exit, and the resulting optimization benefit.)
3.3 Partial Restructuring, Partial Speculation
In Section 3.1, edge profiles and frequency analysis were
used to estimate the benefit Rem of duplicating a region.
An alternative is to use path profiles [3, 7], which are convenient
for establishing cost-benefit optimization trade-offs
[4, 19, 20]. To arrive at the value of the region benefit with a
path profile, it is sufficient to sum the frequencies of Must-
Must paths, which are paths that cross any region entry
edge that is Must-available and any exit edge that is Must-
anticipated. These are precisely the paths along which value
reuse exists but is blocked by the region. While there is an
exponential number of profiled acyclic paths, only 5.4% of
procedures execute more than 50 distinct paths in spec95
[19]. This number drops to 1.3% when low-frequency paths
accounting for 5% of total frequency are removed. Since we
can afford to approximate by disregarding these infrequent
paths, summing individual path frequencies constitutes a
feasible algorithm for many CMP regions. Furthermore,
because they encapsulate branch correlation, path profiles
compute the benefit more precisely than frequency analysis
based on correlation-insensitive edge profiles.
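With a path profile, the estimate is a sum over the profiled paths (sketch; the path representation and the edge classifications are assumed to be supplied by the profiler and the CMP construction):

def benefit_from_paths(path_freq, entries_must_avail, exits_must_antic):
    # Sum frequencies of Must-Must paths: those crossing a Must-available entry
    # edge and a Must-anticipated exit edge of the region.
    total = 0
    for path, freq in path_freq.items():        # path: tuple of CFG edges
        if any(e in entries_must_avail for e in path) and \
           any(e in exits_must_antic for e in path):
            total += freq
    return total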
Moreover, the notion of individual CMP paths leads to a
better profile-guided PRE algorithm. Considering the CMP
region as an indivisible duplication unit is overly conserva-
tive. While it may not be profitable to restructure the entire
region, the region may contain a few Must-Must paths
that are frequently executed and are inexpensive to dupli-
cate. Our goal is to find the largest subset (frequency-wise)
of region paths that together pass the threshold test T (R).
Similarly, speculative hoisting into all entry edges may fail
the profitability test. Instead, we seek to find a subset of
entry edges that maximizes the speculative benefit. In this
section, we show how partial restructuring and speculation
are carried out and combined.
Partial speculation selects for speculative insertion only a
subset I of the No region entries. The selection of entries influences
which subset R of region exits will be able to exploit
value reuse. R consists of all Must exits that will become
Must-available due to the insertions in I. The rationale behind
treating entries separately is that some entries may enable
little value reuse, hence they should not be speculated.
Note that No entry edges are the only points where speculative
insertion needs to be considered: insertions inside
the region would be partially redundant; insertions outside
the region would extend the live-ranges. Partial speculation
is optimal if the difference of total frequencies of R and I
is maximal (but non-negative). As pointed out in [22], this
problem can be solved as a maximum network flow problem.
Figure 7: Integrating speculation and restructuring: (a) source program, annotated with an edge profile; (b) speculation made profitable by peeling off a No-available path.
An interesting observation is that to determine optimal
partial speculation, a) edge profiles are not inferior to path
profiles and b) frequency analysis is not required. Therefore,
to exploit the power of path profiles, partial restructuring,
rather than (speculative) code motion alone, must be used.
This becomes more intuitive once we realize that without
control flow restructuring, one is restricted to consider only
an individual edge (but not a path) for expression insertion
and removal. To compare the CMP-based partial speculation
with the speculative PRE in [20], we show how to
efficiently compute the benefit by defining the CMP region
and how to apply edge profiles with the same precision as
path profiles. In acyclic code, we achieve the same preci-
sion; in cyclic code, we are more precise in the presence of
loop-carried reuse.
The task of partial restructuring is to localize a subgraph
of the CMP that has a small size but contains many hot
Must-Must paths. By duplicating only such a subregion,
we are effectively peeling off only hot paths with few in-
structions. In Figure 1(e), only the (presumably hot) path
through the node Q was separated. Again, the problem of
finding an optimal subregion, one whose benefit is maximized
but passes the T (R) predicate and is smaller than a
constant budget, is NP-hard. However, the empirically very
small number of hot paths promises an efficient exhaustive-search
algorithm.
Integrating partial speculation and restructuring offers
additional opportunities for improving the cost-benefit ra-
tio. We are no longer restricted to peeling off hot Must-Must
paths and/or selecting No-entries for speculation. When
the high frequency of a No entry prevents speculation, we
can peel off a hot No-available path emanating from the
thereby reducing entry edge frequency and allowing
the speculation, at the cost of some code duplication. Figure
7(a) shows an example program annotated with an edge
profile. Because peeling hot Must-Must paths from the highlighted
CMP ([c+d]) would duplicate all blocks except S, we
try speculation. To eliminate the redundancy at the CMP
exit edge Y with frequency ex(Y ), the computation
must be inserted into No-entries B and C. While B is low-frequency
(10), C is not (100), hence the speculation is disadvantageous,
as ex(Y ) < ex(B) + ex(C).
Now assume that the exit branch in Q is strongly biased and
the path C; Q;X has a frequency of 100. That is, after edge
C is executed, the execution will always follow to X. We
can peel off this No-available path, as shown in (b), effectively
moving the speculation point C off this path. After
peeling, the frequency of C becomes 0 and the speculation
is profitable, as ex(Y ) > ex(B) + ex(C).
4 Experiments
We performed the experiments using the HP Labs VLIW
back-end compiler elcor, which was fed spec95 benchmarks
that were previously compiled, edge-profiled, and inlined
(only spec95int) by the Impact compiler. Table 1 shows
program sizes in the total number of nodes and expres-
sions. Each node corresponds to one intermediate state-
ment. Memory requirements are indicated by the column
space, which gives the largest nodes-expressions product
among all procedures. The running time of our rather inefficient
implementation behaved quadratically in the number
of procedure nodes; for a procedure with 1,000 nodes, the
time was about 5 seconds on PA-8000. Typically, the
complete PRE ran faster than the subsequent dead code
elimination.
Experiment 1: Disabling effects of CMP regions.
The column labeled optimizable gives the percentage of expressions
that reuse value along some path; 13.9% of (static)
expressions have partially redundant computations. The
next column prevented-CMP reports the percentage of optimizable
expressions whose complete optimization by code
motion is prevented by a CMP region. Code-motion PRE
will fail to fully optimize 30.5% of optimizable expressions.
For comparison, column prevented-POE reports expressions
that will require restructuring in PoePRE.
Experiment 2: Loop invariant expressions. Next, we
determined what percentage of loop invariant (LI) expressions
can be removed from their invariant loops with code
motion. The column loop invar shows the percentage of optimizable
expressions that pass our test of loop-invariance.
The following column gives the percentage of LI expressions
that have a CMP region; an average of 72.5% of LI computations
cannot be hoisted from all enclosing invariant loops
without restructuring.
Experiment 3: Eliminated computations. The column
global CSE reports the dynamic amount of computations
removed by global common subexpression elimination;
this corresponds to all full redundancies. The column complete
PRE gives the dynamic amount of all partially redundant
statements. The fact that strictly partial redundancies
contribute only 1.7% (the difference between complete PRE
and global CSE) may be due to the style of Impact 's intermediate
code (e.g., multiple virtual registers for the same
variable). We expect a more powerful redundancy analysis
to perform better. Figure 8 plots the dynamic amount of
strictly partial redundancies removed by various PRE tech-
niques. Code-motion PRE yields only about half the benefit
of a complete PRE. Furthermore, speculation results in near-complete
PRE for most benchmarks, even without special
hardware support (i.e., safe speculation). Speculation was
carried out on the CMP as whole. Note that the graph accounts
for the dynamic impairment caused by speculation.
Table 1 columns (left to right): benchmark; program size: procedures, nodes (k), expressions, space (M, largest nodes-expressions product); E-1 (CM prevented): optimizable (% of all expressions), prevented-CMP (% of optimizable), prevented-POE (% of optimizable); E-2 (loop invariants): loop invar (% of optimizable), invar-prevent (% of loop invariant); E-3 (dynamic): global CSE (% of all), complete PRE (% of all); E-4 (code growth): ComPRE (% increase), PoePRE (% increase). The first group of rows is spec95int, the second spec95fp.
099.go 372 153.6 37.3 5.8 10.2 29.6 45.4 7.1 83.4 9.5 11.7 49.9 90.2
126.gcc 1661 917.2 158.2 38.0 8.0 34.2 45.0 2.5 69.8 3.7 4.6 33.9 36.7
129.compress
130.li 357 37.4 8.4 2.0 11.8 22.4 34.4 10.4 69.9 6.8 8.0 21.5 35.1
132.ijpeg 472 81.8 22.8 1.2 13.9 24.1 45.3 5.1 78.1 4.3 5.1 48.8 104.7
134.perl 276 135.0 25.5 40.4 9.6 39.5 51.8 11.9 93.5 4.8 6.8 31.2 50.0
147.vortex 923 325.9 65.7 5.8 16.6 29.5 36.1 6.3 81.6 11.1 13.0 35.7 55.4
Avg: spec95int 542.1 216.7 42.0 12.2 12.1 29.1 43.4 8.2 75.0 7.4 9.1 33.3 56.5
103.su2cor 37 10.6 3.9 2.5 15.3 29.8 53.8 14.5 43.7 12.8 13.0 42.1 142.0
104.hydro2d 43 8.5 2.4 0.4 16.8 21.7 42.7 5.9 41.7 1.9 6.0 43.9 141.7
145.fpppp 37 13.6 6.7 19.6 14.6 52.2 57.7 43.0 91.9 7.1 7.7 2.4 18.2
146.wave5 110 33.3 12.3 5.3 12.4 34.8 47.8 4.9 66.2 7.1 7.8 36.6 107.6
Avg: spec95fp 39.2 11.4 4.4 4.7 16.2 32.4 49.8 15.3 69.2 8.3 10.0 94.3 313.0
Avg: spec95 326.6 128.7 25.9 9.0 13.9 30.5 46.1 11.3 72.5 7.8 9.5 59.5 166.4
Table 1: Experience with PRE based on control flow restructuring.
Figure 8: Benefit of various PRE algorithms: dynamic op-count decrease due to strictly partial redundancies. Each algorithm also completely removes full redundancies. (The y-axis gives dynamic computations eliminated; bars per benchmark, with spec95int, spec95fp, and overall averages, compare code-motion PRE, safe speculative PRE, unsafe speculative PRE, and complete PRE.)
The measurements indicate that an ideal PRE algorithm
should integrate both speculation and restructuring. Using
restructuring when speculation would waste a large portion
of benefit will provide an almost complete PRE with small
code growth.
Experiment 4: Code growth. Finally, we compare the
code growth incurred by ComPRE and PoePRE. To make
the experiment feasible, we limited procedure size by 3,000
nodes and made the comparison only on procedures that did
not exceed the limit in either algorithm. Overall, ComPRE
created about three times less code growth than PoePRE.
5 Demand-Driven Frequency Analysis
Not amenable to bit-vector representation, frequency analysis
[28] is an expensive component of profile-guided opti-
mizers. We have shown that ComPRE allows restricting the
scope of frequency analysis within the CMP region without
a loss of accuracy. However, in large CMP regions the cost
may remain high, and path profiles cannot be used as an
efficient substitute when numerous hot paths fall into the
region. One method to reduce the cost of frequency analysis
is computing on demand only the subset of data flow
solution that is needed by the optimization.
In this section, we develop a demand-driven frequency
analyzer which reduces data-flow analysis time by a) examining
only nodes that contribute to the solution and, option-
ally, b) terminating the analysis prematurely, when the solution
is determined with desired precision. Besides PRE, the
analyzer is suitable for optimizations where acceptable running
time must be maintained by restricting analysis scope,
as in run-time optimizations [5] or interprocedural branch
removal [10].
Frequency analysis computes the probability that a data-flow
fact will occur during execution. Therefore, the probability
"lattice" is an infinite chain of real numbers. Because
existing demand-driven analysis frameworks are built on iterative
approaches, they only permit lattices of finite size
[18] or finite height [23, 30] and hence cannot derive a frequency
analyzer. We overcome this limitation by designing
the demand-driven analyzer based upon elimination data-flow
methods [29] whose time complexity is independent of
the lattice shape. We have developed a demand-driven analysis
framework motivated by the Allen-Cocke interval elimination
solver [2]. Next, using the framework, a demand-driven
algorithm for general frequency data-flow analysis
was derived [8]. We present here the frequency solver specialized
for the problem of availability.
Definitions. Assume a forward data-flow problem specified
with an equation system over per-node variables. The vector
x_n = (x_n^1, ..., x_n^S) is the solution for a node n; the variable
x_n^e denotes the fact associated with expression e. The
equation system induces a dependence graph EG whose
nodes are variables x_n^e and whose edges represent flow functions:
an edge (x_m^d, x_n^e) exists if the value of x_n^e is computed
from x_m^d, where m ∈ pred(n). The graph EG is called an exploded
graph [23]. The data flow problems underlying ComPRE
are separable, hence x_n^e only depends on x_m^e. In value-based
PRE [9], constant propagation [30], and branch correlation
analysis [10], edges (x_m^d, x_n^e) with d ≠ e may exist, complicating
the analysis. The analyzer presented here handles such
general exploded graphs.
Requirements. The demand-driven analyzer grew out of
four specific design requirements:
1. Demand-driven. Rather than computing xn for each
node n, we determine only the desired x e n , i.e. the solution
for expression e at a node n. Analysis speed-up
is obtained by further requiring that only nodes
transitively contributing to the value of x e n are visited
and examined. To guarantee worst-case behavior,
when solutions for all EG nodes are desired, the solver's
time complexity does not exceed that of the exhaustive
Allen-Cocke method, O(N^2), where N is the number
of EG nodes.
2. Lattice-independent. The amount of work per node
does not depend on lattice size, only on the EG shape.
3. On-line. The analysis is possible even when EG is not
completely known prior to the analysis. To save time
and memory, our algorithm constructs EG as analysis
progresses. The central idea of on-demand construction
is to determine a flow function f e
only when its
target variable x e
n is known to contribute to the desired
solution. Furthermore, the solver must produce the solution
even when EG is irreducible, which can happen
even when the underlying CFG is reducible.
4. Informed. In the course of frequency analysis, the contribution
weight of each examined node to the desired
solution must be known. This information is used to
develop a version of the analyzer that approximates
frequency by disregarding low-contribution nodes with
the goal of further restricting analysis scope.
The exhaustive interval data-flow analysis [2] computes
xn for all n as follows. First, loop headers are identified to
partition the graph into hierarchic acyclic subregions, called
intervals. Second, forward substitution of equations is performed
within each interval to express each node solution in
terms of its loop header. The substitution proceeds in the
interval order (reverse postorder), so that each node is visited
only once. Third, mutual equation dependences across
loop back-edges are reduced with a loop breaking rule L,
which removes the self-dependence of a loop header from its
equation. The second and third
step remove cyclic dependences from all innermost loops in
EG; they are repeated until all nesting levels are processed
and all solutions are expressed in terms of the start node,
which is then propagated to all previously reduced equations
in the final propagation phase [2].
The demand-driven interval analysis substitutes only
those equations and reduces only those intervals on which
the desired x e
n is transitively dependent. To find the relevant
equations, we back-substitute equations (flow functions) into
the right-hand side of x e
n along the EG edges. The edges
are added to the exploded graph on-line, whenever a new
EG node is visited, by first computing the flow function of
the node and then inserting its predecessors into the graph.
As in [2], we define an EG interval to be a set of nodes
dominated by the sink of any back-edge. In an irreducible
EG, a back-edge is each loop edge sinking onto a loop entry
node. Because the EG shape is not known prior to analysis,
on-line identification of EG intervals relies only on the structure
of the underlying control flow graph. When the CFG
node of an EG node x is a CFG loop entry, then x may
be an EG loop entry, and we conservatively assume it is an
interval head. Within each interval, back-substitutions are
performed in reversed interval order. Such order provides
lattice-independence, as each equation needs to be substituted
only once per interval reduction, and there are at most
reductions. To find interval order on an incomplete EG,
we observe that within each EG interval, the order is consistent
with the reverse postorder CFG node numbering.
To loop-break cyclic dependencies along an interval back-
edge, the loop is reduced before we continue into the preceding
interval, recursively invoking reductions of nested loops.
This process achieves demand analysis of relevant intervals.
The desired solution is obtained when x e
n is expressed exclusively
using constant terms. At this point, we have also identified
an EG subgraph that contributes to x e
n , and removed
from it all cyclic dependences. A forward substitution on the
subgraph will yield solutions for all subgraph nodes which
can be cached in case they are later desired (worst-case running
time). This step corresponds to the propagation phase
in [2], and to caching in [18, 30].
The framework instance calculates the probability of expression
e being available at the exit of node n during the
execution: x_n^e = p(AVAIL_out[n; e] = Must). Let p(a)
denote the probability of edge a being taken, given its sink
node is executed. We relate the edge probability to the sink
(rather than the source, as in exhaustive analysis [28]) because
the demand solver proceeds in the backward direction.
The frequency flow function returns probability 1 when the
node computes the expression e and 0 when it kills the ex-
pression. Otherwise, the sum of probabilities on predecessors
weighted by edge execution probabilities is returned.
Predicates Comp and Transp are defined in Figure 3.
x_n^e = 1.0 if Comp(n; e) ∧ Transp(n; e),
        0.0 if ¬Transp(n; e),
        Σ_{m ∈ pred(n)} p((m; n)) · x_m^e otherwise.
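Transcribed into code, the flow function reads as follows (a sketch; p holds the sink-relative edge probabilities and x the current estimates at the predecessors):

def freq_avail_out(n, e, preds, p, x, Comp, Transp):
    # Probability that e is available at the exit of n.
    if Comp(n, e) and Transp(n, e):
        return 1.0                               # generated here with certainty
    if not Transp(n, e):
        return 0.0                               # killed here with certainty
    # otherwise: predecessor probabilities weighted by edge probabilities
    return sum(p[(m, n)] * x[(m, e)] for m in preds[n])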
The demand frequency analyzer is shown in Figure 9.
Two data structures are used : sol accumulates the constant
terms of the desired probability x e
rhs is the current
right-hand side of x e
n after all back-substitutions. The variables
sol and rhs are organized as a stack, the top being used
in the currently analyzed interval. The algorithm treats rhs
both as a symbolic expression and as a working set of pending
nodes (or yet unsubstituted variables, to be precise). For
example, the value of rhs may be 0.25·m + 0.4·k, where the
weights are contributions of nodes m and k to the desired
probability x_n^e. If e is never available at m, and is available
at k with probability 0.5, then it is available at node n with
probability 0.25·0 + 0.4·0.5 = 0.2. More formally, the contribution
weight of a node represents the probability that a
path from that node to n without a computation or a kill of
the expression e will be executed.
First, the rhs is set to 1:0 n in line 1. Then, flow functions
are back-substituted into rhs in post-order (line 3).
Substitutions are repeated until all variables have been replaced
with constants (line 2), which are accumulated in
sol. If a substituted node x computes the expression e,
its weight rhs[x] is added to the solution and x is removed
from the rhs by the assignment rhs[x] := 0:0 (line 6). In the
simple case when x is not a loop entry node (line 12), its
contribution c is added to each predecessor's contribution,
weighted by the edge probability p. If x is a loop entry node
(line 8), then before continuing to the loop predecessor, all
self-dependences of x are found in a call to reduce loop.
The procedure reduce_loop mimics the main loop (lines 1-5),
but it pushes new entries on the stacks to initiate a reduction
of a new interval and also marks the loop entry node
to stop when back-substitution collected cyclic dependences
along all cyclic paths on the back-edge edge (y; x). The
result of reduce_loop is returned in a sol-rhs pair (s; r),
where s is the constant and r the set of unresolved variables,
e.g. (s; r) = (0.1; 0.3·x). If EG is reducible, the
set r contains only x. The value 0.3 is the weight
of x's self-dependence, which is removed by the loop
breaking rule derived for frequency analysis from the sum
of an infinite geometric sequence (lines 10-11). After the algorithm
terminates, the stack visited (line 14) specifies the
order in which forward substitution is performed to cache
the results. Also shown in Figure 9 is an execution trace
of the demand-driven analysis. It computes the probability
that the expression computed in nodes F , H, and killed in
A, D, is available at node C. All paths where availability
holds are highlighted.
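To make the mechanics concrete, here is a much-simplified sketch of the back-substitution loop for an acyclic exploded graph; loop reduction, marking, the stacks, and result caching are all omitted, and every name is ours.

def demand_avail_probability(n, e, preds, p, Comp, Transp, post_dfs):
    # Demand-driven probability that e is available at the exit of n (DAG only).
    # post_dfs maps each CFG node to its post-order number.
    sol, rhs = 0.0, {n: 1.0}                 # rhs: pending nodes -> contribution weight
    while rhs:
        x = min(rhs, key=lambda v: post_dfs[v])   # smallest post-order number first
        w = rhs.pop(x)
        if Comp(x, e) and Transp(x, e):
            sol += w                         # constant 1.0, weighted by x's contribution
        elif Transp(x, e):                   # neither computed nor killed: back-substitute
            for m in preds[x]:
                rhs[m] = rhs.get(m, 0.0) + w * p[(m, x)]
        # killed (not Transp): constant 0.0 contributes nothing
    return sol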
Approximate Data-Flow Analysis. Often, it is necessary
to sacrifice precision of the analysis for its speed. We
define here a notion of approximate data flow information,
which allows the analyzer a predetermined degree of conservative
imprecision. For example, given a 5% imprecision
level 0:05), the analyzer may output "available: 0.7,"
when the maximal fixed point solution is "available: 0.75."
The intention of permitting underestimation is to reduce the
analysis cost. When the analyzer is certain that the contribution
of a node (and all its incoming paths) to the overall
solution is less than the imprecision level, it can avoid analyzing
the paths and assume at the node the most conservative
data-flow fact.
Because the algorithm in Figure 9 was designed to be
informed, it naturally extends to approximate analysis. By
knowing the precise contribution weight of each node as the
analysis progresses, whenever the sum of weights in rhs at
the highest interval level falls below ε (the while-condition
in line 2), we can terminate and guarantee the desired pre-
cision. An alternative scenario is more attractive, however.
When a low-weight node is selected in line 3, we throw it
away. We can keep disregarding such nodes until their total
weights exceed ε. In essence, this approach performs analysis
along hot paths [4], and on-line region formation [21].
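Grafting the approximation onto the sketch above takes only a few lines (again a sketch under the same assumptions; eps is the permitted underestimation ε):

def demand_avail_probability_approx(n, e, preds, p, Comp, Transp, post_dfs, eps=0.05):
    sol, rhs, dropped = 0.0, {n: 1.0}, 0.0
    while rhs:
        x = min(rhs, key=lambda v: post_dfs[v])
        w = rhs.pop(x)
        if dropped + w <= eps:               # low-contribution node: assume the most
            dropped += w                     # conservative fact and skip its paths
            continue
        if Comp(x, e) and Transp(x, e):
            sol += w
        elif Transp(x, e):
            for m in preds[x]:
                rhs[m] = rhs.get(m, 0.0) + w * p[(m, x)]
    return sol                               # underestimates the true value by at most eps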
The idea of terminating the analysis before it could find
the precise solution was first applied in the implementation
of interprocedural branch elimination [10]. Stopping
after visiting a thousand nodes resulted in two magnitudes
of analysis speedup, while most optimization opportunities
were still discovered. However, without the approximate frequency
analyzer developed in this paper, we were unable to
a) determine the benefit of restructuring, b) select a profitable
subset of nodes to duplicate, and c) get a bound on
the amount of opportunities lost due to early termination.
Algorithm complexity. In an arbitrary exploded graph,
reduce loop may be (recursively) invoked on each node.
Hence, each node may be visited at most N_E times, where
N_E = N·S is the number of EG nodes, N the number of
CFG nodes, and S the number of optimized expressions.
With caching of results, each node is processed in at
most one invocation of the algorithm in Figure 9, yielding
a worst-case time complexity of O(N_E^2). Because
real programs have loop nesting level bound by a small constant,
the expected complexity is O(NS), as in [2].
Although most existing demand-driven data-flow algorithms
([18, 23], [30] in particular) can be viewed (like ours)
to operate on the principle of back-substituting flow functions
into the right-hand side of the target variable, they do
not focus on specifying a profitable order of substitutions
(unlike ours) but rely instead on finding the fixed point it-
eratively. Such an approach fails on infinite-height lattices
where CFG loops keep always iterating towards a better approximation
of the solution. Note that breaking each control
flow cycle by inserting a widening operator [13] does not
appear to resolve the problem because widening is a local
adjustment primarily intended to approximate the solution.
Therefore, in frequency analysis, too many iterations would
be required to achieve an acceptable approximation. Instead
of fixing the equation system locally, a global approach of
structurally identifying intervals and reducing their cyclic
dependences seems necessary. We have shown how to identify
intervals and perform substitutions in interval order on
demand, even when the exploded graph is not known prior
to the analysis. We believe that existing demand methods
can be extended to operate in a structural manner, enabling
the application of loop-breaking rules. This would make the
methods reminiscent of the elimination algorithms [29].
6 Conclusion and Related Work
The focus of this paper is to improve program transformations
that constitute value-reuse optimizations commonly
known as Partial Redundancy Elimination (PRE).
In the long history of PRE research and implementation,
three distinct transformations can be identified. The seminal
paper by Morel and Renvoise [27] and its derivations
[11, 14, 15, 16, 17, 25] employ pure, non-speculative code
motion. Second, the complete PRE by Steffen [31] is based
on control flow restructuring. Third, navigated by path profile
information, Gupta et al. apply speculative code motion
in order to avoid code-motion obstacles by controlled impairment
of some paths [20].
In this work, we defined the code-motion-preventing
(CMP) region, which is a CFG subgraph localizing adverse
effects of control flow on the desired value reuse. The notion
of the CMP is applied to enhance and integrate the three
existing PRE transformations in the following ways: 1. Code
motion and restructuring are integrated to remove all redundancies
at minimal code growth cost (ComPRE). 2. Morel
and Renvoise's original method is expressed as a restricted
(motion-only) case of the complete algorithm (CM-PRE).
3. We develop an algorithm whose power adjusts contin-
Input: node n, expression e.
Output: in sol, the probability of e being available at the exit of n.
  sol: stack of reals (names sol, rhs refer always to top of stack)
  rhs: stack of sets of unsubstituted nodes x with weights rhs[x]
  post-dfs: post-order numbering of CFG nodes
1   rhs := 1.0·n
2   while rhs not empty do
3     select from rhs a node x with smallest post-dfs(x)
4     substitute(x)
5   end while
procedure substitute(node x)
    if x has not been visited, determine its flow function;
    if x computes or kills e, adjust sol and remove x from rhs:
6     if Comp(x; e) ∧ Transp(x; e) then sol := sol + rhs[x]; rhs[x] := 0.0; return
7     else if ¬Transp(x; e) then rhs[x] := 0.0; return
    a back-edge is each edge that meets a loop-entry edge
8     if back-edge (y; x) exists then (assume one back-edge per node)
        substitute for y until x occurs on the r.h.s.:
9       (s; r) := reduce_loop(y; x)
10-11   apply loop breaking rule: sum of infinite geometric sequence
    substitute "acyclic" predecessors:
12    for each non-backedge node z ∈ pred(x) do
13      add x's contribution rhs[x], weighted by p((z; x)), to rhs[z]
      end for
    x is now fully substituted:
14    rhs[x] := 0.0; visited.push(x)
end substitute
function reduce_loop(node u, node v)
15-16 push new sol and rhs entries and mark the loop entry node
      while rhs contains unmarked nodes do
17      select from rhs an unmarked node x with lowest post-dfs(x)
18      substitute(x)
19    end while
end reduce_loop
(The figure also shows an execution trace: nodes are substituted in post-dfs order starting from H and G, reduce_loop(E; B) is invoked at line 9, and the trace ends with the final probability sol = 0.2818 for node C.)
Figure 9: Demand-driven frequency analysis for availability of computations, and a trace of its execution.
ually between the motion-only and the complete PRE in
response to the program profile (PgPRE). 4. We demonstrate
that speculation can be navigated precisely by edge
profiles alone. 5. Path profiles are used to integrate the
three transformations and balance their power at the level
of CMP paths.
While PRE is significantly improved through effective
program transformations presented in this paper, a large
orthogonal potential lies in detecting more redundancies.
Some techniques have used powerful analysis to uncover
more value reuse than the traditional PRE analysis [9, 11].
However, using only code motion, they fail to completely
exploit the additional reuse opportunities. Thus, the transformations
presented here are applicable in other styles of
PRE as well, for example in elimination of loads.
Ammons and Larus [4] developed a constant propagation
optimization based on restructuring, namely on peeling of
hot paths. In their analysis/transformation framework, re-structuring
is used not only to exploit optimization opportunities
previously detected by the analysis, as is our case, but
also to improve the analysis precision by eliminating control
flow merges from the hot paths. Even though our PRE cannot
benefit from hot path separation (our distributive data-flow
analysis preserves reuse opportunities across merges), a
more complicated analysis (e.g., redundancy of array bound
checks) would be improved by their approach. After the
analysis, their algorithm recombines separated paths that
present no useful opportunities. It is likely that path recombination
can be integrated with code motion, as presented
in this paper, to further reduce the code growth.
In a global view, we have identified four main issues
with path-sensitive program optimizations [8]: a) solving
non-distributive problems without conservative approximation
(e.g. non-linear constant propagation), b) collecting
distinct opportunities (e.g., variable has different constant
along each path), c) exploiting distinct opportunities (e.g.,
enabling folding of path-dependent constants through re-
structuring), and d) directing the analysis effort towards hot
paths. In the approach of Ammons and Larus, all four issues
are attacked uniformly by separation of hot paths, their
subsequent individual analysis, and recombination. Our approach
is to reserve restructuring for the actual transforma-
tion. This implies a different overall strategy: a) we solve
non-distributive problems precisely along all paths by customizing
the data-flow name space [9], b) we collect distinct
opportunities through demand-driven analysis as in branch
elimination [10], which is itself a form of constant propa-
gation, c) we exploit all profitable opportunities with economical
transformations, and d) avoid infrequent program
regions using the approximation frequency analysis (the last
three presented in this paper).
Acknowledgments
We are indebted to the elcor and Impact compiler teams for
providing their experimental infrastructure. Sadun Anik,
Ben-Chung Cheng, Brian Dietrich, John Gyllenhaal, and
Scott Mahlke provided invaluable help during the implementation
and experiments. Comments from Glenn Am-
mons, Evelyn Duesterwald, Jim Larus, Mooly Sagiv, Bernhard
Steffen, and the anonymous reviewers helped to improve
the presentation of the paper. This research was partially
supported by NSF PYI Award CCR-9157371, NSF
grant CCR-9402226, and a grant from Hewlett-Packard to
the University of Pittsburgh.
--R
Compilers Principles
A program data flow analysis procedure.
Exploiting hardware performance counters with flow and context sensitive profiling.
Improving data-flow analysis with path profiles
Aggressive inlining.
Efficient path profiling.
Effective partial redundancy elimination.
A new algorithm for partial redundancy elimination based on SSA form.
Abstract intrepretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints.
How to analyze large programs efficiently and infor- matively
Practical adaptation of the global optimization algorithm of Morel and Renvoise.
"global optimization by suppression of partial redundancies"
A variation of Knoop
A practical framework for demand-driven interprocedural data flow analysis
Path profile guided partial redundancy elimination using speculation.
Partial redundancy elimination based on a cost-benefit analysis
Demand Interprocedural Dataflow Analysis.
Controlled node split- ting
Optimal code motion: Theory and practice.
Sentinel scheduling for VLIW and superscalar processors.
Global optimization by supression of partial redundancies.
Data flow frequency analysis.
Elimination algorithms for data flow analysis.
Precise interprocedural dataflow analysis with applications to constant propagation.
Property oriented expansion.
--TR
Compilers: principles, techniques, and tools
Elimination algorithms for data flow analysis
How to analyze large programs efficiently and informatively
A variation of Knoop, Rüthing, and Steffen's Lazy Code Motion
Sentinel scheduling
Effective partial redundancy elimination
Optimal code motion
A solution to a problem with Morel and Renvoise's "Global optimization by suppression of partial redundancies"
Practical adaption of the global optimization algorithm of Morel and Renvoise
Demand interprocedural dataflow analysis
Region-based compilation
Fast, effective dynamic compilation
Data flow frequency analysis
Precise interprocedural dataflow analysis with applications to constant propagation
Efficient path profiling
Exploiting hardware performance counters with flow and context sensitive profiling
Aggressive inlining
Interprocedural conditional branch elimination
A new algorithm for partial redundancy elimination based on SSA form
Resource-sensitive profile-directed data flow analysis for code optimization
Path-sensitive value-flow analysis
A practical framework for demand-driven interprocedural data flow analysis
Improving data-flow analysis with path profiles
Global optimization by suppression of partial redundancies
A program data flow analysis procedure
Abstract interpretation
Property-Oriented Expansion
Controlled Node Splitting
Path Profile Guided Partial Redundancy Elimination Using Speculation
--CTR
Daniel A. Connors , Wen-mei W. Hwu, Compiler-directed dynamic computation reuse: rationale and initial results, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.158-169, November 16-18, 1999, Haifa, Israel
Jin Lin , Tong Chen , Wei-Chung Hsu , Pen-Chung Yew , Roy Dz-Ching Ju , Tin-Fook Ngai , Sun Chan, A compiler framework for speculative optimizations, ACM Transactions on Architecture and Code Optimization (TACO), v.1 n.3, p.247-271, September 2004
Zhang , Rajiv Gupta, Whole Execution Traces, Proceedings of the 37th annual IEEE/ACM International Symposium on Microarchitecture, p.105-116, December 04-08, 2004, Portland, Oregon
Bernhard Scholz , Nigel Horspool , Jens Knoop, Optimizing for space and time usage with speculative partial redundancy elimination, ACM SIGPLAN Notices, v.39 n.7, July 2004
Rei Odaira , Kei Hiraki, Sentinel PRE: Hoisting beyond Exception Dependency with Dynamic Deoptimization, Proceedings of the international symposium on Code generation and optimization, p.328-338, March 20-23, 2005
Timothy Heil , James E. Smith, Relational profiling: enabling thread-level parallelism in virtual machines, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.281-290, December 2000, Monterey, California, United States
David Gregg , Andrew Beatty , Kevin Casey , Brain Davis , Andy Nisbet, The case for virtual register machines, Science of Computer Programming, v.57 n.3, p.319-338, September 2005
Dhananjay M. Dhamdhere, E-path_PRE: partial redundancy elimination made easy, ACM SIGPLAN Notices, v.37 n.8, August 2002
Eduard Mehofer , Bernhard Scholz, Probabilistic data flow system with two-edge profiling, ACM SIGPLAN Notices, v.35 n.7, p.65-72, July 2000
Jin Lin , Tong Chen , Wei-Chung Hsu , Pen-Chung Yew , Roy Dz-Ching Ju , Tin-Fook Ngai , Sun Chan, A compiler framework for speculative analysis and optimizations, ACM SIGPLAN Notices, v.38 n.5, May
Spyridon Triantafyllis , Matthew J. Bridges , Easwaran Raman , Guilherme Ottoni , David I. August, A framework for unrestricted whole-program optimization, ACM SIGPLAN Notices, v.41 n.6, June 2006
Max Hailperin, Cost-optimal code motion, ACM Transactions on Programming Languages and Systems (TOPLAS), v.20 n.6, p.1297-1322, Nov. 1998
Ran Shaham , Elliot K. Kolodner , Mooly Sagiv, Heap profiling for space-efficient Java, ACM SIGPLAN Notices, v.36 n.5, p.104-113, May 2001
Reinhard von Hanxleden , Ken Kennedy, A balanced code placement framework, ACM Transactions on Programming Languages and Systems (TOPLAS), v.22 n.5, p.816-860, Sept. 2000
Jingling Xue , Qiong Cai, A lifetime optimal algorithm for speculative PRE, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.2, p.115-155, June 2006
Zhang , Rajiv Gupta, Whole execution traces and their applications, ACM Transactions on Architecture and Code Optimization (TACO), v.2 n.3, p.301-334, September 2005
Uday P. Khedker , Dhananjay M. Dhamdhere, Bidirectional data flow analysis: myths and reality, ACM SIGPLAN Notices, v.34 n.6, June 1999
Keith D. Cooper , Li Xu, An efficient static analysis algorithm to detect redundant memory operations, ACM SIGPLAN Notices, v.38 n.2 supplement, p.97-107, February
Keith D. Cooper , L. Taylor Simpson , Christopher A. Vick, Operator strength reduction, ACM Transactions on Programming Languages and Systems (TOPLAS), v.23 n.5, p.603-625, September 2001
eliminating array bounds checks on demand, ACM SIGPLAN Notices, v.35 n.5, p.321-333, May 2000
Litong Song , Krishna Kavi, What can we gain by unfolding loops?, ACM SIGPLAN Notices, v.39 n.2, February 2004
Glenn Ammons , James R. Larus, Improving data-flow analysis with path profiles, ACM SIGPLAN Notices, v.33 n.5, p.72-84, May 1998
Phung Hua Nguyen , Jingling Xue, Strength reduction for loop-invariant types, Proceedings of the 27th Australasian conference on Computer science, p.213-222, January 01, 2004, Dunedin, New Zealand
Youtao Zhang , Rajiv Gupta, Timestamped whole program path representation and its applications, ACM SIGPLAN Notices, v.36 n.5, p.180-190, May 2001
K. V. Seshu Kumar, Value reuse optimization: reuse of evaluated math library function calls through compiler generated cache, ACM SIGPLAN Notices, v.38 n.8, August
Sriraman Tallam , Xiangyu Zhang , Rajiv Gupta, Extending Path Profiling across Loop Backedges and Procedure Boundaries, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, p.251, March 20-24, 2004, Palo Alto, California
Qiong Cai , Jingling Xue, Optimal and efficient speculation-based partial redundancy elimination, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California
Mary Lou Soffa, Load-reuse analysis: design and evaluation, ACM SIGPLAN Notices, v.34 n.5, p.64-76, May 1999
Matthew Arnold , Michael Hind , Barbara G. Ryder, Online feedback-directed optimization of Java, ACM SIGPLAN Notices, v.37 n.11, November 2002
Glenn Ammons , James R. Larus, Improving data-flow analysis with path profiles, ACM SIGPLAN Notices, v.39 n.4, April 2004
Motohiro Kawahito , Hideaki Komatsu , Toshio Nakatani, Partial redundancy elimination for access expressions by speculative code motion, Software: Practice and Experience, v.34 n.11, p.1065-1090, September 2004 | speculative execution;demand-driven frequency data-flow analysis;control flow restructuring;partial redundancy elimination;profile-guided optimization
277715 | An implementation of complete, asynchronous, distributed garbage collection. | Most existing reference-based distributed object systems include some kind of acyclic garbage collection, but fail to provide acceptable collection of cyclic garbage. Those that do provide such GC currently suffer from one or more problems: synchronous operation, the need for expensive global consensus or termination algorithms, susceptibility to communication problems, or an algorithm that does not scale. We present a simple, complete, fault-tolerant, asynchronous extension to the (acyclic) cleanup protocol of the SSP Chains system. This extension is scalable, consumes few resources, and could easily be adapted to work in other reference-based distributed object systems---rendering them usable for very large-scale applications. | Introduction
Automatic garbage collection is an important feature for
modern high-level languages. Although there is a lot of accumulated
experience in local garbage collection, distributed
programming still lacks effective cyclic garbage collection.
A local garbage collector should be correct and complete.
A distributed garbage collector should also be asynchronous
(other spaces continue to work during a local garbage collection
in one space), fault-tolerant (it works even with unreliable
communications and space crashes), and scalable (since
networks are connecting larger numbers of computers over
increasing distances).
Previously published distributed garbage collection algorithms
fail in one or more of these requirements. In this paper
we present a distributed garbage collector for distributed
languages that provides all three of these desired properties.
Moreover, the algorithm is simple to implement and consumes
very few resources.
The algorithm described in this paper was developed
as part of a reference-based distributed object system for
Objective Caml (a dialect of ML with object-oriented ex-
tensions). Remote references are managed using the Stub-
Scion Pair Chains (SSPC) system, extended with our cyclic
detection algorithm. Although our system is based on transparent
distributed references, our design assumptions are
weak enough to support other kinds of distributed languages;
those based on channels, for example (pi-calculus [8], join-calculus
[3], and others).
The next two sections of the paper introduce the basic
mechanisms of remote references and the SSPC system for
acyclic distributed garbage collection. Section 4 describes
our cycle detection algorithm, and includes a short example
showing how it works. Section 5 briefly investigates some
issues related to our algorithm. Sections 6 and 7 analyze
the algorithm in greater depth, and discuss some of the implementation
issues surrounding it. The final two sections
compare our algorithm with other recent work in distributed
garbage collection and present our conclusions.
Basics
We consider a distributed system consisting of a set of spaces.
Each space is a process that has its own memory, its own
local roots, and its own local garbage collector. A space can
communicate with other spaces (on the same computer or
a different one) by sending asynchronous messages. These
messages may be lost, duplicated or delivered out of order.
Distributed computation is effected by sending messages
that invoke procedures in remote objects. These remote
procedure calls (RPCs) have the same components as a local
procedure call: a distinguished object that is to perform
the call, zero or more arguments of arbitrary (includ-
ing reference) type, and an optional result of arbitrary type.
The result is delivered to the caller synchronously; in other
words, the caller blocks for the duration of the procedure
call. Encoding an argument or result for inclusion in a message
is called marshaling; decoding by the message recipient
is called unmarshaling.
When an argument or result of an RPC has a reference
type (i.e. it refers to an object) then this reference can serve
for further RPCs from the recipient of the reference back to
the argument/result object. The object is also protected
from garbage collection while it remains reachable; i.e. until
the last (local or remote) reference to it is deleted.
In the following sections we will write nameX (A) to indicate
a variable called name located on space X that contains
information about object A. We will write a is increased to
b to mean the variable a is set to the maximum of variable
a and variable b.
Figure 1: A reference from A in space X to B in space Y.
2.1 Remote references
Marshaled references to local or remote objects are sent in
messages to be used in remote computations (e.g. for remote
invocation).
Such a reference R from object A in space X to object
B in space Y is represented by two objects: stubX (R) and
scionY (R). These are represented concretely by:
- a local pointer in X from A to stubX (R); and
- a local pointer in Y from scionY (R) to B.
A scion corresponds to an incoming reference, and is
treated as a root during local garbage collection. An object
having one or more incoming references from remote spaces
is therefore considered live by the local garbage collector,
even in the absence of any local reference to that object.
The stub is a local "proxy" for some remote object. It
contains the location of its associated matching scion. Each
scion has at most one matching stub, and each stub has
exactly one matching scion. If several spaces contain stubs
referring to some object B, then each will have a unique
matching scion in B's space: one scion for each stub.
A reference R is created by space Y and exported to
some other space X as follows. First a new scion scionY (R)
is created and marshaled into a message. The marshaled
representation encodes the location of scionY (R) relative to
X. The message is then sent to X, where the location is
unmarshaled to create stubX (R).
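To make this representation concrete, the short C sketch below models stubs and scions as plain structs and walks through the export path just described (create a scion, stamp it, marshal its location, unmarshal it into a stub). All type and function names are invented for illustration; they are not the SSPC system's actual interface.
#include <stdio.h>
#include <stdlib.h>

/* Invented data layout: a scion location is what gets marshaled. */
typedef struct { int space_id; int scion_id; } scion_loc;
typedef struct { scion_loc loc; long scionstamp; void *target; } scion;
typedef struct { scion_loc loc; long stubstamp; } stub;

/* Space Y exports a reference to local object obj: create a scion,
   stamp it with the carrying message's time-stamp, marshal its location. */
static scion *export_reference(int my_space, int fresh_id, void *obj,
                               long msg_stamp, scion_loc *out_msg)
{
    scion *s = malloc(sizeof *s);
    s->loc.space_id = my_space;
    s->loc.scion_id = fresh_id;
    s->scionstamp = msg_stamp;       /* last message that sent R          */
    s->target = obj;                 /* acts as a GC root on space Y      */
    *out_msg = s->loc;               /* marshaled representation of R     */
    return s;
}

/* Space X unmarshals the location and creates the matching stub. */
static stub *import_reference(scion_loc msg, long msg_stamp)
{
    stub *t = malloc(sizeof *t);
    t->loc = msg;
    t->stubstamp = msg_stamp;        /* last message that carried R       */
    return t;
}

int main(void)
{
    int object_B = 0;                /* stand-in for object B on space Y  */
    scion_loc wire;
    scion *sc = export_reference(1, 7, &object_B, 10, &wire);
    stub *st = import_reference(wire, 10);
    printf("stub -> space %d, scion %d, stamp %ld\n",
           st->loc.space_id, st->loc.scion_id, st->stubstamp);
    free(sc);
    free(st);
    return 0;
}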
3 Stub-Scion Pair Chains
The SSPC system [13] is a mechanism for distributed reference
tracking similar to Network Objects [1] and supporting
acyclic distributed garbage collection. It differs from Net-work
Objects in several important respects, such as: a reduction
of the number of messages required for sending a
reference, lower latencies, fault-tolerance, and support for
object migration. However, we will only describe here the
part needed to understand its garbage collector.
The garbage collector is based on reference listing (an
extension of reference counting that is better suited to unreliable
communications), with time-stamps on messages to
avoid race conditions.
The following explanation is based on the example shown
in Figure 1. This simple example is easily generalizable to
situations having more references and spaces.
Each message is stamped by its sender with a monotonically
increasing time. When a message containing R is
sent by Y to X, the time-stamp of the message is stored in
scionY (R) in a field called scionstamp. When the message
is received by X, a field of stubX (R) called stubstamp is
increased to the time-stamp of the message. For stubX (R),
stubstamp contains the time-stamp of the most recent message
containing R that was received from Y . Similarly for
scionY (R), scionstamp is the time-stamp of the last message
containing R that was sent to X.
When object A becomes unreachable, stubX (R) is collected
by the local garbage collector of space X. When
stubX (R) is finalized, a value called thresholdX [Y ] is increased
to the stubstamp field of stubX (R). thresholdX [Y ] therefore
contains the time-stamp of the last message received
from Y that contained a reference to an object whose stub
has since been reclaimed by the local garbage collector.
After each garbage collection in space X, a message LIVE
is sent to all the spaces in the immediate vicinity. The
immediate vicinity of space X is the set of spaces that have
stubs and scions whose associated scions and stubs are in X.
The LIVE message sent to space Y contains the names of all
the scions in Y that are still reachable from stubs in X. The
value of thresholdX [Y ] is also sent in the LIVE message to
Y . This value allows space Y to determine the most recent
message that had been received by X from Y at the time
the LIVE message was sent.
Space Y extracts the list of scion names on receipt of the
LIVE message. This list is compared to the list of existing
scions in Y whose matching stubs are located in X. Any
existing scions that are not mentioned in the list are now
known to be unreachable from X, and are called suspect.
A suspect scion can be deleted, provided there is no danger
that a reference to it is currently in transit between X and
Y .
To prevent an incorrect deletion of a suspect scion, the
scionstamp field of suspect scions is compared to the thresholdX [Y ]
contained in the LIVE message. If scionY (R).scionstamp <= thresholdX [Y ],
then some stub referred to by a message sent after the last
one containing R has been collected. This implies that the
last message containing R was received before the LIVE was
sent, and so any stub created for R from this message must
no longer exist in space X. The suspect scion can therefore
be deleted safely.
To prevent out-of-order messages from violating this last
condition, any messages from Y marked with a time-stamp
smaller than the current value of thresholdX [Y ] are refused
by space X. (thresholdX [Y ] must therefore be initialized
with a time-stamp smaller than the time-stamp of
the first messages to be received.) This mechanism is called
threshold-filtering.
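A compact C sketch of the two rules above (which suspect scions a LIVE message allows to be deleted, and which incoming messages threshold-filtering refuses); the data layout and all names are invented for illustration.
#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; long scionstamp; bool live; } scion_entry;

/* On receipt of a LIVE message from X: a scion not listed is suspect,
   and may be deleted only if its scionstamp does not exceed the
   threshold carried by the message. */
static void handle_live(scion_entry *scions, int n,
                        const int *listed, int n_listed, long threshold)
{
    for (int i = 0; i < n; i++) {
        bool mentioned = false;
        for (int j = 0; j < n_listed; j++)
            if (scions[i].id == listed[j]) { mentioned = true; break; }
        if (!mentioned && scions[i].scionstamp <= threshold)
            scions[i].live = false;               /* safe to delete */
    }
}

/* Threshold-filtering: refuse any message from Y stamped earlier than
   the current threshold recorded for Y. */
static bool accept_message(long msg_stamp, long threshold_for_sender)
{
    return msg_stamp >= threshold_for_sender;
}

int main(void)
{
    scion_entry scions[2] = { {1, 4, true}, {2, 9, true} };
    int listed[1] = { 2 };                        /* only scion 2 is still live */
    handle_live(scions, 2, listed, 1, /*threshold=*/5);
    printf("scion 1 live: %d, accept stamp 3: %d\n",
           scions[0].live, accept_message(3, 5)); /* prints 0 and 0 */
    return 0;
}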
The LIVE message can be extended by a "missing time-
stamps" field, to inform the space Y of the time-stamps
which are smaller than thresholdX [Y ] and which have not
been received in a message yet. Y then has the possibility
of re-sending the corresponding messages using a new time-stamp
and newly-created scions, since older messages will
be refused by the threshold-filtering.
The above algorithm does not prevent premature deletion
of scions contained in messages that are delayed in tran-
sit. These deletions are however safe, since such delayed
messages will be refused by the threshold-filtering.
This situation can occur only if a more recent message
arrives before some other delayed message, and the more
recent message causes the creation of stubs that are subsequently
deleted by a local garbage collection before the
arrival of the delayed message. This can not happen with
FIFO communications (such as TCP/IP). Moreover, threshold-
filtering of delayed messages is not problematical for applications
using unreliable communications (such as UDP), since
these applications should be designed to function correctly
even in the presence of message loss. Threshold-filtering and
message loss due to faulty communication are indistinguishable
to the application.
The above distributed garbage collection mechanism is
fault-tolerant. Unreliable communications can not create
dangling pointers, and scions are never deleted in the case
of crashed spaces that contain matching stubs (which supports
extensions for handling crash recovery). Moreover, it
is scalable because each space only sends and receives messages
in its immediate vicinity, and asynchronous because
local garbage collections in each space are allowed at any
time with no need to synchronize with other spaces.
However, the mechanism is not complete. Distributed
cycles will never be deleted because of the use of reference-
listing. The remainder of this paper presents our contri-
bution: an algorithm to detect and cut distributed cycles,
rendering the SSPC garbage collector complete.
4 Detection of free distributed cycles
The detector of free distributed cycles is an extension to
the SSPC garbage collector. Spaces may elect to use the
acyclic SSPC GC without the detector extension (e.g. for
scalability reasons). Spaces that choose to be involved in
the detection of cycles are called participating spaces; other
spaces are called non-participating spaces. Our detector will
only detect cycles that lie entirely within a set of participating
spaces.
4.1
Overview
The algorithm is based on date propagation along chains of
remote pointers. The useful property of this propagation is
that reachable stubs receive increasing dates, whereas unreachable
stubs (belonging to a distributed cycle) are eventually
marked with constant dates.
A threshold date is computed by a central server. Stubs
marked with dates smaller than this threshold are known to
have constant dates, and are therefore unreachable. Each
participating space sends the minimum local date that it
wishes to protect to the central server (stubs with these
dates should not be collected). This information is based
not only on the dates marked on local stubs, but also on the
old dates propagated to outgoing references.
The algorithm is asynchronous (most values are computed
conservatively), tolerant to unreliable communications
(using old values in computations is always safe, since most
transmitted values are monotonically increasing) and benign
to the normal SSPC garbage collector (non-participating
spaces can work with an overlapping cluster of participating
spaces, even if they do not take part in cycle detection).
4.2 Data structures and messages
Stubs are extended with a time-stamp called stubdate. This
is the time of the most recent trace (possibly on a remote
site) during which the stub's chain was found to be rooted.
Stubs have a second time-stamp, called olddate, which is
the value of stubdate for the previous trace.
Scions are extended with a time-stamp called sciondate.
This is a copy of the most recently propagated stubdate
from the scion's matching stub-i.e. the time of the most
recent remote trace during which the scion's chain was found
to be rooted. The stubdates from a space are propagated
to their matching scions in some other space by sending a
STUBDATES message.
STUBDATES messages are stamped with the time of
the trace that generated them. Each site has a vector,
called cyclicthreshold, containing the time-stamp of the
last STUBDATES message received from each remote space.
The cyclicthreshold value for a remote space is periodically
propagated back to that space by sending it a THRESHOLD
message. The emission of THRESHOLD messages can
be delayed by saving the cyclicthreshold values for a given
time in a set called CyclicThresholdToSend until a particular
event.
Each site can protect outgoing references from remote
garbage collection. For this, it computes a time called lo-
calmin, which is sent in a LOCALMIN message to a dedicated
site, the Detection Server, where the minimum localmin of
all spaces is maintained in a variable called globalmin. LO-
CALMIN messages are acknowledged by the Detection Server
by sending back ACK messages.
Finally, to compute localmin, each site maintains a per-
space value, called ProtectNow, containing the new dates to
be protected at next local garbage collection. These values
are saved in a per-space table, called Protected Set, to
be re-used and thus protected for some other local garbage
collections.
4.3 The algorithm
A Lamport clock is used to simulate global time at each
participating space. 1
4.3.1 Local propagation
The current date of the Lamport clock is incremented before
each local garbage collection and used to mark local roots.
Each scion's sciondate is marked with a date received from
its matching stub. These dates are propagated from the
local roots and scions to the stubdate field of all reachable
stubs during the mark phase of garbage collection. If a stub
is reachable from different roots marked with different dates
then it is marked with the largest date.
Such propagation is easy to implement with minor modifications
to a tracing garbage collector. The scions are sorted
by decreasing sciondate, and the object memory traced
from each scion in turn. During the trace, the stubdate
for any visited unmarked stub is increased to the sciondate
of the scion from which the trace began.
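The sketch below shows this date-propagating mark phase in miniature: roots (local roots first, then scions sorted by decreasing sciondate) are traced in order, and each stub is stamped once, with the date of the first root that reaches it. The reachability matrix and all names are invented for illustration.
#include <stdio.h>

#define NROOTS 3   /* local roots and scions, already sorted by date */
#define NSTUBS 4

/* reach[r][s] != 0 means stub s is reachable from root r. */
static void propagate_dates(const int reach[NROOTS][NSTUBS],
                            const long root_date[NROOTS],
                            long stub_date[NSTUBS])
{
    int marked[NSTUBS] = {0};
    for (int r = 0; r < NROOTS; r++)        /* decreasing root dates */
        for (int s = 0; s < NSTUBS; s++)
            if (reach[r][s] && !marked[s]) {
                marked[s] = 1;
                if (stub_date[s] < root_date[r])
                    stub_date[s] = root_date[r];   /* "increased to" */
            }
}

int main(void)
{
    const int reach[NROOTS][NSTUBS] = { {1,0,0,0}, {0,1,1,0}, {0,0,1,1} };
    const long root_date[NROOTS] = { 10, 8, 6 };   /* sorted, largest first */
    long stub_date[NSTUBS] = { 2, 2, 2, 2 };
    propagate_dates(reach, root_date, stub_date);
    printf("%ld %ld %ld %ld\n", stub_date[0], stub_date[1],
           stub_date[2], stub_date[3]);            /* prints 10 8 8 6 */
    return 0;
}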
4.3.2 Remote propagation
A modified LIVE message, called STUBDATES, is sent to all
participating spaces in the vicinity after a local garbage col-
lection. This message serves to propagate the dates from all
stubs to their matching scions. These dates will be propagated
(locally, from scions to stubs) by the receiving space
at next local garbage collection in that space.
A Lamport clock is implemented by sending the current date in
all messages. (In our case, only those messages used for the detection
of free cycles are concerned). When such a message is received, the
current local date is increased to be strictly greater than the date in
the message.
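Such a Lamport clock takes only a few lines; a C sketch with invented names: the clock is incremented before each collection, and advanced strictly past any date received in a detection message. The pseudo-code of Figure 2 below then uses this clock's current date.
#include <stdio.h>

static long local_date = 0;            /* this space's Lamport clock */

static long next_gc_date(void)         /* called before a local GC   */
{
    return ++local_date;
}

static void on_message_received(long msg_date)
{
    if (local_date <= msg_date)
        local_date = msg_date + 1;     /* strictly greater than sender */
}

int main(void)
{
    next_gc_date();                    /* date 1           */
    on_message_received(10);           /* clock jumps to 11 */
    printf("%ld\n", next_gc_date());   /* prints 12         */
    return 0;
}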
increment current date;
FIFO add(cyclicthresholdtosend set,
(current date,cyclic threshold[]));
Mark from root(local roots,current date);
forall scion do
  if scion.scion date < globalmin then
    scion.pointer := NULL;
  else
    if scion.scion date = NOW then
      Mark from root(scion.pointer,current date);
    else
      Mark from root(scion.pointer,scion.scion date);
forall stub do
  if stub.stub date > stub.olddate then {
    decrease protect now[space] to stub.olddate;
    stub.olddate := stub.stub date; }
forall space in vicinity do {
  FIFO add(protected set[space],
    (protect now[space],current date));
  protect now[space] := current date;
  Send(space,STUBDATES,current date,
    {(stub.stub id,stub.stub date)}); }
localmin := min(protected set[]);
Send(server,LOCALMIN,current date,localmin);
Figure 2: Pseudo-code for a local garbage collec-
tion. The Protected Sets and Cyclicthreshold-
ToSend Set are implemented by FIFO queues with
three functions (add, head and remove).
4.3.3 Characterisation of free cycles
Local roots are marked with the current date, which is always
increasing. Reachable stubs are therefore marked with
increasing dates. On the other hand, the dates on stubs included
in unreachable cycles evolve in two different phases.
In the first phase, the largest date on the cycle is propagated
to every stub in the cycle. In the second phase, no new date
can reach the cycle from a local root, and therefore the dates
on the stubs in the cycle will remain constant forever.
Since unreachable stubs have constant dates, whereas
reachable stubs have increasing dates, it is possible to compute
an increasing threshold date called globalmin. Reachable
stubs and scions are always marked with dates larger
than globalmin. On the other hand, globalmin will eventually
become greater than the date of the stubs belonging
to a given cycle.
Scions whose dates are smaller than the current glob-
almin are not traced during a local garbage collection. Stubs
which were only reachable from these scions will therefore
be collected. The normal acyclic SSPC garbage collector
will then remove their associated scions, and eventually the
entire cycle.
4.3.4 Computation of globalmin
globalmin is computed by a dedicated space (the Detection
Server) as the minimum of the localmin values sent to it by
each participating space. 2 The central server always com-
globalmin could be computed with a lazy distributed consensus.
However, a central server is easier to implement (it can simply be
one of the participating spaces), and local networks (where such a
collector is most useful) often have a centralized structure.
Receive(space,STUBDATES,gc date,stub set,threshold)
increase cyclicthreshold[space] to gc date;
old scion set := space.scions;
space.scions := {};
forall scion in old scion set do {
  find(scion.scion id,stub set,found,stub date);
  if found or scion.scionstamp > threshold then {
    if scion.scionstamp <= threshold then
      increase scion.scion date to stub date;
    space.scions := space.scions U {scion}
  }}
Figure 3: Pseudo-code for the STUBDATES handler.
The find function looks for a scion identifier in the
set of stubs received in the message. If the stub
is found in the set then found is set to true, and
stub date is set to the date on the associated stub.
If the scionstamp is greater than the threshold in
the message then the scion is kept alive and its date
is not set.
if gc date > threshold date[space] then {
  increase threshold date[space] to current date;
  localmin[space] := localmin; }
Figure 4: Pseudo-code for the Detection Server. The
message is treated only if its garbage collection date is
the latest date received from the space.
putes globalmin from the most recently received value of
localmin sent to it from each space. (See the pseudo-code
in Figure 4.)
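On the server side, the computation amounts to keeping, for each participating space, the localmin from its most recent garbage collection and taking the minimum over all spaces; a small C sketch with invented names:
#include <stdio.h>

#define NSPACES 2                       /* two spaces, for illustration     */

static long threshold_date[NSPACES];    /* latest gc date seen per space    */
static long space_localmin[NSPACES];    /* latest localmin per space        */
static long globalmin;

/* Handle one LOCALMIN message; messages carrying an older gc date are
   ignored, then globalmin is recomputed (an ACK would be sent back). */
static void on_localmin(int space, long gc_date, long localmin)
{
    if (gc_date > threshold_date[space]) {
        threshold_date[space] = gc_date;
        space_localmin[space] = localmin;
    }
    globalmin = space_localmin[0];
    for (int i = 1; i < NSPACES; i++)
        if (space_localmin[i] < globalmin)
            globalmin = space_localmin[i];
}

int main(void)
{
    on_localmin(1, 8, 2);                     /* GC at date 8: localmin 2  */
    on_localmin(0, 10, 6);                    /* GC at date 10: localmin 6 */
    printf("globalmin = %ld\n", globalmin);   /* prints 2                  */
    return 0;
}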
4.3.5 Computation of localmin
localmin is recomputed after each local garbage collection
in a given participating space. (The pseudo-code is shown
in Figure 2.)
We now introduce the notion of a probably-reachable stub.
A stub is probably-reachable either when it has been used by
the mutator for a remote operation (such as an invocation)
since the last local garbage collection, or when its stubdate
is increased during the local trace.
This notion is neither a lower nor an upper approximation
of reachability. A stub might be both reachable and
not probably-reachable at the same time; it might also be
probably-reachable and not reachable at some other time.
However, on any reachable chain of remote references there
is at least one probably-reachable stub for each different date
on the chain. Therefore, since each space will "protect" the
date of its probably-reachable stubs, all dates on the chain
will be "protected".
To detect probably-reachable stubs after the local trace,
the previous stubdate of each stub (stored in the olddate
Receive(server,ACK,gc date,globalmin)
FIFO head(cyclicthresholdtosend set,
  (date,cyclic thresholds to send[]));
if date <= gc date then {
  repeat {
    FIFO remove(cyclicthresholdtosend set,
      (date,cyclic thresholds to send[]));
  } until (date == gc date);
  forall space do
    Send(space,THRESHOLD,
      cyclic thresholds to send[space]); }
Figure 5: Pseudo-code for the ACK message handler.
Old values in the CyclicthresholdToSend Set can
be discarded, since they are smaller than those which
will be sent in the THRESHOLD messages. Their corresponding
ProtectNow values in the Protected Sets
will therefore also be removed when the THRESHOLD
messages are received.
field), is compared to the newly-propagated stubdate. For
each participating space in the immediate vicinity, a date
(called ProtectNow) contains the minimum olddate of all
stubs which have been detected as probably-reachable since
the last local garbage collection.
The value of ProtectNow for each space is saved in a
per-space set, called Protected Set, after each garbage col-
lection. ProtectNow is then re-initialized to the current date.
The localmin for the space is then computed as the minimum
of all ProtectNow values in all the Protected Sets.
This new value of localmin is sent to the detection server
in a LOCALMIN message.
The next value of globalmin will be smaller than these
olddates. All olddates associated with stubs that were detected
probably-reachable since some of the latest garbage
collections will therefore be protected by the new value of
globalmin: stubs and scions marked with those dates will
not be collected. 3
globalmin must protect the olddates rather than the
stubdates. This is because the scions associated with probably-
reachable stubs must be protected against collection, and
these scions are marked with the olddate of their matching
stub. In fact globalmin not only protects the associated
scions, but also all references that are reachable from
probably-reachable stubs and which are marked with the
olddates of these stubs.
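Putting these rules together, the per-space bookkeeping and the final minimum can be sketched as follows in C; a bounded array stands in for the FIFO Protected Sets, and every name is invented for illustration.
#include <limits.h>
#include <stdio.h>

#define NSPACES 2
#define QLEN    16

static long protected_set[NSPACES][QLEN];  /* saved ProtectNow values */
static int  qlen[NSPACES];
static long protect_now[NSPACES];

/* During the trace: a stub whose date grew (probably-reachable) protects
   its previous date for the space holding its matching scion. */
static void note_probably_reachable(int space, long olddate)
{
    if (olddate < protect_now[space])
        protect_now[space] = olddate;
}

/* After the collection: save ProtectNow, reset it to the current date,
   and recompute localmin as the minimum over everything still saved. */
static long end_of_gc(long current_date)
{
    long localmin = LONG_MAX;
    for (int s = 0; s < NSPACES; s++) {
        if (qlen[s] < QLEN)
            protected_set[s][qlen[s]++] = protect_now[s];
        protect_now[s] = current_date;
        for (int i = 0; i < qlen[s]; i++)
            if (protected_set[s][i] < localmin)
                localmin = protected_set[s][i];
    }
    return localmin;                       /* sent in a LOCALMIN message */
}

int main(void)
{
    for (int s = 0; s < NSPACES; s++) protect_now[s] = 6;  /* current date */
    note_probably_reachable(1, 2);         /* a stub previously dated 2    */
    printf("localmin = %ld\n", end_of_gc(6));              /* prints 2     */
    return 0;
}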
4.3.6 Reduction of the Protected Set
STUBDATES and LOCALMIN messages both contain the
date of the local garbage collection during which they were
sent.
When a STUBDATES message is received (see Figure 3),
the per-space threshold CyclicThreshold is increased to the
GC date contained in the message. The CyclicThreshold
for each participating space is saved in the CyclicThresh-
oldToSend Set before each local garbage collection.
Each LOCALMIN message received by the Detection Server
is acknowledged by a ACK message containing the same
GC date. When this ACK message is received (see Figure 5),
3 The slightly cryptic phrase "some of the latest garbage collec-
tions" will be explained in full in the next section.
Receive(space,THRESHOLD,cyclic threshold)
FIFO head(protected set[space], (protect now,gc date));
while (gc date <= cyclic threshold) {
  FIFO remove(protected set[space],
    (protect now, gc date));
  FIFO head(protected set[space],
    (protect now, gc date)); }
Figure 6: Pseudo-code for the THRESHOLD handler.
the CyclicThresholds saved in the CyclicThreshold-
ToSend Set for the local garbage collection started at the
GC date of the ACK message are sent to their associated
space in THRESHOLD messages. Older values (for older local
garbage collections) in the CyclicThresholdToSend Set
are discarded (This is perfectly safe. When a space receives a
THRESHOLD message it will perform all of the actions that
should have been performed for any previous THRESHOLD
messages that were lost).
When a CyclicThreshold date is received in a THRESHOLD
message, all older ProtectNow values in the Protected
Set associated with the sending space are removed. (See
Figure 6.) These values will no longer participate in the
computation of globalmin.
We can now explain the cryptic phrase "some of the latest
garbage collections" that appeared in the previous section
The olddate on a probably-reachable stub is protected
by a ProtectNow in a Protected Set. It will continue to
be protected for a certain time, until several events have oc-
curred. The new stubdate must first be sent to the matching
scion in a STUBDATES message. From there it is propagated
by a local trace to any outgoing stubs (new
probably-reachable stubs in that space will be detected during
this trace). The new localmin for that space must then be
received and used by the detection server (ensuring that the
olddates on the newly detected probably-reachable stubs
are protected by next values of globalmin). After this, the
ACK message received from the detection server will trigger a
THRESHOLD message containing a Cyclicthreshold equal
to the GC date of the STUBDATES message (or greater if
other STUBDATES messages have been received before the
local garbage collection). Only after this THRESHOLD message
is received will the ProtectNow be removed from its
Protected Set.
4.4 Example
Figures 7, 8 and 9 show a simple example of distributed
detection of free cycles.
Spaces A and B are participating spaces; space C is the
detection server. The system contains two distributed cycles
C(1) and C(2), each containing two objects: OA(1) and
OB (1) for C(1), OA(2) and OB (2) for C(2). C(1) is locally
reachable in A, whereas C(2) has been unreachable since
date 2. A local garbage collection in A at date 6 has propagated
this date to stubA (1), which was previously marked
with date 2. The Protected Set associated with B contains
a single entry: a ProtectNow 2 at date 6.
In figure 7, a local garbage collection occurs in B at
date 8. The date 6, marked on scionB (1), is propagated to
stubB (1) which was previously marked with 2. B saves the
Figure 7: After a local garbage collection at date 8 on space B, the new localmin 2 is sent to the detection server C. After
the acknowledgment, the cyclic threshold 6 message is sent to A, which will remove this entry from its protected set.
new ProtectNow 2 associated with A in its Protected Set.
It then sends a STUBDATES message with the new stub-
dates to A, and a LOCALMIN message with its new localmin
2 to the detection server. After saving this new localmin,
the detection server sends an ACK message to B containing
the same date as the original LOCALMIN message. A glob-
almin value (possibly not up-to-date) can be piggybacked
on this message. After reception of this ACK message, B
sends a THRESHOLD message to A containing the date of
the last STUBDATES message received from A. A consequently
removes the associated ProtectNow entry from its
protected set, which is now empty.
In figure 8, a local garbage collection occurs in A at date
10. The current date 10 is propagated to stubA (1), previously
marked with 6. The ProtectNow associated with B is
therefore decreased to 6. stubA(2) does not participate in
the computation of ProtectNow, since it is still marked with 2.
This ProtectNow is then saved in the Protected Set, and
the new localmin (6) is sent to the detection server. After
the reception of the ACK message from C, a THRESHOLD
message is sent to B which removes the associated entry
from its Protected Set. However, its localmin on the detection
server is still equal to 2, thus, preventing globalmin
from increasing.
In figure 9, a local garbage collection occurs in B at date
12. The new localmin computed in B is equal to 6. The
new globalmin is therefore increased to 6. All scions marked
with smaller dates will not be traced, starting from the moment
that A and B receive this new value of globalmin.
Consequently scionA(2) and scionB (2) will not be traced in
subsequent garbage collections, and OA (2), OB (2), stubB (2)
and stubA(2) will be collected by local garbage collections.
At the same time, scionA(2) and scionB (2) will be collected
by the SSPC garbage collector when STUBDATES messages
that do not contain stubB (2) and stubA(2) are received by
A and B respectively. The cycle C(2) has now been entirely
collected.
5 Related issues
5.1 New remote references and non-participating spaces
When a new remote reference is created, the stub olddate
is set to the current date and the sciondate is initialized
with a special date called NOW. Moreover, each time a scion
location is resent to its associated space, a new stub may
be created if the previous one had already been collected.
sciondate is therefore re-initialized to NOW each time its
scion's location is resent in a message.
Scions marked with NOW propagate the current date at
each garbage collection. A newly-created scion therefore
behaves as a normal local root, until a new date is propagated
by a STUBDATES message from its matching stub.
The SSPC threshold is then compared to the scionstamp
to ensure that all messages containing the scion have been
received before fixing the sciondate.
This mechanism is also used to allow incoming references
from non-participating spaces. (STUBDATES messages
will never be received from non-participating spaces.)
The sciondates of their associated scions will therefore remain
at NOW forever, and they will act as local roots. Distributed
cycles that include these remote references will never
be collected. This is safe, and does not impact the completeness
of the algorithm for participating spaces.
We must also cope with outgoing references to non-participating
spaces. We must avoid putting entries in the Pro-
Figure 8: After a new local garbage collection in A, localmin A is increased to 6.
tected Sets for non-participating spaces, since no THRESHOLD
messages will be received to remove such entries. (This
would prevent localmin and hence globalmin from increas-
ing, thus stalling the detection process.) A space must therefore
only send STUBDATES messages to, and create entries
in the protected sets for, known participating spaces. The
list of participating spaces is maintained by the detection
server, and is sent to other participating spaces whenever
necessary (when new participating space arrives, when a
space quits the detection process, or if a space is suspected
of having crashed or is being too slow to respond).
5.2 Coping with mutator activity
The mutator can create and delete remote references in
the interval between local garbage collections. Dates on a
remotely-reachable object might therefore never increase because
of a "phantom reference": each time a local garbage
collection occurs in a space from which the object is reach-
able, the mutator gives the reference on the object to another
space and deletes the local reference - just before the
collection. Greater dates might therefore never be propagated
to the object and the object would be detected as part of a
free cycle although it is still reachable (see Figure 10 for an
example).
Such transient references may move from stubs to scions
(for invocation) or from scions to stubs (by reference pass-
ing). In the first case, we mark the invoked scions with the
current date (This prevents globalmin from stalling). In the
second case, we ensure that each time a stub is used by the
mutator (for invocation, or copy to/from another space) its
olddate is used to increase the ProtectNow associated with
the space of its matching scion. The date of the ProtectNow
therefore always contains the minimum olddate of all the
stubs that have been used in the interval between two local
Figure 10: With its local reference to O 1 , A invokes
stub 1 which creates a new local reference in B to O 2 .
A deletes its local reference R 1 , and performs a new
local garbage collection. stub 1 is therefore re-marked
with 2, and localmin A is increased to 5. This is
incorrect, since the cycle is reachable from B. This
is the reason why the external mutator activity must
be monitored by the detector of free cycles.
garbage collections. This protects any object reachable from
these stubs against such transient "phantom references".
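Both cases reduce to two small hooks on the remote-reference operations; a C sketch with invented names and trivially initialized state:
#include <stdio.h>

static long current_date = 7;              /* Lamport clock (illustration) */
static long protect_now[2] = { 7, 7 };     /* per-space ProtectNow         */

/* Case 1: an invocation arrives through a scion; mark it with "now" so
   that globalmin cannot overtake a scion the mutator is actively using. */
static void on_scion_invoked(long *sciondate)
{
    if (*sciondate < current_date)
        *sciondate = current_date;
}

/* Case 2: the mutator uses a stub (invocation, or copying the reference
   to another space); protect its olddate for its scion's space. */
static void on_stub_used(int scion_space, long stub_olddate)
{
    if (stub_olddate < protect_now[scion_space])
        protect_now[scion_space] = stub_olddate;
}

int main(void)
{
    long sciondate = 3;
    on_scion_invoked(&sciondate);          /* scion now carries date 7 */
    on_stub_used(1, 5);                    /* protect olddate 5        */
    printf("%ld %ld\n", sciondate, protect_now[1]);   /* prints 7 5 */
    return 0;
}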
5.3 Fault tolerance
Our algorithm is tolerant to message loss and out-of-order
delivery. The STUBDATES, THRESHOLD, LOCALMIN and
ACK messages are only accepted if their dates are greater
than those of the previously received such message. More-
over, the computations are always conservative when using
old values. Even LOCALMIN messages may be lost: no ACK
messages will be sent and therefore no THRESHOLD will be
Figure 9: After a new local garbage collection in B, localmin B is set to 6, and globalmin is increased to 6. Thus, the free
cycle marked with 2 will be collected since its date is now smaller than globalmin.
sent to other spaces, which will continue to protect the dates that
the lost LOCALMIN messages would have protected.
Crashed spaces (or spaces that are too slow to respond)
are handled by the detection server, which can exclude any
suspected space from the detection process by sending a special
message to all participating spaces. The participating
spaces set the sciondates for scions whose matching stubs
are in the suspect space(s) to NOW, and remove all entries for
the suspected spaces in their Protected Sets.
Finally, the detection server may also crash. This does
not stop acyclic garbage collection, and only delays cyclic
garbage collection. A detection server can be restarted, and
dynamically rebuild the list of participating spaces through
some special recovery protocol. It then waits for each participating
space to send a new localmin value before computing
a new up-to-date value for globalmin.
6 Analysis
We can estimate the worst-case time needed to collect a
newly unreachable cycle. It is the time needed to propagate
dates greater than those on the cycle to all reachable stubs.
Assuming that spaces perform local garbage collections at
approximately the same rate, we define a period to be the
time necessary for spaces to perform a new local garbage
collection. The time needed to collect the cycle is equal to
the product of the length of the longest chain of reachable
references and the period:
collection time = (length of longest reachable chain) x period
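For illustration (numbers assumed, not measured): if the longest chain of reachable remote references has length 4 and each space performs a local garbage collection roughly once per minute, a newly unreachable cycle would be collected within about four minutes.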
We can also estimate the number and the size of the messages
that are sent after a local garbage collection. There is
one LIVE message (sent by the SSPC garbage collector), plus
one STUBDATES message and one THRESHOLD message
sent for each space in the immediate vicinity. The first two
messages can be concatenated into a single network message.
Hence there are only two messages sent for each space in the
vicinity. The STUBDATES message contains one identifier
and one date for each live stub referring to the destination
space, plus the SSPC threshold time-stamp. The THRESHOLD
message contains only the CyclicThreshold value for
the destination space.
One LOCALMIN message is also sent to the detection
server, and one ACK message sent back from the server.
The Protected Set contains triples for each space in the
vicinity. For a space X in the vicinity of Y , the number
of triples for X in the Protected Set of Y is equal to the
number of local garbage collections that have occurred on Y
since the last garbage collection on X. If the frequencies of
the garbage collections in the different participating spaces
are similar, the Protected Set should not grow too much.
If one space requires too many garbage collections, and its
Protected Set becomes too large, it should avoid performing
cyclic detection after each garbage collection (but not
stop garbage collections) until sufficient entries in its Protected
Set have been removed.
Finally, a very large number of spaces may use the same
detection server. The server only contains two dates per
participating space, and the computation of the minimum
of this array should not be expensive.
7 Implementation
Our algorithm has been incorporated into an implementation
of the SSP Chains system written in Objective-CAML
[5], using the Unix and Thread modules [6].
The Objective-Caml implementation of SSPC consists of
1300 lines of code, of which 200 are associated with the cyclic
GC algorithm. The propagation of dates by tracing was
implemented as a minor modification to the existing Caml
garbage collector [2]. The Mark from root(roots) function
was changed into Mark from root(roots,date), which
marks stubs reachable from a set of roots with the given
date. This function is then applied first to the normal local
roots with the current date (which is always greater than
all the dates on scions), and then to sets of scions sorted
by decreasing dates. Each reachable stub is therefore only
marked once, with the date of the first root from which it is
reachable.
Finalization of stubs (required for updating the threshold
when they are collected) is implemented by using a list
of pairs. Each pair contains a weak pointer to a stub and a
stubstamp field. After a garbage collection, the weak pointers
are tested to determine if their referent objects are still
live. The stubstamp field is used to update the threshold
if the weak pointer is found to be dangling.
The Protected Set is implemented as a FIFO queue for
each participating space. The head of the queue contains the
ProtectNow value, which can be modified by the mutator
between local garbage collections. When a THRESHOLD
message is received, entries are removed from the tail of the
queue until the last entry has a date greater than the one in
the message. Finally, localmin is computed as the minimum
of all entries in all queues.
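The same structure can be written in a few lines of C; the sketch below (invented names, fixed capacity, no overflow handling) keeps the newest entry as the live ProtectNow and prunes old entries when a THRESHOLD date arrives.
#include <stdio.h>

#define QLEN 16
typedef struct { long protect_now, gc_date; } entry;
typedef struct { entry q[QLEN]; int oldest, count; } protected_fifo;

/* After a local GC: save the current ProtectNow under that GC's date. */
static void push_newest(protected_fifo *f, long protect_now, long gc_date)
{
    int i = (f->oldest + f->count) % QLEN;    /* assume no overflow here */
    f->q[i].protect_now = protect_now;
    f->q[i].gc_date = gc_date;
    f->count++;
}

/* On a THRESHOLD message: drop entries whose GC date is covered. */
static void on_threshold(protected_fifo *f, long cyclic_threshold)
{
    while (f->count > 0 && f->q[f->oldest].gc_date <= cyclic_threshold) {
        f->oldest = (f->oldest + 1) % QLEN;
        f->count--;
    }
}

int main(void)
{
    protected_fifo f = {0};
    push_newest(&f, 2, 6);                    /* ProtectNow 2 at GC date 6  */
    push_newest(&f, 6, 10);                   /* ProtectNow 6 at GC date 10 */
    on_threshold(&f, 6);                      /* THRESHOLD for date 6       */
    printf("entries left: %d\n", f.count);    /* prints 1                   */
    return 0;
}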
Objective-CAML has high-level capabilities to automatically
marshal and unmarshal symbolic messages, easing the
implementation of complex protocols. Some modification
of the compiler and the standard object library was needed
to enable dynamic creation of classes of stubs and dynamic
type verification for SSPC. However, these modifications are
not related to either the acyclic GC or the cycle detector algorithm
8 Related work
8.1 Hughes' algorithm
Our algorithm was inspired by Hughes' algorithm. In Hughes'
algorithm, each local garbage collection provokes a global
trace and propagates the starting date of the trace. How-
ever, the threshold date is computed by a termination algorithm
(due to Rana [11]). The date on a stub therefore
represents the starting date of the most recent global trace
in which the stub was detected as reachable. If the threshold
is the starting date of a terminated global trace, then
any stub marked with a strictly smaller date has not been
detected as reachable by this terminated global trace. It can
therefore be collected safely.
However, the termination algorithm used in this algorithm
requires a global clock, instantaneous communication,
and does not support failures. Moreover, each local garbage
collection in one space triggers new computations in all of
the participating spaces. Such behavior is not suitable for a
large-scale fault-tolerant system.
8.2 Recent work
Detecting free cycles has been addressed by several researchers.
A good survey can be found in [10]. We will only present
more recent work below.
All three of the recent algorithms are based on partitioning
into groups of spaces or nodes. Cycles are only collected
when they are included entirely within a single partition.
Heuristics are used to improve the partitioning. These algorithms
are complex, and may be difficult to implement.
Moreover, their efficiency depends greatly on the choice of
heuristic for selecting "suspect objects".
Maheshwari and Liskov's [7] work is based on back-tracing.
The group is traced in the opposite direction to references,
starting from objects that are suspected to belong to an
unreachable cycle. A heuristic based on distance selects
"suspected objects". If the backward trace does not encounter
a local root, the object is on a free cycle. Their
detector is asynchronous, fault-tolerant, and well-adapted
to large-scale systems. Nevertheless, back-tracing requires
extra data structures for each remote reference. Further-
more, every suspected cycle needs one trace, whereas our
algorithm collects all cycles concurrently.
Rodrigues and Jones's [12] cyclic garbage collector was
inspired by Lang et al.[4], dividing the network into groups
of processes. The algorithm collects cycles of garbage contained
entirely within a group. The main improvement is
that only suspect objects (according to a heuristic such
as Maheshwari and Liskov's distance) are traced. Global
synchronization is needed to terminate the detection. It is
difficult to know how the algorithm behaves when the group
becomes very large.
The DMOS garbage collector [9] has some desirable prop-
erties: safety, completeness, non-disruptiveness, incremen-
tality, and scalability. Spaces are divided into a number of
disjoint blocks (called "cars"). Cars from different spaces
are grouped together into trains. Reachable data is copied
from cars in one train to cars in other trains. Unreachable
data and cycles contained in one car or one train are left
behind and can be collected. Completeness is guaranteed
by the order of collections. This algorithm is highly complex
and has not been implemented. Moreover, problems
relating to fault-tolerance are not addressed by the authors.
9 Conclusion
We have described a complete distributed garbage collector,
created by extending an acyclic distributed garbage collector
with a detector of distributed garbage cycles. Our garbage
collector has some desirable properties: asynchrony between
participating spaces, fault-tolerance (messages can be lost,
participating spaces and servers can crash), low resource
requirements (memory, messages and time), and finally ease
of implementation.
It seems well adapted to large-scale distributed systems
since it supports non-participating spaces, and consequently
clusters of cyclically-collected spaces within larger groups of
interoperating spaces.
We are currently working on a new implementation for
the Join-Calculus language. Future work includes the handling
of overlapping sets of participating spaces, protocols
for server recovery, and performance measurements.
Acknowledgments
The authors would like to thank Neilze Dorta for her study
of recent cyclic garbage collectors. We also thank Jean-Jacques
Levy and Damien Doligez for their valuable comments
and suggestions on improving this paper.
--R
Network objects.
A concurrent
The reflexive chemical abstract machine and the join-calculus
Garbage collecting the world.
The objective-caml system software
Unix system programming in caml light.
Collecting cyclic distributed garbage by back tracing.
A calculus of mobile processes I and II.
A survey of distributed garbage collection techniques.
A distributed solution to the distributed termination problem.
A cyclic distributed garbage collector for Network Ob- jects
chains: Robust
--TR
Garbage collecting the world
A concurrent, generational garbage collector for a multithreaded implementation of ML
A calculus of mobile processes, II
Network objects
Collecting distributed garbage cycles by back tracing
Garbage collecting the world
A Survey of Distributed Garbage Collection Techniques
A Cyclic Distributed Garbage Collector for Network Objects
A Calculus of Mobile Agents
--CTR
Fabrice Le Fessant, Detecting distributed cycles of garbage in large-scale systems, Proceedings of the twentieth annual ACM symposium on Principles of distributed computing, p.200-209, August 2001, Newport, Rhode Island, United States
Stephen M. Blackburn , Richard L. Hudson , Ron Morrison , J. Eliot B. Moss , David S. Munro , John Zigman, Starting with termination: a methodology for building distributed garbage collection algorithms, Australian Computer Science Communications, v.23 n.1, p.20-28, January-February 2001
Abhay Vardhan , Gul Agha, Using passive object garbage collection algorithms for garbage collection of active objects, ACM SIGPLAN Notices, v.38 n.2 supplement, February
Michael Hicks , Suresh Jagannathan , Richard Kelsey , Jonathan T. Moore , Cristian Ungureanu, Transparent communication for distributed objects in Java, Proceedings of the ACM 1999 conference on Java Grande, p.160-170, June 12-14, 1999, San Francisco, California, United States
Luc Moreau , Peter Dickman , Richard Jones, Birrell's distributed reference listing revisited, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.6, p.1344-1395, November 2005
Laurent Amsaleg , Michael J. Franklin , Olivier Gruber, Garbage collection for a client-server persistent object store, ACM Transactions on Computer Systems (TOCS), v.17 n.3, p.153-201, Aug. 1999 | distributed object systems;storage management;garbage collection;reference tracking |
277725 | The implementation of the Cilk-5 multithreaded language. | The fifth release of the multithreaded language Cilk uses a provably good "work-stealing" scheduling algorithm similar to the first system, but the language has been completely redesigned and the runtime system completely reengineered. The efficiency of the new implementation was aided by a clear strategy that arose from a theoretical analysis of the scheduling algorithm: concentrate on minimizing overheads that contribute to the work, even at the expense of overheads that contribute to the critical path. Although it may seem counterintuitive to move overheads onto the critical path, this "work-first" principle has led to a portable Cilk-5 implementation in which the typical cost of spawning a parallel thread is only between 2 and 6 times the cost of a C function call on a variety of contemporary machines. Many Cilk programs run on one processor with virtually no degradation compared to equivalent C programs. This paper describes how the work-first principle was exploited in the design of Cilk-5's compiler and its runtime system. In particular, we present Cilk-5's novel "two-clone" compilation strategy and its Dijkstra-like mutual-exclusion protocol for implementing the ready deque in the work-stealing scheduler. | Introduction
Cilk is a multithreaded language for parallel programming
that generalizes the semantics of C by introducing linguistic
constructs for parallel control. The original Cilk-1 release
[3, 4, 18] featured a provably efficient, randomized, "work-
stealing" scheduler [3, 5], but the language was clumsy,
because parallelism was exposed "by hand" using explicit
continuation passing. The Cilk language implemented by
This research was supported in part by the Defense Advanced
Research Projects Agency (DARPA) under Grant N00014-94-1-0985.
Computing facilities were provided by the MIT Xolas Project, thanks
to a generous equipment donation from Sun Microsystems.
To appear in Proceedings of the 1998 ACM SIGPLAN
Conference on Programming Language Design and Implementation
(PLDI), Montreal, Canada, June 1998.
our latest Cilk-5 release [8] still uses a theoretically efficient
scheduler, but the language has been simplified considerably.
It employs call/return semantics for parallelism and features
a linguistically simple "inlet" mechanism for nondeterministic
control. Cilk-5 is designed to run efficiently on contemporary
symmetric multiprocessors (SMP's), which feature
hardware support for shared memory. We have coded many
applications in Cilk, including the *Socrates and Cilkchess
chess-playing programs which have won prizes in international
competitions.
The philosophy behind Cilk development has been to
make the Cilk language a true parallel extension of C, both
semantically and with respect to performance. On a parallel
computer, Cilk control constructs allow the program to
execute in parallel. If the Cilk keywords for parallel control
are elided from a Cilk program, however, a syntactically and
semantically correct C program results, which we call the C
elision (or more generally, the serial elision) of the Cilk
program. Cilk is a faithful extension of C, because the C
elision of a Cilk program is a correct implementation of the
semantics of the program. Moreover, on one processor, a
parallel Cilk program "scales down" to run nearly as fast as
its C elision.
Unlike in Cilk-1, where the Cilk scheduler was an identifiable
piece of code, in Cilk-5 both the compiler and runtime
system bear the responsibility for scheduling. To obtain ef-
ficiency, we have, of course, attempted to reduce scheduling
overheads. Some overheads have a larger impact on execution
time than others, however. A theoretical understanding
of Cilk's scheduling algorithm [3, 5] has allowed us to identify
and optimize the common cases. According to this abstract
theory, the performance of a Cilk computation can be
characterized by two quantities: its work , which is the total
time needed to execute the computation serially, and its
critical-path length , which is its execution time on an infinite
number of processors. (Cilk provides instrumentation
that allows a user to measure these two quantities.) Within
Cilk's scheduler, we can identify a given cost as contributing
to either work overhead or critical-path overhead. Much
of the efficiency of Cilk derives from the following principle,
which we shall justify in Section 3.
The work-first principle: Minimize the scheduling
overhead borne by the work of a computation.
Specifically, move overheads out of the work and
onto the critical path.
The work-first principle played an important role during the
design of earlier Cilk systems, but Cilk-5 exploits the principle
more extensively.
The work-first principle inspired a "two-clone" strategy
for compiling Cilk programs. Our cilk2c compiler [23] is a
source-to-source translator that transforms a
Cilk source into a C postsource which makes calls to Cilk's
runtime library. The C postsource is then run through the
gcc compiler to produce object code. The cilk2c compiler
produces two clones of every Cilk procedure-a "fast" clone
and a "slow" clone. The fast clone, which is identical in
most respects to the C elision of the Cilk program, executes
in the common case where serial semantics suffice. The slow
clone is executed in the infrequent case that parallel semantics
and its concomitant bookkeeping are required. All communication
due to scheduling occurs in the slow clone and
contributes to critical-path overhead, but not to work overhead.
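As a rough illustration of the idea (not cilk2c's actual output), the fragment below shows how a fast clone can keep ordinary C calling conventions while still leaving stealable work behind: state is saved into a frame and the frame is pushed before a plain C call, and afterwards the parent merely checks whether it was stolen. The frame layout and the push_frame/frame_stolen hooks are invented; here they are single-processor stand-ins so the sketch runs.
#include <stdio.h>

/* Hypothetical frame and runtime hooks, invented for this sketch; the real
   cilk2c output and Cilk-5 runtime differ in many details. */
typedef struct { int entry; int n; } fib_frame;

static void push_frame(fib_frame *f) { (void)f; }          /* stand-in */
static int  frame_stolen(const fib_frame *f) { (void)f; return 0; }

/* Sketch of a "fast clone": state is saved and the frame pushed before an
   ordinary C call; afterwards the parent checks whether it was stolen. */
static int fib_fast(int n)
{
    if (n < 2) return n;
    else {
        int x, y;
        fib_frame f = { 1, n };
        push_frame(&f);
        x = fib_fast(n - 1);          /* plain C call, no context switch */
        if (frame_stolen(&f))
            return 0;                 /* the slow clone resumes elsewhere */
        y = fib_fast(n - 2);
        return x + y;                 /* a sync costs nothing here        */
    }
}

int main(void) { printf("%d\n", fib_fast(10)); return 0; }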
The work-first principle also inspired a Dijkstra-like [11],
shared-memory, mutual-exclusion protocol as part of the
runtime load-balancing scheduler. Cilk's scheduler uses a
"work-stealing" algorithm in which idle processors, called
thieves, "steal" threads from busy processors, called vic-
tims. Cilk's scheduler guarantees that the cost of stealing
contributes only to critical-path overhead, and not to
work overhead. Nevertheless, it is hard to avoid the mutual-exclusion
costs incurred by a potential victim, which contribute
to work. To minimize work overhead, instead of using
locking, Cilk's runtime system uses a Dijkstra-like protocol,
which we call the THE protocol, to manage the runtime
deque of ready threads in the work-stealing algorithm. An
added advantage of the THE protocol is that it allows an
exception to be signaled to a working processor with no additional
work overhead, a feature used in Cilk's abort mechanism
The remainder of this paper is organized as follows. Section
2 overviews the basic features of the Cilk language. Section
3 justifies the work-first principle. Section 4 describes
how the two-clone strategy is implemented, and Section 5
presents the THE protocol. Section 6 gives empirical evidence
that the Cilk-5 scheduler is efficient. Finally, Section 7
presents related work and offers some conclusions.
2 The Cilk language
This section presents a brief overview of the Cilk extensions
to C as supported by Cilk-5. (For a complete description,
consult the Cilk-5 manual [8].) The key features of the language
are the specification of parallelism and synchroniza-
tion, through the spawn and sync keywords, and the specification
of nondeterminism, using inlet and abort.
#include <stdlib.h>
#include <stdio.h>
#include <cilk.h>
cilk int fib (int n)
{
  if (n<2) return n;
  else {
    int x, y;
    x = spawn fib (n-1);
    y = spawn fib (n-2);
    sync;
    return (x+y);
  }
}
cilk int main (int argc, char *argv[])
{
  int n, result;
  n = atoi(argv[1]);
  result = spawn fib(n);
  sync;
  printf ("Result: %d\n", result);
  return 0;
}
Figure 1: A simple Cilk program to compute the nth Fibonacci
number in parallel (using a very bad algorithm).
The basic Cilk language can be understood from an example.
Figure 1 shows a Cilk program that computes the nth
Fibonacci number. 1 Observe that the program would be an
ordinary C program if the three keywords cilk, spawn, and
sync are elided.
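One mechanical way to obtain that C elision is to make the three keywords disappear before compilation; the macros below are only an illustration of the idea, not part of the Cilk-5 distribution.
/* Hypothetical "elision" definitions: with these in effect, the program of
   Figure 1 compiles as ordinary serial C. */
#define cilk              /* a Cilk procedure becomes a plain C function */
#define spawn             /* a spawn becomes an ordinary function call   */
#define sync ((void)0)    /* a sync becomes a no-op statement            */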
The keyword cilk identifies fib as a Cilk procedure,
which is the parallel analog to a C function. Parallelism
is created when the keyword spawn precedes the invocation
of a procedure. The semantics of a spawn differs from a
C function call only in that the parent can continue to execute
in parallel with the child, instead of waiting for the
child to complete as is done in C. Cilk's scheduler takes the
responsibility of scheduling the spawned procedures on the
processors of the parallel computer.
A Cilk procedure cannot safely use the values returned by
its children until it executes a sync statement. The sync
statement is a local "barrier," not a global one as, for ex-
ample, is used in message-passing programming. In the Fibonacci
example, a sync statement is required before the
statement return (x+y) to avoid the anomaly that would
occur if x and y are summed before they are computed. In
addition to explicit synchronization provided by the sync
statement, every Cilk procedure syncs implicitly before it
returns, thus ensuring that all of its children terminate before
it does.
Ordinarily, when a spawned procedure returns, the returned
value is simply stored into a variable in its parent's
frame:
x = spawn fib (n-1);
This program uses an inefficient algorithm which runs in exponential
time. Although logarithmic-time methods are known [9, p. 850],
this program nevertheless provides a good didactic example.
cilk int fib (int n)
{
  int x = 0;
  inlet void summer (int result)
  {
    x += result;
    return;
  }
  if (n<2) return n;
  else {
    summer(spawn fib (n-1));
    summer(spawn fib (n-2));
    sync;
    return (x);
  }
}
Figure 2: Using an inlet to compute the nth Fibonacci number.
Occasionally, one would like to incorporate the returned
value into the parent's frame in a more complex way. Cilk
provides an inlet feature for this purpose, which was inspired
in part by the inlet feature of TAM [10].
An inlet is essentially a C function internal to a Cilk pro-
cedure. In the normal syntax of Cilk, the spawning of a
procedure must occur as a separate statement and not in an
expression. An exception is made to this rule if the spawn
is performed as an argument to an inlet call. In this case,
the procedure is spawned, and when it returns, the inlet is
invoked. In the meantime, control of the parent procedure
proceeds to the statement following the inlet call. In princi-
ple, inlets can take multiple spawned arguments, but Cilk-5
has the restriction that exactly one argument to an inlet
may be spawned and that this argument must be the first
argument. If necessary, this restriction is easy to program
around.
Figure 2 illustrates how the fib() function might be coded
using inlets. The inlet summer() is defined to take a returned
value result and add it to the variable x in the frame of the
procedure that does the spawning. All the variables of fib()
are available within summer(), since it is an internal function
of fib(). 2
No lock is required around the accesses to x by summer,
because Cilk provides atomicity implicitly. The concern is
that the two updates might occur in parallel, and if atomicity
is not imposed, an update might be lost. Cilk provides
implicit atomicity among the "threads" of a procedure in-
stance, where a thread is a maximal sequence of instructions
ending with a spawn, sync, or return (either explicit
or implicit) statement. An inlet is precluded from containing
spawn and sync statements, and thus it operates atomically
as a single thread. Implicit atomicity simplifies reasoning
2 The C elision of a Cilk program with inlets is not ANSI C, because
ANSI C does not support internal C functions. Cilk is based on Gnu
C technology, however, which does provide this support.
about concurrency and nondeterminism without requiring
locking, declaration of critical regions, and the like.
Cilk provides syntactic sugar to produce certain commonly
used inlets implicitly. For example, the statement x +=
spawn fib(n-1) conceptually generates an inlet similar to
the one in Figure 2.
Sometimes, a procedure spawns off parallel work which it
later discovers is unnecessary. This "speculative" work can
be aborted in Cilk using the abort primitive inside an in-
let. A common use of abort occurs during a parallel search,
where many possibilities are searched in parallel. As soon as
a solution is found by one of the searches, one wishes to abort
any currently executing searches as soon as possible so as not
to waste processor resources. The abort statement, when
executed inside an inlet, causes all of the already-spawned
children of the procedure to terminate.
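A hedged sketch of this pattern, in the style of Figure 2 (the procedure name and the splitting strategy are invented): the first child to report success records the result and aborts its still-running siblings.
cilk int search (int *a, int n, int target)
{
    int found = 0;

    inlet void check (int result)
    {
        if (result) {
            found = 1;
            abort;      /* terminate the already-spawned children */
        }
        return;
    }

    if (n == 1) return (a[0] == target);
    else {
        check (spawn search (a, n/2, target));
        check (spawn search (a + n/2, n - n/2, target));
        sync;
        return found;
    }
}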
We considered using "futures" [19] with implicit synchro-
nization, as well as synchronizing on specific variables, instead
of using the simple spawn and sync statements. We
realized from the work-first principle, however, that different
synchronization mechanisms could have an impact only
on the critical-path of a computation, and so this issue was
of secondary concern. Consequently, we opted for implementation
simplicity. Also, in systems that support relaxed
memory-consistency models, the explicit sync statement
can be used to ensure that all side-effects from previously
spawned subprocedures have occurred.
In addition to the control synchronization provided by
sync, Cilk programmers can use explicit locking to synchronize
accesses to data, providing mutual exclusion and
atomicity. Data synchronization is an overhead borne on
the work, however, and although we have striven to minimize
these overheads, fine-grain locking on contemporary
processors is expensive. We are currently investigating how
to incorporate atomicity into the Cilk language so that protocol
issues involved in locking can be avoided at the user
level. To aid in the debugging of Cilk programs that use
locks, we have been developing a tool called the "Nondeterminator"
[7, 13], which detects common synchronization
bugs called data races.
3 The work-first principle
This section justifies the work-first principle stated in Section
1 by showing that it follows from three assumptions.
First, we assume that Cilk's scheduler operates in practice
according to the theoretical analysis presented in [3, 5]. Sec-
ond, we assume that in the common case, ample "parallel
slackness" [28] exists, that is, the average parallelism of a
Cilk program exceeds the number of processors on which we
run it by a sufficient margin. Third, we assume (as is indeed
the case) that every Cilk program has a C elision against
which its one-processor performance can be measured.
The theoretical analysis presented in [3, 5] cites two fundamental
lower bounds as to how fast a Cilk program can run.
Let us denote by TP the execution time of a given computation
on P processors. Then, the work of the computation is T1
and its critical-path length is T∞. For a computation with T1
work, the lower bound TP ≥ T1/P must hold, because at
most P units of work can be executed in a single step. In
addition, the lower bound TP ≥ T∞ must hold, since a finite
number of processors cannot execute faster than an infinite
number.
Cilk's randomized work-stealing scheduler [3, 5] executes
a Cilk computation on P processors in expected time
    TP = T1/P + O(T∞),                              (1)
assuming an ideal parallel computer. This equation resembles
"Brent's theorem" [6, 15] and is optimal to within a
constant factor, since T1/P and T∞ are both lower bounds.
We call the first term on the right-hand side of Equation (1)
the work term and the second term the critical-path term.
Importantly, all communication costs due to Cilk's scheduler
are borne by the critical-path term, as are most of the other
scheduling costs. To make these overheads explicit, we define
the critical-path overhead to be the smallest constant c∞
such that
    TP ≤ T1/P + c∞T∞.                               (2)
The second assumption needed to justify the work-first
principle focuses on the "common-case" regime in which a
parallel program operates. Define the average parallelism
as P̄ = T1/T∞, which corresponds to the maximum possible
speedup that the application can obtain. Define also
the parallel slackness [28] to be the ratio P̄/P. The assumption
of parallel slackness is that P̄/P ≫ c∞, which
means that the number P of processors is much smaller than
the average parallelism P̄. Under this assumption, it follows
that T1/P ≫ c∞T∞, and hence from Inequality (2) that
TP ≈ T1/P, and we obtain linear speedup. The critical-path
overhead c∞ has little effect on performance when sufficient
slackness exists, although it does determine how much
slackness must exist to ensure linear speedup.
Whether substantial slackness exists in common applications
is a matter of opinion and empiricism, but we suggest
that slackness is the common case. The expressiveness of
Cilk makes it easy to code applications with large amounts
of parallelism. For modest-sized problems, many applications
exhibit an average parallelism of over 200, yielding substantial
slackness on contemporary SMP's. Even on Sandia
National Laboratory's Intel Paragon, which contains 1824
nodes, the *Socrates chess program (coded in Cilk-1) ran
in its linear-speedup regime during the 1995 ICCA World
Computer Chess Championship (where it placed second in
a field of 24). Section 6 describes a dozen other diverse
applications which were run on an 8-processor SMP with
3 This abstract model of execution time ignores real-life details,
such as memory-hierarchy effects, but is nonetheless quite accurate [4].
considerable parallel slackness. The parallelism of these applications
increases with problem size, thereby ensuring they
will run well on large machines.
The third assumption behind the work-first principle is
that every Cilk program has a C elision against which its
one-processor performance can be measured. Let us denote
by TS the running time of the C elision. Then, we define the
work overhead by c1 = T1/TS. Incorporating critical-path
and work overheads into Inequality (2) yields
    TP ≤ c1 TS/P + c∞T∞                             (4)
       ≈ c1 TS/P,
since we assume parallel slackness.
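To make the roles of these constants concrete, the small C program below
(ours, with made-up numbers) evaluates the right-hand side of Inequality (4)
for a range of processor counts.

   #include <stdio.h>

   /* Upper bound of Inequality (4): TP <= c1*TS/P + cinf*Tinf. */
   static double tp_bound(double ts, double tinf, double c1, double cinf, int p)
   {
       return c1 * ts / p + cinf * tinf;
   }

   int main(void)
   {
       double ts = 10.0, tinf = 0.01;  /* assumed serial time and critical path (s) */
       double c1 = 1.05, cinf = 4.0;   /* assumed work and critical-path overheads  */
       int p;
       for (p = 1; p <= 128; p *= 2)
           printf("P = %3d   bound on TP = %.4f s\n",
                  p, tp_bound(ts, tinf, c1, cinf, p));
       return 0;
   }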
We can now restate the work-first principle precisely. Minimize
c1, even at the expense of a larger c∞, because c1 has a
more direct impact on performance. Adopting the work-first
principle may adversely affect the ability of an application
to scale up, however, if the critical-path overhead c∞ is too
large. But, as we shall see in Section 6, critical-path overhead
is reasonably small in Cilk-5, and many applications
can be coded with large amounts of parallelism.
The work-first principle pervades the Cilk-5 implementa-
tion. The work-stealing scheduler guarantees that with high
probability, only O(PT∞) steal (migration) attempts occur
(that is, O(T∞) on average per processor), all costs for which
are borne on the critical path. Consequently, the scheduler
for Cilk-5 postpones as much of the scheduling cost as possible
to when work is being stolen, thereby removing it as
a contributor to work overhead. This strategy of amortizing
costs against steal attempts permeates virtually every
decision made in the design of the scheduler.
4 Cilk's compilation strategy
This section describes how our cilk2c compiler generates C
postsource from a Cilk program. As dictated by the work-
first principle, our compiler and scheduler are designed to
reduce the work overhead as much as possible. Our strategy
is to generate two clones of each procedure-a fast clone and
a slow clone. The fast clone operates much as does the C
elision and has little support for parallelism. The slow clone
has full support for parallelism, along with its concomitant
overhead. We first describe the Cilk scheduling algorithm.
Then, we describe how the compiler translates the Cilk language
constructs into code for the fast and slow clones of
each procedure. Lastly, we describe how the runtime system
links together the actions of the fast and slow clones to
produce a complete Cilk implementation.
As in lazy task creation [24], in Cilk-5 each proces-
sor, called a worker , maintains a ready deque (doubly-
ended queue) of ready procedures (technically, procedure
instances). Each deque has two ends, a head and a tail ,
from which procedures can be added or removed. A worker
operates locally on the tail of its own deque, treating it much
 1 int fib (int n)
 2 {
 3    fib_frame *f;               /* frame pointer        */
 4    f = alloc(sizeof(*f));      /* allocate frame       */
 5    f->sig = fib_sig;           /* initialize frame     */
 6    if (n < 2) {
 7       free(f, sizeof(*f));     /* free frame           */
 8       return n;
 9    }
10    else {
11       int x, y;
12       f->entry = 1;            /* save PC              */
13       f->n = n;                /* save live vars       */
14       *T = f;                  /* store frame pointer  */
15       push();                  /* push frame           */
16       x = fib (n-1);           /* do C call            */
17       if (pop(x) == FAILURE)   /* pop frame            */
18          return 0;             /* frame stolen         */
19       ...                      /* second spawn omitted */
20       ;                        /* sync is free!        */
21       free(f, sizeof(*f));     /* free frame           */
22       return (x+y);
23    }
24 }
Figure
3: The fast clone generated by cilk2c for the fib procedure
from Figure 1. The code for the second spawn is omitted.
The functions alloc and free are inlined calls to the runtime
system's fast memory allocator. The signature fib_sig contains
a description of the fib procedure, including a pointer to the slow
clone. The push and pop calls are operations on the scheduling
deque and are described in detail in Section 5.
as C treats its call stack, pushing and popping spawned activation
frames. When a worker runs out of work, it becomes
a thief and attempts to steal a procedure from another worker,
called its victim . The thief steals the procedure from the
head of the victim's deque, the opposite end from which the
victim is working.
When a procedure is spawned, the fast clone runs. Whenever
a thief steals a procedure, however, the procedure is
converted to a slow clone. The Cilk scheduler guarantees
that the number of steals is small when sufficient slackness
exists, and so we expect the fast clones to be executed most
of the time. Thus, the work-first principle reduces to minimizing
costs in the fast clone, which contribute more heavily
to work overhead. Minimizing costs in the slow clone, although
a desirable goal, is less important, since these costs
contribute less heavily to work overhead and more to critical-path
overhead.
We minimize the costs of the fast clone by exploiting the
structure of the Cilk scheduler. Because we convert a procedure
to its slow clone when it is stolen, we maintain the
invariant that a fast clone has never been stolen. Further-
more, none of the descendants of a fast clone have been
stolen either, since the strategy of stealing from the heads
of ready deques guarantees that parents are stolen before
their children. As we shall see, this simple fact allows many
optimizations to be performed in the fast clone.
We now describe how our cilk2c compiler generates post-
source C code for the fib procedure from Figure 1. An example
of the postsource for the fast clone of fib is given
in
Figure
3. The generated C code has the same general
structure as the C elision, with a few additional statements.
In lines 4-5, an activation frame is allocated for fib and
initialized. The Cilk runtime system uses activation frames
to represent procedure instances. Using techniques similar
to [16, 17], our inlined allocator typically takes only a few
cycles. The frame is initialized in line 5 by storing a pointer
to a static structure, called a signature, describing fib.
The first spawn in fib is translated into lines 12-18. In
lines 12-13, the state of the fib procedure is saved into
the activation frame. The saved state includes the program
counter, encoded as an entry number, and all live, dirty vari-
ables. Then, the frame is pushed on the runtime deque in
lines 14-15. 4 Next, we call the fib routine as we would
in C. Because the spawn statement itself compiles directly
to its C elision, the postsource can exploit the optimization
capabilities of the C compiler, including its ability to pass
arguments and receive return values in registers rather than
in memory.
After fib returns, lines 17-18 check to see whether the
parent procedure has been stolen. If it has, we return immediately
with a dummy value. Since all of the ancestors
have been stolen as well, the C stack quickly unwinds and
control is returned to the runtime system. 5 The protocol
to check whether the parent procedure has been stolen is
quite subtle-we postpone discussion of its implementation
to Section 5. If the parent procedure has not been stolen,
it continues to execute at line 19, performing the second
spawn, which is not shown.
In the fast clone, all sync statements compile to no-ops.
Because a fast clone never has any children when it is exe-
cuting, we know at compile time that all previously spawned
procedures have completed. Thus, no operations are required
for a sync statement, as it always succeeds. For exam-
ple, the translation of the sync statement in line 20 of Figure 3
is just the empty statement. Finally, in lines 21-22, fib deallocates
the activation frame and returns the computed result
to its parent procedure.
The slow clone is similar to the fast clone except that
it provides support for parallel execution. When a procedure
is stolen, control has been suspended between two of
the procedure's threads, that is, at a spawn or sync point.
When the slow clone is resumed, it uses a goto statement
to restore the program counter, and then it restores local
variable state from the activation frame. A spawn statement
is translated in the slow clone just as in the fast clone. For a
sync statement, cilk2c inserts a call to the runtime system,
which checks to see whether the procedure has any spawned
children that have not returned. Although the parallel book-
4 If the shared memory is not sequentially consistent, a memory
fence must be inserted between lines 14 and 15 to ensure that the
surrounding writes are executed in the proper order.
5 The setjmp/longjmp facility of C could have been used as well, but
our unwinding strategy is simpler.
keeping in a slow clone is substantial, it contributes little to
work overhead, since slow clones are rarely executed.
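The fragment below is only a schematic illustration of the goto-based resume
technique just described; it is not the code that cilk2c emits, and the frame
layout and entry numbers are invented for the example.

   typedef struct { int entry; int n; int x; } frame_sketch;

   void slow_clone_sketch (frame_sketch *f)
   {
       int n = f->n, x = f->x;    /* restore local variable state from the frame */
       (void)n; (void)x;          /* the resumed code below would use these      */
       switch (f->entry) {        /* restore the saved program counter           */
           case 1: goto resume_after_first_spawn;
           case 2: goto resume_after_sync;
       }
   resume_after_first_spawn:
       /* ... re-issue the second spawn, then fall through to the sync ... */
   resume_after_sync:
       /* ... call the runtime to check for outstanding children ... */
       return;
   }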
The separation between fast clones and slow clones also
allows us to compile inlets and abort statements efficiently
in the fast clone. An inlet call compiles as efficiently as an
ordinary spawn. For example, the code for the inlet call from
Figure
2 compiles similarly to the following Cilk code:
Implicit inlet calls, such as x += spawn fib(n-1), compile
directly to their C elisions. An abort statement compiles to
a no-op just as a sync statement does, because while it is
executing, a fast clone has no children to abort.
The runtime system provides the glue between the fast and
slow clones that makes the whole system work. It includes
protocols for stealing procedures, returning values between
processors, executing inlets, aborting computation subtrees,
and the like. All of the costs of these protocols can be amortized
against the critical path, so their overhead does not
significantly affect the running time when sufficient parallel
slackness exists. The portion of the stealing protocol executed
by the worker contributes to work overhead, however,
thereby warranting a careful implementation. We discuss
this protocol in detail in Section 5.
The work overhead of a spawn in Cilk-5 is only a few reads
and writes in the fast clone-3 reads and 5 writes for the fib
example. We will experimentally quantify the work overhead
in Section 6. Some work overheads still remain in our im-
plementation, however, including the allocation and freeing
of activation frames, saving state before a spawn, pushing
and popping of the frame on the deque, and checking if a
procedure has been stolen. A portion of this work overhead
is due to the fact that Cilk-5 is duplicating the work the C
compiler performs, but as Section 6 shows, this overhead is
small. Although a production Cilk compiler might be able to
eliminate this unnecessary work, it would likely compromise
portability.
In Cilk-4, the precursor to Cilk-5, we took the work-first
principle to the extreme. Cilk-4 performed stack-based allocation
of activation frames, since the work overhead of
stack allocation is smaller than the overhead of heap alloca-
tion. Because of the "cactus stack" [25] semantics of the Cilk
stack, 6 however, Cilk-4 had to manage the virtual-memory
map on each processor explicitly, as was done in [27]. The
work overhead in Cilk-4 for frame allocation was little more
than that of incrementing the stack pointer, but whenever
the stack pointer overflowed a page, an expensive user-level
page fault ensued, during which Cilk-4 would modify the
memory map. Unfortunately, the operating-system mechanisms
supporting these operations were too slow and un-
predictable, and the possibility of a page fault in critical sec-
6 Suppose a procedure A spawns two children B and C. The two
children can reference objects in A's activation frame, but B and C
do not see each other's frame.
tions led to complicated protocols. Even though these overheads
could be charged to the critical-path term, in practice,
they became so large that the critical-path term contributed
significantly to the running time, thereby violating the assumption
of parallel slackness. A one-processor execution of
a program was indeed fast, but insufficient slackness sometimes
resulted in poor parallel performance.
In Cilk-5, we simplified the allocation of activation frames
by simply using a heap. In the common case, a frame is
allocated by removing it from a free list. Deallocation is
performed by inserting the frame into the free list. No
management of virtual memory is required, except for
the initial setup of shared memory. Heap allocation contributes
only slightly more than stack allocation to the work
overhead, but it saves substantially on the critical path term.
On the downside, heap allocation can potentially waste more
memory than stack allocation due to fragmentation. For a
careful analysis of the relative merits of stack and heap based
allocation that supports heap allocation, see the paper by
Appel and Shao [1]. For an equally careful analysis that
supports stack allocation, see [22].
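A minimal free-list allocator in the spirit of this common case might look as
follows; this is an illustrative sketch rather than Cilk-5's actual allocator,
and the fixed slot size is an assumption.

   #include <stdlib.h>

   typedef union frame_slot {
       union frame_slot *next;   /* link used while the slot sits on the free list */
       char space[128];          /* assumed upper bound on an activation frame     */
   } frame_slot;

   static frame_slot *free_list = NULL;

   static inline void *alloc_frame(void)
   {
       frame_slot *f = free_list;
       if (f != NULL) {                    /* common case: pop the free list  */
           free_list = f->next;
           return f;
       }
       return malloc(sizeof(frame_slot));  /* slow path: fresh heap memory    */
   }

   static inline void free_frame(void *p)
   {
       frame_slot *f = p;
       f->next = free_list;                /* push the slot back on the list  */
       free_list = f;
   }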
Thus, although the work-first principle gives a general understanding
of where overheads should be borne, our experience
with Cilk-4 showed that large enough critical-path
overheads can tip the scales to the point where the assumptions
underlying the principle no longer hold. We believe
that Cilk-5 work overhead is nearly as low as possible, given
our goal of generating portable C output from our compiler. 7
Other researchers have been able to reduce overheads even
more, however, at the expense of portability. For example,
lazy threads [14] obtains efficiency at the expense of implementing
its own calling conventions, stack layouts, etc.
Although we could in principle incorporate such machine-dependent
techniques into our compiler, we feel that Cilk-5
strikes a good balance between performance and portability.
We also feel that the current overheads are sufficiently low
that other problems, notably minimizing overheads for data
synchronization, deserve more attention.
5 Implemention of work-stealing
In this section, we describe Cilk-5's work-stealing mecha-
nism, which is based on a Dijkstra-like [11], shared-memory,
mutual-exclusion protocol called the "THE" protocol. In
accordance with the work-first principle, this protocol has
been designed to minimize work overhead. For example, on
a 167-megahertz UltraSPARC I, the fib program with the
THE protocol runs about 25% faster than with hardware
locking primitives. We first present a simplified version of
the protocol. Then, we discuss the actual implementation,
which allows exceptions to be signaled with no additional
overhead.
7 Although the runtime system requires some effort to port between
architectures, the compiler requires no changes whatsoever for different
platforms.
Several straightforward mechanisms might be considered
to implement a work-stealing protocol. For example, a thief
might interrupt a worker and demand attention from this
victim. This strategy presents problems for two reasons.
First, the mechanisms for signaling interrupts are slow, and
although an interrupt would be borne on the critical path,
its large cost could threaten the assumption of parallel slack-
ness. Second, the worker would necessarily incur some overhead
on the work term to ensure that it could be safely
interrupted in a critical section. As an alternative to sending
interrupts, thieves could post steal requests, and workers
could periodically poll for them. Once again, however, a cost
accrues to the work overhead, this time for polling. Techniques
are known that can limit the overhead of polling [12],
but they require the support of a sophisticated compiler.
The work-first principle suggests that it is reasonable to
put substantial effort into minimizing work overhead in the
work-stealing protocol. Since Cilk-5 is designed for shared-memory
machines, we chose to implement work-stealing
through shared-memory, rather than with message-passing,
as might otherwise be appropriate for a distributed-memory
implementation. In our implementation, both victim and
thief operate directly through shared memory on the victim's
ready deque. The crucial issue is how to resolve the race condition
that arises when a thief tries to steal the same frame
that its victim is attempting to pop. One simple solution
is to add a lock to the deque using relatively heavyweight
hardware primitives like Compare-And-Swap or Test-And-
Set. Whenever a thief or worker wishes to remove a frame
from the deque, it first grabs the lock. This solution has
the same fundamental problem as the interrupt and polling
mechanisms just described, however. Whenever a worker
pops a frame, it pays the heavy price to grab a lock, which
contributes to work overhead.
Consequently, we adopted a solution that employs Di-
jkstra's protocol for mutual exclusion [11], which assumes
only that reads and writes are atomic. Because our protocol
uses three atomic shared variables T, H, and E, we call
it the THE protocol. The key idea is that actions by the
worker on the tail of the queue contribute to work overhead,
while actions by thieves on the head of the queue contribute
only to critical-path overhead. Therefore, in accordance with
the work-first principle, we attempt to move costs from the
worker to the thief. To arbitrate among different thieves
attempting to steal from the same victim, we use a hardware
lock, since this overhead can be amortized against the
critical path. To resolve conflicts between a worker and the
sole thief holding the lock, however, we use a lightweight
Dijkstra-like protocol which contributes minimally to work
overhead. A worker resorts to a heavyweight hardware lock
only when it encounters an actual conflict with a thief, in
which case we can charge the overhead that the victim incurs
to the critical path.
In the rest of this section, we describe the THE protocol
Worker/Victim:
 1 push() {
 2    T++;
 3 }
 4 pop() {
 5    T--;
 6    if (H > T) {
 7       T++;
 8       lock(L);
 9       T--;
10       if (H > T) {
11          T++;
12          unlock(L);
13          return FAILURE;
14       }
15       unlock(L);
16    }
17    return SUCCESS;
18 }

Thief:
 1 steal() {
 2    lock(L);
 3    H++;
 4    if (H > T) {
 5       H--;
 6       unlock(L);
 7       return FAILURE;
 8    }
 9    unlock(L);
10    return SUCCESS;
11 }
Figure
4: Pseudocode of a simplified version of the THE protocol.
The first part of the figure shows the actions performed by the
victim, and the second part shows the actions of the thief. None
of the actions besides reads and writes are assumed to be atomic.
For example, T--; can be implemented as a separate read of T followed by a write of T-1.
in detail. We first present a simplified protocol that uses
only two shared variables T and H designating the tail and
the head of the deque, respectively. Later, we extend the
protocol with a third variable E that allows exceptions to be
signaled to a worker. The exception mechanism is used to
implement Cilk's abort statement. Interestingly, this extension
does not introduce any additional work overhead.
The pseudocode of the simplified THE protocol is shown
in
Figure
4. Assume that shared memory is sequentially
consistent [20]. 8 The code assumes that the ready deque is
implemented as an array of frames. The head and tail of
the deque are determined by two indices T and H, which are
stored in shared memory and are visible to all processors.
The index T points to the first unused element in the array,
and H points to the first frame on the deque. Indices grow
from the head towards the tail so that under normal con-
ditions, we have T ≥ H. Moreover, each deque has a lock L
implemented with atomic hardware primitives or with OS
calls.
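In C, the shared state just described might be declared along the following
lines; the array bound and the use of a pthread mutex for L are placeholders
rather than Cilk-5's actual definitions, and the field E is used only by the
full protocol described later in this section.

   #include <pthread.h>

   #define DEQUE_SIZE 4096            /* assumed maximum deque depth */

   typedef struct ReadyDeque {
       void *frames[DEQUE_SIZE];      /* array of activation-frame pointers      */
       volatile int T;                /* tail: index of the first unused element */
       volatile int H;                /* head: index of the first frame on deque */
       volatile int E;                /* exception bound (full protocol only)    */
       pthread_mutex_t L;             /* lock arbitrating among thieves          */
   } ReadyDeque;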
The worker uses the deque as a stack. (See Section 4.)
Before a spawn, it pushes a frame onto the tail of the deque.
After a spawn, it pops the frame, unless the frame has been
stolen. A thief attempts to steal the frame at the head of
the deque. Only one thief at the time may steal from the
deque, since a thief grabs L as its first action. As can be
seen from the code, the worker alters T but not H, whereas
the thief only increments H and does not alter T.
The only possible interaction between a thief and its victim
8 If the shared memory is not sequentially consistent, a memory
fence must be inserted between lines 5 and 6 of the worker/victim
code and between lines 3 and 4 of the thief code to ensure that these
instructions are executed in the proper order.
Figure
5: The three cases of the ready deque in the simplified THE
protocol. A shaded entry indicates the presence of a frame at a
certain position in the deque. The head and the tail are marked
by H and T, respectively.
occurs when the thief is incrementing H while the victim
is decrementing T. Consequently, it is always safe for
a worker to append a new frame at the end of the deque
without worrying about the actions of the thief. For
a pop operation, there are three cases, which are shown in
Figure
5. In case (a), the thief and the victim can both get
a frame from the deque. In case (b), the deque contains only
one frame. If the victim decrements T without interference
from thieves, it gets the frame. Similarly, a thief can steal
the frame as long as its victim is not trying to obtain it. If
both the thief and the victim try to grab the frame, however,
the protocol guarantees that at least one of them discovers
that H > T. If the thief discovers that H > T, it restores
H to its original value and retreats. If the victim discovers
that H > T, it restores T to its original value and restarts the
protocol after having acquired L. With L acquired, no thief
can steal from this deque so the victim can pop the frame
without interference (if the frame is still there). Finally, in
case (c) the deque is empty. If a thief tries to steal, it will
always fail. If the victim tries to pop, the attempt fails and
control returns to the Cilk runtime system. The protocol
cannot deadlock, because each process holds only one lock
at a time.
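Using the ReadyDeque sketch above, the simplified protocol can be written in
plain C roughly as follows. This is an illustration that assumes sequentially
consistent memory; it is not Cilk-5's source, and on real hardware the memory
fences discussed in the footnotes would be required.

   int push(ReadyDeque *d, void *f)     /* worker */
   {
       d->frames[d->T] = f;
       d->T++;
       return 1;                        /* SUCCESS */
   }

   int pop(ReadyDeque *d, void **f)     /* worker/victim */
   {
       d->T--;
       if (d->H > d->T) {               /* possible conflict with a thief       */
           d->T++;
           pthread_mutex_lock(&d->L);
           d->T--;
           if (d->H > d->T) {           /* the frame was stolen: deque is empty */
               d->T++;
               pthread_mutex_unlock(&d->L);
               return 0;                /* FAILURE */
           }
           pthread_mutex_unlock(&d->L);
       }
       *f = d->frames[d->T];
       return 1;                        /* SUCCESS */
   }

   int steal(ReadyDeque *d, void **f)   /* thief */
   {
       pthread_mutex_lock(&d->L);
       d->H++;
       if (d->H > d->T) {               /* nothing to steal */
           d->H--;
           pthread_mutex_unlock(&d->L);
           return 0;                    /* FAILURE */
       }
       *f = d->frames[d->H - 1];
       pthread_mutex_unlock(&d->L);
       return 1;                        /* SUCCESS */
   }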
We now argue that the THE protocol contributes little to
the work overhead. Pushing a frame involves no overhead
beyond updating T. In the common case where a worker
can successfully pop a frame, the pop protocol performs only
6 operations-2 memory loads, 1 memory store, 1 decre-
ment, 1 comparison, and 1 (predictable) conditional branch.
Moreover, in the common case where no thief operates on
the deque, both H and T can be cached exclusively by the
worker. The expensive operation of a worker grabbing the
lock L occurs only when a thief is simultaneously trying to
steal the frame being popped. Since the number of steal
attempts depends on T∞, not on T1, the relatively heavy
cost of a victim grabbing L can be considered as part of the
critical-path overhead c∞ and does not influence the work
overhead c1.
We ran some experiments to determine the relative performance
of the THE protocol versus the straightforward
protocol in which pop just locks the deque before accessing
it. On a 167-megahertz UltraSPARC I, the THE protocol
is about 25% faster than the simple locking protocol. This
machine's memory model requires that a memory fence instruction
(membar) be inserted between lines 5 and 6 of the
pop pseudocode. We tried to quantify the performance impact
of the membar instruction, but in all our experiments
the execution times of the code with and without membar
are about the same. On a 200-megahertz Pentium Pro running
Linux and gcc 2.7.1, the THE protocol is only about
5% faster than the locking protocol. On this processor, the
THE protocol spends about half of its time in the memory
fence.
Because it replaces locks with memory synchronization,
the THE protocol is more "nonblocking" than a straightforward
locking protocol. Consequently, the THE protocol is
less prone to problems that arise when spin locks are used
extensively. For example, even if a worker is suspended
by the operating system during the execution of pop, the
infrequency of locking in the THE protocol means that a
thief can usually complete a steal operation on the worker's
deque. Recent work by Arora et al. [2] has shown that a
completely nonblocking work-stealing scheduler can be im-
plemented. Using these ideas, Lisiecki and Medina [21] have
modified the Cilk-5 scheduler to make it completely non-
blocking. Their experience is that the THE protocol greatly
simplifies a nonblocking implementation.
The simplified THE protocol can be extended to support
the signaling of exceptions to a worker. In Figure 4, the
index H plays two roles: it marks the head of the deque, and
it marks the point that the worker cannot cross when it pops.
These places in the deque need not be the same. In the full
THE protocol, we separate the two functions of H into two
variables: H, which now only marks the head of the deque,
and E, which marks the point that the victim cannot cross.
When E > T, an exceptional condition has occurred,
which includes the frame being stolen, but it can also be used
for other exceptions. For example, setting E to infinity (a value
larger than any possible T) causes the
worker to discover the exception at its next pop. In the
new protocol, E replaces H in line 6 of the worker/victim.
Moreover, lines 7-15 of the worker/victim are replaced by
a call to an exception handler to determine the type of
exception (stolen frame or otherwise) and the proper action
to perform. The thief code is also modified. Before trying to
Program      Size         T1     T∞      P̄       c1     T8     T1/T8   TS/T8
fib
blockedmul   1024         29.9   0.0044  6730    1.05   4.3    7.0     6.6
notempmul    1024         29.7   0.015   1970    1.05   3.9    7.6     7.2
strassen     1024         20.2   0.58    35      1.01   3.54   5.7     5.6
*cilksort    4,100,000    5.4    0.0049  1108    1.21   0.90   6.0     5.0
†queens      22           150.   0.0015  96898   0.99   18.8   8.0     8.0
†knapsack
heat         4096 × 512   62.3   0.16    384     1.08   9.4    6.6     6.1
                          4.3    0.0020  2145    0.93   0.77   5.6     6.0
Figure
6: The performance of example Cilk programs. Times are in seconds and are accurate to within about 10%. The serial programs
are C elisions of the Cilk programs, except for those programs that are starred (*), where the parallel program implements a different
algorithm than the serial program. Programs labeled by a dagger (†) are nondeterministic, and thus, the running time on one processor
is not the same as the work performed by the computation. For these programs, the value for T1 indicates the actual work of the
computation on 8 processors, and not the running time on one processor.
steal, the thief increments E. If there is nothing to steal, the
thief restores E to its original value. Otherwise, the thief
steals frame H and increments H. From the point of view of
a worker, the common case is the same as in the simplified
protocol: it compares two pointers (E and T rather than H
and T).
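In terms of the C sketch above, the worker's common-case check changes only in
which variable it compares against T. The exception_handler() below is a
hypothetical stand-in for the runtime code described in the text.

   /* Hypothetical handler: would sort out whether the exception is a steal,
      an abort, or something else, and return SUCCESS or FAILURE accordingly. */
   static int exception_handler(ReadyDeque *d, void **f)
   {
       (void)d; (void)f;
       return 0;                       /* e.g., report FAILURE to the scheduler */
   }

   int pop_with_exceptions(ReadyDeque *d, void **f)
   {
       d->T--;
       if (d->E > d->T) {              /* some exception (perhaps a steal) is pending */
           d->T++;
           return exception_handler(d, f);
       }
       *f = d->frames[d->T];
       return 1;                       /* SUCCESS */
   }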
The exception mechanism is used to implement abort.
When a Cilk procedure executes an abort instruction, the
runtime system serially walks the tree of outstanding descendants
of that procedure. It marks the descendants as aborted
and signals an abort exception on any processor working on
a descendant. At its next pop, an aborted procedure will
discover the exception, notice that it has been aborted, and
return immediately. It is conceivable that a procedure could
run for a long time without executing a pop and discovering
that it has been aborted. We made the design decision to
accept the possibility of this unlikely scenario, figuring that
more cycles were likely to be lost in work overhead if we
abandoned the THE protocol for a mechanism that solves
this minor problem.
6 Benchmarks
In this section, we evaluate the performance of Cilk-5. We
show that on 12 applications, the work overhead c1 is close
to 1, which indicates that the Cilk-5 implementation exploits
the work-first principle effectively. We then present a break-down
of Cilk's work overhead c1 on four machines. Finally,
we present experiments showing that the critical-path overhead
c∞ is reasonably small as well.
Figure
6 shows a table of performance measurements taken
for 12 Cilk programs on a Sun Enterprise 5000 SMP with 8
167-megahertz UltraSPARC processors, each with 512 kilobytes
of L2 cache, 16 kilobytes each of L1 data and instruction
caches, running Solaris 2.5. We compiled our programs
with gcc 2.7.2 at optimization level -O3. For a full description
of these programs, see the Cilk 5.1 manual [8]. The
table shows the work of each Cilk program T1 , the critical
path T∞, and the two derived quantities P̄ and c1. The table
also lists the running time T8 on 8 processors, and the
speedup T1/T8 relative to the one-processor execution time,
and the speedup TS/T8 relative to the serial execution time.
For the 12 programs, the average parallelism P̄ is in most
cases quite large relative to the number of processors on a
typical SMP. These measurements validate our assumption
of parallel slackness, which implies that the work term dominates
in Inequality (4). For instance, on 1024 × 1024 matrices,
notempmul runs with an average parallelism of 1970,
yielding adequate parallel slackness for up to several hundred
processors. For even larger machines, one normally
would not run such a small problem. For notempmul, as well
as the other 11 applications, the average parallelism grows
with problem size, and thus sufficient parallel slackness is
likely to exist even for much larger machines, as long as the
problem sizes are scaled appropriately.
The work overhead c1 is only a few percent larger than
1 for most programs, which shows that our design of Cilk-5
faithfully implements the work-first principle. The two cases
where the work overhead is larger (cilksort and cholesky)
are due to the fact that we had to change the serial algorithm
to obtain a parallel algorithm, and thus the comparison
is not against the C elision. For example, the serial C
algorithm for sorting is an in-place quicksort, but the parallel
algorithm cilksort requires an additional temporary
array which adds overhead beyond the overhead of Cilk it-
self. Similarly, our parallel Cholesky factorization uses a
quadtree representation of the sparse matrix, which induces
more work than the linked-list representation used in the
serial C algorithm. Finally, the work overhead for fib is
large, because fib does essentially no work besides spawning
procedures. Thus, the work overhead c1 for fib gives a
good estimate of the cost of a Cilk spawn versus a traditional
C function call. With such a small overhead for spawning,
one can understand why for most of the other applications,
which perform significant work for each spawn, the overhead
of Cilk-5's scheduling is barely noticeable compared to the
10% "noise" in our measurements.
[Figure content: stacked bars showing the three per-spawn overheads (state
saving, frame allocation, and the THE protocol) on four machines: a 195 MHz
MIPS R10000, an UltraSPARC I, a 200 MHz Pentium Pro, and an Alpha 21164; the
absolute per-spawn C-elision times shown are 115ns, 113ns, 78ns, and 27ns.]
Figure
7: Breakdown of overheads for fib running on one processor
on various architectures. The overheads are normalized to
the running time of the serial C elision. The three overheads are
for saving the state of a procedure before a spawn, the allocation
of activation frames for procedures, and the THE protocol. Absolute
times are given for the per-spawn running time of the C
elision.
We now present a breakdown of Cilk's serial overhead c1
into its components. Because scheduling overheads are small
for most programs, we perform our analysis with the fib
program from Figure 1. This program is unusually sensitive
to scheduling overheads, because it contains little actual
computation. We give a breakdown of the serial overhead
into three components: the overhead of saving state before
spawning, the overhead of allocating activation frames, and
the overhead of the THE protocol.
Figure
7 shows the breakdown of Cilk's serial overhead
for fib on four machines. Our methodology for obtaining
these numbers is as follows. First, we take the serial C fib
program and time its execution. Then, we individually add
in the code that generates each of the overheads and time
the execution of the resulting program. We attribute the
additional time required by the modified program to the
scheduling code we added. In order to verify our numbers,
we timed the fib code with all of the Cilk overheads added
(the code shown in Figure 3), and compared the resulting
time to the sum of the individual overheads. In all cases,
the two times differed by less than 10%.
Overheads vary across architectures, but the overhead of
Cilk is typically only a few times the C running time on this
spawn-intensive program. Overheads on the Alpha machine
are particularly large, because its native C function calls are
fast compared to the other architectures. The state-saving
costs are small for fib, because all four architectures have
write buffers that can hide the latency of the writes required.
We also attempted to measure the critical-path overhead
c∞. We used the synthetic knary benchmark [4] to
synthesize computations artificially with a wide range of
work and critical-path lengths. Figure 8 shows the outcome
from many such experiments. The figure plots the measured
[Figure content: normalized speedup plotted against normalized machine size,
showing the experimental data together with the model, the work bound, and
the critical-path bound.]
Figure
8: Normalized speedup curve for Cilk-5. The horizontal
axis is the number P of processors and the vertical axis is the
speedup T1/TP, but each data point has been normalized by dividing
by T1/T∞. The graph also shows the speedup predicted
by the model, together with the work and critical-path bounds.
speedup T1/TP for each run against the machine size P for
that run. In order to plot different computations on the same
graph, we normalized the machine size and the speedup by
dividing these values by the average parallelism
as was done in [4]. For each run, the horizontal position of
the plotted datum is the inverse of the slackness, P/P̄, and
thus, the normalized machine size is 1.0 when the number of
processors is equal to the average parallelism. The vertical
position of the plotted datum is (T1/TP)/P̄, which
measures the fraction of maximum obtainable speedup. As
can be seen in the chart, for almost all runs of this bench-
mark, we observed TP ≤ T1/P + 1.0 T∞. (One exceptional
data point slightly exceeds this bound.) Thus, although
the work-first principle caused us to move overheads to the
critical path, the ability of Cilk applications to scale up was
not significantly compromised.
7 Conclusion
We conclude this paper by examining some related work.
Mohr et al. [24] introduced lazy task creation in their implementation
of the Mul-T language. Lazy task creation
is similar in many ways to our lazy scheduling techniques.
Mohr et al. report a work overhead of around 2 when comparing
with serial T, the Scheme dialect on which Mul-T
is based. Our research confirms the intuition behind their
methods and shows that work overheads of close to 1 are
achievable.
The Cid language [26] is like Cilk in that it uses C as
a base language and has a simple preprocessing compiler to
convert parallel Cid constructs to C. Cid is designed to work
in a distributed memory environment, and so it employs
latency-hiding mechanisms which Cilk-5 could avoid. (We
are working on a distributed version of Cilk, however.) Both
Cilk and Cid recognize the attractiveness of basing a parallel
language on C so as to leverage C compiler technology for
high-performance codes. Cilk is a faithful extension of C,
however, supporting the simplifying notion of a C elision
and allowing Cilk to exploit the C compiler technology more
readily.
TAM [10] and Lazy Threads [14] also analyze many of
the same overhead issues in a more general, "nonstrict" language
setting, where the individual performances of a whole
host of mechanisms are required for applications to obtain
good overall performance. In contrast, Cilk's multithreaded
language provides an execution model based on work and
critical-path length that allows us to focus our implementation
efforts by using the work-first principle. Using this
principle as a guide, we have concentrated our optimizing
effort on the common-case protocol code to develop an efficient
and portable implementation of the Cilk language.
Acknowledgments
We gratefully thank all those who have contributed to Cilk
development, including Bobby Blumofe, Ien Cheng, Don
Dailey, Mingdong Feng, Chris Joerg, Bradley Kuszmaul,
Phil Lisiecki, Alberto Medina, Rob Miller, Aske Plaat, Bin
Song, Andy Stark, Volker Strumpen, and Yuli Zhou. Many
thanks to all our users who have provided us with feedback
and suggestions for improvements. Martin Rinard suggested
the term "work-first."
--R
Empirical and analytic study of stack versus heap cost for languages with closures.
Thread scheduling for multiprogrammed multiprocessors.
Executing Multithreaded Programs Ef- ficiently
Scheduling multithreaded computations by work stealing.
The parallel evaluation of general arithmetic expressions.
Detecting data races in Cilk programs that use locks.
Introduction to Algorithms.
Solution of a problem in concurrent programming control.
Polling efficiently on stock hardware.
Efficient detection of determinacy races in Cilk programs.
Lazy threads: Implementing a fast parallel call.
Bounds on multiprocessing timing anoma- lies
Heaps o' stacks: Time and space efficient threads without operating system support.
The Cilk System for Parallel Multi-threaded Computing
Jr.
How to make a multiprocessor computer that correctly executes multiprocess programs.
Personal communication
Garbage collection is fast
The function of FUNCTION in LISP or why the FUNARG problem should be called the environment prob- lem
Parallel Symbolic Computing in Cid.
VLSI support for a cactus stack oriented memory organization.
A bridging model for parallel computation.
--TR
MULTILISP: a language for concurrent symbolic computation
VLSI Support for a cactus stack oriented memory organization
A bridging model for parallel computation
Introduction to algorithms
Fine-grain parallelism with minimal hardware support: a compiler-controlled threaded abstract machine
Polling efficiently on stock hardware
Whole-program optimization for time and space efficient threads
The cilk system for parallel multithreaded computing
Lazy threads
Cilk
Executing multithreaded programs efficiently
Efficient detection of determinacy races in Cilk programs
Thread scheduling for multiprogrammed multiprocessors
Detecting data races in Cilk programs that use locks
The Parallel Evaluation of General Arithmetic Expressions
Solution of a problem in concurrent programming control
Lazy Task Creation
Parallel Symbolic Computing in Cid
Garbage Collection is Fast, but a Stack is Faster
The Function of FUNCTION in LISP, or Why the FUNARG Problem Should be Called the Environment Problem
| programming language;runtime system;multithreading;critical path;parallel computing |
277743 | Optimizing direct threaded code by selective inlining. | Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation ("just-in-time compilation") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower "pure" bytecode interpreters. | Introduction
Bytecoded languages such as Smalltalk [Gol83], Caml
[Ler97] and Java [Arn96, Lin97] offer significant engineering
advantages over more conventional languages: higher
levels of abstraction, dynamic execution environments with
incremental debugging and code modification, compact representation
of executable code, and (in most cases) platform
independence.
The success of Java is due largely to its promise of platform
independence and compactness of code. The compactness
of bytecodes has important advantages for net-work
computing where code must downloaded "on-demand"
for execution on an arbitrary platform and operating system
while keeping bandwidth requirements to a minimum.
The disadvantage is that bytecode interpreters typically offer
lower performance than compiled code, and can consume
significantly more resources.
Most modern virtual machines perform some degree of
dynamic translation to improve program performance
[Deu84]. Such techniques significantly increase the complexity
of the virtual machine, which must be tailored for
each hardware architecture in much the same way as a conventional
compiler's back-end. This increases development
costs (requiring specific knowledge about the target architecture
and the time for writing specific code), and reduces
reliability (by introducing more code to debug and support).
Some of these languages (Caml for example) also have
more traditional compilers that produce high-performance
native code, but this defeats the advantages that come with
platform independence and compactness.
We propose a novel dynamic retranslation technique that
can be applied to a certain class of virtual machines. This
technique delivers high performance, up to 70% that of optimized
C. It is easy to "retrofit" to existing virtual machines,
and requires almost no effort to port to a new architecture.
This paper continues as follows. The next section gives
a brief survey of bytecode interpretation mechanisms, providing
a context for the remainder of the paper. Our novel
dynamic retranslation technique is explained in Section 3.
Section 4 presents the results of applying the technique to
two interpreters: the small RISC-like interpreter that inspired
this work, and a "production" virtual machine for
Objective Caml. The last two sections contrast our technique
with related work and present some concluding remarks
Background
Interpreter performance can depend heavily on the representation
chosen for executable code, and the mechanism
used to dispatch opcodes. This section describes some of
the common techniques.
2.1 Pure bytecode interpreters
The inner loop of a pure bytecode interpreter is very simple:
fetch the next bytecode and dispatch to the implementation
using a switch statement. Figure 1 shows a typical pure
bytecode interpreter loop, and an array of bytecodes that
calculate 3 + 4 (we will use this as a running example).
The interpreter is an infinite loop containing a switch
statement to dispatch successive bytecodes. Each case in
the body of the switch implements one bytecode, and passes
control to the next bytecode by breaking out of the switch
to pass control back to the start of the infinite loop.
Assuming the compiler optimizes the jump chains from
the breaks through the implicit jump at the end of the for
body back to its beginning, the overheads associated with
this approach are as follows:
compiled code:
   unsigned char program[] = { bytecode_push3, bytecode_push4,
                               bytecode_add, ... };

bytecode implementations:
   unsigned char *instructionPointer = program;
   for (;;) {
      unsigned char bytecode = *instructionPointer++;
      switch (bytecode) {
         case bytecode_push3:
            *++stackPointer = 3;
            break;
         case bytecode_push4:
            *++stackPointer = 4;
            break;
         case bytecode_add:
            --stackPointer;
            *stackPointer += stackPointer[1];
            break;
         ...
      }
   }
Figure
1: Pure bytecode interpreter.
ffl increment the instructionPointer;
ffl fetch the next bytecode from memory;
ffl a redundant range check on the argument to switch;
ffl fetch the address of the destination case label from a
table;
ffl jump to that address;
and then at the end of each bytecode:
ffl jump back to the start of the for body to fetch the
next bytecode.
Eleven machine instructions must be executed on the PowerPC
to perform the push3 bytecode. Nine of these instructions
are dedicated to the dispatch mechanism, including
two memory references and two jumps (among the most expensive
instructions on modern architectures).
Pure bytecoded interpreters are easy to write and under-
stand, and are highly portable - but rather slow. In the
case where most bytecodes perform simple operations (as in
the push3 example) the majority of execution time is wasted
in performing the dispatch.
2.2 Threaded code interpreters
Threaded code [Bel73] was popularized by the Forth programming
language [Moo70]. There are various kinds of
threaded code, the most efficient of which is generally direct
threading [Ert93].
Bytecodes are simply integers: dispatch involves fetching
the next opcode (bytecode), looking up the address of
the associated implementation (either in an explicit table,
or implicitly using switch) and then transferring control to
that address. Direct threaded code improves performance
by eliminating this table lookup: executable code is represented
as a sequence of opcode implementation addresses,
and dispatch involves fetching the next opcode (implemen-
tation address) and jumping directly to that address.
An additional optimization eliminates the centralized dis-
patch. Instead of returning to a central dispatch loop, each
compiled code:
   void *program[] = { &&opcode_push3, &&opcode_push4,
                       &&opcode_add, ... };

opcode implementations:
   /* dispatch next instruction */
   #define NEXT() goto **++instructionPointer

   void **instructionPointer = program;
   /* start execution: dispatch first opcode */
   goto **instructionPointer;

   /* opcode implementations. */
   opcode_push3:
      *++stackPointer = 3;
      NEXT();
   opcode_push4:
      *++stackPointer = 4;
      NEXT();
   opcode_add:
      --stackPointer;
      *stackPointer += stackPointer[1];
      NEXT();
Figure
2: Direct threaded code.
direct threaded opcode's implementation ends with the code
required to dispatch the next opcode. The direct threaded
version of our 3 + 4 example is shown in Figure 2. 1
Execution begins by fetching the address of the first op-
code's implementation from the compiled code and then
jumping to that address. Each opcode performs its own
work, and then dispatches to the next opcode implied by
the compiled code. (Hence the name: control flow "threads"
its way through the opcodes in the order implied by the
compiled code, without ever returning to a central dispatch
loop.)
The overheads associated with threaded code are much
lower than those associated with a pure bytecode inter-
preter. For each opcode executed, the only additional overhead
is dispatching to the next opcode:
- increment the instructionPointer;
- fetch the next opcode address from memory;
- jump to that address.
Five machine instructions are required to implement push3
on the PowerPC. Three of these are associated with opcode
dispatch, with only one memory reference and one jump.
We have saved six instructions over the "pure bytecode"
approach. Most importantly we have saved one memory
reference and one jump instruction (both of which are ex-
pensive).
2.3 Dynamic translation to threaded code
The benefits of direct threaded code can easily be obtained
in a bytecoded language by translating the bytecodes into
1 The threaded code examples are written using the first-class labels
provided by GNU C. The expression "void *addr = &&label" assigns
the address (of type "void *") of the statement attached to the
given label to addr. Control can be transferred to this location using
a goto that dereferences the address: "goto *addr". Note that gcc's
first-class labels are not required to implement these techniques: the
same effects can be achieved with a couple of macros containing a few
lines of asm.
translation table:
    void *opcodes[] = { &&opcode_push3, &&opcode_push4,
                        &&opcode_add, ... };
dynamic translator:
    unsigned char *bytecodePointer = firstBytecode;
    void **threadedCodePointer = threadedCode;
    while (moreBytecodesToTranslate)
        *threadedCodePointer++ = opcodes[*bytecodePointer++];
Figure 3: Dynamic translation of bytecodes into threaded code.
direct threaded code before execution. This is illustrated in
Figure 3. The translation loop reads each bytecode, looks
up the address of its implementation in a table, and then
writes this address into the direct threaded code.
The only complication is that most bytecode sets have
extension bytes. These provide additional information that
cannot be encoded within the bytecode itself: branch offsets,
indices into literal tables or environments, and so on. These
extension bytes are normally placed inline in the translated
threaded code by the translator, immediately after the threaded
opcode corresponding to the bytecode.
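To make the copying of extension bytes concrete, here is a small illustrative helper (our sketch, not code from the system described here; the names translateOne, opcodes and extensionBytes are assumed): it emits the opcode's implementation address and then copies that opcode's extension bytes inline, immediately after the address.

    /* Sketch: translate one bytecode, placing any extension bytes inline in
       the threaded code directly after the opcode address (names assumed). */
    static unsigned char *translateOne(unsigned char *bc, void ***tcp,
                                       void *opcodes[], const int extensionBytes[])
    {
        unsigned char bytecode = *bc++;
        *(*tcp)++ = opcodes[bytecode];               /* implementation address     */
        for (int i = 0; i < extensionBytes[bytecode]; ++i)
            *(*tcp)++ = (void *)(long)*bc++;         /* operand copied inline      */
        return bc;                                   /* next bytecode to translate */
    }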
Translation to threaded code permits other kinds of op-
timization. For example, Smalltalk provides four bytecodes
for pushing an implicit integer constant (between -1 and 2)
onto the stack. The translator loop could easily translate
these as a single pushInteger opcode followed by the
constant to be pushed as an inline operand. The same treatment
can be applied to other kinds of literal quantity, relative
branch offsets, and so on. Another possibility is "partial
decoding", where the translator loop examines an "over-
loaded" bytecode at translation time, and translates it into
one of several threaded opcodes.
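As an illustration of such partial decoding (the bytecode and opcode names below are invented for the sketch, and the four push-constant bytecodes are assumed to have consecutive values; the real bytecode set may differ), a translator loop might fold the four push-constant bytecodes into one pushInteger opcode with an inline operand:

    /* Hypothetical fragment of a translator switch (names assumed). */
    case bytecode_pushConstantMinusOne:
    case bytecode_pushConstantZero:
    case bytecode_pushConstantOne:
    case bytecode_pushConstantTwo:
        *threadedCodePointer++ = opcodes[opcode_pushInteger];
        /* assumes the four bytecodes are numbered consecutively */
        *threadedCodePointer++ = (void *)(long)(bytecode - bytecode_pushConstantZero);
        break;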
The translator loop must be aware of the kind of operand
that it is copying. A relative offset, for example, might
require modification or scaling during the translation loop.
It is possible to make an approximate evaluation of this
approach in a realistic system. Squeak [Ing97] is a portable
"pure bytecode" implementation of Smalltalk-80; it performs
numerical computations at approximately 3.7% the
speed of optimized C. BrouHaHa [Mir87] is a portable Smalltalk
virtual machine that is very similar to the Squeak VM,
except that it dynamically translates bytecodes into direct
threaded code for execution [Mir91]. BrouHaHa performs
the same numerical computations at about 15% the speed
of optimized C. Both implementations have been carefully
hand-tuned for performance; the essential difference between
them is the use of dynamic translation to direct threaded
code in BrouHaHa.
2.4 Optimizing common bytecode sequences
Bytecodes can typically only represent 256 different operations; threaded
opcodes can represent many more, since they are encoded
as pointers. Translating bytecodes into threaded code
therefore gives us the opportunity to make arbitrary transformations
on the executable code. One such transformation
is to detect common sequences of bytecodes and translate
them as a single threaded "macro" opcode; this macro op-code
performs the work of the entire sequence of original
bytecodes. For example, the bytecodes "push literal, push
variable, add, store variable" can be translated into a single
"add-literal-to-variable" opcode in the threaded code.
Such optimizations are effective because they avoid the
overhead of the multiple dispatches that are implied by the
original bytecodes (but elided within the macro opcode).
A single macro opcode that is translated from a sequence
of N original bytecodes avoids N - 1 dispatches at
execution time.
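For instance, the "add-literal-to-variable" macro opcode mentioned above could be implemented roughly as follows (a sketch only: NEXT(), instructionPointer and the variables[] array are assumed, and the inline operand layout is invented):

    /* Sketch of a macro opcode replacing "push literal, push variable, add,
       store variable": one dispatch instead of four.                        */
    opcode_add_literal_to_variable: {
        long literal  = (long)*++instructionPointer;   /* inline literal        */
        long varIndex = (long)*++instructionPointer;   /* inline variable index */
        variables[varIndex] += literal;                /* combined effect       */
        NEXT();
    }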
This technique is particularly important in cases where
the bytecodes are simple (as in the '3 + 4' example), where
the implementation of each bytecode can be as short as
a single register-register machine instruction. The cost of
threading can often be significantly larger than the cost of
"useful" execution. If three instructions must be executed to
dispatch to the next opcode then the overhead for this threading
instructions executed
and 12 instructions for dispatching the threaded opcodes).
This overhead drops to 43% when the operation is optimized
into a single macro opcode (four useful instructions and 3
instructions for threading). 2
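Spelling out the arithmetic behind these percentages (the instruction counts are the ones quoted above; only the algebra is made explicit here):

    \text{overhead}_{\mathrm{threaded}} = \frac{4 \times 3}{4 + 4 \times 3}
        = \frac{12}{16} = 75\%,
    \qquad
    \text{overhead}_{\mathrm{macro}} = \frac{3}{4 + 3} \approx 43\% .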
Dispatching to opcode implementations at non-contiguous
addresses also undermines code locality, causing unnecessary
processor pipeline stalls and inefficient utilization
of the instruction cache and TLBs. Combining common sequences
of bytecodes into a single macro opcode considerably
reduces these effects. The compiler will also have a chance to
make inter-bytecode optimizations (within the implementation
of the single macro opcode) that are impossible to make
between the implementations of the individual bytecodes.
Determining an appropriate set of common bytecode sequences
is not difficult. The virtual machine can be instrumented
to record execution traces, and a simple offline analysis
will reveal the likely candidates. The corresponding pattern
matching and macro opcode implementations can then
be incorporated manually into the VM. For example, such
analysis has been applied to an earlier version of the Objective
Caml bytecode set, resulting in a new set of bytecodes
that includes several "macro-style" operations.
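A minimal sketch of such an offline analysis (ours, under the assumption that a trace of executed opcode numbers has already been recorded; all names and the reporting threshold are invented): count adjacent opcode pairs and print the frequent ones as candidates for macro opcodes.

    /* Sketch of the offline analysis (not the system's code): given a recorded
       trace of executed opcode numbers, count adjacent pairs; frequent pairs
       are candidates for hand-written macro opcodes.                          */
    #include <stdio.h>

    enum { NUM_OPCODES = 256 };
    static long pairCount[NUM_OPCODES][NUM_OPCODES];

    void analyseTrace(const unsigned char *trace, long length)
    {
        for (long i = 0; i + 1 < length; ++i)
            ++pairCount[trace[i]][trace[i + 1]];
        for (int a = 0; a < NUM_OPCODES; ++a)
            for (int b = 0; b < NUM_OPCODES; ++b)
                if (pairCount[a][b] > 1000)        /* arbitrary reporting threshold */
                    printf("%3d %3d : %ld\n", a, b, pairCount[a][b]);
    }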
2.5 Problems with static optimization
The most significant problem with this static approach
is that the number of possible permutations of even the
shortest common sequences of consecutive bytecodes is pro-
hibitive. For example, Smalltalk provides 4 bytecodes to
push the most popular integer constants (minus one through
two), and bytecodes to load and store 32 temporary and 256
"receiver" variables. Manually optimizing the possible permutations
for incrementing and decrementing a variable by
a small constant would require the translator to implement
2304 explicit special cases. This is clearly unreasonable.
The problem is made more acute since different applications
running on the same virtual machine will favor different
sequences of bytecodes. Statically choosing a single
"optimal" set of common sequences is therefore impossible.
Our technique focuses on making this choice at runtime,
which allows the set of common sequences to be nearly optimal
for the particular application being run.
"Instruction counting" is not a very accurate way to estimate the
savings, since the instructions that we avoid are some of the most
expensive to execute.
dynamic_opcode_push3_push4_add:
    *++stackPointer = 3;
    *++stackPointer = 4;
    --stackPointer;
    *stackPointer += stackPointer[1];
    goto **++instructionPointer;    /* dispatch next opcode */
Figure 4: Equivalent macro opcode for push3, push4, add.
int nfibs(int n)
{
    return (n < 2) ? 1 : 1 + nfibs(n - 1) + nfibs(n - 2);
}
Figure 5: Benchmark function in C.
Dynamically rewriting opcode sequences
We generate implementations for common bytecode sequences
dynamically. These implementations are available as
new macro opcodes, where a single such macro opcode replaces
the several threaded opcodes generated from the original
common bytecode sequence. These dynamically generated
macro opcodes are executed in precisely the same
manner as the interpreter's predefined opcodes; the original
execution mechanism (direct threading) requires no modification
at all. The transformation can be performed either
during bytecode-to-threaded code translation, or as a separate
pass over already threaded code.
Figure 4 shows the equivalent C for a dynamically generated
threaded opcode for the sequence of three bytecodes
needed to evaluate the '3 + 4' example.
The translator concatenates the compiled C implementations
for several intrinsic threaded opcodes, each one corresponding
to a bytecode in the sequence being optimized.
Since this involves relocating code, it is only safe to perform
this concatenation for threaded opcodes whose implementation
is position independent. In general there are three cases
to consider when concatenating opcode implementations:
- A threaded opcode cannot be inlined if its implementation
contains a call to a C function, where the destination
address is relative to the processor's PC. Such
destination addresses would be invalidated as they are
copied to form the new macro opcode's implementation.
- Any threaded opcode that changes the flow of control
through the threaded code must only appear at the
end of a translated sequence. This is because different
paths through the sequence might consume different
numbers of inline arguments.
- Any threaded opcode that is a branch destination can
only appear at the beginning of a macro opcode, since
incorporating it into the middle of a macro opcode
would delete the branch destination in the final threaded
code.
The above can be simplified to the following rule: we
only consider basic blocks for inlining, where a basic block
begins with a jump destination and ends with either a jump
nfibs: push r1 ; r1 saved during call
move
jge r0 r1 @
=cont
pop r1 ; restore r1
return
cont: move r0 r1 ; else arg -> r1
call @
=nfibs
call @
=nfibs
add
add
pop r1 ; restore r1
return
start: move #32 r0 ; call nfibs(32)
call @
=nfibs
print
Figure 6: Threaded code for nfibs benchmark, before inlining.
destination or a change of control flow. For inlining pur-
poses, opcodes that contain a C function call are considered
to be single-opcode basic blocks. (This restriction can be
relaxed if the target architecture and/or the compiler used
to build the VM uses absolute addresses for function call
destinations.)
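The rule can be restated compactly as two predicates (an illustrative sketch that reuses the opcode classes and tables defined in Figures 8 and 9; it is not code from the system itself):

    /* Illustrative restatement of the inlining rule. */
    static int mayInline(long op)
    {
        return info[op].flags != PROTECT;      /* opcodes containing C calls are never moved */
    }

    static int endsBasicBlock(long op, int nextIndex)
    {
        return info[op].flags == FINAL ||      /* absolute control transfer              */
               info[op].flags == RELATIVE ||   /* relative branch                        */
               destination[nextIndex];         /* the following opcode is a jump target  */
    }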
Our technique was designed for (and works best with)
fine-grained opcodes, where the implementations are short
(typically a few machine instructions) and therefore the cost
of opcode dispatch dominates. The next section presents an
example in such a context.
3.1 Simple example
We will illustrate our technique by applying it to a simple
"RISC-like" virtual machine executing the "nfibs" func-
tion, as shown in Figure 5. 3
Our example interpreter implements a register-based execution
model. It has a handful of "registers" for performing
arithmetic, and a stack that is used for saving return addresses
and the contents of clobbered registers during subroutine
calls. The direct threaded code has two kinds of in-line
operand: instruction pointer-relative offsets for branch
destinations, and absolute addresses for function call destinations.
The interpreter translates bytecodes into threaded code
in two passes. It makes a first pass over the bytecodes,
expanding them into threaded opcodes with no inlining, exactly
as explained in Section 2.3. Figure 6 shows a symbolic
listing of the nfibs function, implemented for our example
interpreter's opcode set, after this initial translation into
threaded code.
Bytecode operands are placed inline in the threaded code
during translation. For example, the offset for the jge op-code
and the call destinations are placed directly in the
opcode stream, immediately after the associated opcode.
These are represented as the pseudo-operand '@' in the fig-
3 This doubly-recursive function has the interesting property that
its result is the number of function calls required to calculate the
result.
nfibs:
=cont
=nfibs
=nfibs
Figure 7: Threaded code for nfibs benchmark, after inlining. The implementations of the new macro opcodes are shown on the right.
ure, and appear on a separate line in the code prefixed with
'='.
After this initial translation to threaded code, a second
pass performs inlining on the threaded code: basic blocks
are identified, used to dynamically generate new threaded
macro opcodes, and the corresponding original sequences of
threaded opcodes are replaced with single macro opcodes.
The rewriting of the threaded code can be performed in-situ,
since optimizing an opcode sequence will always result in a
shorter sequence of optimized code; there is no possibility of
overwriting an opcode that has not yet been considered for
inlining.
Figure 7 shows the code for the nfibs function after inlining
has taken place. The function has been reduced to
five threaded macro opcodes (shown as '%1' through '%5'),
each replacing a basic block in the original code. The implementation
of each new macro opcode is the concatenation of
the implementations of the opcodes that it replaces. These
new implementations are written in a separate area of memory
called the macro cache. Five such implementations are
required for nfibs, and are shown within curly braces in the
figure. Each one ends with a copy of the implementation of
the pseudo-opcode <thr>, which is the threading operation
to dispatch the next opcode.
Inline arguments are copied verbatim, except for cont (a
jump offset) which is adjusted appropriately by the transla-
tor. (These inline arguments are used by the macro opcode
implementations at the points marked with '@' in the figure.)
To help with the identification of basic blocks, we divide
our threaded opcodes into four classes, as follows:
INLINE - the opcode's implementation can be inlined
into a macro opcode without restriction (the arithmetic
opcodes belong to this class);
PROTECT - the implementation contains a C function
call and therefore cannot be inlined (the print opcode
belongs to this class);
FINAL - the opcode changes the flow of control and
therefore defines the end of a basic block (e.g. the call opcode);
RELATIVE - the opcode changes the flow of control
and therefore defines the end of a basic block (e.g. the
conditional branch jge).
The only difference between FINAL and RELATIVE is the way
in which the opcode's inline operand is treated. In the first
case the operand is absolute, and can be copied directly into
the final translated code. In the second case the operand is
relative to the current threaded program counter, and so
must be adjusted appropriately in the final translated code.
Figure 8 shows the translator code that initializes the
threaded opcode table, along with representative implementations
of several of our threaded opcodes (each of the four
classes of threaded opcode is represented).
#define PUSH(v) (*++sp = (long)(v))
#define POP() (*sp--)
#define GET() ((long)(*++ip)) /* read inline operand */
#define NEXT() goto **++ip /* dispatch next opcode */
#define PROTECT (0x00) /* never expanded */
#define INLINE (1<<0) /* expanded */
#define FINAL (1<<1) /* expanded, ends a basic block */
#define RELATIVE (1<<2) /* expanded, ends a basic block, offset follows */
#define OP(NAME, NARGS, FLAGS) \
  case op_##NAME: \
    info[op_##NAME].addr  = &&start_##NAME; \
    info[op_##NAME].size  = (char *)&&end_##NAME - (char *)&&start_##NAME; \
    info[op_##NAME].nargs = NARGS; \
    info[op_##NAME].flags = FLAGS; \
    if (!initialIP) break; \
  start_##NAME:
    /* opcode body */
#define END(NAME) end_##NAME: NEXT();
/* initialize rather than execute (see macro 'OP') */
for (int op = 0; op < numOpcodes; ++op)
  switch (op) {
    ...
    OP(jge_r0_r1, 1, RELATIVE) { register long offset = GET();
                                 if (r0 >= r1) ip += offset; }
                                 END(jge_r0_r1)
    OP(call, 1, FINAL) { register long dest = GET();
                         PUSH(ip); /* save return address */
                         ip = (void **)dest - 1; }
                         END(call)
    ...
    default:
      fprintf(stderr, "panic: op %d is undefined!\n", op);
      abort();
  }
Figure 8: Opcode table initialization.
The translator's inlining loop is shown in Figure 9. It is
not as complex as it might first appear. code is a pointer to
the translated threaded code, which is rewritten in-situ. in
and out are indices into code pointing to the next opcode
to be copied (or inlined) and the location to which it will be
copied, respectively (in >= out at all times).
The loop considers each in opcode for inlining: the inlining
loop is entered only if both the current opcode and the
opcode following it can be inlined. If this is not the case,
the opcode at in is copied (along with any inline arguments)
directly to out.
nextMacro is a pointer to the next unused location in
the macro cache. The inlining loop first writes this address
to out (it represents the threaded opcode for the macro
implementation that is about to be generated), and then
copies the compiled implementations of opcodes from in
into the macro cache. The inlined threaded opcodes are not
copied, although any inline arguments that are encountered
are copied directly to out.
The inlining loop continues until it copies the implementation
of an opcode that explicitly ends a basic block (FINAL
or RELATIVE), or until the next opcode is either non-inlinable
int in = 0, out = 0;
while (in < codeSize) {
  int  nextIn = in + 1 + info[code[in]].nargs;
  long thisOp = code[in], nextOp = code[nextIn];
  if (info[thisOp].flags == INLINE &&
      info[nextOp].flags != PROTECT &&
      !destination[nextIn]) {
    /* CAN INLINE: create new macro opcode at nextMacro */
    void *ep = nextMacro;
    code[out++] = (long)nextMacro; /* new macro opcode */
    while (info[thisOp].flags != PROTECT) {
      icopy(info[thisOp].addr, ep, info[thisOp].size);
      ep += info[thisOp].size;
      ++in;
      if (info[thisOp].flags == RELATIVE)
        patchList[numPatches++] = out; /* remember locn of offset */
      for (int i = 0; i < info[thisOp].nargs; ++i)
        code[out++] = code[in++]; /* copy inline arguments */
      if (info[thisOp].flags == FINAL ||
          info[thisOp].flags == RELATIVE ||
          destination[in])
        break; /* end of basic block */
      thisOp = code[in];
    }
    /* copy threading operation */
    icopy(info[thr].addr, ep, info[thr].size);
    nextMacro = ep + info[thr].size;
  } else {
    /* CAN'T INLINE: copy opcode and inline arguments */
    code[out++] = (long)info[thisOp].addr;
    ++in;
    if (info[thisOp].flags == RELATIVE)
      relocations[numRelocations++] = out; /* remember locn of offset */
    /* copy literal arguments */
    for (int i = 0; i < info[thisOp].nargs; ++i)
      code[out++] = code[in++];
  }
}
Figure 9: Dynamic translator loop.
(PROTECT) or a branch destination (implicitly ending the
current basic block). The translator then appends the implementation
of the pseudo-opcode thr, which is the "thre-
ading" operation itself. Finally, the nextMacro location is
updated ready for the next inlining operation.
The translator loop uses an array of flags "destination"
to identify branch destinations within the threaded code.
This array is easily constructed during the translator's first
pass, when bytecodes are expanded into non-inlined threaded
code. The loop also creates two arrays, relocations
and patchList, that are used to recalculate relative branch
offsets. 4
The inlining loop concatenates opcode implementations
using the icopy function, shown in Figure 10. This function
is similar to bcopy except that it also synchronizes the pro-
cessor's instruction and data caches to ensure that the new
macro opcode's implementation is executable. It contains
the only line of platform-dependent code in our interpreter.
4 The branch destination identification and relative offset recalculation
are not shown here. These can be seen in the full source code
for the example interpreter (see the Appendix).
static inline void icopy(void *source, void *dest, size_t size)
{
  bcopy(source, dest, size);
  while (size > 0) {
    void *p = dest;
#if defined(__powerpc__)
    asm ("dcbst 0,%0; sync; icbi 0,%0; isync" :: "r"(p));
#elif defined(__sparc)
    asm ("flush %0; stbar" :: "r"(p));
#elif defined(...) /* other architectures */
    ...
#else
    /* no-op */
#endif
    dest += 4; size -= 4;
  }
}
Figure 10: The icopy function, containing the single line of platform-dependent code.
3.2 Saving space
Translating multiple copies of the same opcode sequences
would waste space. We therefore keep a cache of dynamically
generated macro opcodes, keyed by a hash value computed
from the incoming (unoptimized) opcodes during
translation. In the case of a cache hit we reuse the existing
macro opcode in the translated code, and immediately
reclaim the macro cache space occupied by the newly
translated version. In the case of a cache miss, the newly
generated macro opcode is used in the translated code and
the hash table updated to include the new opcode. This
ensures that we never have more than one macro opcode
corresponding to a given sequence of unoptimized opcodes.
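A possible shape for this cache is sketched below (entirely illustrative: the hash function, table size and field names are assumptions, and a real implementation would also compare the full opcode sequence rather than relying on the hash alone):

    /* Sketch of the macro-opcode cache keyed by a hash of the original
       opcode sequence (names and sizes assumed).                        */
    typedef struct MacroEntry {
        long key;                      /* hash of the original opcode sequence    */
        void *impl;                    /* address of the generated implementation */
        struct MacroEntry *next;
    } MacroEntry;

    static MacroEntry *macroTable[1024];

    static long hashSequence(const long *ops, int n)
    {
        long h = 0;
        for (int i = 0; i < n; ++i)
            h = h * 31 + ops[i];
        return h;
    }

    static void *lookupMacro(const long *ops, int n)
    {
        long h = hashSequence(ops, n);
        for (MacroEntry *e = macroTable[(unsigned long)h % 1024]; e; e = e->next)
            if (e->key == h)
                return e->impl;        /* cache hit: reuse existing macro opcode */
        return 0;                      /* cache miss: caller installs the new one */
    }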
4 Experimental results
We are particularly interested in the performance benefits
when dynamic inlining is applied to interpreters with fine-grain
instruction sets. Nevertheless, we were also curious to
see how the technique would perform when applied to an
interpreter having a more coarse-grained bytecode set. We
took measurements in both of these contexts, using our own
RISC-like interpreter and the widely-used (but less suited)
interpreter for the Objective Caml language.
4.1 Fine-grained opcodes
Our RISC-like interpreter has an opcode set similar to that
presented in Section 3.1. It can be configured (at compile
time) to use bytecodes, direct threaded code, or direct threaded
code with dynamically-generated macro opcodes. The
performance of two benchmarks was measured using this in-
terpreter: the function-call intensive Fibonacci benchmark
presented earlier (nfibs), and a memory intensive, function
call free, prime number generator (sieve).
Table 1 shows the number of seconds required to execute
these benchmarks on several architectures (133MHz
Pentium, SparcStation 20, and 200MHz PowerPC 603ev).
The figures shown are for a simple bytecode interpreter, the
same interpreter performing translation into direct threaded
code, direct threaded code with dynamic inlining of common
opcode sequences, and the benchmark written in C
and compiled with the same optimization options (-O2) as
our interpreter. The final column shows the performance of
the inlined threaded code compared to optimized C.
nfibs
machine bytecode threaded inlined C inlined/C
Pentium 63.2 37.1 22.3 11.1 49.8%
sieve
machine bytecode threaded inlined C inlined/C
Pentium 25.1 17.6 13.2 4.6 34.8%
Table 1: nfibs and sieve benchmark results for the three architectures tested. The final column shows the speed of the inlined threaded code relative to optimized C.
Figure 11: Benchmark performance relative to optimized C (bytecode, direct threaded, and inlined versions).
nfibs spends much of its time performing arithmetic between
registers. Memory (stack) operations are performed
only during function call and return.
Our interpreter allocates the first few VM registers in
physical machine registers whenever possible. The opcodes
that perform arithmetic are therefore typically compiled into
a single machine instruction on the Sparc and PowerPC.
These two architectures show a marked improvement in performance
when common sequences are inlined into single
macro opcodes, due to the significantly reduced ratio of op-code
dispatch to "real" work. The effect is less pronounced
on the Pentium, which has so few machine registers that
all the VM registers must be kept in memory. Each arithmetic
opcode compiles into several Pentium instructions,
and therefore the ratio of dispatch overhead to real work
is lower than for the RISC architectures.
We observe a marked improvement (approximately a factor
of two) between successive versions of the interpreter for
nfibs.
sieve shows a less pronounced improvement because it
spends the majority of its time performing memory opera-
tions. The contribution of opcode dispatch to the overall
execution time is therefore smaller than with nfibs.
It is also interesting to observe the performance of each
version of the interpreter relative to that of optimized C.
Figure 11 shows that nfibs gains approximately 14% the
speed of optimized C when moving from a bytecoded representation
to threaded code. The gain when moving from
threaded to inlined threaded code is more dependent on the
architecture: approximately 20% for the Pentium, and 38%
for the Sparc. The gains for sieve are both smaller and less
dependent on the architecture: approximately 9% at each
step, for all three architectures.
4.2 Objective Caml
We also applied our technique to the Objective Caml bytecode
interpreter, in order to obtain realistic measurements
of its performance and overheads in a less favorable environment.
Objective Caml was chosen because the design and implementation
of the interpreter's core is clean and simple,
and so understanding it before making the required modifications
did not present a significant challenge. Furthermore
it is a fully-fledged system that includes a bytecode com-
piler, a benchmark suite, and some large applications. This
made it easier to collect meaningful statistics.
The interpreter is also equipped with a mechanism to
bulk-translate the bytecodes into threaded code at startup
(on those platforms that support it). 5 We needed only to
extend this initial translation phase to perform the analysis
of opcode sequences, generate macro opcode implementa-
tions, and rewrite the threaded code in-situ to use these
dynamically-generated macro opcodes. Implementing our
technique for the Caml virtual machine took one day. There
were only two small details that required careful attention.
The first was the presence of the SWITCH opcode. This
performs a multi-way branch, and is followed in the threaded
code by an inline table mapping values onto branch offsets.
We added a special case to our translator loop to handle this
opcode.
The second was the existence of a handful of opcodes
that consume two inline arguments (a literal and a relative
offset). We introduced a new opcode class RELATIVE2
for these, which differs from RELATIVE only by copying an
additional inline literal argument before the offset in the
translator loop.
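The corresponding copying step might look roughly like this (a sketch using the variable names from Figure 9; the actual bookkeeping in the Objective Caml translator may differ):

    /* RELATIVE2: an opcode carrying a literal and then a relative offset,
       so both are copied and the offset location is remembered for later
       recalculation (sketch only).                                        */
    if (info[thisOp].flags == RELATIVE2) {
        code[out++] = code[in++];                /* inline literal, copied verbatim */
        patchList[numPatches++] = out;           /* offset location, patched later  */
        code[out++] = code[in++];                /* relative offset                 */
    }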
Our translation algorithm was identical in all other respects
to the one presented in Section 3.
We ran the standard Objective Caml benchmark suite 6
with our modified VM (see Table 2). The VM was instrumented
to gather statistics relating to execution speed,
5 It uses gcc's first-class labels to do this portably.
6 ftp://ftp.inria.fr/INRIA/Projects/cristal/Xavier.Leroy/benchmarks/objcaml.tar.gz
Figure 12: Objective-Caml benchmark results for the three architectures tested (Pentium, Sparc, PowerPC). The vertical axis shows the performance (speed, inlined/non-inlined) relative to the original (non-inlining) interpreter. Asterisks indicate versions of the benchmarks compiled with array bounds checking disabled.
boyer term processing, function calls
fib integer arithmetic, function calls (1 arg)
genlex lexing, parsing, symbolic processing
kb term processing, function calls, functionals
qsort integer arrays, loops
sieve integer arithmetic, list processing, functionals
soli puzzle solving, arrays, loops
takc integer arithmetic, function calls (3 args, curried)
taku integer arithmetic, function calls (3 args, tuplified)
Table 2: Objective Caml benchmarks.
memory usage, and the characteristics of dynamically generated
macro opcodes.
Figure 12 shows the performance of the benchmarks after
inlining, relative to the original performance without inlining.
It is important to note that the Objective Caml bytecode
set has already been optimized statically, as described
in Section 2.4 [Ler98]. Any further improvements are therefore
due mainly to the elimination of dispatch overhead in
common sequences that are particular to each application.
Virtual machines whose bytecode sets have not been "statically"
optimized in this way would benefit more from our
technique.
We can see from the figure that the majority of benchmarks
benefit from a significant performance advantage after
inlining. In most cases the inlined version runs more than
50% faster than the original, with two of the benchmarks
running twice as fast as the original non-inlined version on
the Sparc.
It is clear that the improvements are related to the processor
architecture. This is probably due to differences in
the cost of the threading operation. On the Sparc, for ex-
ample, avoiding the pipeline stalls associated with threading
seems to make a significant difference.
Figure 13 shows the final size of the macro cache for
each benchmark on the Sparc, plotted as a factor of the size
of the original (unoptimized) code.
Figure 13: Macro cache size (diamonds) and optimized threaded code size (crosses), plotted as a factor of the original code size (horizontal axis: original code size in kbytes).
The final macro cache
sizes vary slightly for each architecture, since they depend
on the size of the bytecode implementations. However, the
shape is the same in each case. The average ratios of original
bytecode size to the macro cache size show that the cost is
between three and four times the size of the original code on
the Sparc. (The ratio is almost identical for the PowerPC,
and slightly smaller for the Pentium.)
We observe that this ratio decreases gradually as the
original code size increases. This is to be expected, since
larger bodies of code will tend to reuse macro opcodes rather
than generating new ones. We tested this by translating the
bytecoded version of the Objective Caml compiler: 421,532
bytes of original code generated 941,008 bytes of macro op-code
implementation on the Sparc. This is approximately
2.2 times the size of the original code, and is shown as the
rightmost point in the graph.
Inlined threaded code is always smaller than the original
code from which it is generated. Figure 13 also shows the final
optimized code size for each benchmark. We observe that
the ratio is independent of the size of the benchmark. This is
also to be expected, since the reduction in size is dependent
on the average number of opcodes in a common sequence
and the density of the corresponding macro opcodes in the
final code. These depend mainly on the characteristics of
the language and its opcode set.
Some systems have a long-lived object memory, and generate
new executable code at runtime. A realistic implementation
for such systems would recycle the macro cache space,
and possibly use profiling to optimize only popular areas of
the program. For example, the 68040LC emulator found on
Macintosh systems performs dynamic translation of 68040
into PowerPC code; it normally requires only 250Kb of cache
in which the most commonly used translated code sequences
are stored [Tho95]. A similar (fixed) cache size is effective
in the BrouHaHa Smalltalk system [Mir97].
Translation speed is also an important factor. To measure
this we ran the Objective Caml bytecode compiler (a much
larger program than any of the benchmarks) with our modified
interpreter. The 105,383 opcodes of the Objective Caml
compiler are translated in 0.22 seconds on the Sparc, a rate
of 480,000 opcodes per second. The inlining interpreter executes
the compiler at a rate of 2.4 million opcodes per sec-
ond. Translation is therefore approximately five times slower
than execution. 7
5 Related work
BrouHaHa and Objective Caml have both demonstrated the
benefits of creating specialized macro opcodes that perform
the work of a sequence of common opcodes. In Objective
Caml this led to a new bytecode set. In BrouHaHa the
standard Smalltalk-80 bytecodes are translated into threaded
code for execution; the detection of a limited number
of pre-determined common bytecode sequences is performed
during translation, and a specialized opcode is substituted
in the executable code. Our contribution is the extension of
this technique to dynamically analyze and generate implementations
for new macro opcodes at runtime.
Several systems use concatenation of pre-compiled sequences
of code at runtime [Aus96, Noe98], but in a completely
different context. Their precompiled code sequences
are generic "templates" that can be parameterized at run-time
with particular constant values.
A template-based approach is also used in some commercial
Smalltalk virtual machines that perform dynamic
compilation to native code [Mir97]. However, this technique
is complex and requires a significant effort to implement the
templates for a new architecture.
An interesting system for portable dynamic code generation
is vcode [Eng96], an architecture-neutral runtime as-
sembler. It generates code that approaches the performance
of C on some architectures. Its main disadvantage is that
retrofitting it to an existing virtual machine requires a significant
amount of effort - certainly more than the single day
that was required to implement our technique in a production
virtual machine. (Our simple nfibs benchmark runs
about 40% faster using vcode, compared to our RISC-like
inlined threaded code virtual machine.)
Superoperators [Pro95] are a technique for specializing
a bytecoded C interpreter according to the program that
it is to execute. This is possible because the specialized
7 Since translation is performed only once for each opcode, the
"break-even" point is passed in any program that executes more than
six times the number of opcodes that it contains.
interpreter is generated at the same time as the compiled
(bytecoded) representation of the program. A compile-time
analysis of the program chooses likely candidates for super-
operators, which are then implemented as new interpreter
bytecodes.
Superoperators are similar to our macro opcodes. One
advantage is that their corresponding synthesized bytecodes
can benefit from some of the inter-opcode optimizations that
our simple concatenation of implementations fails to exploit.
However, superoperators require bytecodes corresponding
precisely with the nodes used to build parse trees - which
might not always be the best choice of bytecode set. It would
also be tricky to use superoperators in an incremental system
such as Smalltalk, where new executable code is generated at
runtime. Nevertheless, an investigation of merging some of
the techniques of superoperators and dynamically-generated
macro opcodes might be very worthwhile.
6 Conclusions
This work was inspired by the need to create an interpreter
with a very fine-grain RISC-like opcode set, that is both
general (not tied to any particular high-level language) and
amenable to traditional compiler optimizations. The cost
of opcode dispatch is more significant in such a context,
compared to more abstract interpreters whose bytecodes are
carefully matched to the language semantics.
The expected benefits of our technique are related to the
average semantic content of a bytecode. We would expect
languages such as Tcl and Perl, which have relatively high-level
opcodes, to benefit less from macroization. Interpreters
with a more RISC-like opcode set will benefit more - since
the cost of dispatch is more significant when compared to the
cost of executing the body of each bytecode. The Objective
Caml bytecode set is positioned between these two extremes,
containing both simple and complex opcodes. 8
Vcode has better performance than our technique because
its instruction set matches very closely the underlying
architecture. It can exert very fine control over the code that
is generated, such as performing some degree of reordering
for better instruction scheduling. We believe that similar results
can be achieved with our RISC-like inlining threaded
code interpreter, but in a more portable manner.
The performance of macro opcodes is limited by the inability
of the compiler to perform the inter-opcode optimizations
that are possible when a static analysis is performed
and new macro opcodes implemented manually in the in-
terpreter. We believe that these limitations are less important
when using a very fine-grain opcode set, corresponding
more closely to a traditional RISC architecture. Most op-codes
will be implemented as a single machine instruction,
and new opportunities for inter-opcode optimization will be
available to the translator's code generator.
Our technique is portable, simple to implement, and orthogonal
to the implementation of the virtual machine's op-
codes. In reducing the overhead of opcode dispatch, it helps
to bring the performance of fine-grained bytecodes to the
same level as that of more abstract, language-dependent op-code
sets.
8 Significant overheads are associated with the technique used to
check for stack overflow and pending signals in Objective Caml, but
a discussion of these is beyond the scope of this paper.
speed (seconds) space (bytes)
Pentium Sparc PowerPC Sparc
benchmark original inlined original inlined original inlined original inlined cache
boyer 2.0 1.81 (111%) 2.3 1.50 (154%) 1.4 1.19 (113%) 13800 8324 42012
fib 2.0 1.44 (140%) 4.0 2.47 (163%) 1.6 1.12 (139%) 5288 3320 20160
genlex 1.0 0.93 (110%) 1.1 0.84 (127%) 0.7 0.59 (118%) 45696 26856 156892
kb 10.3 8.15 (126%) 16.9 7.71 (219%) 6.3 5.36 (118%) 20968 13048 75868
qsort 5.8 3.95 (146%) 9.5 5.39 (175%) 4.1 2.98 (137%) 6676 3932 26416
qsort* 4.8 3.04 (158%) 8.0 4.26 (188%) 3.3 2.27 (147%) 6532 3884 25280
sieve 3.0 2.79 (107%) 2.5 2.22 (110%) 1.9 1.86 (100%) 5200 3312 20124
soli 3.1 2.18 (144%) 5.1 2.98 (170%) 2.1 1.50 (142%) 6644 3952 25516
soli* 2.4 1.38 (172%) 4.0 2.00 (202%) 1.6 0.93 (168%) 6544 3908 24548
takc 2.8 1.91 (144%) 5.0 3.26 (152%) 2.1 1.47 (142%) 4784 3012 18652
taku 4.9 3.20 (152%) 7.0 4.14 (170%) 3.2 2.33 (139%) 4812 3036 18296
Table 3: Raw results for the Objective-Caml benchmarks.
Acknowledgements
The authors would like to thank Xavier Leroy, John Mal-
oney, Eliot Miranda, Dave Ungar, Mario Wolczko and the
anonymous referees, for their helpful comments on a draft
of this paper.
--R
The Java Programming Lan- guage
Communications of the ACM
Efficient Implementation of the Smalltalk-80 System
Engler, vcode: A Retargetable
A Portable Forth Engine
Back to the Future: the Story of Squeak
The Objective Caml system release 1.05
The Java Virtual Machine Specification
Fast Direct
Optimizing an ANSI C Interpreter with Superoperators
Building the Better Virtual CPU
--TR
Smalltalk-80: the language and its implementation
BrouHaHa- A portable Smalltalk interpreter
Fast, effective dynamic compilation
Back to the future
The Java programming language (2nd ed.)
Java Virtual Machine Specification
Efficient implementation of the smalltalk-80 system
--CTR
Alex Iliasov, Templates-based portable just-in-time compiler, ACM SIGPLAN Notices, v.38 n.8, August
Fabrice Bellard, QEMU, a fast and portable dynamic translator, Proceedings of the USENIX Annual Technical Conference 2005 on USENIX Annual Technical Conference, p.41-41, April 10-15, 2005, Anaheim, CA
Jinzhan Peng , Gansha Wu , Guei-Yuan Lueh, Code sharing among states for stack-caching interpreter, Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators, June 07-07, 2004, Washington, D.C.
Ben Stephenson , Wade Holst, Multicodes: optimizing virtual machines using bytecode sequences, Companion of the 18th annual ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications, October 26-30, 2003, Anaheim, CA, USA
Brian Davis , John Waldron, A survey of optimisations for the Java Virtual Machine, Proceedings of the 2nd international conference on Principles and practice of programming in Java, June 16-18, 2003, Kilkenny City, Ireland
M. Anton Ertl , David Gregg, Combining stack caching with dynamic superinstructions, Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators, June 07-07, 2004, Washington, D.C.
Andrew Beatty , Kevin Casey , David Gregg , Andrew Nisbet, An optimized Java interpreter for connected devices and embedded systems, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Bertil Folliot , Ian Piumarta , Fabio Riccardi, A dynamically configurable, multi-language execution platform, Proceedings of the 8th ACM SIGOPS European workshop on Support for composing distributed applications, p.175-181, September 1998, Sintra, Portugal
Marc Berndl , Laurie Hendren, Dynamic profiling and trace cache generation, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California
Brian Davis , Andrew Beatty , Kevin Casey , David Gregg , John Waldron, The case for virtual register machines, Proceedings of the workshop on Interpreters, virtual machines and emulators, p.41-49, June 12-12, 2003, San Diego, California
M. Anton Ertl , David Gregg, Retargeting JIT Compilers by using C-Compiler Generated Executable Code, Proceedings of the 13th International Conference on Parallel Architectures and Compilation Techniques, p.41-50, September 29-October 03, 2004
Henrik Nssn , Mats Carlsson , Konstantinos Sagonas, Instruction merging and specialization in the SICStus Prolog virtual machine, Proceedings of the 3rd ACM SIGPLAN international conference on Principles and practice of declarative programming, p.49-60, September 05-07, 2001, Florence, Italy
Mourad Debbabi , Abdelouahed Gherbi , Lamia Ketari , Chamseddine Talhi , Hamdi Yahyaoui , Sami Zhioua, a synergy between efficient interpretation and fast selective dynamic compilation for the acceleration of embedded Java virtual machines, Proceedings of the 3rd international symposium on Principles and practice of programming in Java, June 16-18, 2004, Las Vegas, Nevada
Mathew Zaleski , Marc Berndl , Angela Demke Brown, Mixed mode execution with context threading, Proceedings of the 2005 conference of the Centre for Advanced Studies on Collaborative research, p.305-319, October 17-20, 2005, Toranto, Ontario, Canada
M. Anton Ertl , David Gregg, Optimizing indirect branch prediction accuracy in virtual machine interpreters, ACM SIGPLAN Notices, v.38 n.5, May
Yunhe Shi , David Gregg , Andrew Beatty , M. Anton Ertl, Virtual machine showdown: stack versus registers, Proceedings of the 1st ACM/USENIX international conference on Virtual execution environments, June 11-12, 2005, Chicago, IL, USA
Marc Berndl , Benjamin Vitale , Mathew Zaleski , Angela Demke Brown, Context Threading: A Flexible and Efficient Dispatch Technique for Virtual Machine Interpreters, Proceedings of the international symposium on Code generation and optimization, p.15-26, March 20-23, 2005
Benjamin Vitale , Tarek S. Abdelrahman, Catenation and specialization for Tcl virtual machine performance, Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators, June 07-07, 2004, Washington, D.C.
K. S. Venugopal , Geetha Manjunath , Venkatesh Krishnan, sEc: A Portable Interpreter Optimizing Technique for Embedded Java Virtual Machine, Proceedings of the 2nd Java Virtual Machine Research and Technology Symposium, p.127-138, August 01-02, 2002
M. Anton Ertl , David Gregg , Andreas Krall , Bernd Paysan, Vmgen: a generator of efficient virtual machine interpreters, SoftwarePractice & Experience, v.32 n.3, p.265-294, March 2002
Jeffery von Ronne , Ning Wang , Michael Franz, Interpreting programs in static single assignment form, Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators, June 07-07, 2004, Washington, D.C.
Mathew Zaleski , Angela Demke Brown , Kevin Stoodley, YETI: a graduallY extensible trace interpreter, Proceedings of the 3rd international conference on Virtual execution environments, June 13-15, 2007, San Diego, California, USA
Mourad Debbabi , Abdelouahed Gherbi , Azzam Mourad , Hamdi Yahyaoui, A selective dynamic compiler for embedded Java virtual machines targeting ARM processors, Science of Computer Programming, v.59 n.1-2, p.38-63, January 2006
Arun Kejariwal , Xinmin Tian , Milind Girkar , Wei Li , Sergey Kozhukhov , Utpal Banerjee , Alexander Nicolau , Alexander V. Veidenbaum , Constantine D. Polychronopoulos, Tight analysis of the performance potential of thread speculation using spec CPU 2006, Proceedings of the 12th ACM SIGPLAN symposium on Principles and practice of parallel programming, March 14-17, 2007, San Jose, California, USA
David Gregg , Andrew Beatty , Kevin Casey , Brain Davis , Andy Nisbet, The case for virtual register machines, Science of Computer Programming, v.57 n.3, p.319-338, September 2005
Etienne M. Gagnon , Laurie J. Hendren, SableVM: a research framework for the efficient execution of java bytecode, Proceedings of the JavaTM Virtual Machine Research and Technology Symposium on JavaTM Virtual Machine Research and Technology Symposium, p.3-3, April 23-24, 2001, Monterey, California
Gregory T. Sullivan , Derek L. Bruening , Iris Baron , Timothy Garnett , Saman Amarasinghe, Dynamic native optimization of interpreters, Proceedings of the workshop on Interpreters, virtual machines and emulators, p.50-57, June 12-12, 2003, San Diego, California
Ana Azevedo , Arun Kejariwal , Alex Veidenbaum , Alexandru Nicolau, High performance annotation-aware JVM for Java cards, Proceedings of the 5th ACM international conference on Embedded software, September 18-22, 2005, Jersey City, NJ, USA
Scott Thibault , Charles Consel , Julia L. Lawall , Renaud Marlet , Gilles Muller, Static and Dynamic Program Compilation by Interpreter Specialization, Higher-Order and Symbolic Computation, v.13 n.3, p.161-178, Sept. 2000
Mahmut Taylan Kandemir, Improving whole-program locality using intra-procedural and inter-procedural transformations, Journal of Parallel and Distributed Computing, v.65 n.5, p.564-582, May 2005
John Aycock, A brief history of just-in-time, ACM Computing Surveys (CSUR), v.35 n.2, p.97-113, June | threaded code;bytecode interpretation;dynamic translation;inlining;just-in-time compilation |
277884 | Transient loss performance of a class of finite buffer queueing systems. | Performance-oriented studies typically rely on the assumption that the stochastic process modeling the phenomenon of interest is already in steady state. This assumption is, however, not valid if the life cycle of the phenomenon under study is not large enough, since usually a stochastic process cannot reach steady state unless time evolves towards infinity. Therefore, it is important to address performance issues in transient state.Previous work in transient analysis of queueing systems usually focuses on Markov models. This paper, in contrast, presents an analysis of transient loss performance for a class of finite buffer queueing systems that are not necessarily Markovian. We obtain closed-form transient loss performance measures. Based on the loss measures, we compare transient loss performance against steady-state loss performance and examine how different assumptions on the arrival process will affect transient loss behavior of the queueing system. We also discuss how to guarantee transient loss performance. The analysis is illustrated with numerical results. | Introduction
For a queueing system with finite buffer, loss performance
is usually of great interest. One may want to know
the probability that loss of workload occurs, or the expected
workload loss ratio, due to buffer overflow. In the
existing literature, this issue is typically addressed under
the assumption that the stochastic process modeling the
phenomenon of interest is already in steady state. This
assumption is, however, not valid if the life cycle of the
This work was supported by Academia Sinica and National Natural
Science Foundation of China.
phenomenon under study is not large enough, since usually
a stochastic process cannot reach steady state unless
time evolves towards infinity. For example, let us consider
performance guarantee for real-time communications
in high-speed networks, where performance must be defined
on a per-connection basis [10]. Since the duration
of any connection in a real-world network is always finite,
the performance of the connection is in fact dominated by
transient behavior of the underlying traffic process carried
by the connection. Therefore, it is important to address
performance issues in transient state. In this work, we
assume that arrival processes are stationary. For a non-stationary
process, both steady state and transient state
are meaningless.
Previous work in transient analysis of queueing systems
usually focuses on Markov models, for example, see [1, 17].
Due to complexity involved in analysis, one may have to re-sort
to simulation. Nagarajan and Kurose have examined
transient loss performance defined on an interval basis for
packet voice connections in high-speed networks by simulation
[13]. They observed that in transient state, voice
connections experienced more serious performance degra-
dations. Our work is different from the previous work in
that we analyze transient loss performance of a class of
finite buffer queueing systems that may not be necessarily
Markovian.
Now let us consider a finite buffer queueing system
fed by a two-state fluid-type stochastic process, where the
state of the arrival process represents the arrival rate of
workload to the system, that is, the amount of workload
arrived per time unit. When the arrival process is in a
given state, the arrival rate is a constant. The server of
the system processes workload in the queue at a constant
service rate unless the queue is empty. This queueing system
is not necessarily Markovian. Fig. 1 shows a typical
scenario illustrating such a system. The notations in Fig.
are explained as follows, which will also be used in the
following sections.
- R(t): the arrival process representing the amount of
workload arrived per time unit at time t.
- r1 and r0: the arrival rates associated with the states
of the arrival process, where r1 > r0 >= 0.
- ON and OFF: we use ON and OFF to refer to states
corresponding to r1 and r0, respectively, and assume
Figure 1: A finite buffer queueing system fed by a general two-state fluid-type stochastic process.
that the initial state of the process is ON. Note that
when R(t) is in the OFF state, r0 may not necessarily
be zero.
- Sn and Tn, n >= 1: the lengths of the nth ON
and OFF intervals, respectively. For the time being,
we assume that both Sn and Tn are i.i.d. random
variables with arbitrarily given distributions. So we
can also denote Sn and Tn by S and T, respectively.
- B: the buffer size of the queueing system measured
by the maximum amount of workload that can be
accommodated by the buffer.
- C: the service rate of the server measured by the
amount of workload processed per time unit.
In this paper, our aim is to investigate loss performance
of such a class of queueing systems. Instead of loss performance
defined in idealized steady state, we are interested
in transient loss performance. For example, we want to
know the probability that loss of workload occurs during
the nth ON interval, or the expected workload loss ratio
of the nth ON interval, for arbitrary n >= 1. We have
obtained closed-form loss performance measures defined
in transient state. Based on the performance measures,
we can therefore contrast transient loss performance with
steady-state loss performance and compare transient loss
behaviors of the queueing systems fed by different stochastic
processes. For some applications, performance guarantee
is important. So we will also discuss how to guarantee
transient loss performance. The results obtained are
useful, since many interesting phenomena in performance-oriented
studies can be modeled by two-state fluid-type
stochastic processes, and the application of our results can
also be extended to multiple-state processes.
In the following, we first present our analysis regarding
how to define, compute and guarantee transient loss performance
(Section 2). To compare transient loss against loss
in steady state, we also describe how to compute steady-state
limits of the loss measures (Section 3). Then we use
numerical results to illustrate the analysis (Section 4). After
a brief discussion on some implications and application
of the results (Section 5), we conclude the paper and point
out some issues to be explored further (Section 6).
Transient Loss Performance Analysis
For a finite buffer queueing system fed by a two-state
fluid-type stochastic process with arrival rates r1 > r0, if
the service rate C of the system is greater than r0 but
less than r1, then loss of workload due to buffer overflow
can only occur when the arrival process is in the ON state.
In fact, this is the only non-trivial case that
should be considered. If C <= r0, then loss performance will
be out of control in transient state. On the other hand, if
C >= r1, loss will never occur for certain. Therefore, in
the following, we assume r0 < C < r1, and consider only
loss measures defined for ON intervals of an arrival pro-
cess. Since transient performance of the queueing system
depends on temporal behavior of the system state, before
we derive transient loss measures, we shall first investigate
dynamical evolution of system state variables.
2.1 Stochastic Dynamics of the Queueing System
The state variables of interest regarding the queueing
system are Wn and Qn , which represent the amounts of
workload in the buffer at the beginning and the end of
the nth ON interval of the arrival process, respectively.
Suppose that initially W1 = w1,
where w1 is a given constant between 0 and B. According
to flow balance, the evolution of Wn and Qn is as follows:
    Qn = min{B, Wn + (r1 - C) Sn},
    Wn+1 = max{0, Qn - (C - r0) Tn},
where n >= 1. Since S and T are random variables, Wn
and Qn are also random variables. Denote by φn(w) and
ψn(q) the probability density functions of Wn and Qn, respectively.
Evidently, φn(w) and ψn(q) can be viewed as
a solution of the above evolution equations in the probabilistic
sense, which can be readily obtained by recurrence:
    ψn(q) = ∫_0^B ψn(q | w) φn(w) dw
and
    φn+1(w) = ∫_0^B φn+1(w | q) ψn(q) dq,
where, for 0 <= w, q <= B, ψn(q | w)
is the probability density function of Qn conditioned on
Wn = w, fS(s)
is the probability density function of S, φn+1(w | q)
is the probability density function of Wn+1 conditioned on
Qn = q, fT(t)
is the probability density function of T,
and δ(·) is the Dirac delta function. (The conditional densities
contain Dirac atoms at q = B and at w = 0, corresponding to a full
and an empty buffer, respectively.) In general, the Dirac
function is defined by
    δ(x) = ∞ for x = 0 and δ(x) = 0 otherwise, with ∫ δ(x) dx = 1.
An important property of the Dirac delta function is
    ∫ f(x) δ(x - x0) dx = f(x0), where f(x) is a function
defined for x ∈ (-∞, ∞). It is easy to verify that the
above probability density functions are nonnegative and
normalizable.
The process Wn has significant impact on transient loss
performance of the queueing system. To reach steady state
for realizing steady-state performance, the system needs
time to forget its history characterized by the initial condition
and the distributions of Wn for finite n.
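Before moving on, a small numerical sanity check of the flow-balance recursion above (all numbers are assumed here, purely for illustration):

    % Assumed numbers: B = 10, C = 2, r_1 = 3, r_0 = 1, W_1 = 0,
    % with realized interval lengths S_1 = 4 and T_1 = 3:
    Q_1 = \min\{10,\; 0 + (3 - 2)\cdot 4\} = 4, \qquad
    W_2 = \max\{0,\; 4 - (2 - 1)\cdot 3\} = 1 .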
2.2 Transient Loss Measures
We consider two transient loss measures defined for ON
intervals of an arrival process. In the following, we denote
an arrival process R(t) by R(n) when t is in the nth ON
interval where n >= 1. Our first transient loss measure is the
probability that loss of workload occurs during the nth ON
interval due to buffer overflow, denoted by P{loss | R(n) =
r1}. The second transient loss measure is the expected
workload loss ratio of the nth ON interval, denoted by E{ρ | R(n) = r1}.
We first compute the loss probability.
In order to derive P{loss | R(n) = r1}, we consider Wn = w first. For ease
of exposition, denote the nth ON interval by [0, S).
We use w(t) to represent the amount of workload in the
buffer at t where t ∈ [0, S). Given the event that Wn = w,
loss of workload due to buffer overflow occurring in the interval [0, S)
is equivalent to the existence of τ ∈ [0, S) such that w(τ) = B. Since τ
depends on w, we express such
dependence by τ(w). It is easy to see
    τ(w) = (B - w) / (r1 - C).
Therefore, for Wn = w,
    P{loss | Wn = w, R(n) = r1} = P{S > τ(w)} = P{S > (B - w)/(r1 - C)}.
For n = 1, simply use P{S > (B - w1)/(r1 - C)}
as transient loss measure since w1 is a constant. For n >= 2,
we have
    P{loss | R(n) = r1} = ∫_0^B P{S > (B - w)/(r1 - C)} φn(w) dw,
where φn(w) can be readily obtained by recurrence as
shown in the previous subsection.
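As a concrete illustration (the exponential assumption and the numbers are ours, not the paper's): if S is exponentially distributed with rate λ, the conditional loss probability has a simple closed form.

    % Illustration only: S assumed exponential with rate \lambda.
    P\{\text{loss} \mid W_n = w,\; R(n) = r_1\}
        = P\{S > \tau(w)\}
        = e^{-\lambda (B - w)/(r_1 - C)} .
    % Example numbers (ours): B = 10, w = 2, r_1 = 3, C = 2, \lambda = 1 give
    % \tau(w) = 8 and a loss probability of e^{-8} \approx 3.4 \times 10^{-4}.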
Now we consider the expected workload loss ratio. Denote
by ρ the fraction of workload lost due to buffer overflow
in the nth ON interval. We assume first Wn = w. Clearly
ρ = 0 if S <= τ(w), since in [0, S),
loss of workload due to buffer overflow begins only after
w(t) reaches B, that is, only after t reaches τ(w). If S > τ(w), then the amount
of workload lost in [0, S) equals the amount of workload
arrived in [0, S) minus the amount of workload accepted in
[0, S), which is r1 S - (CS + B - w),
where CS + B - w is the amount of workload the system can absorb in [0, S).
Therefore,
    ρ = 0                                   if S <= τ(w),
    ρ = [r1 S - (CS + B - w)] / (r1 S)      if S > τ(w).
Since ρ depends on S, we can also denote ρ by ρ(S). Then
    E{ρ | Wn = w, R(n) = r1} = ∫_{τ(w)}^∞ ρ(s) dFS(s),
where FS(s) is the probability distribution function of S.
Denote ρ(s) by u; we have
    s = (B - w) / (r1 - C - r1 u),   0 <= u < 1 - C/r1,
and u increases from 0 towards 1 - C/r1 as s increases from τ(w) to ∞.
When 0 <= u < 1 - C/r1, the event {ρ(S) > u} is equivalent
to {S > (B - w)/(r1 - C - r1 u)}. Therefore, for Wn = w,
    E{ρ | Wn = w, R(n) = r1} = ∫_0^{1 - C/r1} P{S > (B - w)/(r1 - C - r1 u)} du.
Since w1 is a constant, for n = 1, simply use
    ∫_0^{1 - C/r1} P{S > (B - w1)/(r1 - C - r1 u)} du
as transient loss measure. For n >= 2, we have
    E{ρ | R(n) = r1} = ∫_0^B ∫_0^{1 - C/r1} P{S > (B - w)/(r1 - C - r1 u)} du φn(w) dw.
Note that for n >= 2, the loss probability and the expected
loss ratio depend on the distributions of both S and T since
φn(w) depends on the distributions of not only S but also
T for n >= 2.
2.3 Investigating Transient Loss Behavior
We can investigate transient loss behavior of the queueing
system by computing the loss measures for any given
n >= 1. But it is not necessary for us to do so. In fact, we
can focus only on a series of a small number of ON and
OFF intervals. Since both ON and OFF intervals have
impact on transient loss performance, such a series must
contain at least two ON intervals and one OFF interval
between them, as shown in Fig. 2.
Figure 2: An ON-OFF-ON series of the arrival process.
Now let us consider such an ON-OFF-ON series. Without
loss of generality, the two ON intervals in the series can
be numbered by n = 1 and n = 2. Since loss behavior of the
queueing system during the first ON interval in the series
cannot capture the impact of the OFF interval, we examine
loss behavior of the system in the second ON interval.
Using the results in the previous subsections, we have
    P{loss | R(2) = r1}
        = ∫_0^B ∫_0^B P{S > (B - w)/(r1 - C)} φ2(w | q) ψ1(q | w1) dq dw     (1)
and
    E{ρ | R(2) = r1}
        = ∫_0^B ∫_0^B ∫_0^{1 - C/r1} P{S > (B - w)/(r1 - C - r1 u)}
              φ2(w | q) ψ1(q | w1) du dq dw,                                  (2)
where ψ1(q | w1) is the probability density function of Q1 conditioned
on W1 = w1 and φ2(w | q) is the probability density function of W2
conditioned on Q1 = q.
system is summarized by As time evolves, the
ON-OFF-ON series will probabilistically duplicate itself,
that is, this pattern will appear repeatedly with lengths
of ON and OFF intervals drawn from the same distribu-
tions. At the beginning of each such duplication, there is
an amount of workload w1 left in the buffer, reflecting the
impact of the history of the system. Therefore, in order
to investigate transient loss performance of the queueing
system, it is sufficient to consider the loss measures only for
n = 2 and to examine their behavior as w1 varies between 0 and B. In
fact, by focusing only on this simple pattern, we can observe
any possible transient loss behavior of the queueing
system.
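The loss measures P(w1) and E(w1) for the second ON interval can also be estimated by simulation. The sketch below is a hypothetical Monte Carlo version, assuming fluid dynamics (fill rate r1 - C when ON, drain rate C - r0 when OFF, with r1 > C) and exponentially distributed S and T for the example; none of these names come from the paper.

# Hypothetical Monte Carlo sketch of the transient measures P(w) and E(w) for the
# second ON interval of an ON-OFF-ON pattern.
import random

def simulate_second_on(w1, B, C, r1, r0, draw_S, draw_T, runs=200_000):
    loss_events, ratio_sum = 0, 0.0
    for _ in range(runs):
        S1, T1, S2 = draw_S(), draw_T(), draw_S()
        q1 = min(B, w1 + (r1 - C) * S1)          # buffer level at end of 1st ON
        w2 = max(0.0, q1 - (C - r0) * T1)        # level at start of 2nd ON
        tau = (B - w2) / (r1 - C)                # time to overflow in 2nd ON (r1 > C)
        if S2 > tau:
            loss_events += 1
            lost = r1 * S2 - (C * S2 + (B - w2))
            ratio_sum += lost / (r1 * S2)
    return loss_events / runs, ratio_sum / runs  # estimates of P(w1), E(w1)

if __name__ == "__main__":
    exp = lambda mean: (lambda: random.expovariate(1.0 / mean))
    P, E = simulate_second_on(w1=0.5, B=1.0, C=0.8, r1=2.0, r0=0.0,
                              draw_S=exp(1.0), draw_T=exp(1.0))
    print(P, E)   # the loss probability exceeds the expected loss ratio, cf. Section 5.1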
2.4 Stochastically Bounding Transient Loss
For some applications, users may require statistically
guaranteed loss performance. For example, it may be required
that the loss probability or the expected loss ratio
should not exceed some given number. This is an important
issue for real-time communications in high-speed networks
and has been addressed typically under the assumption
of steady state. Here we discuss how to guarantee
loss performance in transient state for queueing systems
with two-state fluid-type arrival processes. It may not be
convenient to use the loss measures defined for individual
ON intervals directly for transient loss performance guarantee.
Instead, we consider some upper bounds on the loss
measures.
For arbitrary n > 1, if at the end of the (n-1)th
ON interval the buffer is already full, that is, if Q_{n-1} = B,
then for the nth ON interval the loss measures
will be relatively larger compared to the case of Q_{n-1} < B,
since a smaller Q_{n-1} will likely result in a smaller Wn, and a
smaller Wn implies a relatively smaller chance of buffer overflow
and a relatively smaller expected loss ratio. Therefore, the loss
measures evaluated under the condition Q_{n-1} = B are upper bounds
on the loss measures for any Q_{n-1} in [0, B]. The
probability density function of Wn under the condition Q_{n-1} = B
follows from the dynamics of the OFF interval and is zero outside [0, B].
Accordingly, we obtain an upper bound on the loss probability, referred
to as (3), and an upper bound on the expected loss ratio, referred to as (4).
Now we show that the upper bounds also hold for the
first ON interval if the buffer is initially empty, that is,
if w1 = 0. In this case, we have of course W1 = 0 <= Wn for any
n > 1. This is because Q_{n-1} >= 0 = Q_0 leads
to Wn being stochastically no smaller than W1. Therefore, during the nth ON interval, the
queueing system is likely to suffer more severe loss than it
does in the first ON interval, so the bounds also apply in this case.
Based on the upper bounds on the transient loss performance
measures, we can determine the buffer size B and
the service rate C such that the upper bound on the loss probability,
or the upper bound on the expected loss ratio, does not exceed ε,
where ε is a given loss performance parameter; we refer to these two
conditions as (5) and (6), respectively. By
doing so, the loss measures defined for individual ON intervals
will not exceed the given loss performance parameter
if initially the buffer is empty.
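In practice, conditions such as (5) and (6) can be solved for the smallest adequate service rate numerically. The sketch below is a generic illustration, not the paper's procedure: loss_bound is a hypothetical placeholder for an implementation of bound (3) or (4), assumed non-increasing in C.

# A minimal sketch of choosing the smallest service rate C whose upper bound on the
# loss measure stays below a target epsilon (bisection on C).

def min_service_rate(loss_bound, eps, c_lo, c_hi, tol=1e-4):
    """Bisection on C in [c_lo, c_hi]; requires loss_bound(c_hi) <= eps."""
    if loss_bound(c_hi) > eps:
        raise ValueError("even c_hi cannot meet the target")
    while c_hi - c_lo > tol:
        c_mid = 0.5 * (c_lo + c_hi)
        if loss_bound(c_mid) <= eps:
            c_hi = c_mid          # c_mid already meets the target
        else:
            c_lo = c_mid
    return c_hi

# Example with a toy, monotone bound (for illustration only):
print(min_service_rate(lambda c: 0.5 * (1.0 - c), eps=0.01, c_lo=0.0, c_hi=1.0))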
In the above, we have assumed that both Sn and Tn are
i.i.d. random variables, so we can use S and T to represent
Sn and Tn, respectively. Now we can allow Sn to have
different distributions for different n. In this case, S can
be viewed as a random variable with an arbitrarily given
distribution that is stochastically greater than all Sn, that is,
P{Sn > s} <= P{S > s} for all s >= 0. Similarly, we can
also allow Tn to have different distributions. In this case,
T can be viewed as a random variable with an arbitrarily
given distribution such that all Tn are stochastically greater
than T, that is, P{Tn > t} >= P{T > t} for all t >= 0. The
intuition behind the above arguments is that a stochastically
larger S or a stochastically smaller T will likely lead
to a larger Wn and will therefore result in a larger loss
probability and expected loss ratio. So the above bounds
on transient loss measures are still valid.
3 Steady-State Limits of Transient Loss Measures
To compare transient loss against loss in steady state,
we need to compute the steady-state limits of the loss
measures. The loss measures in steady state are defined
as the limits of the transient loss measures as n → ∞.
For a given stationary arrival process, we have the following
limiting conditional probability density functions φ(w|y) and ψ(y|x),
and limiting probability density functions φ(w) and ψ(y),
regarding the state variables of the queueing system. To
compute the steady-state loss measures, we need to know
the steady-state probability density function '(w).
Now we show how to compute φ(w). Since in steady
state we have by definition
    φ(w) = ∫_0^B φ(w|y) ψ(y) dy   and   ψ(y) = ∫_0^B ψ(y|x) φ(x) dx,
we see that φ(w) satisfies a homogeneous Fredholm integral
equation of the second kind
    φ(w) = ∫_0^B K(w, x) φ(x) dx    (7)
with kernel K(w, x) = ∫_0^B φ(w|y) ψ(y|x) dy. We can readily see
that K(w, x) can be written as the sum of a regular part K0(w, x)
and a singular part involving H(w, x) and the Dirac delta function δ(w);
we refer to this decomposition as (8). Both K0(w, x)
and H(w, x) can be obtained readily.
In general, the Fredholm integral equation can be solved
numerically. However, due to the singularity of K(w, x)
caused by the Dirac delta function δ(w) at w = 0, we cannot
solve (7) directly. The singularity caused by the Dirac
function δ(w) at w = 0 induces a corresponding singularity in φ(w).
Since this singularity in φ(w) is intrinsic and cannot be
removed from φ(w), the functional form of φ(w) will be
    φ(w) = φ0(w) + v(w) δ(w),    (9)
where φ0(w) and v(w) are unknown functions. Due to
the property of the Dirac delta function δ(w), only v(0)
rather than the whole v(w) will appear in the steady-state
loss measures. In the following, we outline a procedure for
computing φ0(w) and v(0). Substituting (9) and (8) into
(7) and comparing the regular parts (the terms not containing δ(w))
on both sides, we see that φ0(w) satisfies
    φ0(w) = v(0) K0(w, 0) + ∫_0^B K0(w, x) φ0(x) dx,    (10)
where K0(w, 0) is understood as the limit of K0(w, η) as the small
positive variable η tends to zero. To avoid the trivial solution, and
since v(0) is unknown, we let
    φ'0(w) = φ0(w) / v(0)    (11)
and divide both sides of (10) by v(0), so that
    φ'0(w) = K0(w, 0) + ∫_0^B K0(w, x) φ'0(x) dx.    (12)
The above equation is a non-homogeneous Fredholm integral
equation of the second kind in φ'0(w) with kernel
K0(w, x) and can be solved numerically with the standard
method [4].
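One common realization of such a standard method is a Nystrom discretization with a quadrature rule; the sketch below is offered only as an illustration, with K0 and g as placeholders for the kernel and forcing term of an equation of the form f(w) = g(w) + ∫_0^B K0(w, x) f(x) dx.

# Nystrom method with the trapezoidal rule for a non-homogeneous Fredholm
# equation of the second kind (illustrative sketch, not the paper's code).
import numpy as np

def solve_fredholm2(K0, g, B, n=200):
    x = np.linspace(0.0, B, n)
    h = x[1] - x[0]
    wts = np.full(n, h); wts[0] = wts[-1] = 0.5 * h      # trapezoidal weights
    Kmat = np.array([[K0(xi, xj) for xj in x] for xi in x])
    A = np.eye(n) - Kmat * wts                           # (I - K W) f = g
    f = np.linalg.solve(A, np.array([g(xi) for xi in x]))
    return x, f

# Toy example with a smooth kernel, for illustration only:
x, f = solve_fredholm2(lambda w, s: 0.2 * np.exp(-abs(w - s)),
                       lambda w: 1.0, B=1.0)
print(f[:3])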
Having obtained φ'0(w) by solving (12), we can now determine
v(0) as follows. Since by definition ∫_0^B φ(w) dw = 1,
integrating both sides of (9) from 0 to B, we have
    ∫_0^B φ0(w) dw + ∫_0^B v(w) δ(w) dw = v(0) [ ∫_0^B φ'0(w) dw + 1 ] = 1.
Therefore,
    v(0) = 1 / ( 1 + ∫_0^B φ'0(w) dw ).    (13)
Substituting (13) into (11), we finally obtain φ0(w), and hence φ(w) through (9).
Based on the above results, the steady-state loss measures
can now be computed by unconditioning the per-interval loss quantities
of Subsection 2.2 on the steady-state density φ(w) instead of φn(w);
this yields the steady-state loss probability, referred to as (14),
and the steady-state expected loss ratio, referred to as (15).
4 Numerical Studies
In this section, we numerically investigate the following
issues: 1) transient loss behaviors of the queueing system
with different arrival processes, 2) transient loss versus
steady-state loss, 3) the tightness of the upper bounds on
the transient loss measures, and 4) the reliability of transient
loss performance guarantee with system resources determined
according to the upper bounds on the transient
loss measures. For simplicity, we assume that S and T are
identically distributed and all arrival processes considered
have the same E[S] and E[T ]. We consider exponential
distribution as well as two heavy-tailed distributions for S
and T. The first heavy-tailed distribution, referred to as (16),
is a variant of the conventional Pareto distribution.
The reason for us to use this variant form of the Pareto
distribution is that the random variable of interest in our
study takes values from [0, ∞), while for the conventional
Pareto distribution the random variable takes values bounded away
from zero. The second heavy-tailed distribution is
defined by the following probability density function, referred to as (17),
whose tail for t > A behaves as
    α e^{-α} A^α t^{-(α+1)},   t > A.
The arrival processes determined by (16) and (17) are long-range
dependent (LRD) processes [2, 5, 12], henceforth
referred to as LRD process I and LRD process II, respec-
tively. Clearly, the arrival process determined by exponentially
distributed S and T is a Markov process. Therefore,
the queueing system with such a Markov arrival process
can also be modeled by a Markov process. However, the
queueing system with an LRD arrival process is not Markovian
anymore.
For simplicity, we let
Under the assumption that all arrival processes considered
have the same E[S] and E[T ] and S and T are identically
distributed, the expectations determined by (16) and (17)
are the same. So the parameter A in (17) can be determined
by the above parameters.
As shown in Subsection 2.3, we can investigate transient
loss behavior of the queueing system by considering
the loss measures only for n = 2. Since they are functions
of w1, in the following we simply use w to represent w1,
and denote the loss probability by P(w) and the expected loss ratio
by E(w) for convenience of exposition.
4.1 Transient Loss of Different Arrival Processes
We consider three arrival processes determined by the
distributions of S and T as described above. The assumption
that all arrival processes considered have the same
E[S] and E[T ] implies that the marginal state distributions
of the arrival processes are the same. We assume first
that the buffer size B and the service rate C are the same for
all the arrival processes. Then, we compare transient loss
behaviors of the arrival processes based on the functions
P (w) and E(w) defined by (1) and (2), respectively. Note
that w in P (w) and E(w) is previously denoted by w1 in
(1) and (2).
We observe that even though the marginal state distributions
of the arrival processes are the same, the transient
loss measures of the arrival processes can be significantly
different (Fig. 3). The difference in transient loss behavior
is only due to the distributions of S and T . We notice in
Fig. 3 (b) that if w exceeds some critical value, then the expected
loss ratio of the Markov process is less than that of
LRD process I. This is due to the effect of the heavy-tailed
distributions of S and T . Such effect becomes more significant
if C (or B) is large. For example, suppose that we
increase C, as in Fig. 3 (c) and (d). Note
that a large C implies a stringent loss performance require-
ment. In this case, we observe that for any w 2 [0; B], both
the loss measures of the Markov process are less than those
of LRD processes I and II. This is because for the Markov
process, the transient loss measures decay exponentially
fast as C or B increases, but for the LRD processes, the
loss measures decay only hyperbolically.
Figure 3: Transient loss behaviors of different arrival processes
(Markov process, LRD process I, LRD process II). (a) The loss probability.
(b) The expected loss ratio. (c) The loss probability with C = 0.95.
(d) The expected loss ratio with C = 0.95.
4.2 Transient Loss versus Steady-State Loss
Based on (1), (2), (14) and (15), we can compare transient
and steady-state loss measures for different arrival
processes with different buffer size B and service rate C.
We first fix B and let C take several different values (Fig.
4). Then, we fix C and let B take different values (Fig. 5).
The loss measures in steady state are computed according
to the procedure outlined in Section 3.
Unlike the steady-state loss measures, which are simply
some constants, the transient loss measures are functions
of the initial value w that summarizes the history of the
system. For all the arrival processes considered, we observe
that as the initial value varies, the transient loss measures
will become greater than the corresponding steady-state
loss measures. That is, steady-state loss measures can underestimate
actual loss in transient state. As the buffer
size B or the service rate C increases, we can still observe
such underestimates. Although increasing B or C can decrease
the probability for w to take larger values, due to
the randomness of ON and OFF periods, it is still possible
for w to take values greater than some critical value
beyond which the steady-state loss measures will underestimate
transient loss. We even observe that if C (or B) is
large enough, then for all w 2 [0; B], the steady-state loss
measures are strictly less than the corresponding transient
loss measures, for example, see the case of C = 0:9 in Fig.
4.3 Upper Bounds on Transient Loss Performance
Figure 4: Transient loss versus steady-state loss for a fixed buffer size
and several values of the service rate C (including C = 0.9). (a) The loss probability of the
Markov process. (b) The expected loss ratio of the Markov process.
(c) The loss probability of LRD process I. (d) The expected loss ratio of
LRD process I. (e) The loss probability of LRD process II. (f) The expected
loss ratio of LRD process II.
We compute upper bounds (3) and (4) on transient
loss measures for different arrival processes with the same
buffer size B and service rate C = 0.8. Then we simulate
actual transient loss performance with 95% confidence
intervals. The results are shown in Table 1. We observe
that the upper bounds on transient loss performance for
all the arrival processes are quite tight. Of course, increasing B
will affect the tightness of the upper bounds, since
for a large B it is seldom that Q_{n-1} reaches B, while a
tight bound implies that the actual value of Q_{n-1} is close
to B. The service rate C and the arrival process will also
affect the tightness of the upper bounds. If we increase C,
then the actual loss probability or expected loss ratio will
decrease faster than the corresponding upper bound does.
Accordingly, the bound will become loose. However, the
decay of the actual loss measures also depends on the distributions
of S and T that determine the behavior of the
arrival process. In contrast, decreasing B or C will lead to
tighter upper bounds.
4.4 Transient Loss Performance Guarantee
Figure 5: Transient loss versus steady-state loss for C = 0.8 and several
values of the buffer size B (including B = 4). The horizontal axis represents w
scaled by B. (a) The loss probability of the Markov process.
(b) The expected loss ratio of the Markov process.
(c) The loss probability of LRD process I. (d) The expected
loss ratio of LRD process I. (e) The loss probability of LRD
process II. (f) The expected loss ratio of LRD process II.
Some applications need loss performance guarantee. As
shown in Subsection 4.2 (Fig. 4 and Fig. 5), transient
loss is significantly different from steady-state loss. Due
to the randomness of ON and OFF periods, w can take
any value between 0 and B and beyond some critical value
of w, transient loss measures P (w) and E(w) will exceed
the corresponding steady-state loss measures. Therefore,
loss performance guarantee based on steady-state loss measures
is not appropriate.
Suppose that the buffer size is B = 0.01. Note that for a
given C, a small B implies a small delay. For a given
loss performance parameter, we can determine the service
rate C properly for guaranteeing transient loss performance
from (5) and (6) based on the upper bounds.
We consider both the loss probability and the expected
loss ratio. To be specific, we let the loss performance parameter
ε be given. The reliability of transient
loss performance guarantee based on the upper bounds
can be verified by simulation. Table 2 shows the service
rate C determined according to the upper bounds and the
transient loss performance measured from simulation with
95% confidence intervals for different arrival processes. We
see that transient loss performance guarantee based on the
upper bounds is indeed reliable.
Table 1. Upper bounds on the loss measures versus the corresponding simulation results.

  Loss probability:
    arrival process     upper bound    simulation result
    Markov process      0.323          0.264 ± 4.590e-3
    LRD process I       0.294          0.176
    LRD process II      0.269          0.181 ± 4.593e-3

  Expected loss ratio:
    arrival process     upper bound    simulation result
    Markov process
    LRD process I       0.034          0.017 ± 5.477e-4
    LRD process II      0.028
Table 2. Transient loss performance measured from simulation and service
rate C determined based on the upper bounds. For each arrival process
(Markov process, LRD process I, LRD process II), the table lists, for the
target ε, the service rate C determined from the bounds together with the
measured loss frequency and the measured loss ratio.
Discussions
In this section we discuss briefly some implications of
our results, regarding the choice of loss performance measures
and the issue of network traffic modeling in the presence
of long-range dependence (LRD) [2, 5, 7, 8, 12, 14, 16].
We also outline a scheme for guaranteeing transient loss
performance for real-time communications.
5.1 Loss Performance and Modeling Issue
Our analysis and numerical results have shown that
transient loss is significantly different from steady-state
loss. In idealized steady state, the system has forgotten
its history. As a result, the steady-state loss measures are
only some constants. In contrast, transient loss measures
depend on the history of the system and therefore are vari-
ables. If the convergence towards steady state is not fast
enough or if the life cycle of the underlying physical process
is not large enough for the system to reach steady state,
then steady-state loss performance can fail to capture the
actual loss behavior in transient state. To use steady-state
loss measures to approximate transient loss performance,
it may be necessary to verify whether the system can approach
steady state fast enough within the life cycle of the
underlying physical process.
From the numerical results presented in Section 4, we
see that for a given arrival process, the loss probability is
greater than the expected loss ratio. This phenomenon
can be explained as follows. Let us focus on the nth
ON interval of an arrival process. According to the results
in Subsection 2.2, given Wn = w, the loss probability
equals P{S > τ(w)}, but the expected loss ratio is strictly smaller,
since ℓ(s) < 1. Note that
if the loss probability is equal to 1, then it only means
that loss of workload due to buffer overflow will occur for
certain during the ON interval and does not necessarily
imply that all workload arrived in the interval is lost. As
a consequence, for a given loss performance parameter ffl,
requiring a loss probability less than ffl means a more stringent
loss performance guarantee and hence needs more resources
compared with the case of requiring an expected
loss ratio less than the same ffl. Therefore, for loss-tolerant
applications, it is more appropriate to use the expected loss
ratio as the performance measure, which requires fewer
resources. With the loss probability as the performance
measure, resource allocation for loss-tolerant applications
can be over-conservative and lead to poor utilization of
resources.
Within the framework of steady-state performance anal-
ysis, it is important to study the marginal state distribution
of the arrival process. However, the marginal distribution
is not important to transient loss performance. What
is important is the distributions of random variables such
as S and T that determine the behavior of the arrival pro-
cess. Our numerical results have shown that different two-state
fluid-type arrival processes with the same marginal
state distribution but different distributions of S and T
can experience significantly different transient loss.
The above conclusion may be helpful to resolve the controversy
regarding the relevance of LRD in network traffic
[5, 14, 18, 8, 7]. The issue is whether it is valid to
use Markov processes such as the well-known Markov fluid
models for traffic engineering in the presence of LRD. This
issue has been extensively investigated under the assumption
that the traffic process is already in steady state. Now
let us consider a two-state fluid-type stochastic process in
transient state. If the process is Markovian, then the distributions
of S and T are exponential. However, if S and T
obey some heavy-tailed distributions, then the two-state
fluid is an LRD process. Although in steady state, the
Markov model may still be used [7, 8, 18], transient loss
behaviors of the Markov and LRD processes are significantly
different due to different distributions of S and T .
Our numerical results imply that LRD can have significant
impact on loss performance in transient state. Therefore,
Markov models are not valid for loss performance analysis
in the presence of LRD in transient state. As shown in
Fig. 3 (c) and (d), if a Markov process is used to model
an LRD process, then the Markov model can indeed underestimate
transient loss of the LRD process even though
both processes have the same marginal state distribution.
5.2 Transient Loss Performance Guarantee for Real-Time
Communications
Now let us consider a real-time connection in a high-speed
network. Real-time communications are typically
delay-sensitive but tolerate some fraction of traffic loss
specified by a given loss performance parameter. Suppose
that the maximum allowed delay at a link is D (time
units). A simple way to meet the delay requirement is to
allocate a buffer B = DC to the connection, where C is
the bandwidth to be determined. As a result, excessive
delay is turned into loss. Transient loss performance guarantee
then can be achieved by characterizing bandwidth
requirements properly such that the transient loss measure
will not exceed a given performance parameter. Since the
buffer size required by real-time communications is usually
very small due to the delay constraint, we can assume
that the buffer requirement can always be satisfied, so we
consider only bandwidth allocation.
For the traffic process carried by the connection, there
are two cases. In the first case, static bandwidth allocation
will still be relatively efficient in the sense that the
waste of bandwidth is not serious. In this case, we can
use a two-state fluid-type stochastic process to bound the
bit rate of the underlying traffic source where the state of
the two-state process represents the bit rate of the bounding
process and r0 < C < r1. Note that C < r1 implies
statistical multiplexing. By replacing B by DC in (5) or
(6), we can determine the bandwidth C according to the
performance parameter for guaranteeing transient loss performance
In the second case, however, static bandwidth allocation
will result in poor utilization of bandwidth due to
multiple time scales in the traffic process [6]. Therefore,
dynamic bandwidth allocation may be necessary to avoid
the waste of bandwidth. Dynamic bandwidth allocation
requires a mechanism for detecting transitions of the bit
rate of the underlying traffic process. In fact, the problem
of detecting the rate change has been extensively studied
and there have been different methods proposed to solve
the problem [9, 3]. In addition, dynamically increasing
or decreasing bandwidth requires a signaling mechanism.
Such signaling protocols exist. For example, the Dynamic
Connection Management scheme in the Tenet Protocol
Suite [15] or the ATM signaling protocol can be used for
this purpose [11]. Therefore, we can assume that the rate
transition can be effectively detected and dynamic bandwidth
increase or decrease can be effectively accomplished,
so in the following, we focus only on the issue of dynamic
bandwidth allocation.
The state space of a traffic process with multiple time
scales exhibits typically the following structure. The whole
state space consists of several subspaces. The transitions
between states within each subspace are much more frequent
than the transitions between states of any two sub-
spaces. Within each subspace, the underlying traffic process
can be modeled or bounded by a two-state fluid-type
stochastic process, with r1 and r0 representing the bounding bit
rates for that subspace. The two-state process depends on the sub-
space, that is, for different subspaces, the two-state processes
are different. For a given subspace, r0 bounds the
bit rates represented by some states of the subspace, and r1
bounds the bit rates represented by all other states of the
subspace that cannot be bounded by r0 . Now we can still
apply our results regarding two-state fluid-type stochastic
processes to allocate a bandwidth to the connection when
the underlying traffic process is in an arbitrarily given sub-
space. When the traffic process moves from a subspace to
another subspace, the bandwidth allocated to the connection
will also change accordingly. With such a scheme,
for multiple time-scale traffic, bandwidth utilization will
remain high while transient loss performance can still be
reliably guaranteed.
6 Conclusions and Future Work
Suppose that we model a physical process as an arrival
process to a finite buffer queueing system. If the life
cycle of the physical process is not large enough for the
queueing system to reach steady state, then it is necessary
to study loss performance of the system in transient state
rather than in idealized steady state. However, previous
work in transient analysis of queueing systems typically focuses
on Markov models. In this paper, we have presented
an analysis of transient loss performance for a class of finite
buffer queueing systems fed by two-state fluid-type
stochastic processes that may not be necessarily Marko-
vian. We have discussed how to define, compute and guarantee
transient loss performance and illustrated the analysis
with numerical results. Our work is useful since many
interesting phenomena in performance-oriented studies can
be modeled by two-state fluid-type stochastic processes,
and our results can also be used for multiple-state processes
with multiple time scales.
A lot of work remains to be done. For example, it may
be interesting to implement and demonstrate the scheme
of transient loss performance guarantee for real-time communications
outlined in Subsection 5.2 in an experimental
environment such as an ATM test bed. Another issue to
be explored further is to extend the analytical results to
more general stochastic processes.
--R
"Transient analysis of Markovian queueing systems and its application to congestion-control modeling"
"Long range dependence in variable bit rate video traffic"
"Predictive dynamic bandwidth allocation for efficient transport of real-time VBR video over ATM"
Computational Methods for Integral Equations
"Exper- imental queueing analysis with long-range dependent packet traffic"
"RCBR: a simple and efficient service for multiple time-scale traffic"
"On the relevance of long-range dependence in network traffic"
"What are the implications of long-range dependence for VBR- video traffic engineering?"
"Quick detection of changes in traffic statistics: application to variable rate com- pression"
"On computing per-session performance bounds in high-speed multi-hop computer networks"
"An empirical evaluation of adaptive QOS renegotiation in an ATM net- work"
"On the self-similar nature of ethernet traffic (extended version)"
"On defining, computing and guaranteeing quality-of-service in high-speed networks"
"On the use of fractional Brownian motion in the theory of connectionless networks"
"Dynamic management of guaranteed performance multimedia con- nections"
"Wide-area traffic: the failure of Poisson modeling"
"Transient analysis of cumulative measures of Markov models"
"The importance of long-range dependence of VBR video traffic in ATM traffic engineering: myths and realities"
--TR
Computational methods for integral equations
On defining, computing and guaranteeing quality-of-service in high-speed networks
On computing per-session performance bounds in high-speed multi-hop computer networks
On the self-similar nature of Ethernet traffic (extended version)
Dynamic management of guaranteed-performance multimedia connections
Wide-area traffic
RCBR
Experimental queueing analysis with long-range dependent packet traffic
What are the implications of long-range dependence for VBR-video traffic engineering?
The importance of long-range dependence of VBR video traffic in ATM traffic engineering
On the relevance of long-range dependence in network traffic | stochastic modeling;transient loss performance;queueing systems |
277888 | Queueing-based analysis of broadcast optical networks. | We consider broadcast WDM networks operating with schedules that mask the transceiver tuning latency. We develop and analyze a queueing model of the network in order to obtain the queue-length distribution and the packet loss probability at the transmitting and receiving side of the nodes. The analysis is carried out assuming finite buffer sizes, non-uniform destination probabilities and two-state MMBP traffic sources; the latter naturally capture the notion of burstiness and correlation, two important characteristics of traffic in high-speed networks. We present results which establish that the performance of the network is a complex function of a number of system parameters, including the load balancing and scheduling algorithms, the number of available channels, and the buffer capacity. We also show that the behavior of the network in terms of packet loss probability as these parameters are varied cannot be predicted without an accurate analysis. Our work makes it possible to study the interactions among the system parameters, and to predict, explain and fine tune the performance of the network. | Introduction
It has long been recognized that Wavelength Division Multiplexing
(WDM) will be instrumental in bridging the gap
between the speed of electronics and the virtually unlimited
bandwidth available within the optical medium. The wave-length
domain adds a significant new degree of freedom to
network design, allowing new network concepts to be devel-
oped. For a local area environment with a small number
of users, the WDM broadcast-and-select architecture has
emerged as a simple and cost-effective solution. In such a
LAN, nodes are connected through a passive broadcast star
coupler and communicate using transceivers tunable across
the network bandwidth.
This work was supported in part by NSF grant NCR-9701113.
A significant amount of research effort has been devoted
to the study of WDM architectures in recent years [4]. The
performance analysis of these architectures has been typically
carried out assuming uniform traffic and memoryless
arrival processes [16, 3, 5]. However, it has been established
that, in order to study correctly the performance of a net-
work, one needs to use models that capture the notion of
burstiness and correlation in the traffic stream, and which
permit non-uniformly distributed destination probabilities
[8, 9]. Two studies of optical networks that use non-Poisson
traffic models appeared recently in [13, 14]. The work in
[13] derives a stability condition for the HiPeR-l reservation
protocol, while [14] studies the effects of wavelength conversion
in wavelength routing networks. We are not aware of
any queueing-based studies of broadcast WDM networks.
In this paper we revisit the well known broadcast-and-
select WDM architecture in an attempt to investigate the
performance of broadcast optical networks under more realistic
traffic assumptions and finite buffer capacity. Specif-
ically, we develop a queueing-based decomposition algorithm
to study the performance of a network operating under schedules
that mask the transceiver tuning latency [6, 12, 1, 2,
11]. The analysis is carried out using Markov Modulated
Bernoulli Process (MMBP) arrival models that naturally
capture the important characteristics of traffic in high-speed
networks. Additionally, our analysis allows for unequal traffic
flows to exist between sets of nodes. Our work makes
it possible to study the complex interaction among the various
system parameters such as the arrival processes, the
number of available channels, and the scheduling and load
balancing algorithms. To the best of our knowledge, such
a comprehensive performance analysis of a broadcast WDM
architecture has not been done before.
The next section presents the queueing and traffic model
and provides some background information. The performance
analysis of the network is presented in Sections 3 and
4, numerical results are given in Section 5, and we conclude
the paper in Section 6.
System Model
In this section we introduce a model for the media access
control (MAC) layer in a broadcast-and-select WDM LAN.
The model consists of two parts, a queueing network and a
transmission schedule. We also present a traffic model to
characterize the arrival processes to the network.
l l
l l
l l
l l
node N
CCfixed
optical
filters
l (1)
l (N)
passive star
transmitting
queues receiving
queues
to users
node 1
node N
node 1
tunable
lasers
from users
Figure
1: Queueing model of a broadcast WDM architecture
with N nodes and C wavelengths
2.1 The Queueing Model
We consider an optical network architecture with N nodes
communicating over a broadcast passive star that can support
C wavelengths λ1, ..., λC, with C <= N (see Figure 1). Each
node is equipped with a laser that enables it to inject signals
into the optical medium, and a filter capable of receiving
optical signals. The laser at each node is tunable
over all available wavelengths. The optical filters, on the
other hand, are fixed to a given wavelength. Let λ(j) denote
the receiving wavelength of node j. Since C <= N,
a set Rc of nodes may be sharing a single wavelength λc: Rc = {j : λ(j) = λc}.
Each node consists of a transmitting side and a receiving
side, as Figure 1 illustrates. New packets (from users) arrive
at the transmitting side of a node i and are buffered at a finite
capacity queue, if the queue is not full. Otherwise, they
are dropped. As Figure 1 indicates, the buffer space at the
transmitting side of each node is assumed to be partitioned
into C independent queues. Each queue c, c = 1, ..., C,
at the transmitting side of node i contains packets destined
for the receivers which listen to wavelength -c . This arrangement
eliminates the head-of-line problem, and permits
a node to send several packets back-to-back when tuned to
a certain wavelength. We let B (in)
ic denote the capacity of
the transmitting queue at node i corresponding to channel
-c .
Packets buffered at a transmitting queue are sent on a
FIFO basis onto the optical medium by the node's laser. A
schedule (discussed shortly) ensures that transmissions on
a given channel will not collide, hence a transmitted packet
will be correctly received by its destination node. Upon
arriving at the receiving side of its destination node, a packet
is placed in another finite capacity buffer before it is passed
to the user for further processing. We let B (out)
j denote the
buffer capacity of the receiving queue at node j. Packets
arriving to find a full receiving queue are lost. Packets in a
receiving queue are also served on a FIFO basis.
Packets in the network have a fixed size and the nodes
operate in a slotted mode. Since there are N nodes but C -
N channels, the passive star (i.e., each of the C channels)
must run at a rate N/C times faster than the rate at which
users at each node can generate or receive packets (N/C need
not be an integer). In other words, the MAC-to-network
interface runs faster than the user-to-MAC interface. Thus,
we distinguish between arrival slots (which correspond to
the packet transmission time at the user rate) and service
slots (which are equal to the packet transmission time at the
channel rate within the network).
Figure 2: (a) Schedule for channel λc, and (b) detail corresponding to node 2.
Obviously, the duration of a service slot is equal to C/N times that of an arrival slot. All
N nodes are synchronized at service slot boundaries. Using
timing information about service slots and the relationship
between service and arrival slots one can derive the timing
of arrival slots. Hence, we assume that all users are also
synchronized at arrival slot boundaries.
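The timing relationship between the two slot types can be made explicit with a small sketch. It is an illustration only: with N nodes and C channels, a service slot lasts C/N arrival slots, and (consistent with the counting convention used later) a service slot is attributed to the arrival slot in which it ends; the function name is made up.

# A small sketch of the arrival-slot / service-slot timing relationship.
from math import ceil

def arrival_slot_of_service_slot(k, N, C):
    """Index (0-based) of the arrival slot in which service slot k ends."""
    end_time = (k + 1) * C / N          # end of service slot k, in arrival-slot units
    return ceil(end_time) - 1

# Example: N = 16 nodes, C = 4 channels -> 4 service slots per arrival slot.
print([arrival_slot_of_service_slot(k, 16, 4) for k in range(8)])
# [0, 0, 0, 0, 1, 1, 1, 1]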
2.2 Transmission Schedules
One of the potentially difficult issues that arises in a WDM
environment, is that of coordinating the various transmit-
ters/receivers. Some form of coordination is necessary because
(a) a transmitter and a receiver must both be tuned
to the same channel for the duration of a packet's transmis-
sion, and (b) a simultaneous transmission by one or more
nodes on the same channel will result in a collision. The
issue of coordination is further complicated by the fact that
tunable transceivers need a non-negligible amount of time
to switch between wavelengths.
Several scheduling algorithms have been proposed for the
problem of scheduling packet transmissions in such an environment
[6, 12, 1, 2, 11]. Although these algorithms differ
in terms of their design and operation, surprisingly the resulting
schedules are very similar. A model that captures
the underlying structure of these schedules is shown in Figure
2. In such a schedule, node i is assigned a ic contiguous
service slots for transmitting packets on channel λc. These
a_ic slots are followed by a gap of g_ic >= 0 slots during which
no node can transmit on λc. This gap may be necessary
to ensure that the laser at node i + 1 has sufficient time to
tune from wavelength λ_{c-1} to λc before it starts transmis-
sion. Note that in Figure 2 we have assumed that an arrival
slot is an integer multiple of service slots. This may not
be true in general, and it is not a necessary assumption for
our model. Observe also that, although a schedule begins
and ends on arrival slot boundaries, the beginning or end of
transmissions by a node does not necessarily coincide with
the beginning or end of an arrival slot (although they are,
obviously, synchronized with service slots).
We assume that transmissions by the transmitting queues
onto wavelength -c follow a schedule as shown in Figure 2.
This schedule repeats over time. Each frame of the schedule
consists of M arrival slots. Quantity a ic
can be seen as the number of service slots per frame
allocated to node i, so that the node can satisfy the required
quality of service of its incoming traffic intended for
wavelength -c . By fixing a ic , we indirectly allocate a certain
amount of the bandwidth of wavelength -c to node i.
This bandwidth could, for instance, be equal to the effective
bandwidth [7] of the total traffic carried by node i on
wavelength λc. In general, the estimation of the quantities
a_ic is part of the connection
admission algorithm [7], and it is beyond the scope of this
paper. We note that as the traffic varies, a ic may vary as
well. In this paper, we assume that quantities a ic are fixed,
since this variation will more likely take place over larger
scales in time.
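The per-channel schedule structure of Figure 2 can be represented directly as a list of allocation blocks and tuning gaps. The sketch below is a hypothetical illustration of that structure; the names a, g and the frame-length check are assumptions, not the paper's notation.

# Hypothetical sketch of a per-channel schedule: node i gets a[i] contiguous service
# slots, followed by a gap of g[i] slots for transceiver tuning.

def build_channel_schedule(a, g):
    """Returns a list of (node, first_slot, last_slot) plus the frame length in service slots."""
    schedule, t = [], 0
    for i, (ai, gi) in enumerate(zip(a, g)):
        schedule.append((i, t, t + ai - 1))
        t += ai + gi                      # advance past the block and the tuning gap
    return schedule, t

def fits_in_frame(frame_len_service_slots, M, N, C):
    # M arrival slots correspond to M * N / C service slots
    return frame_len_service_slots <= M * N / C

sched, length = build_channel_schedule(a=[2, 2, 2, 2], g=[1, 1, 1, 1])
print(sched, length, fits_in_frame(length, M=3, N=16, C=4))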
2.3 Traffic Model
The arrival process to each transmitting queue of the net-work
is characterized by a two-state Markov Modulated Bernoulli
Process (MMBP), hereafter referred to as 2-MMBP.
A 2-MMBP is a Bernoulli process whose arrival rate varies
according to a two-state Markov chain. It captures the
notion of burstiness and the correlation of successive interarrival
times, two important characteristics of traffic in
high-speed networks. For details on the properties of the
2-MMBP, the reader is referred to [10]. (We note that the
algorithm for analyzing the network was developed so that
it can be readily extended to MMBPs with more than two
states.)
We assume that the arrival process to transmitting queue
(i, c), i = 1, ..., N, c = 1, ..., C, is given by a 2-MMBP
characterized by the transition probability matrix
    [ q_ic^(00)  q_ic^(01) ; q_ic^(10)  q_ic^(11) ]
and by A_ic = diag( α_ic^(0), α_ic^(1) ).    (1)
In (1), q_ic^(kl), k, l = 0, 1, is the probability that the 2-MMBP
will make a transition to state l, given that it is
currently at state k. Obviously, q_ic^(k0) + q_ic^(k1) = 1.
Also, α_ic^(0) (α_ic^(1)) is the probability that an arrival will occur
in a slot at state 0 (1). Transitions between states of the
2-MMBP occur only at the boundaries of arrival slots. We
assume that the arrival process to each transmitting queue is
given by a different 2-MMBP. From (1) and [10], the steady-state
arrival probability for the arrival process to this queue is
    γ_ic = ( q_ic^(10) α_ic^(0) + q_ic^(01) α_ic^(1) ) / ( q_ic^(01) + q_ic^(10) ).    (2)
We denote by r_ij the probability that a packet generated
at node i will have j as its destination node. We will refer
to the r_ij as the routing probabilities; this description implies
that the routing probabilities can be node-dependent and
non-uniformly distributed. The destination probabilities of
successive packets are not correlated. That is, in a node,
the destination of one packet does not affect the destination
of the packet behind it. Given these assumptions, the
probability that a packet generated at node i will have to
be transmitted on wavelength λc is
    r_ic = Σ_{j ∈ Rc} r_ij.    (3)
Obviously, the relationship between r_ic and γ_ic is that γ_ic is the
fraction r_ic of the total rate at which node i generates new packets.
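The traffic model of this section can be exercised with a short simulation sketch. It is offered only as an illustration: the parameter names q, alpha and r_i and the splitting rule are assumptions mirroring (1) and (3), not code from the paper.

# Generating 2-MMBP arrivals at a node and splitting them over channels
# according to per-channel routing probabilities (illustrative sketch).
import random

def mmbp2_arrivals(q, alpha, slots, rng=random):
    """Yield 0/1 arrivals per arrival slot from a two-state MMBP."""
    z = 0
    for _ in range(slots):
        yield 1 if rng.random() < alpha[z] else 0
        z = 1 if rng.random() < q[z][1] else 0   # state transition at the slot boundary

def split_to_channels(arrivals, r_i, rng=random):
    """Count packets per channel, channel chosen with probabilities r_i[c]."""
    counts = [0] * len(r_i)
    for a in arrivals:
        if a:
            u, acc = rng.random(), 0.0
            for c, rc in enumerate(r_i):
                acc += rc
                if u < acc:
                    counts[c] += 1
                    break
    return counts

q = [[0.9, 0.1], [0.2, 0.8]]          # state transition probabilities
alpha = [0.1, 0.7]                    # arrival probability in states 0 and 1
print(split_to_channels(mmbp2_arrivals(q, alpha, 10_000), r_i=[0.5, 0.3, 0.2]))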
Queueing Analysis
In this section we analyze the queueing network shown in
Figure
1, which represents the tunable-transmitter, fixed-
receiver optical network under study.
Figure 3: Queueing sub-network for wavelength λc.
The arrival process to
the access of the transmitting queues to the wavelengths
is governed by a schedule similar to the one described in
Section 2.2. We analyze this queueing network in order to
obtain the queue-length distribution in a transmitting or
receiving queue, from which performance measures such as
the packet-loss probability can be obtained.
3.1 Transmitting Side Analysis
We first note that the original queueing network can be decomposed
into C sub-networks, one per wavelength, as in
Figure
3. For each wavelength -c , the corresponding sub-network
consists of N transmitting queues, and all the receiving
queues that listen to wavelength -c . Each transmitting
queue i of the sub-network is the one associated with
wavelength -c in the i-th node. These transmitting queues
will transmit to the receiving queues of the sub-network over
wavelength -c . Note that, due to the independence among
the C queues at each node, the transmission schedule (i.e.,
the fact that different nodes transmit on the same wave-length
at different times), and the fact that each receiver
listens to a specific wavelength, this decomposition is exact.
In view of this decomposition, it suffices to analyze a single
sub-network, since the same analysis can be applied to all
other sub-networks.
Consider now the sub-network for wavelength -c . We
will analyze this sub-network by decomposing it into individual
transmitting and receiving queues. As discussed in
the previous section, each transmitting queue i of the sub-network
is only served for a ic consecutive service slots per
frame. During that time, no other transmitting queue is
served. Transmitting queue i is not served in the remaining
slots of the frame. In view of this, there is no dependence
among the transmitting queues of the sub-network, and consequently
each one can be analyzed in isolation in order to
obtain its queue-length distribution. (Each receiving queue
will also be considered in isolation in Section 3.2.)
From the queueing point of view, the queueing network
shown in Figure 3 can be seen as a polling system in discrete
time. Despite the fact that polling systems have been extensively
analyzed, we note that very little work has been done
within the context of discrete time (see, for example, [18]).
In addition, this particular problem differs from the typical
polling system since we consider receiving queues, which are
not typically analyzed in polling systems.
a ic
(a)
Frame
l c
(b)
observation instant
transition instant
service completion
instant
2-MMBP state
arrival instant
Figure
4: (a) Service period of transmitting queue i on channel
-c , and (b) detail showing the relationship among service
completion, arrival, 2-MMBP state transition, and observation
instants within a service and an arrival slot
3.1.1 The Queue-Length Distribution of a Transmitting
Queue
Consider transmitting queue i of the sub-network for -c in
isolation. This queue receives exactly a ic service slots on
wavelength -c , as shown in Figure 4(a). The block of a ic
service slots may not be aligned with the boundaries of the
arrival slots. For instance, in the example shown in Figure
4(a), the block of a ic service slots begins at the second service
slot of arrival slot x \Gamma 1, and it ends at the end of the
second service slot in arrival slot x
number within a frame.
For each arrival slot, define v ic (x) as the number of service
slots allocated to transmitting queue i, that lie within
arrival slot x 1 . Then, in the example in Figure 4(a), we
0 for all other x 0 . Obviously, we have
We analyze transmitting queue i by constructing its underlying
Markov chain embedded at arrival slot boundaries.
The order of events is as follows. The service (i.e., trans-
mission) completion of a packet occurs at an instant just
before the end of a service slot. An arrival may occur at
an instant just before the end of an arrival slot, but after
the service completion instant of a service slot whose end
is aligned with the end of an arrival slot. The 2-MMBP
describing the arrival process to the queue makes a state
transition immediately after the arrival instant. Finally, the
Markov chain is observed at the boundary of each arrival
slot, after the state transition by the 2-MMBP. The order
of these events is shown in Figure 4(b).
The state of the transmitting queue is described by the
tuple (x;
In
Figure
4, we assume that each arrival slot contains an integral
number of service slots. If this is not the case, v ic (x) is defined as
the number of service slots that end within arrival slot x (i.e., if there
is a service slot that lies partially within arrival slots x and x
will be counted in v ic
ffl x represents the arrival slot number within a frame
ffl y indicates the number of packets in the transmitting
queue
ic ), and
ffl z indicates the state of the 2-MMBP describing the
arrival process to this queue, that is,
It is straightforward to verify that, as the state of the
queue evolves in time, it defines a Markov chain. Let \Phi
denote modulo-M addition, where M is the number of arrival
slots per frame. Then, the transition probabilities out
of state (x; y; z) are given in Table 1. Note that, the next
state after (x; always has an arrival slot number equal
to x \Phi 1. In the first row of Table 1 we assume that the 2-
MMBP makes a transition from state z to state z 0 (from (1),
this event has a probability q (zz 0 )
ic of occurring), and that no
packet arrives to this queue during the current slot (from
(1) and (3), this occurs with probability
at most v ic are serviced during arrival slot
x \Phi 1, and since no packet arrives, the queue length at the
end of the slot is equal to maxf0; y 1)g. In the
second row of Table 1 we assume that the 2-MMBP makes
a transition from state z to state z 0 and a packet arrives to
the queue. This arriving packet cannot be serviced during
this slot, and has to be added to the queue. Finally, the
expression for the new queue length ensures that it will not
exceed the capacity B (in)
ic of the transmitting queue.
The probability transition matrix of this Markov chain is
straightforward to derive from Table 1. This matrix defines
a p-cyclic Markov chain [15], and therefore it can be solved
using any of the techniques for p-cyclic Markov chains in
[15, ch. 7]. We have used the LU decomposition method in
[15] to obtain the steady state probability - ic (x; z) that at
the end of arrival slot x, the 2-MMBP is in state z and the
transmitting queue has y packets. The steady-state probability
that the queue has y packets at the end of slot x,
independent of the state of the 2-MMBP is:
Finally, we note that all of the results obtained in this
subsection can be readily extended to MMBP-type arrival
processes with more than two states.
3.2 Receiving Side Analysis
Consider the sub-network for wavelength -c in Figure 3, and
observe that the arrival process to the receiving queues sharing
-c is the combination of the departure processes from
the transmitting queues corresponding to -c . An interesting
aspect of the departure process from the transmitting
queues is that for each frame, during the sub-period a ic we
only have departures from the i-th queue. This period is
then followed by a gap g ic during which no departure occurs.
This cycle repeats for the next transmitting queue. Thus,
in order to characterize the overall departure process offered
as the arrival process to these receiving queues, it suffices to
characterize the departure process from each transmitting
queue, and then combine them. (We note that this overall
departure process is quite different from the typical superposition
of a number of departure processes into a single
stream, where, at each slot, more than one packet may be
departing.) The overall departure process is completely defined
given the queue-length distribution of all transmitting
Table
1: Transition probabilities out of state (x; y; z) of the Markov chain
Current State Next State Transition Probability
ic ff (z)
ic
queues in the sub-network (which may be obtained using
the analysis in Section 3.1), since then the probability that
a packet will be transmitted on channel -c in any given service
slot is known.
However, the individual arrival processes to each of the
receiving queues listening on -c are not independent. Specif-
ically, if j and j 0 are two receivers on -c , and there is a
transmission from transmitting queue i to receiving queue
j in a given service slot, then there can be no arrival to
receiving queue j 0 in the same service slot. We will nevertheless
make the assumption that these arrival processes
are indeed independent, and that each is an appropriately
thinned (based on the routing probabilities) version of the
departure process from the transmitting queues. Note that
this is an approximation only when there are multiple nodes
with receivers fixed on channel -c . This assumption allows
us to decompose the sub-network of Figure 3 into individual
receiving queues and to analyze each of them in isolation 2 .
3.2.1 The Queue-Length Distribution of a Receiving Queue
As in the previous section, we obtain the queue-length distribution
of receiving queue j at arrival slot boundaries. During
an arrival slot x a packet may be transmitted to the user
from the receiving queue. However, during slot x, there may
be several arrivals to this receiving queue from the transmitting
queues. Let (x; w) be the state associated with receiving
queue j, where
ffl x indicates the arrival slot number within the frame
ffl w indicates the number of packets at the receiving
queue
We assume the following order of events. A packet will
begin to depart from the receiving queue at an instant immediately
after the beginning of an arrival slot and the departure
will be completed just before the end of the slot.
A packet from a transmitting queue arrives at an instant
just before the end of a service slot, but before the end-of-
departure instant of an arrival slot whose end is aligned with
the end of the service slot. Finally, the state of the queue
is observed just before the end of an arrival slot and after
the arrival associated with the last service slot has occurred
(see
Figure
5(b)).
We also note that the approach of analyzing each receiving queue
in isolation gives correct results for the individual receiving queues;
after all, in steady-state, the probability that a packet transmitted
by node i on - c will have j as its destination will equal the routing
This approach is an approximation only when one attempts
to combine results from individual receiving queues to obtain
the overall performance for the network. It is possible to apply techniques
to adjust for this approximation when aggregating individual
results [17]. We will not consider such techniques here, instead we
will only concentrate on individual queues.
(a)
a i+1,c
a ic
Frame
l c
(b)
instant at which
departure starts
observation instant
arrival instant
instant at which
departure ends
x
x
Figure
5: (a) Arrivals to receiving queue j from transmitting
queues i and detail showing the relationship
of departure, arrival, and observation instants
Let u j (x) be the number of service slots of any transmitting
queue on channel -c within arrival slot x. We have:
where v ic (x) is as defined in (4). Quantity u j (x) represents
the maximum number of packets that may arrive to
receiving queue j within slot x. In the example of Figure
5(a) where we show the arrival slots during which packets
from transmitting queues i and may arrive to
receiving queue j, we have: u
Observe now that (a) at each state transition x advances
by one (modulo-M ), (b) exactly one packet departs from
the queue as long as the queue is not empty, (c) a number
packets may be transmitted from the
transmitting queues to receiving queue j within arrival slot
x \Phi 1, and that (d) the queue capacity is B (out)
. Then, the
transition probabilities out of state (x; w) for this Markov
chain can be obtained from Table 2.
In
Table
is the probability that transmitting
queue packets to receiving queue j given that
the system is at the end of arrival slot x (in other words, it
is the probability that s i packets are transmitted within slot
To obtain L
ij as the conditional
3 Since in most cases only one or two transmitting queues will transmit
to the same channel within an arrival slot (refer also to Figure
2), the summation and product in the expression in the last column
of
Table
2 do not necessarily run over all N values of i, only over one
Table
2: Transition probabilities out of state (x; w) of the Markov chain
Current State Next State Transition Probability
probability that a packet is destined for node j, given that
the packet is destined to be transmitted on -c , the receive
wavelength of node j:
r ic
as the conditional probability of having
y packets at the i-th transmitting queue given that the
system is observed at the end of slot x:
Then, for r 0
is given by
ic
Expression can be explained by noting that transmitting
queue i will transmit s i packets to receiving queue j during
arrival slot x \Phi 1 if (a) v ic
packets in its transmitting queue for -c at the beginning of
the slot (equivalently, at the end of slot x), and (c) exactly
s i of minfy; v ic (x \Phi 1)g packets that will be transmitted by
this queue in this arrival slot are for receiver j. Expression
represents the "thinning" of the arrival processes to the
various receiving queues of the sub-network using the r 0
routing probabilities, and discounts the correlation among
arrival streams to the different queues. Expression (9) is
the crux of our approximation for the receiving side of the
network.
If r 0
1, in which case j is the only node listening
on wavelength -c , the expression for must be
modified as follows (recall that there is no approximation in
this case):
ic
Expressions and (10) are based on the assumption
that v ic
ic which we believe is a reasonable one.
In the general case, quantity v ic (x \Phi 1) in both expressions
must be replaced by minfv ic
ic g.
The transition matrix of the Markov chain defined by the
evolution of the state (x; w) of receiving queue j also defines
a p-cyclic Markov chain. We have used the LU decomposition
method as prescribed in [15] to obtain - j (x; w), the
steady-state probability that receiving queue j has w packets
at the end of slot x.
or two values of i. Thus, this expression can be computed very fast,
not in exponential time as implied by the general form presented in
the table.
4 Packet-Loss Probability
We now use the queue-length distributions - ic (x; y) and
derived in the previous section, to obtain the packet-loss
probability at the transmitting and receiving queues.
4.1 The Packet-Loss Probability at a Transmitting Queue
Let\Omega ic be the packet-loss probability at the c-th transmitting
queue of node i, i.e., the probability that a packet arriving
to that queue will be
lost.\Omega ic can be expressed as:
lost per frame at queue c; node i]
E[# arrivals per frame at queue c; node i]
The expectation in the denominator can be seen to be
equal to M fl ic , where fl ic is the steady-state arrival probability
of the arrival process to this queue from (2). To
obtain the expectation in the numerator, let us refer to Figure
which shows the service completion, arrival, and
observation instants within slot x. We observe that, due to
the fact that at most one packet may arrive in slot x, if the
number v ic (x) of slots during which this queue is serviced
within arrival slot x is not zero (i.e., v ic (x) ? 0), no arriving
packet will be lost. Even if the c-th queue at node i is full at
the beginning of slot x, v ic packets will be serviced
during this slot, and the order of service completion and
arrival instants in Figure 4(b) guarantees that an arriving
packet will be accepted. On the other hand, if v ic
for slot x, then an arriving packet will be discarded if and
only if the queue is full at the beginning of x (equivalently,
at the end of the slot before x). Since the 2-MMBP can
be in one of two states, we have that the numerator of (11)
is equal to
x:v ic (x)=0
z=0 ff (z)
\Psi denotes regular subtraction with the exception that, if
and the summation runs over
all x for which v ic Using these expressions and the
fact that - ic
M for all x, we obtain an expression for
\Omega ic as follows:
x:v ic (x)=0
z=0 ff (z)
4.2 The Packet-Loss Probability at a Receiving Queue
The packet-loss probability at a receiving queue is more complicated
to calculate, since we may have multiple packet
arrivals to a given queue within a single arrival slot (re-
fer to
Figure
5(a)). Let us define the conditional
probability that n packets will be lost at receiving
queue j given that the current arrival slot is x. A receiving
queue will lose n packets in slot x if (a) the queue had
w packets at the beginning of slot x, and
(b) exactly enough packets arrived during slot x to exceed the
space left in its buffer of size B^(out) by n. We
can then write this probability, in a manner similar to (8), in terms
of the probability that a given number of packets arrive to j in slot x;
this gives expression (13). The last
probability in (13) can be easily obtained using (9) or (10),
as in the last column of Table 2.
Note that at most u_j(x) packets may arrive (and get
lost) in arrival slot x. Using (13), we can then compute the
expected number of packets lost in slot x as:

E[number of packets lost at j | x] = Σ_{n=1..u_j(x)} n · Pr[n pkts lost at j | x]   (14)

The expected number of arrivals to receiving queue j in slot
x can be computed as:

E[# arrivals to j | x] = Σ_{s=1..u_j(x)} s · Pr[s pkts arrive to j | x]   (15)
Finally, the probability Ω_j that an arriving packet to node
j will be lost regardless of the arrival slot x can be found as
follows:

Ω_j = Σ_{x=0..M−1} E[number of lost packets at j | x] / Σ_{x=0..M−1} E[number of arrivals to j | x]   (16)
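A corresponding Python sketch for the receiving side combines (14)-(16). The conditional distributions are assumed to be available as lists indexed by the arrival slot; this representation and the function name are ours, not the paper's.

def receiving_plp(loss_dist, arrival_dist, M):
    """loss_dist[x][n]    -- Pr[n packets lost at the queue | slot x], from (13)
    arrival_dist[x][s] -- Pr[s packets arrive to the queue | slot x], from (9)/(10)
    M                  -- frame length in arrival slots
    """
    expected_lost = sum(n * p for x in range(M)
                        for n, p in enumerate(loss_dist[x]))         # (14), summed over x
    expected_arrivals = sum(s * p for x in range(M)
                            for s, p in enumerate(arrival_dist[x]))  # (15), summed over x
    return expected_lost / expected_arrivals                         # (16)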
5 Numerical Results
We now apply our analysis to a network with N = 16 nodes.
The arrival process to each of the transmitting queues of the
network is described by a different 2-MMBP. The 2-MMBPs
selected exhibit a wide range of behavior in terms of two
important parameters, the mean interarrival time and the
squared coefficient of variation of the interarrival time. The
routing probabilities we used are:

r_0(j) = 0.1 if j = 1, and r_0(j) = 0.9/15 = 0.06 otherwise.   (17)

That is, receiver 1 is a hot spot, receiving 10% of the total
traffic, while the remaining traffic is evenly distributed
to the other 15 nodes.
generated by users of the network is 1.98 packets per arrival
slot. Most of the traffic is generated at node 1, as the rate
of new packets generated at this node is 0.583 packets per
arrival slot. The packet generation rate decreases monotonically
for nodes 2 to 16. For load balancing purposes, we have
allocated one of the C channels exclusively to node 1, since
this node receives a considerable fraction of the total traffic.
The remaining C − 1 channels are shared by the other 15
receivers. The allocation of the receivers to the remaining
wavelengths was performed in a round-robin fashion, and is
given in Table 3 for C = 4, 6, and 8.
The quantities a ic of the schedule, i.e., the number of
packets to be transmitted by node i onto channel -c per
frame (refer to Section 2.2 and Figure 2) were fixed to be as
close to (but no less than) 0.5 arrival slots as possible. Recall
that, while the length of an arrival slot is independent of C
and is taken as our unit of time, the length of a service slot
Table 3: Channel sharing for C = 4, 6, and 8.
depends on the number of channels. In cases in which 0.5
arrival slots is not an integral number of service slots, the
value a_ic is rounded up to the next integer to ensure that
every queue is granted at least 0.5 arrival slots of service
during each frame 4 (i.e., a_ic ≥ 0.5 N/C service slots).
In constructing
the schedules, we have assumed that the time it takes a laser
to tune from one channel to another is equal to one arrival
slot 5 . Finally, for all of the results we present in this section
we have let all transmitting and receiving queues have the
same buffer capacity B (i.e., B^(in) = B^(out) = B), in order to reduce
the number of parameters that need to be controlled.
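The exact assignment of Table 3 is not reproduced here, but the following Python sketch of a round-robin assignment (an assumption consistent with the description above) shows how the load seen by transmitting queue 2 changes with C.

N, TOTAL_RATE, HOT_SPOT_SHARE = 16, 1.98, 0.10

def share_channels(num_channels):
    # wavelength 1 is dedicated to the hot-spot receiver 1;
    # receivers 2..16 are assigned to wavelengths 2..C in round-robin order
    assignment = {1: [1]}
    for idx, receiver in enumerate(range(2, N + 1)):
        channel = 2 + idx % (num_channels - 1)
        assignment.setdefault(channel, []).append(receiver)
    return assignment

for C in (4, 6, 8):
    receivers_on_lambda2 = share_channels(C)[2]
    per_receiver = (1.0 - HOT_SPOT_SHARE) / (N - 1)     # each of the 15 receivers gets 6%
    print(C, receivers_on_lambda2, len(receivers_on_lambda2) * per_receiver)
    # -> 5 receivers (30% of the traffic) for C = 4, 3 receivers (18%) for C = 6 and C = 8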
In Figure 6 we show the part of the schedule corresponding
to channel λ_1 for three different values of the number
of channels, C = 4, 6, and 8; the parts of the schedules for
other channels are very similar. The schedules will help explain
the performance results to be presented shortly. Since
the number of nodes is N = 16, for C = 4 each arrival slot
is exactly four service slots long. Each node is allocated
0.5 arrival slots, or 2 service slots, for transmissions on each
channel, as Figure 6(a) illustrates. For C = 4 the network is
bandwidth limited [12], that is, the length of the schedule is
determined by the bandwidth requirements on each channel
(= 16 × 0.5 = 8 arrival slots), not the transmission and tuning
requirements of each node (= 4 × 0.5 + 4 × 1 = 6 arrival
slots). The schedule for C = 6, Figure 6(b), is an example
where there is a non-integral number of service slots within
each arrival slot. More precisely, one arrival slot contains
16/6, or 2 2/3, service slots. Each node is assigned two service
slots (a_ic = 2) for transmissions on each channel, since
one service slot is less than 0.5 arrival slots. For C = 6 the
network is again bandwidth limited, and the total schedule
length becomes 32 service slots, or 12 arrival slots.
Finally, when C = 8, one arrival slot contains exactly two service
slots, and the corresponding schedule is shown in Figure
6(c). However, in this case the network is tuning limited [12],
i.e., the node transmission and tuning requirements determine
the schedule length. Since each node has to transmit
for 0.5 arrival slots on each channel, and to tune to each
of the 8 channels (recall that the tuning time is one arrival
4 Other schemes for allocating a ic have been implemented, including
setting a ic proportional to r ic , setting a ic proportional to
ic g, and setting a ic to the effective bandwidth [7] of node
i's total traffic carried on channel λ_c. Although the packet loss probability
results do depend on the actual values of a ic , the overall conclusions
drawn regarding our analysis are very similar. Thus, we have
decided to include only the simplest case here.
5 Again, due to the synchronous nature of this network, if one arrival
slot is not an integral number of service slots, the number of
service slots for which a transmitter cannot transmit is rounded up
to the next integer, thereby setting the required time for tuning to
some value slightly greater than one arrival slot. As a result, the
tuning time is always ⌈N/C⌉ service slots.
Figure 6: Transmission schedules for λ_1 and C = 4, 6, and 8 (parts (a), (b), and (c); the rows give the transmitting queue number, blank entries denote unused slots, and arrival and service slots are marked on the axes).
slot), the total schedule length is 8 × 0.5 + 8 × 1 = 12 arrival
slots. But the transmissions on each channel only take
16 × 0.5 = 8 arrival slots; the remaining 4 arrival slots in
Figure 6(c) are not used.
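The distinction between bandwidth-limited and tuning-limited schedules can be checked with the short Python sketch below. It assumes the simple allocation described above (at least 0.5 arrival slots of transmission per node and channel, rounded up to whole service slots, and a tuning time of one arrival slot, also rounded up to whole service slots); under these assumptions it reproduces the frame lengths of 8, 12, and 12 arrival slots for C = 4, 6, and 8.

import math

def schedule_length(N, C, target=0.5):
    service_slot = C / N                                    # length of a service slot in arrival slots
    a = math.ceil(target / service_slot) * service_slot     # transmission per node and channel (footnote 4)
    tuning = math.ceil(1.0 / service_slot) * service_slot   # tuning time (footnote 5)
    bandwidth_req = N * a                                   # traffic demand on each channel
    node_req = C * (a + tuning)                             # transmission plus tuning per node
    limited = "bandwidth" if bandwidth_req >= node_req else "tuning"
    return max(bandwidth_req, node_req), limited

for C in (4, 6, 8):
    print(C, schedule_length(16, C))    # (8.0,'bandwidth'), (12.0,'bandwidth'), (12.0,'tuning')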
Figures
7-10 show the packet loss probability (PLP) at
four different transmitting queues as a function of the buffer
size B for C = 4, 6, and 8. We only show results for two nodes,
namely, the node with the highest traffic intensity (node 1)
in
Figures
7 and 9, and a representative intermediate node
(node 8) in Figures 8 and 10. We also consider only transmitting
queues 1 and 2 (out of C) at each node. Queue 1
at each node is for traffic to be carried on wavelength λ_1,
which is dedicated to receiver 1 (the "hot spot"). Thus, the
amount of traffic received by this queue does not change as
we vary the number of channels, since the first channel is
dedicated to receiver 1. Queue 2 at each node is for traffic
to be carried on wavelength λ_2. The amount of traffic received
by this queue will decrease as the number of channels
increases, since channel λ_2 will be shared by fewer receivers.
The behavior of queue 2 is representative of the behavior of
the other transmitting queues.
Figure
7 plots the
PLP Ω_{1,1} (i.e., the PLP at transmitting
queue 1 of node 1) as a function of the buffer size B for
C = 4, 6, and 8. As expected, the PLP decreases as the buffer
size increases. For a given buffer size, however, the PLP
changes dramatically and counter to intuition, as the number
C of channels is varied. Specifically, the PLP increases
with C; that is, adding more channels results in worse per-
formance. When B is 10, there is roughly nine orders of
magnitude difference between the PLP for
and three orders of magnitude difference between
As we discussed above, the traffic load of this queue
does not change with C; the queue receives the traffic for
destination 1, which is always 10% of the total traffic generated
at node 1 (see (17)). What does change as C varies
is the service rate of the queue, and this change can help
explain the results in Figure 7. Referring to Figure 6, we
note that when C = 4, each frame of the schedule is 8
arrival slots long, and a_{1,1} = 2 service slots. Hence, at most 8 packets
may arrive to this queue during a frame while as many as 2
packets will be serviced. When C = 6 the frame grows to 12 arrival
slots while the service per frame remains 2 packets,
indicating a decrease in the service rate of the queue. Similarly,
for C = 8 there is a further decrease in the
available service per frame for this queue. This decrease is
the reason behind the sharp increase in PLP with C in Figure
7. Very similar behavior is observed in Figure 8 where we
plot Ω_{8,1}, the PLP at transmitting queue 1 of node 8. The
main difference between Figures 7 and 8 is in the absolute
values of PLP. The very small PLP numbers
are due to the fact that the amount of traffic entering queue
1 of node 8 (0.004 packets per arrival slot) is significantly
smaller than the traffic entering the same queue of node 1
(0.058 packets per arrival slot - recall that the traffic sources
were chosen so that the packet generation rate decreases as
the node index increases). In fact, for the larger buffer sizes
considered, our analysis gave PLP values that
are essentially zero; these values are not plotted in Figure
8 because we believe that they are the result of round-off
errors.
Figures 9 and 10 plot the PLP at transmitting queue 2
of nodes 1 and 8, respectively, against the buffer size. From
Table
3 we note that the traffic received by this
queue decreases from 30% of the overall network traffic when
C = 4 to 18% when C = 6 or 8; this decrease is due to the
fact that 5 receivers share wavelength λ_2 when C = 4, while
only 3 receivers share it when C = 6 or 8. Thus, the PLP
behavior at this queue will depend not only on the change
in the service rate as C varies, but also on the change in the
amount of traffic received due to addition of new channels.
In
Figure
9, and for a given buffer size, the PLP decreases as
C increases from 4 to 6 (compare to Figure 7). In this case,
the decrease in the traffic arrival rate (from an average rate
of 0.175 to 0.105 packets per arrival slot) more than offsets
the decrease in the service rate that we discussed above. On
the other hand, the PLP values for C = 4 are less than those
for C = 6 in Figure 10 (transmitting queue 2 of node 8) due
to the fact that the decrease in the offered load (from 0.012
to 0.007 packets per arrival slot) is not substantial enough
to offset the decrease in the service rate; still, this increase
is less severe than the one in Figure 8 where there was no
decrease in the arrival rate. As C increases to 8 there is no
change in the offered traffic for either queue; as expected,
the PLP rises with the decrease in the service rate.
Finally, Figures 11 and 12 plot the PLP at receiving
queues 1 and 8, respectively. Receiving queue 8 is representative
of queues 2 through 16 in that it receives 6% of
the total network traffic (see (17)). Again, the PLP decreases
with increasing buffer size. Also, the lower values of
PLP in
Figure
12 compared to Figure 11 reflect the fact that
only 6% of the total traffic is destined to receiving queue 8,
as opposed to 10% for the hot spot queue 1. What is surprising
in Figures 11 and 12, however, is that, for a given
buffer size, the PLP decreases as the number C of channels
increases. This behavior is in sharp contrast to the one we
observed in the transmitting side case, and can be explained
as follows. First, higher losses at the transmitting queues for
larger values of C means that fewer packets will make it to
the receiving queues, thus losses will be lower at the latter.
But the dominant factor in the PLP behavior in Figures
11 and 12 is the change in the service rate of the receiving
queues as C varies (refer to Figure 6). For C = 4, as many
as 32 packets may arrive to each receiving queue within a
frame, and 8 packets may be served (i.e., transmitted to the
users). When C = 6, the number of potential arrivals in a
frame remains at 32, but the frame is 12 arrival slots long,
meaning that up to 12 packets may be served, leading to a
drop in the PLP. Finally, for C = 8 the number of packets
served in a frame is the same as for C = 6, but the maximum
number of packets that may arrive becomes only 16,
explaining the dramatic drop in the PLP.
6 Concluding Remarks
In this paper we introduced a model for the media access
control (MAC) layer of optical WDM broadcast-and-select
LANs. The model consists of a queueing network of transmitting
and receiving queues, and a schedule that masks the
transceiver tuning latency. We developed a decomposition
algorithm to obtain the queue-length distributions at the
transmitting and receiving queues of the network. We also
obtained analytic expressions for the packet-loss probability
at the various queues. Finally, we presented a study case
to illustrate the significance of our work in predicting and
explaining the performance of the network in terms of the
packet-loss probability.
Overall, the results presented in this paper indicate that
the performance of a WDM optical network can exhibit behavior
that is counter to intuition, and which may not be
predictable without an accurate analysis. The performance
curves shown also establish that the packet-loss probability
in such an environment depends strongly on the interaction
among the scheduling and load balancing algorithms, the
routing probabilities, and the number of available channels.
Our work has made it possible to investigate the behavior
of optical networks under more realistic assumptions regarding
the traffic sources and the system parameters (e.g., finite
buffer capacities) than was possible before, and it represents
a first step towards a more thorough understanding of network
performance in a WDM environment. Our analysis
also suggests that simple slot allocation schemes similar to
the ones used for our study case are not successful in utilizing
the additional capacity provided by an increase in the
number of channels. The specification and evaluation of
more efficient slot allocation schemes should be explored in
future research.
--R
Impact of tuning delay on the performance of bandwidth-limited optical broadcast networks with uniform traffic
Efficient scheduling of nonuniform packet traffic in a WDM/TDM local lightwave network with arbitrary transceiver tuning latencies
A media-access protocol for packet-switched wavelength division multiaccess metropolitan area networks
Call admission control schemes: A review.
Wide area traffic: The failure of Poisson modeling.
Queueing systems for modelling ATM networks.
Approximate analysis of discrete-time tandem queueing networks with bursty and correlated input traffic and customer loss
Scheduling transmissions in WDM broadcast-and-select networks
Packet scheduling in broadcast WDM networks with arbitrary transceiver tuning latencies.
A performance model for wavelength conversion with non-poisson traffic
Numerical Solutions of Markov Chains.
The MaTPi protocol: Masking tuning times through pipelining in WDM optical networks.
Stochastic Modeling and the Theory of Queues.
Approximate analysis of a discrete-time polling system with bursty arrivals
--TR
Scheduling transmissions in WDM broadcast-and-select networks
area traffic
Packet scheduling in broadcast WDM networks with arbitrary transceiver tuning latencies
Scheduling of multicast traffic in tunable-receiver WDM networks with non-negligible tuning latencies
Queueing systems for modelling ATM networks
Approximate Analysis of a Discrete-Time Polling System with Bursty Arrivals
HiPeR-l
A Performance Model for Wavelength Conversion with Non-Poisson Traffic | markov modulated bernoulli process;discrete-time queueing networks;optical networks;wavelength division multiplexing |
277910 | Modeling set associative caches behavior for irregular computations. | While much work has been devoted to the study of cache behavior during the execution of codes with regular access patterns, little attention has been paid to irregular codes. An important portion of these codes are scientific applications that handle compressed sparse matrices. In this work a probabilistic model for the prediction of the number of misses on a K-way associative cache memory considering sparse matrices with a uniform or banded distribution is presented. Two different irregular kernels are considered: the sparse matrix-vector product and the transposition of a sparse matrix. The model was validated with simulations on synthetic uniform matrices and banded matrices from the Harwell-Boeing collection. | Introduction
Sparse matrices are in the kernel of many numerical appli-
cations. Their compressed storage [?], which permits both
operations and memory savings, generates irregular access
patterns. This fact degrades the performance of the memory
hierarchy and makes it hard to predict. In this work a probabilistic
model for the prediction of the number of misses on
a K-way associative cache memory considering sparse matrices
with a uniform or banded distribution is presented.
We want to emphasize that an important body of the model
is reusable in different algebra kernels.
The most important approach to study cache behavior
has traditionally been the use of trace-driven simulations [?],
[?], [?] whose main drawback is the large amount of time
needed to process the traces. Another possibility is nowadays
provided by the performance monitoring tools of modern
microprocessors (built-in hardware counters), that make
This work was supported by the Ministry of Education and
Science (CICYT) of Spain under project TIC96-1125-C03, Xunta
de Galicia under Project XUGA20605B96, and E.U. Brite-Euram
Project BE95-1564
available data such as the number of cache misses. These
tools are obviously restricted to the evaluation of the specific
cache architectures for which they are available. Finally, analytical
models present the advantage that they reduce the
times for obtaining the estimations and make the parametric
analysis of the cache more flexible. Their weak point has
traditionally been their limited precision. Although some
models use parameters extracted from address traces [?],
more general analytical models have been developed that
require no input traces. While most of the previous works
focus on dense algebra codes [?], [?], little attention has
been devoted to sparse kernels due to the irregular nature
of their access patterns. For example, [?] studies the self
interferences on the vector involved in the sparse matrix-vector
product on a direct-mapped cache without considering
interferences with other data structures and they do not
derive a general framework to model this kind of codes.
Nevertheless, as an example of the potential usability of
such types of models, we have modeled the cache behavior
of a common algebra kernel, the sparse matrix-vector
product, and a more complex one, the transposition of a
sparse-matrix. This last code includes different access patterns
and presents an important degree of data reusability
for certain vectors.
The remainder of the paper is organized as follows: Section
2 presents the basic model parameters and concepts.
Sparse matrix-vector product cache behavior is modeled in
Section 3, while Section 4 is dedicated to the modeling
of the transposition of a sparse matrix. Both models are
extended to banded matrices in Section 5. In Section 6
the models are verified and the cache behavior they depict
is studied. Finally, Section 7 concludes the paper.
2 Probabilistic model
Our model considers three types of misses: intrinsic misses,
and self and cross interferences. An intrinsic miss takes place
the first time a memory block is accessed. Self interferences
are misses on lines that have been previously replaced in
the cache by another line that belongs to the same program
vector. Cross interferences refer to misses on lines that have
been replaced between two consecutive references by lines
belonging to other vectors.
In direct mapped caches each memory line is always
mapped to the same cache line, so replacements take place
whenever accesses to two or more memory lines mapped to
the same cache line take place. However, in K-way associative
caches lines are mapped to a set of K cache lines.
In this case the line to be replaced is selected following a
Cs Cache size in words
Ls Line size in words
Nc Number of cache lines (Cs/Ls)
Nnz Number of non zero elements of the sparse matrix
M Number of rows of the sparse matrix
N Number of columns of the sparse matrix
β Average number of non zero elements per row (Nnz/M)
K Associativity
Nk Number of cache sets (Nc/K)
pn Probability that a position in the sparse matrix contains a non zero element (β/N)
p Probability that there is at least one entry in a group of Ls positions of the sparse matrix
r Ratio between the size of a real and the size of an integer
Table 1: Notation used.
random or an LRU criterion. Our model is oriented to K-way
associative caches with LRU replacement. In order to
replace a line in a cache of this kind, K different new lines
mapped to the same set must be accessed before reusing the
line considered. When K = 1 the behavior is that of direct
mapped caches.
The replacement probability for a given line grows with
the number of lines affected by the accesses between two
consecutive references to it, and the way these lines are distributed
among the different sets. This depends on the memory
location of the vectors to be accessed, as it determines
the set corresponding to a given line. We handle the areas
covered by the accesses to a given program vector V as an
area vector, S is the ratio
of sets that have received K or more lines of vector V, while
is the ratio of sets that have received
lines from V. This means that S Vi is the ratio of cache sets
that require i accesses to new different lines to replace all
the lines they contained when the access to V started.
Besides K, the number of lines in a set, there is a number
of additional parameters our model considers. They are
depicted in Table 1. By word we mean the logical access
unit, this is to say, the size of a real or an integer. We have
chosen the size of a real, but the model is totally scalable.
A uniform distribution in an M × N sparse matrix with
Nnz non zero elements (entries) allows us to state the following
considerations: the number of entries in a group of
Ls positions belongs to a binomial B(Ls ; pn) where pn is the
probability of a given position of the matrix containing an
entry, that is, pn = β/N. This way, the probability of generating
an access to a given line of the vector by which we
are multiplying the sparse matrix is p = 1 − (1 − pn)^Ls,
as the table shows.
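As an illustration, the basic probabilities of the model can be computed as follows; the expression p = 1 − (1 − pn)^Ls follows from the binomial argument above, and the function is only a sketch of the notation in Table 1.

def model_probabilities(M, N, Nnz, Cs, Ls, K):
    Nc = Cs // Ls                    # number of cache lines
    Nk = Nc // K                     # number of cache sets
    beta = Nnz / M                   # average entries per row
    pn = beta / N                    # Pr[a given matrix position holds an entry]
    p = 1.0 - (1.0 - pn) ** Ls       # Pr[at least one entry in a group of Ls positions]
    return Nc, Nk, pn, p

# Example: a 10K x 10K matrix with 1M entries on a 2Kw, 4-way cache with 8-word lines.
print(model_probabilities(10000, 10000, 1000000, 2048, 8, 4))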
2.1 Area vectors union
An essential issue is the way we obtain the total area vector.
We add the area vectors corresponding to the accesses to
the different program vectors. Given two area vectors S_U and S_V
corresponding to the accesses to vectors U and V, we define the union area
vector S_U ∪ S_V, where (S_U ∪ S_V)_i, 0 < i ≤ K, is the ratio of sets that have
received K − i lines from these two vectors, and (S_U ∪ S_V)_0
Figure 1: Area vectors corresponding to the accesses to a vector U and a vector V in a K-way associative cache, and the resulting total area vector.
Figure 2: Area vectors corresponding to a sequential access and an access to lines with a uniform reference probability, for a vector covering 11 lines in a cache with K = 2.
is the ratio of sets that have received K or more lines. Figure
1 illustrates the area vector union process. From now
on the symbol ∪ will be used to denote the vector union operation.
This method makes no assumptions on the relative
positions of the program vectors in memory, as it is based in
the addition as independent probabilities of the area ratios.
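The union can be implemented as a convolution of the per-set line counts, saturated at K. The sketch below reflects our reading of the operation (the line counts of the two vectors are added as independent quantities); it is not a formula quoted from the paper.

def area_union(S_U, S_V, K):
    # S[0] is the ratio of sets holding K or more lines, S[i] (i > 0) the ratio holding K - i lines
    qU = [S_U[K - j] for j in range(K + 1)]   # q[j] = ratio of sets with j lines (j = K means ">= K")
    qV = [S_V[K - j] for j in range(K + 1)]
    q = [0.0] * (K + 1)
    for j, pu in enumerate(qU):
        for l, pv in enumerate(qV):
            q[min(j + l, K)] += pu * pv       # add the line counts, saturating at K
    return [q[K - i] for i in range(K + 1)]

# Two vectors that each place one line in every set of a 2-way cache fill it completely:
print(area_union([0.0, 1.0, 0.0], [0.0, 1.0, 0.0], K=2))   # -> [1.0, 0.0, 0.0]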
2.2 Sequential access
The calculation of the area vector depends on the access
pattern of the corresponding program vector, so we define
an area vector function for each access pattern we find in
the codes analyzed. For example, the area vector for the
access to n consecutive memory positions, Ss(n) (2), is built from
l = (n + Ls)/(Ls·Nk), the average number of lines
that fall into each set. If l ≥ K then Ss0(n) = 1,
as all of the sets receive an average of K or more
lines. The term Ls added to n stands for the average
extra words brought to the cache in the first and last lines
of the access.
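A possible implementation of Ss is sketched below. The value of l follows the text; how its fractional part is split between the two neighbouring components of the area vector is our own assumption.

def seq_area_vector(n, Ls, Nk, K):
    l = (n + Ls) / (Ls * Nk)        # average number of lines falling into each set
    S = [0.0] * (K + 1)
    if l >= K:
        S[0] = 1.0                   # every set receives K or more lines
        return S
    whole, frac = int(l), l - int(l)
    S[K - whole] = 1.0 - frac        # sets receiving 'whole' lines
    S[K - whole - 1] += frac         # sets receiving one line more
    return S

print(seq_area_vector(n=1000, Ls=8, Nk=64, K=4))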
2.3 Access to lines with a uniform reference probability
The area vector for an access to an n word vector where
each one of the cache lines into which it is divided has the
same probability P_l of being accessed, S_l(n, P_l), may be calculated
from the binomial distribution B(·, P_l)¹ applied to the lines of the
vector that may be mapped to a given cache set. An example
for this access and the sequential access is presented
for this access and the sequential access is presented
We define the binomial distribution on a non integer number of
elements
Figure
3: Correspondence between the location of the non
zeros in a banded matrix to be transposed and the groups
where they are moved to in the vectors that define the output
matrix.
in
Figure
2. In the case of k accesses of this type, the area
vector is S_l^k(n, P_l).
2.4 Access to groups of elements with a uniform reference
probability
In the transposition of a sparse matrix we find the case of
a vector of n words divided into groups of t words where
the probability Pg of accessing each group is the same. It
happens in the main loop of the algorithm, which accesses
the non zeros of the input matrix per rows (sequentially)
moving them to the group corresponding to the column they
belong to in the vectors that define the output matrix. Each
group has as many positions as non zeros has the associated
column in the input matrix. Only one of the lines of the
group is accessed during the processing of each row of the
input matrix, and consecutive accesses to the same group
address consecutive memory positions. The area covered by
an access of this type is given by Sg(n, t, Pg), where if t ≤ Ls
each line of the vector has a uniform probability of
being accessed, whereas if t > Ls
this probability is (Ls/t)·Pg.
As in the case of S l , Sg may be extended in order to
calculate the area covered by k accesses, S_g^k(n, t, Pg). Due to
space limitations we only show the main expressions of our
model; all of them can be found in [?].
2.5 Access to areas displaced with successive references
The model may be extended to consider the case in which
the area with some access probability is displaced with successive
references. Let S gb (b; t; Pg ) be the area vector corresponding
to an access to b consecutive groups of t elements
with a uniform probability per group Pg of accessing one
and only one of the elements of the group. Its value is given
by Sg (b \Delta t; t; Pg ).
We define S k
gb (b; t; Pg ) as the area vector corresponding
to k accesses of this type. In the banded sparse matrix
transposition case each one of the k accesses corresponds to
the processing of a new row of the sparse matrix, as in each
row a column leaves the band (to the left), which makes the
probability of accessing the group corresponding to it null,
and a new one joins the band to the right, thus adding its
group to the set of groups that may be accessed during the
processing of the row. The situation is depicted in Figure 3
with W = 5: during the processing of the first row, groups 1
DO I=1,M
  D(I)=0
  DO J=R(I), R(I+1)-1
    D(I)=D(I)+A(J)*X(C(J))
  ENDDO
ENDDO
Figure 4: Sparse matrix-vector product.
to 5 may be accessed if there is a non zero in the associated
column, while in the second row these sets are 2 to 6, and
in the third only sets 3-7 may be accessed.
The accesses following this pattern cover three adjacent
areas:
1. A set of lines with a growing
access probability (Area 1 in Figure ??).
2. A set of lines with a constant access
probability (Area 2 in Figure ??).
3. A last set symmetric to the first one, of the same size,
and with a decreasing access probability (Area 3 in
Figure
??).
Once the average access probabilities for the lines of these
three consecutive areas are known (see [?]), it only remains
to combine them to obtain the corresponding area vector.
2.6 Number of lines in a vector competing for the same
cache set
Finally, for the calculation of the self interference probability
a function to compute the average number of lines with
which a line of the vector competes for the same cache set
is needed. This function, C(n), gives for an n-word vector the average
number of other lines of the vector that are mapped to the same cache
set as a given line, and is zero when less than one line of the vector
falls into each set on average.
3 Modeling the sparse matrix-vector product
The code for this sparse matrix algebra kernel is shown in
Figure
4. The format used to store the sparse matrix is the
Compressed Row Storage (CRS) [?]: A contains the sparse
matrix entries, C stores the column of each entry, and R indicates
in which point of A and C a new row of the sparse
matrix starts, which permits knowing the number of entries
per row. These three vectors and D, the destination vector
of the product, present a purely sequential access, thus
most of the misses on them are intrinsic. There are no self
interferences and very few misses due to cross interferences
(especially taking into account that we are considering a K-way
associative cache). Therefore the number of misses for
these vectors is calculated as the number of cache lines they
cover (intrinsic misses).
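For these sequentially accessed vectors the miss count therefore reduces to the number of lines they span, as in the sketch below; the division by r for the integer vectors reflects the scale factor discussed later in this section and is an assumption about its use.

import math

def intrinsic_misses(M, Nnz, Ls, r):
    return {
        "A": math.ceil(Nnz / Ls),              # matrix entries (reals)
        "C": math.ceil(Nnz / (r * Ls)),        # column indices (integers)
        "R": math.ceil((M + 1) / (r * Ls)),    # row pointers (integers)
        "D": math.ceil(M / Ls),                # destination vector (reals)
    }

print(intrinsic_misses(M=10000, Nnz=1000000, Ls=8, r=2))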
Nevertheless vector X suffers a series of indirect accesses
dependent on the location of the entries in the sparse ma-
trix, as it is addressed from the value of C(J). The number
of misses accessing this vector is calculated multiplying the
number of lines referenced per dot product by the number
of rows of the sparse matrix and the miss probability of one
of these accesses. The first value is calculated as p \Delta N=Ls ,
as X covers N=Ls lines, and each one has a probability p of
being accessed during the dot product times a given row of
the sparse matrix, as we have stated in the previous section.
The miss probability is calculated as the opposite of the
hit probability. Hits take place whenever the accessed line
has been referenced during a previous dot product, and it
has suffered no self or cross interferences.
Cross interferences are generated by the accesses to the
remaining vectors, which present a sequential access. The
cross interference area is given by the total area vector associated
with the accesses to vectors A, C, R and D. This
area vector is calculated in the way explained in Section 2.1,
by adding the individual area vectors corresponding to the
accesses to each program vector during the period consid-
ered. In our case we are interested in calculating the cross
interference area vector after i dot products,
S_cross_X(i) = S_A(i) ∪ S_C(i) ∪ S_R(i) ∪ S_D(i),
where S_V(i) is the area vector covered by the accesses to
a vector V during i dot products. The four vectors have a
sequential access, so expression Ss derived in Section 2.2 is
applied: S_A(i) = Ss(β·i), S_C(i) = Ss(β·i/r), S_R(i) = Ss(i/r) and S_D(i) = Ss(i).
These values are obtained considering that one element
of D and R, and β components of A and C are accessed per
dot product. A scale factor r is applied to integer vectors in
order to take into account that integer data are often stored
using less bytes than real ones. This factor is the quotient
of the number of bytes required by a real datum and the one
required by an integer.
As for the self interference probability, each line of X
competes on average with C(N) lines of the same vector
(see the definition of C(n) in Section 2.6) in the same cache set, and
they all have the same probability p of being accessed during
a dot product times a row of the sparse matrix. As a result,
the number of different candidate lines of X to replace a given
line that have been accessed after i dot products belongs to
a binomial B(C(N), 1 − (1 − p)^i).
The hit probability in the first access to any line of X
during the dot product of the j-th row of the sparse matrix
times vector X, P_hit_X(j), is obtained by summing, for i = 1, ..., j − 1,
the probability p(1 − p)^{i−1} that the last access to the
line took place i dot products ago, weighted by the probability that the
line has not been evicted in the meantime, which is derived from S_int_X(i), the
interference area vector generated by the accesses produced
during these i dot products. This vector is obtained by adding
the cross interference area vector S_cross_X(i)
to the self interference area vector, which is given by
an access with a uniform access probability per line of 1 − (1 −
p)^i to the lines of vector X that can generate self interferences
with another line, whose number is given by C(N). Finally,
the average hit probability P_hit_X is calculated as the average of P_hit_X(j) over the M rows of the sparse matrix.
DO I=1,N+1
  RT(I)=0
END DO
DO I=1,Nnz
  J=C(I)+2
  IF (J.LE.N+1) RT(J)=RT(J)+1
END DO
RT(1)=1
RT(2)=1
DO I=3,N+1
  RT(I)=RT(I)+RT(I-1)
END DO
DO I=1,M
  DO K=R(I), R(I+1)-1
    J=C(K)+1
    AT(RT(J))=A(K)
    CT(RT(J))=I
    RT(J)=RT(J)+1
  END DO
END DO
Figure 5: Transposition of a sparse matrix.
We must point out that the model only takes into account
the first access to each line of X in each dot product. The
other Nnz − p·M·(N/Ls) accesses have a very low probability
of resulting in a miss, as they refer to lines that have been
accessed in the previous iteration of the inner loop.
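The way the per-row hit probabilities combine into a miss count for X can be summarized by the following sketch. The interference term is left as a pluggable function because its exact expression (derived from the interference area vector) is not reproduced here; with no interference the result reduces to roughly the N/Ls compulsory misses, as expected.

def misses_on_X(M, N, Ls, p, interference_free):
    """interference_free(i) must return the probability that a line of X is not
    evicted during i intervening dot products (to be derived from S_int_X(i))."""
    lines_per_row = p * N / Ls
    total = 0.0
    for j in range(1, M + 1):
        # probability that the first access of row j to a given line is a hit
        p_hit = sum(p * (1 - p) ** (i - 1) * interference_free(i) for i in range(1, j))
        total += lines_per_row * (1.0 - p_hit)
    return total

print(round(misses_on_X(1000, 10000, 8, 0.0773, lambda i: 1.0)))   # about N/Ls = 1250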
4 Modeling the transposition of a sparse matrix
In this section the model is extended to an operation with
a greater degree of complexity, the sparse matrix transposi-
tion. As in the previous section, we assume that the sparse
matrix and its transpose are stored in CRS format. Figure
5 shows the transposition algorithm described in [?],
where both matrices are represented by the vectors (A, C, R) and
(AT, CT, RT), respectively. Observe that in loop 4 there are
multiple indirection levels on both the left and right sides of
the statements.
In what follows we employ a similar approximation to
the one developed in the previous section to estimate the
number of misses for vectors AT, CT and RT. The remaining
vectors have sequential accesses, which have already been
considered in the previous algorithm.
4.1 Vectors AT and CT
These two vectors follow exactly the same access pattern,
as may be observed in loop 4 of Figure 5. We will thus
explain the estimation of the number of misses for AT, as
the one for CT is identical but taking into account that it is
an integer vector. The access pattern is the one modeled by
Sg, described in Section 2.4, with n = Nnz, t the average number of
entries per column, and Pg the probability that a given column has an
entry in the row being processed.
This pattern has similarities with the one explained in
Section 3 for vector X, as the access probability is uniformly
distributed along the vector, being the difference that for
vector X the probability is constant for each line of the vec-
tor, while for vectors AT and CT it corresponds to sets of
as many elements as a column of the original sparse matrix
holds. As a result, the general form of the hit probability
during the process of the j-'th row of the sparse matrix is
very similar to the one in (??), with the following differences:
• The probability of accessing the considered line of the
vector during the processing of a row is not p, but the
per-line access probability that follows from Sg.
• The probability of accessing another line mapped to
the same cache set during the processing of the i previous
rows is not 1 − (1 − p)^i, but the corresponding value derived from S_g^i.
• Vector AT has Nnz elements, so the number of lines
that compete in the set with the line considered is
C(Nnz) in the equation that calculates the interference
area vector.
• When calculating the cross interference probability,
the same scheme of adding the area vectors corresponding
to the accesses to the remaining vectors is used.
S_R(i), S_C(i) and S_A(i) are calculated according to
the expressions given in Section 3, and the remaining area vectors,
those of RT and CT, are obtained from S_l applied to vectors of N and
Nnz elements, respectively.
4.2 Vector RT
This vector is referenced in the four loops of the algorithm.
In the first loop it has a totally sequential access that produces
only intrinsic misses.
In the second loop, the accesses to RT follow a similar pattern
to those of vector X in the sparse matrix-vector product.
The only differences consist in that there is only one possible
source of cross interferences (vector C), and that RT is an
integer vector and not a real value vector. The number of
misses in loop 2 is obtained from the hit probabilities P_hit_RT2 and
P_first_hit_RT2, where P_hit_RT2 is calculated following the expressions
of Section 3 after introducing the modifications mentioned above. Vector
RT has been completely accessed in a sequential manner in
the previous loop. For this reason we must add the probability
P_first_hit_RT2 of getting a hit due to the existence of
portions of RT in the cache when initiating the loop. Again,
a final expression can be found in [?].
In loop 3, vector RT is sequentially accessed without any
accesses to other vectors. The estimated number of misses
is
where P_first_hit_RT3 may be estimated as P_first_hit_RT2 · pr · M.
Finally, in loop 4 the access to RT is again similar to the
sparse matrix-vector product described in Section 3. In
this loop we must also consider a hit probability P first hit RT4
due to a previous load of RT generated by loop 3.
5 Extension to banded matrices
A large number of real numerical problems in engineering
lead to matrices with a sparse banded distribution of the
entries [?]. In this section we present the modifications
the model requires in order to describe the cache behavior
for matrices of this type. The model parameter pn (see
Table 1) is calculated as β/W, where W is the bandwidth.
Consequently p takes the value 1 − (1 − β/W)^Ls.
5.1 Sparse matrix-vector product
The number of misses over vector X is the only one affected
by the band distribution of the entries. The modeling of the
behavior of this vector is identical to the one described in
Section 3: the number of accesses to different lines in the
dot products in which vector X is involved is
multiplied by the miss probability, calculated as one minus the hit probability.
For the calculation of P hit X in expression (??) M must
be replaced by W , as one line of X may be accessed during
the product of a maximum of W rows. In addition, the
number of lines of X that compete with another in a cache
set is C(W ) instead of C(N ), which influences P hit X (j) in
expression (??), as the entries of each row are distributed
among W positions instead of N .
5.2 Transposition of a sparse matrix
The calculation of the number of misses on AT is modified
in a similar way to that of vector X in the sparse matrix-vector
product: each line of this vector spreads its access
probability through the processing of W rows of the sparse
matrix, instead of M , thus reducing to this limit the sum
that calculates the hit probability. For the same reason,
the number of lines with which a given line of this vector
competes for the same cache set during its access period is
not C(Nnz), but C((Nnz/N)·W).
Besides, AT is accessed following the pattern described
by the area vector function S_gb in Section 2.5, so the probability
of accessing, during the processing of the i previous rows, a
given line that is mapped to the same cache set as the line
we are considering is derived from S_gb^i.
Also the cross interference probability calculation needs to
be modified according to the new access patterns.
Similar changes are needed for the estimation of the number
of misses on vector CT, taking into account that it is
made up of integer values.
The prediction of the number of misses on vector RT in
loops 2 and 4 is based on the model for vector X in the
sparse matrix-vector product, so the calculation of P hit RT2
and P_hit_RT4 requires the modifications explained in the
previous section, and the number of different lines accessed
in each dot product is no longer N·r/Ls but W·r/Ls. Some
changes are also needed to calculate the probabilities of a hit
due to a reuse, P_first_hit_RT2 and P_first_hit_RT4. Finally,
the value of P_first_hit_RT3 is now P_first_hit_RT2 · pr · W.
6 Results
The model was validated with simulations on synthetic matrices
with a uniform distribution of the non zero elements
and banded matrices from the Harwell-Boeing collection [?].
Traces for the simulations were obtained by running the
algorithms after replacing the original accesses by calls to
functions that calculate the position to be accessed and write
it to disk. These traces were fed to the dineroIII cache sim-
ulator, belonging to the WARTS tools [?].
Tables 2 and 3 show the model accuracy for the sparse
matrix-vector product for some combinations of the input
parameters for uniform and banded sparse matrices, respectively.
Tables 4 and 5 display the results for the sparse
matrix transposition model. Without any loss of generality
we have considered square matrices (N = M) in the analysis,
and r = 1. Cs is expressed in Kwords and Ls in words.
The maximum error obtained in the trial set was 5.15%
for the synthetic matrices and 28.12% for the Harwell-Boeing
matrices. The reason for this last result is that the entries
distribution in the real matrices is not completely uniform,
which produces high deviations for the sparse matrix transposition
model (see Table 5). Nevertheless, we consider
that such an amount of deviation is still acceptable for our analysis
purposes. Besides, the small size of the matrices of the
collection does not favor the convergence of our probabilistic
model. We also want to point out that the average error
Table 2: Predicted and measured misses and deviation of the model for the sparse matrix-vector product with a uniform entries distribution. M, Nnz and numbers of misses in thousands.
Name N W Nnz α Cs Ls K
Table 3: Predicted and measured misses and deviation of the model for the sparse matrix-vector product of some Harwell-Boeing matrices.
Table 4: Predicted and measured misses and deviation of the model for the transposition of a sparse matrix with a uniform entries distribution. M, Nnz and numbers of misses in thousands.
Name N W Nnz α Cs Ls K
Table 5: Predicted and measured misses and deviation of the model for the transposition of a sparse matrix of some Harwell-Boeing matrices.
Figure 6: Number of misses in a 4-way associative cache with Cs = 2Kw during the sparse matrix-vector product of a 10K × 10K matrix, as a function of Ls and pn.
Figure 7: Number of misses during the sparse matrix-vector product of a 10K × 10K matrix, as a function of the line size of a 2Kw cache for different associativities.
for the synthetic matrices was 0.65%, and 5% for the real
matrices.
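The deviations reported in Tables 2-5 are assumed to be the relative error of the predicted miss count with respect to the measured one, i.e.:

def deviation(predicted, measured):
    return 100.0 * abs(predicted - measured) / measured

print(deviation(1052.0, 1000.0))   # -> 5.2 (%)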
In Figures 6-13 we present the relationship between the
number of misses and the different parameters introduced in
the models. In the case of Cs and Ls we display the base
2 logarithm of the number of cache Kwords and line words
respectively.
Figure 6 shows the relationship of the number of misses
with Ls and pn in the sparse matrix-vector product. The
evolution is approximately linear with respect to pn , as the
number of accesses is directly proportional to Nnz and most
of them follow a sequential pattern. It may be observed how
the number of misses significantly decreases as Ls increases
because the accesses to all of the vectors except X are sequential
(see Figure 4) and the larger the lines, the more
exploitation of the spatial locality we obtain. Nevertheless,
when Ls is very large (> 64 words) and the matrix has a
lower degree of sparsity (pn > 0.1), the increase of the self
and cross interference probabilities over vector X begins to
outweigh the advantages obtained from a more efficient use
of the spatial locality exhibited by the remaining vectors.
This effect, shown in Figure 6, increases with the
value of pn.
The relationship between W and Cs for the banded sparse
matrix-vector product is shown in Figure 8. In this graph
Figure 8: Number of misses during the sparse matrix-vector product for a sparse 20K × 20K matrix with 2M entries, as a function of W and Cs.
Figure 9: Number of misses during the sparse matrix-vector product of a 20K × 20K matrix with 2M entries and W = 8000, as a function of the cache size with Ls = 8 for different associativities.
we have considered broad bands in order to illustrate the
effect of self interferences in the access to vector X. The
bandwidth reduction has a large influence on the number
of misses because it reduces the self interference probability
and increases the reuse probability for the lines of vector X,
as the non zeros are spread on narrower rows (spatial locality
improvement) and columns (temporal locality improve-
ment). A number of misses near the minimum is reached
once Cs grows somewhat beyond W, the increases of Cs beyond that
size being of little use. It is intuitive that good miss rates can
only be reached with Cs > W, as with this size there is a
line in the cache for each line in the band of the currently
processed row. The extra room is needed to avoid the combined
effect of self and cross interferences, as we shall now
demonstrate through an analysis for a fixed bandwidth for
different degrees of associativity and cache sizes.
In Figure 9 the number of misses for a matrix with W = 8000 is
displayed in relation to the cache size for different
associativities. We can observe that for K = 1 most
of the improvement is reached for the 8Kw cache, due to the
elimination of the self interferences. The cache size increments
from that size on help to gradually reduce the cross
interferences. On the other hand, caches with K > 1 have
Figure 10: Number of misses during the transposition of a sparse 20K × 20K matrix with 2M entries, as a function of W and Cs.
a different behavior: the misses reduction gradient is very
high while Cs ≤ 16Kw. The reason is that in the 8Kw cache
there are K different lines of X mapped to each cache set.
As the lines are usually accessed in the same order and the
cache uses a LRU replacement, any cross interference may
generate misses for all of the lines of X that are mapped to
the same cache set. The result is that the cross interferences
affect as many lines as a set can hold, thus generating more
misses. In fact we can see that the 4-way associative cache performs
a little worse than the 2-way cache for this cache size.
For caches with Cs ≥ 16Kw, the cache sets have enough
lines left to absorb the accesses to the vectors that comprise
the sparse matrix and the destination vector without
increasing the interferences on X due to their combination
with the lines of this vector that reside in the same cache set.
For small cache sizes all the associativities perform similarly
due to the large number of interferences. The conclusion is
that associative caches help reducing the interference effect
when the number of lines that compete for a given cache line
is smaller than or equal to K; otherwise the performance is
quite similar to that of a direct mapped cache. Moreover,
in this case, if the lines mapped to the same cache set are
usually accessed in the same order, high associativities may
perform worse.
As for the sparse matrix transposition, Figures 10 and 11
represent the same data for this algorithm as Figures 8
and 9 for the sparse matrix-vector product, respectively.
The first one shows a decrease of the number of misses with
the bandwidth reduction, this being more noticeable at the
point where Cs becomes greater than W . The reasons are
those explained for the previous algorithm. Anyway, this
reduction is much softer than in the sparse matrix-vector
product because the accesses to vector RT in loops 2 and
4, which are the most directly favoured by the bandwidth
reduction, stand only for a small portion of the misses. On
the other hand, the number of misses on vectors AT, and
CT, which account for the vast majority of the misses, decreases
slowly when Cs increases or W decreases, although
they are also heavily dependent on the bandwidth. The reason
is that in all of the cases shown in the figure the data
belonging to these vectors during the process of all of the
columns inside the band of the original matrix do not fit in
the cache ((Nnz/N)·W entries). Only for the combination of the
smallest bandwidth and the largest cache size considered does this set fit, and we can
see that the number of misses becomes stable in this area
Figure 11: Number of misses during the transposition of a sparse 20K × 20K matrix with 2M entries and W = 8000, as a function of the cache size with Ls = 8 for different associativities.
of the graph. The misses on C, R and A remain almost completely
constant due to their sequential access, obtaining
as only benefit from the bandwidth reduction a somewhat
lower cross interference probability.
Figure 11 shows that the general behavior of the algorithm
with respect to K, the associativity degree, although
having similarities with that of the sparse matrix-vector
product, does not depend only on W to determine the cache
sizes for which the hit rate obtains reasonable values. As explained
before, the reason is that for this bandwidth none of the
cache sizes considered contains a considerable portion of the
data the algorithm accesses during the W iterations that
process a whole band in loop 4, the one that causes most of
the misses. Only accesses to RT, mainly in loop 2, benefit
from the increase of Cs to values larger than W. This is especially
noticeable for K = 1, as in the sparse matrix-vector
product. Another similarity is the worse behavior of caches
with K > 1 for a cache size very close to W, due to the added
effect of cross interferences with self interferences on
cache lines which are usually accessed in the same order.
This penalizes the LRU replacement algorithm. Once the
cache size is noticeably larger than the bandwidth, higher
associativities perform better, as in Figure 9.
In order to get hints about the values of the cache parameters
for which the algorithm stabilizes its misses, Figures
12 and 13 show the same data for a smaller matrix
using Ls = 4. During the process of a band of this matrix
the working set for vectors AT and CT, those that account
for most of the misses, comprise 25W elements, as there is
an average of 25 elements per column. The hit rate obtains
values near to its maximum in Figure 12 when the cache size
exceeds this value. Increases of Cs beyond that limit provide
little performance improvement. These improvements
are only noticeable when the cache size increase is very high,
due to the strong reduction of the cross interference. We
must take into account that this graph is constructed for
a 4-way associative cache. Somewhat greater cache sizes
would be needed to obtain good cross interference reduction
in a direct mapped cache (see Figure 13).
Set associative caches help reduce the miss rate for the
small cache sizes in Figure 13 because on average there are
always less than 2 lines competing in any set, as during the
process of each row there are only about 200 lines of vector
Figure 12: Number of misses during the transposition of a sparse 5K × 5K matrix with 125K entries, in a 4-way associative cache with Ls = 4, as a function of W and Cs.
Figure 13: Number of misses during the transposition of a sparse 5K × 5K matrix with 125K entries and W = 200, as a function of the cache size with Ls = 4 for different associativities.
AT with a non null access probability, 200 lines belonging
to vector CT and a few more lines belonging to the other
vectors, while the smallest of the caches considered has 512
lines. As expected, the increase of the cache size reduces
the hit ratio difference between direct mapped caches and
set associative caches. For Cs ≥ 8Kw the cache size increase
helps only by reducing the cross interferences.
Finally, the relationship of the line size and the matrix
density with the number of misses for this last algorithm
has proved to be the same as in the sparse matrix-vector
product, being the only difference that the gradient
of the increase of misses with relation to pn is about three
times larger, which is normal, as the number of accesses per
non zero in the original matrix is eight, while in the sparse
matrix-vector product algorithm it is three.
7 Conclusions and future work
We have presented a fully-parametrizable model for the estimation
of the number of misses on a generic K-way associative
cache with LRU replacement and we have applied it to
the sparse matrix-vector product and the transposition of a
sparse matrix. The model deals with all the possible types
dineroIII time    model time
100 1000 1% 1000
100 1000 10% 100
100 1000 1% 1000
100 1000 10% 100
100 1000 1% 1000
100 1000 10% 100
Table 6: Simulation and model user times to calculate the number of misses during the transposition of a banded sparse matrix on a 200 MHz Pentium. N and Nnz in thousands, time in seconds.
of misses and has demonstrated a high level of accuracy in
its predictions. It considers a uniform distribution of the
entries on the whole matrix or on a given band.
As Table 6 shows, besides providing more information,
the model is much faster than the simulation, even
when the time required to generate the trace, which takes
almost as much time as the execution of the simulator itself,
is not included in the table.
We have illustrated how the model may be used to study
the cache behavior for this code, and shown the importance
of the bandwidth reduction in the case of the banded matri-
ces, even for high degrees of associativity. The model can be
also applied to study possible architectural cache parameter
modifications in order to improve cache performance.
Future work includes the extension of the model to consider
prefetching and subblock placement. On the other
hand, we are now working on the modeling of non uniform
distributions of the entries in the sparse matrices, focusing
on the most common patterns that appear in real matrix
suites. Finally, in order to obtain more accurate estima-
tions, we are studying the inclusion of the data structures
base addresses as a parameter of the model.
--R
"Analysis of Cache Performance for Operating Systems and Multiprogramming,"
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods.
"User's Guide for the Harwell-Boeing Sparse Matrix Collection,"
"Cache Miss Prediction in Sparse Matrix Computations,"
"Cache Miss Equations: An Analytical Representation of Cache Misses,"
"Cache Profiling and the SPEC Benchmarks: A Case Study,"
Sparse Matrix Technology.
"Sparse Matrix Computations: Implications for Cache Designs,"
"Characterizing the Behaviour of Sparse Algorithms on Caches,"
"Cache Interference Phenomena,"
"Trace-Driven Memory Simulation: A Survey,"
--TR
Analysis of cache performance for operating systems and multiprogramming
Characterizing the behavior of sparse algorithms on caches
Sparse matrix computations
Cache interference phenomena
Block algorithms for sparse matrix computations on high performance workstations
Trace-driven memory simulation
Cache miss equations
Cache Profiling and the SPEC Benchmarks
--CTR
Gerardo Bandera , Manuel Ujaldn , Emilio L. Zapata, Compile and Run-Time Support for the Parallelization of Sparse Matrix Updating Algorithms, The Journal of Supercomputing, v.17 n.3, p.263-276, Nov. 2000
Basilio B. Fraguela , Ramn Doallo , Emilio L. Zapata, Probabilistic Miss Equations: Evaluating Memory Hierarchy Performance, IEEE Transactions on Computers, v.52 n.3, p.321-336, March
Chun-Yuan Lin , Jen-Shiuh Liu , Yeh-Ching Chung, Efficient Representation Scheme for Multidimensional Array Operations, IEEE Transactions on Computers, v.51 n.3, p.327-345, March 2002
B. B. Fraguela , R. Doallo , J. Tourio , E. L. Zapata, A compiler tool to predict memory hierarchy performance of scientific codes, Parallel Computing, v.30 n.2, p.225-248, February 2004
Jingling Xue , Xavier Vera, Efficient and Accurate Analytical Modeling of Whole-Program Data Cache Behavior, IEEE Transactions on Computers, v.53 n.5, p.547-566, May 2004
Chun-Yuan Lin , Yeh-Ching Chung , Jen-Shiuh Liu, Efficient Data Parallel Algorithms for Multidimensional Array Operations Based on the EKMR Scheme for Distributed Memory Multicomputers, IEEE Transactions on Parallel and Distributed Systems, v.14 n.7, p.625-639, July | probabilistic model;irregular computation;sparse matrix;cache performance |
277915 | Application and evaluation of large deviation techniques for traffic engineering in broadband networks. | Accurate yet simple methods for traffic engineering are important for efficient dimensioning of broadband networks. The goal of this paper is to apply and evaluate large deviation techniques for traffic engineering. In particular, we employ the recently developed theory of effective bandwidths, where the effective bandwidth depends not only on the statistical characteristics of the traffic stream, but also on a link's operating point through two parameters, the space and time parameters, which are computed using the many sources asymptotic. We show that this effective bandwidth definition can accurately quantify resource usage. Furthermore, we estimate and interpret values of the space and time parameters for various mixes of real traffic demonstrating how these values can be used to clarify the effects on the link performance of the time scales of burstiness of the traffic input, of the link parameters (capacity and buffer), and of traffic control mechanisms, such as traffic shaping. Our approach relies on off-line analysis of traffic traces, the granularity of which is determined by the time parameter of the link, and our experiments involve a large set of MPEG-1 compressed video and Internet Wide Area Network (WAN) traces, as well as modeled voice traffic. | Introduction
The rapid progress and successful penetration of broadband
communications in recent years has led to important new
problems in traffic modeling and engineering. Among oth-
ers, call admission control and network dimensioning for
cases of guaranteed QoS (which is one of the most important
features supported by the ATM technology) have attracted
the attention of researchers. Successful approaches
are closely related to the ability of quantifying the usage
This work was supported in part by the European Commission
under ACTS project CASHMAN (AC-039).
y Also with the Dept. of Computer Science, University of Crete.
In Proc. of ACM SIGMETRICS '98/ PERFORMANCE
Joint International Conference on Measurement and
Modeling of Computer Systems, Madison, Wisconsin,
June 24-26, 1998.
of resources, on the basis of traffic modeling and measurements
For example, statistical analysis of traffic measurements
[11, 14, 7] has shown a self-similar or fractal behavior; such
traffic exhibits long range dependence or slowly decaying
autocorrelation. Although the implications of such long
range dependence are still an open issue (e.g., see [6, 8] and
the references therein), recent work [16, 8] has shown that
these implications can be of secondary importance to the
buffer overflow probability when the buffer size is small,
which applies to the case where real time communication is
supported. This example motivates the need for a methodology
to understand the impact of the various time scales of
the burstiness of real broadband traffic on the performance
of the network and on its resource sharing capabilities. In
particular, some basic questions for which the network engineer
must provide answers are the following: How much
does the cell loss probability decrease when the link capacity
or buffer size increases? How does traffic shaping 1 affect
the multiplexing capability of a link and the amount of resources
used by a bursty source? What is the necessary
granularity of traces in order to capture the information
that is important for performance analysis and network di-
mensioning? What are the effects of the composition of
the traffic mix on the multiplexing capability of a link?
Traditional queueing theory, which requires elaborate traffic
models, cannot be applied to answer such questions in
the context of large multi-service networks; for such cases
asymptotic methods are more appropriate. In this paper
we answer such questions by applying and evaluating the
many sources asymptotic and the effective bandwidth based
on this asymptotic, for real broadband traffic. This traffic
consists of MPEG-1 compressed video, Internet Wide Area
Network (WAN) traffic, and traffic resulting from modeled
voice.
Problems related to resource sharing have often been
analyzed using the notion of effective bandwidth, which is a
scalar summarizing resource usage depending on the statistical
properties and Quality of Service (QoS) requirements
of a source. Effective bandwidths are usually derived by
means of asymptotic analysis, which is concerned with how
the buffer overflow probability decays as some quantity in-
creases. If this quantity is the size of the buffer, we have the
large buffer asymptotic [5, 10]. If the buffer per source and
capacity per source are kept constant, and we are interested
in how the overflow probability decays as the size of the
system (the broadband link and the multiplexed sources)
increases, then we have the many sources asymptotic; this
asymptotic regime has been investigated in [4, 1, 17].
(Footnote 1: Related work on how traffic smoothing affects the multiplexing
capability of a link can be found in [18] and the references therein.)
Effective bandwidth definitions based on the large buffer
asymptotic were found, in some cases, to be overly conservative
or too optimistic [2]. This occurs because the large
buffer asymptotic does not take into account the gain when
many independent sources are statistically multiplexed to-
gether. On the other hand, the effective bandwidth defined
according to the many sources asymptotic [9, 3] depends
not only on the statistical properties and Quality of Service
(QoS) requirements of a source, but also on the statistical
properties of the other traffic it is multiplexed with and
the parameters (capacity and buffer) of the multiplexing
link. Only recently [9, 3] has it been understood how to
incorporate such information into the definition of the effective
bandwidth. This work has shown that the effective
bandwidth of a source depends on the link's operating point
through two parameters, called the space and time parame-
ters, which in turn depend on the link parameters (capacity
and buffer) and the statistical properties of the multiplexed
traffic. These parameters can be computed using the many
sources asymptotic and, as we will demonstrate with real
broadband traffic, have important applications for traffic
engineering. Since the effective bandwidth gives the amount
of resources that must be reserved for that source in order
to satisfy its QoS requirements, it helps simplify problems
in traffic engineering and network dimensioning.
The rest of this paper is structured as follows. In Section
2 we review basic results from the theory of effective
bandwidths, as developed in [9], and the theory of many
sources asymptotics [4, 1, 17]. In Section 2.2 we discuss the
application of this framework to traffic engineering, giving
emphasis on the interpretation of the space and time
parameters. In Section 3 we present a detailed series of experiments
which aim to evaluate the accuracy of the above
framework for link capacities and buffer sizes anticipated for
broadband networks and for real broadband traffic which
consists of MPEG-1 compressed video and Internet WAN
traces, in addition to modeled voice traffic. Finally, Section
4 summarizes the results of the paper and identifies
areas for future research.
2 The Many Sources Asymptotic and its Implications
In this section we summarize the key results of the many
sources asymptotic and their implications for traffic engineering
2.1 Effective Bandwidths
Suppose the arrival process at a broadband link is the superposition
of independent sources of J types. Let Nn_j
be the number of sources of type j, and let n = (n_1, ..., n_J)
(note that the n j s are not necessarily integers). This system
can be viewed as having N sources of the same type, where
a single source consists of a proportion of the J source types
and can be characterized by the vector n. The broadband
link has a shared buffer of size B = Nb and link capacity C = Nc.
Parameter N is the scaling parameter (size of the
system), and parameters b; c are the buffer and capacity
per source, respectively. We suppose that after taking into
account all economic factors (such as demand and com-
petition) the proportions of traffic of each of the J types
remains close to that given by the vector n, and we seek
to understand the relative usage of network resources that
should be attributed to each traffic type.
Furthermore, let X j [0; t] be the total load produced by
a source of type j in the time interval (0; t], which feeds the
above link. We assume that X j [0; t] has stationary incre-
ments. Then, the effective bandwidth of a source of type j
is defined as [9]

    \alpha_j(s,t) = \frac{1}{st} \log E\left[ e^{s X_j[0,t]} \right],    (1)

where s; t are system parameters which are defined by the
context of the source, i.e., the characteristics of the multiplexed
traffic, their QoS requirements, and the link resources
(capacity and buffer). Specifically, the time parameter t
(measured in, e.g., milliseconds) corresponds to
the most probable duration of the busy period of the buffer
prior to overflow [4]. The space parameter s (measured
in, e.g., kb^{-1}) corresponds to the degree of multiplexing
and depends, among others, on the size of the peak rate
of the multiplexed sources relative to the link capacity. In
particular, for link capacities much larger than the peak
rate of the multiplexed sources, s tends to zero and α_j(s,t)
approaches the mean rate of the source, while for link capacities
not much larger than the peak rate of the sources,
s is large and α_j(s,t) approaches the maximum value that
the random variable X_j[0,t]/t can attain.
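To make the definition concrete, the following minimal Python sketch estimates α_j(s,t) from a traffic trace by replacing the expectation in (1) with an empirical mean over non-overlapping windows of length t; the trace, slot length, and parameter values are hypothetical placeholders rather than anything used in the paper.

```python
import numpy as np

def effective_bandwidth(trace, s, t, dt):
    """Empirical effective bandwidth alpha(s, t) of a traffic trace.

    trace : work (e.g., kbits) arriving in each slot of length dt
    s     : space parameter (e.g., per kbit);  t : time parameter (same unit as dt)
    The expectation in alpha(s,t) = (1/(s*t)) * log E[exp(s*X[0,t])] is replaced
    by an average over non-overlapping windows of length t.
    """
    w = int(round(t / dt))                               # slots per window
    nwin = len(trace) // w
    X = trace[:nwin * w].reshape(nwin, w).sum(axis=1)    # workload per window
    m = np.max(s * X)                                    # stable log-mean-exp
    log_mgf = m + np.log(np.mean(np.exp(s * X - m)))
    return log_mgf / (s * t)

# hypothetical example: 10^6 slots of 1 ms with random per-slot workload in kbits
rng = np.random.default_rng(0)
trace = rng.gamma(shape=2.0, scale=1.0, size=1_000_000)
print(effective_bandwidth(trace, s=0.05, t=20.0, dt=1.0))   # kbits per ms
```

As s → 0 the returned value approaches the empirical mean rate of the trace, and for large s it approaches the largest observed X[0,t]/t, in line with the discussion above.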
Let Q(Nc, Nb, Nn) = P(overflow) be the probability
that in an infinite buffer which multiplexes Nn_j
sources of each type j and is served at rate C = Nc, the queue
length is above the threshold Nb. The following holds
for Q(Nc, Nb, Nn) [4]:

    \lim_{N \to \infty} \frac{1}{N} \log Q(Nc, Nb, Nn)
      = - \inf_{t > 0} \sup_{s \ge 0} \Big[ s(b + ct) - st \sum_j n_j \alpha_j(s,t) \Big] = -I,    (2)

where I is called the asymptotic rate function. The last
equation is referred to as the many sources asymptotic, and
has been proved for continuous time in [1] and for a special
case in [17]. A similar asymptotic holds for the proportion
of workload lost through the overflow of a finite buffer of
size Nb. Due to equation (2), the overflow probability can
be written as which leads to the
following approximation when N is large:
The accuracy of the above approximation and, more im-
portantly, the achievable link utilization for MPEG-1 compressed
video and Internet WAN traffic is investigated in
Section 3.1.
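A straightforward way to evaluate (2)-(3) numerically is to scan finite grids of candidate s and t values; the sketch below is a hypothetical Python illustration of that sup-inf computation (the toy effective-bandwidth function, grids, and link parameters are invented for the example and are not the paper's data or code).

```python
import numpy as np

def rate_function(alpha, n, c, b, s_grid, t_grid):
    """Approximate I = inf_t sup_s [ s*(b + c*t) - s*t*sum_j n_j*alpha_j(s,t) ]
    on finite grids; returns I and an extremizing pair (s, t).

    alpha : list of callables alpha_j(s, t), one per source type
    n     : numbers of sources of each type, per unit of the scaling N
    c, b  : capacity and buffer per source
    """
    best = (np.inf, None, None)
    for t in t_grid:
        vals = [s * (b + c * t) - s * t * sum(nj * a(s, t) for nj, a in zip(n, alpha))
                for s in s_grid]                      # inner sup over s for this t
        k = int(np.argmax(vals))
        if vals[k] < best[0]:                         # outer inf over t
            best = (vals[k], s_grid[k], t)
    return best

# hypothetical single-type example with a toy effective-bandwidth function
alpha = [lambda s, t: 1.0 + 0.5 * s * t / (1.0 + t)]  # placeholder, not real traffic
I, s_star, t_star = rate_function(alpha, n=[1.0], c=1.2, b=5.0,
                                  s_grid=np.linspace(1e-3, 5, 500),
                                  t_grid=np.linspace(0.1, 50, 500))
N = 100
print("P(overflow) estimate:", np.exp(-N * I), "at (s, t) =", (s_star, t_star))
```

The extremizing pair returned here is the link operating point (s, t) discussed in Section 2.2.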
Consider the QoS constraint on the overflow probability
to be P(overflow) ≤ e^{-γ}, and let A(N, γ)
be a subset of Z_+^J
such that a combination of sources (Nn_1, ..., Nn_J) is accepted if it belongs to A(N, γ) (and
vice versa), i.e., the QoS constraint on the overflow probability
is met. Due to this property, A(N, γ) is called
the acceptance region. The region A(N, γ) is hard
to compute. However, for the scaled acceptance region, the
following holds [9]:

    \lim_{N \to \infty} \frac{1}{N} A(N, \gamma) = A, \qquad
    A = \Big\{ n : \sum_j n_j \alpha_j(s,t) \le c + \frac{b}{t} - \frac{\gamma}{Nst} \ \text{ for all } t > 0 \Big\},    (4)

where, for each t, s = s(t) is the extremizing space parameter in (2).
Hence, the scaled acceptance region A(N, γ)/N
can be approximated by
A.
If a point n is on the boundary of the region A and
the boundary is differentiable at that point, then the tangent
plane determines the half-space [9]

    \Big\{ n : \sum_j n_j \alpha_j(s,t) \le c + \frac{b}{t} - \frac{\gamma}{Nst} =: \bar{c} \Big\},    (5)

where (s, t) is an extremizing pair in equation (2) and \bar{c} is
the "effective capacity" per source. The case for two source
types is shown in Figure 1.
Figure 1: Approximation A of the scaled acceptance region A(N, γ)/N, shown for
three values of t in (4).
To the extent that A(N, γ) can be approximated
by NA, it follows from (5) that a point on the boundary of A(N, γ)
can be taken to satisfy

    \sum_j N n_j \alpha_j(s,t) \le C + \frac{B}{t} - \frac{\gamma}{st} =: \bar{C},    (6)

where, as in (5), (s, t) is an extremizing pair in equation
(2), and \bar{C} is the "effective capacity" of the system at the
operating point (s, t).
The effective bandwidth α_j(s,t) provides a relative measure
of resource usage for a particular operating point of the
link, expressed through parameters s; t. For example, if a
source of type j1 has twice as much effective bandwidth as
a source of type j2 , then, for this particular operating point
of the link, one source of the first type can be substituted
by two sources of the second type, while still satisfying the
QoS constraint. The above measure of resource usage differs
from the ordinary measure that is usually reported (i.e.,
the mean rate), which corresponds to s → 0. Note that the
QoS guarantees are encoded in the effective bandwidth definition
through the value of fl, which appears on the right
hand side of (6) and influences the form of the acceptance
region.
The asymptotics behind the above results assume only
stationarity of sources. Illustrative examples discussed in
[9] and [16] include periodic sources, fractional Brownian
input, policed and shaped sources, and deterministic multiplexing
Unlike the above definition of the effective bandwidth
which takes into account the effects of statistical multi-
plexing, the effective bandwidth based on the large buffer
asymptotic depends solely on the source's characteristics
and the QoS constraint. Specifically [5, 10], consider the
QoS constraint P(overflow) ≤ e^{-δB}, where B is the total
buffer. Then, the effective bandwidth based on the large
buffer asymptotic of a source of type j, and the corresponding
constraint, is

    \tilde{\alpha}_j(\delta) = \lim_{t \to \infty} \frac{1}{\delta t} \log E\left[ e^{\delta X_j[0,t]} \right],
    \qquad \sum_j N n_j \tilde{\alpha}_j(\delta) \le C.    (7)

Observe that (7) is a special case of (1) for t → ∞. Indeed,
equation (7) becomes accurate when the buffer of the system
is large, in which case the time parameter t becomes
large. However, for finite buffer sizes, equation (7) can lead
to significant underutilization or even overutilization of link
capacity [2]. The experiments in Section 3 include results
which compare the performance of the large buffer asymptotic
with that of the many sources asymptotic.
Using the Bahadur-Rao theorem, the authors of [12] extended
the proof of the many sources asymptotic in [4] to
show that as N → ∞, the following holds:

    Q(Nc, Nb, Nn) \sim \frac{1}{s \sigma \sqrt{2\pi N}}\, e^{-NI},    (8)

where (s, t) is an extremizing pair of (2) and σ² is given by

    \sigma^2 = \sum_j n_j \frac{\partial^2}{\partial s^2} \log M_j(s),

where M_j(s) = E[ e^{s X_j[0,t]} ] is the moment generating function of X_j[0,t].
Based on (8), we have the following approximation:

    P(\text{overflow}) \approx e^{-NI - \frac{1}{2} \log(2\pi N \sigma^2 s^2)}.    (9)

We will refer to the above equation as the many sources
asymptotic approximation with the Bahadur-Rao improve-
ment. The term (1/2) log(2πNσ²s²) can be approximated by
(1/2) log(4πNI) [13]. Hence, equation (9) does not require any
additional computations compared to (3).
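In code, the Bahadur-Rao refinement amounts to subtracting the (1/2) log(4πNI) term from the exponent of (3); a minimal sketch with hypothetical values of N and I:

```python
import numpy as np

def log_overflow_prob(N, I, bahadur_rao=True):
    """log P(overflow) from the many sources asymptotic (3), optionally with
    the Bahadur-Rao improvement (9), using (1/2)*log(4*pi*N*I) as the correction."""
    logp = -N * I
    if bahadur_rao:
        logp -= 0.5 * np.log(4.0 * np.pi * N * I)
    return logp

N, I = 200, 0.08                                      # hypothetical values
print(log_overflow_prob(N, I, False) / np.log(10),    # base-10 logs, as in Figure 2
      log_overflow_prob(N, I, True) / np.log(10))
```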
It can be seen with some algebra that the many sources
asymptotic with the Bahadur-Rao improvement (equation (9))
gives the following effective bandwidth constraint in
the neighborhood of the extremizing pair (s, t) of (2):

    \sum_j N n_j \alpha_j(s,t) \le C + \frac{B}{t} - \frac{\gamma_{BR}}{st},    (10)

where γ_BR = γ − (1/2) log(4πγ). It is important to note
that the same formula for the effective bandwidth, given by
equation (1), is used in both (6) and (10). The Bahadur-
Rao improvement only affects (increases) the amount of effective
capacity \bar{C}, while the parameters s, t are
computed using the same formula (2).
2.2 Implications to Traffic Engineering
In this subsection we discuss the interpretation of the space
s and time t parameters, and how they can be used for
traffic engineering.
For any traffic stream, the effective bandwidth α_j(s,t)
in (1) is a template that must be filled with the system operating
point parameters s; t in order to provide the correct
measure of effective usage by the stream for that particular
operating point. Although the operating point also
depends on the individual stream, for a large system, due
to heavy multiplexing, this dependence is ignored. Such an
engineering approach simplifies considerably the analysis
because there is no circle in the definitions of the effective
bandwidth and the operating point. Furthermore, as we
will see in Section 3.3, the values of s; t are, to a large ex-
tent, insensitive to small variations of the traffic mix. It
has been observed that in networks serving a large number
of sources, the traffic mix exhibits a strong cyclic behav-
ior. Hence, we can assign particular pairs (s; t) to periods
of the day during which the traffic mix remains relatively
constant. The values of s; t corresponding to a particular
period of the day can be computed off-line using the supinf
formula (2) and the effective bandwidth formula (1), where
the expectation in (1) is replaced by the empirical mean;
the latter can be computed from traffic traces taken during
that period of the day.
Recall that the time parameter t corresponds to the
most probable duration of the busy period prior to buffer
overflow. As we will discuss now, this parameter also identifies
the time scales that are important for buffer over-
flow. Assume that a link is operating at a particular operating
point, expressed through parameters s; t. In the
effective bandwidth formula (1) the statistical properties of
the source appear in X j [0; t], which is the amount of work-load
produced by the source in an interval of length t. If two
sources have the same distribution of X j [0; t] for this value
of t, then they will both have the same effective bandwidth.
A case of practical interest where this result can be applied
is traffic smoothing: To have an effect on the amount of
resources used by a source, traffic smoothing must be performed
on a time scale larger than t, since only then does
it affect the distribution of X j [0; t]. We will investigate this
with real broadband traffic in Section 3.3.
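This time-scale argument is easy to check on data: smoothing a trace over a window shorter than t (here, one that divides t exactly) leaves the window sums X[0,t] unchanged, while smoothing over a window longer than t makes them less variable. A small Python sketch with a synthetic, hypothetical trace:

```python
import numpy as np

def window_sums(trace, w):
    """Workload in consecutive non-overlapping windows of w slots."""
    n = len(trace) // w
    return trace[:n * w].reshape(n, w).sum(axis=1)

def smooth(trace, w):
    """Spread the workload of each block of w slots evenly over the block."""
    n = len(trace) // w
    blocks = trace[:n * w].reshape(n, w)
    return np.repeat(blocks.mean(axis=1), w)

rng = np.random.default_rng(1)
trace = rng.pareto(2.5, size=200_000)     # hypothetical bursty per-slot arrivals
t_slots = 40                              # time parameter t, in slots

X = window_sums(trace, t_slots)
print("unsmoothed:        std of X[0,t] =", round(X.std(), 3))
for w in (8, 160):                        # one window below t (and dividing it), one above
    Xs = window_sums(smooth(trace, w), t_slots)
    # for w = 8 the sums over t = 40 slots are exactly unchanged; for w = 160 they shrink
    print(f"smoothing window {w:4d}: std of X[0,t] = {Xs.std():.3f}")
```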
The time parameter t also indicates the granularity that
traffic traces must have 2 , since given a value for t it would
be sufficient to choose the granularity to be a few times
smaller than this value. Traditionally, the granularity of
traces was chosen in a rather ad-hoc manner.
By setting γ = −log P(overflow) equal to the right hand
side of (2) and taking the derivative with respect to B and
C, we get the following expression for the space parameter
s and the product st [3]:

    s = \frac{\partial \gamma}{\partial B} \quad \text{and} \quad st = \frac{\partial \gamma}{\partial C}.    (11)

Thus, the space parameter s equals the rate at which the
logarithm of the overflow probability decreases with the
buffer size, for fixed capacity C, whereas the product st
equals the rate at which the logarithm of the overflow probability
decreases with the link capacity, for fixed buffer B.
Here, by trace granularity we mean the time window in which
we measure the amount of load produced by a source.
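Equation (11) also gives a purely numerical recipe: once γ = −log P(overflow) is available as a function of the buffer B and capacity C (for example from the sup-inf computation of (2), or from measurements), s and st can be read off as finite-difference derivatives. A sketch with a hypothetical closed-form γ(B, C) standing in for the real computation:

```python
def space_time_from_gamma(gamma, B, C, dB=1.0, dC=1.0):
    """Estimate s = d(gamma)/dB and s*t = d(gamma)/dC by central differences,
    where gamma(B, C) = -log P(overflow) for buffer B and capacity C."""
    s  = (gamma(B + dB, C) - gamma(B - dB, C)) / (2 * dB)
    st = (gamma(B, C + dC) - gamma(B, C - dC)) / (2 * dC)
    return s, st, st / s                  # the last entry is the time parameter t

# hypothetical smooth gamma(B, C), standing in for N*I computed via (2)
gamma = lambda B, C: 0.002 * B * C + 0.05 * C
s, st, t = space_time_from_gamma(gamma, B=2000.0, C=600.0)
print(f"s = {s:.4f}, s*t = {st:.4f}, t = {t:.2f}")
```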
3 Experiments with Real Traffic
In this section we apply and evaluate the traffic engineering
framework discussed in the preceding sections for real
broadband traffic. The specific issues we address are the
ffl Compare the overflow probability and link utilization
using the many sources asymptotic and its Bahadur-
Rao improvement to the actual cell loss probability
and the maximum utilization estimated using simula-
tion. (Section 3.1)
ffl Compare the values of parameters s; t computed by
theory to the values estimated using simulation. (Sec-
tion 3.2)
ffl Estimate and interpret typical values of parameters
s; t for real broadband traffic. (Section 3.2)
ffl Investigate how the values of s; t, and subsequently
the effective bandwidth, depend on the traffic mix.
(Section 3.3)
Our experiments involve MPEG-1 compressed video and
Internet WAN traces, as well as modeled voice traffic. The
sequences, made available 3 by O. Rose [15], have
been encoded using the UC Berkeley MPEG-1 software encoder
with the frame pattern IBBPBBPBBPBB. Each sequence
consisted of 40,000 frames (approximately
utes). For Internet Wide Area Network (WAN) traffic we
used the Bellcore Ethernet trace BC-Oct89Ext which has
been made available 4 by W. Leland and D. Wilson [11].
The trace had duration 122797.83 seconds. For voice traffic
we use an on-off Markov modulated fluid model with peak
rate 64 Kbps and average time spent in the "on" and "off "
states 352 msec and 650 msec, respectively. Finally, we consider
link capacities 34 Mbps, 155 Mbps, and 622 Mbps,
and buffers introducing delay of up to 50 msec for MPEG-1
traffic, and up to 150 msec for Internet WAN traffic.
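The voice model is simple enough to generate directly; the sketch below is one hypothetical way to produce the per-slot workload of a single on-off Markov fluid source with the stated peak rate and mean on/off times (the 1 ms slot length is an arbitrary choice for the illustration).

```python
import numpy as np

def onoff_voice_trace(n_slots, dt_ms=1.0, peak_kbps=64.0,
                      mean_on_ms=352.0, mean_off_ms=650.0, seed=0):
    """Per-slot workload (kbits) of one on-off Markov fluid voice source."""
    rng = np.random.default_rng(seed)
    load = np.zeros(n_slots)
    i, on = 0, False
    while i < n_slots:
        mean = mean_on_ms if on else mean_off_ms
        length = max(1, int(round(rng.exponential(mean) / dt_ms)))
        if on:
            load[i:i + length] = peak_kbps * dt_ms / 1000.0   # kbits per slot
        on = not on
        i += length
    return load

trace = onoff_voice_trace(600_000)
print("mean rate (kbps):", 1000.0 * trace.mean())
```

Its long-run mean rate should be close to 64 · 352/(352 + 650) ≈ 22.5 kbps.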
3.1 Overflow Probability and Link Utilization
In this section we compare the overflow probability and
link utilization using the many sources asymptotic and its
Bahadur-Rao improvement with the actual cell loss probability
and the maximum utilization estimated using simu-
lation. We also derive a simple heuristic for computing the
actual cell loss probability from the overflow probability.
3.1.1 Overflow probability
Figure
compares, for a fixed number of sources, the overflow
probability estimated using the many sources asymptotic
and its Bahadur-Rao improvement with the cell loss
probability and frame overflow probability estimated using
simulation; the latter is the probability that at least one
cell of a frame is lost. Both the cell loss probability and the
frame overflow probability are measured using a discrete
time simulation model with an epoch equal to one frame
time. In these and subsequent simulations,
we report the average from a total of 80 independent simulation
runs, each with a random selection of the starting
frame for every source (we assume that frame boundaries
are aligned). Each simulation run had duration 5 times the
size of the trace.
(Footnote 3: Available at http://ftp-info3.informatik.uni-wuerzburg.de/pub/
Footnote 4: Available at The Internet Traffic Archive.)
Both the number of runs and the duration
of each run were chosen empirically.
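The discrete-time simulation described here boils down to a fluid-buffer (Lindley-type) recursion applied once per epoch, counting both the lost work and the epochs in which any loss occurs; the following minimal Python sketch, driven by hypothetical per-epoch aggregate arrivals, shows the loop (it illustrates the mechanism, not the simulator used for the reported results).

```python
import numpy as np

def simulate_buffer(arrivals, capacity, buffer):
    """Discrete-time fluid buffer: one epoch per entry of `arrivals`.

    arrivals : total work offered in each epoch (summed over all sources)
    capacity : work served per epoch;  buffer : maximum queue content
    Returns (fraction of work lost, fraction of epochs with any loss)."""
    q, lost_work, loss_epochs = 0.0, 0.0, 0
    for a in arrivals:
        q = q + a - capacity
        if q > buffer:
            lost_work += q - buffer
            loss_epochs += 1
            q = buffer
        elif q < 0.0:
            q = 0.0
    return lost_work / arrivals.sum(), loss_epochs / len(arrivals)

# hypothetical aggregate of many bursty sources, one epoch per frame time
rng = np.random.default_rng(2)
arrivals = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)
cell_loss, frame_overflow = simulate_buffer(arrivals, capacity=24.0, buffer=50.0)
print(cell_loss, frame_overflow)
```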
For each method, the base-10 logarithm of the overflow
probability is plotted against the buffer size (measured in
milliseconds), while the link utilization remains the same.
In
Figure
2, first observe that for small buffer sizes there
is a relatively fast decrease of the overflow probability as
the buffer size increases. However, this ceases to occur beyond
some buffer size (e.g., 5-8 msec for a 155 Mbps link). Further
increase of the buffer has a smaller effect on the overflow
probability. Furthermore, the logarithm of the overflow
probability in both of these regimes is almost linear with
the buffer size.
Second, observe that although the many sources asymptotic
overestimates the Cell Loss Probability (CLP) by approximately
2-3 orders of magnitude, it qualitatively tracks
its shape very well. The Bahadur-Rao improvement overestimates
the CLP by 1-2 orders of magnitude. On the other
hand, the large buffer asymptotic, in addition to overestimating
the CLP by many orders of magnitude, also fails to
track its shape.
The actual cell loss probability differs from the overflow
probability estimated using the many sources asymptotic
and its Bahadur-Rao improvement because the latter is not
a measure of the CLP, but a measure of the probability that
in an infinite buffer the queue length becomes greater than
B. This probability is closer in spirit to the frame overflow
probability, i.e., the probability that at least one cell of
a frame is lost. Indeed, as Figure 2 shows, the overflow
probability estimated using the many sources asymptotic
with the Bahadur-Rao improvement is very close to the
frame overflow probability. 5
To further explain the above, we derive a simple expression
for the cell loss probability in terms of the frame
overflow probability L f . If one observes a large number of
frames, say M , the average number of frames in which we
have at least one lost cell is ML f . Let x be the average
number of cells that are lost when a frame overflow occurs.
Then the average number of cells that are lost in M frames
is ML f x. Since there is a total of MF cells in these frames,
where F is the average number of cells in a frame, we can
approximate the cell loss probability with the percentage of
lost cells, namely

    L_c \approx \frac{M L_f x}{M F} = L_f \frac{x}{F}.    (12)

From the last equation we see that the cell loss probability
differs from the frame overflow probability by a correction
factor x/F. We assume that when an overflow
occurs only a few cells are lost. This is expected for small
loss probabilities, since losing cells in a
buffer of size B + ε is exponentially harder than losing cells
in a buffer of size B. In particular, we will assume that only
one cell is lost, hence L_c ≈ L_f/F, and since the average number
of cells in one frame is 25 we get L_c ≈ L_f/25.
This heuristic agrees with the difference between the frame
overflow probability and the cell loss probability shown in
Figure
2.
Figure
3 shows the cell loss probability estimated using
(12), where x = 1. Observe that the cell loss probability
using this heuristic matches the cell loss probability
estimated using simulation extremely well.
5 This is the case only when the simulation epoch equals the frame
time.
3.1.2 Link utilization
Let ρ = Nm/C be the link utilization, where N is the
number of sources, m is the mean rate, and C is the link
capacity. Figure 4 compares, for a range of buffer sizes
and for overflow probability 10^{-6}, the link utilization using
the many sources asymptotic and its Bahadur-Rao improvement
with the maximum achievable utilization (estimated
using simulation). The utilization is computed by increasing
the number of multiplexed sources in order to find the
maximum number such that the overflow probability (3),
computed using (1) and (2), is less than the target overflow probability.
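That search can be written as a short loop: increase N while the estimate of (3) still meets the target, and stop at the first violation (the overflow probability grows with N, so the first failure is the maximum). The sketch below is hypothetical; the toy rate function merely stands in for the sup-inf computation of (2) applied to B/N and C/N.

```python
import numpy as np

def max_sources(I_of, C, B, target=1e-6, N_max=100_000):
    """Largest N such that the estimate exp(-N * I(B/N, C/N)) of (3) stays
    below `target`; I_of(b, c) is the per-source rate function from (2)."""
    gamma = -np.log(target)
    best = 0
    for N in range(1, N_max + 1):
        I = I_of(B / N, C / N)
        if I <= 0 or N * I < gamma:       # overflow prob. grows with N: stop here
            break
        best = N
    return best                            # utilization is then best * m / C

# hypothetical rate function, standing in for the sup-inf computation of (2)
I_toy = lambda b, c: 0.5 * (b + 2.0 * (c - 1.0)) if c > 1.0 else 0.0
print(max_sources(I_toy, C=155.0, B=80.0))
```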
Similar to our observations regarding the overflow prob-
ability, there are significant gains in increasing the size of
the buffer up to a certain value. Increasing the buffer size
above this value has a very small effect on the link utilization
Recall that the many sources asymptotic overestimated
the CLP by 2-3 orders of magnitude. However, as Table 1
shows, it performs much better in estimating the maximum
utilization. In particular, for
1 msec, the many sources asymptotic achieves a utilization
which is approximately 79% of the maximum utilization.
The Bahadur-Rao improvement increases this percentage
to 88%. Furthermore, this percentage increases for larger
link capacities; e.g., for
the many sources asymptotic achieves a utilization which
is 90% of the maximum (Table 1(b)). Of course, as Figure
5 shows, using the heuristic based on (12), we achieve a
utilization which almost coincides with the maximum utilization
[Table 1(a), (b): columns Buffer (msec, cells) and Utilization % under Simulation, MSA, and MSA with B-R; the data rows are not reproduced.]
Table 1: Link utilization: theory vs. simulation for
Star Wars traffic. The parentheses contain the percentage
of the maximum utilization (second column). MSA: Many
Sources Asymptotic, B-R: Bahadur-Rao. [P(overflow) ≤ 10^{-6}]
Finally, Figure 6 shows the link utilization in the case
of Internet WAN traffic. Observe that while for Star Wars
traffic the gains of increasing the buffer decrease abruptly,
for Internet WAN traffic the gains of increasing the buffer
decrease smoother as the buffer size increases. This indicates
that there are more time scales in Internet traffic
which, for different buffer sizes, become important for buffer
overflow.
3.2 Space and Time Parameters
The space s and time t parameters characterize a link's
operating point and depend on the characteristics of the
multiplexed traffic. In this section we compare the values
of these parameters computed using the supinf formula
(2) to the corresponding values estimated using simulation.
Furthermore, we compute and interpret typical values of
these parameters, demonstrating how they can be used for
traffic engineering.
3.2.1 Space and time parameters: theory vs. simulation
Recall from our discussion in Section 2.2 that the space
parameter s equals the rate at which the logarithm of the
overflow probability decreases with the buffer size, equation
(11). Motivated by this equation, we can estimate s using
the following ratio of differences:

    s \approx -\frac{\log L_c^{sim}(B + \Delta B) - \log L_c^{sim}(B)}{\Delta B},    (13)

where L_c^{sim} is the cell loss probability
estimated using simulation. Figure 7(a) shows that
the value of parameter s computed using the supinf formula
(2) is in good agreement with the value computed using
(13). Note that the "steps" in the value computed using
the supinf formula are expected since the many sources
asymptotic (and large deviations theory in general) captures
only the most likely way overflow can occur. On the
other hand, the curve labeled "simulation" in Figure 7(a)
represents an average over all the events that contribute to
overflow.
Recall from our discussion in Section 2.1 that the time
parameter t can be interpreted as the most probable duration
of the busy period prior to buffer overflow. Figure
7(b) compares the value of parameter t computed using
the supinf formula to the average value of the busy period
prior to buffer overflow. As was the case for parameter s,
the agreement between the two curves is good.
3.2.2 Typical values and interpretation of the space and
time parameters
Next we investigate how parameters s; t and the product
st depend on the link capacity and buffer size. The values
of s; t are computed using the supinf formula for a target
Figure
8 shows the values of parameter s as a function of
the buffer size, for various link capacities (Figure 8(a)) and
for various video contents (Figure 8(b)). In Figure 8(a), the
explanation of the steep decrease of the value of s is similar
to the explanation of the knee of the curves in Figures 2
and 4. Specifically, according to equation (11), s equals
the rate at which the logarithm of the cell loss probability
decreases (i.e., the quality of service improves), when the
buffer size increases. Up to some value, the buffer is used
to smooth the fast time scales of the multiplexed traffic.
Thus, increasing the buffer has a large effect on the overflow
probability, and the value of s is high. Once the fast
time scales have been smoothed, the slow time scales govern
the buffer overflow. Thus, increasing the buffer has a very
small effect on the overflow probability, and the value of s is
small. Also, observe in Figure 8(a) that the steep decrease
of the value of s occurs for smaller buffer sizes (measured
in milliseconds) as the link capacity increases. Finally, see
Figure
8(b), similar behavior is observed for MPEG-1 traffic
with various contents. This indicates that the dependence
of s on the link capacity and buffer size is related to the
frame structure.
The dependence of parameter t on the buffer size is
shown in Figure 9(a). Observe that the steep increase of its
value occurs for the same buffer size for which the increase
of the value of s occurs (Figure 8(a)). The small values of
t correspond to the regime where the fast time scales are
important for buffer overflow, whereas the large values of
t correspond to the regime where the slow time scales are
important for buffer overflow.
The product st, for different buffer sizes, is shown in
Figure
9(b). Once again we observe a steep increase of its
value, occurring at the same buffer sizes where the changes
in the values of s and t occur. However, the explanation
for the increase of the value of st is more subtle than the
explanation for the behavior of s; t. Recall from our discussion
in Section 2.2 that the product st equals the rate at
which the logarithm of the overflow probability decreases
with the link capacity, while the buffer size remains the
same; see equation (11). Comparing Figure 9(a) with Figure
9(b), we observe that the larger values of st correspond
to larger values of t. Larger values of t result in an averaging
effect in the amount of load X j [0; t] that appears in the
effective bandwidth formula (1). Hence, for the overflow
phenomenon, the traffic appears smoother. But for a link
which multiplexes smooth traffic and is operating with a
cell loss probability greater than zero, a change of the link
capacity has a greater effect on the overflow probability
compared to a link which multiplexes more bursty traffic.
This gives the intuition of why the value of st increases.
Figure
compares the values of s; t for Star Wars and
voice traffic. Figure 10(a) shows that as the buffer size in-
creases, the value of s for voice traffic decreases smoothly.
Furthermore, the rate of decrease is smaller for larger buffer
sizes. Comparing the value of s for MPEG-1 and voice traf-
fic, we conclude that for buffer sizes up to 2 msec and above
increasing the buffer has a larger effect for a net-work
carrying voice traffic compared to a network carrying
MPEG-1 traffic. This provides an example of how the space
parameter s can be used for buffer dimensioning.
Figure
10(b) shows that for voice traffic the time parameter
increases almost linearly with the buffer size. This
can be explained since for a high degree of multiplexing,
voice sources (which are modeled as on-off Markov fluids)
behave as Gaussian sources. For such sources, it can be
shown [4] that the time parameter t increases linearly with
the buffer size.
Figure
11(a) compares the value of parameter s for Star
Wars and Internet WAN traffic. For MPEG-1 traffic, the
values of s form two distinct regimes, where s is almost
constant, corresponding to the two distinct time scales that
are important for buffer overflow. On the other hand, for
Internet traffic, the values of s form more than two regimes,
indicating that for such traffic there are more time scales
which, for different buffer sizes, become important for buffer
overflow. Recall that this is also the reason behind the
smoother dependence of the link utilization on the buffer
size for Internet WAN traffic compared to Star Wars traffic
Figure
6). Finally, Figure 11(b) shows that parameter
s can take different values for different Internet traffic
segments, illustrating that different segments have different
statistical properties.
3.3 Effects of the Traffic Mix
As discussed in Section 2.1, periods of the day during which
the traffic mix remains relatively constant can be characterized
by corresponding pairs of (s; t). In this section we
investigate the dependence of these parameters, hence of
the effective bandwidth, on the traffic mix. The traffic mix
we consider consists of traffic of different types (MPEG-1
compressed video and voice), traffic with the same structure
but different contents (MPEG-1 compressed video with different
contents), and smoothed/unsmoothed traffic of the
same type and content.
3.3.1 Traffic mix containing Star Wars and voice traffic
We first consider the traffic mix containing Star Wars and
voice traffic. The horizontal axis in Figures 12(a) and 12(b)
depicts the percentage of voice connections, each containing
individual voice channels. The vertical axis of the above
figures depicts the effective bandwidth of the Star Wars
traffic stream. Observe that (1) the effective bandwidth, to
a large extent, changes slowly with the traffic mix, (2) the
dependence of the effective bandwidth on the traffic mix
is smaller for larger capacities and larger buffer sizes, and
(3) there are cases where increasing the percentage of voice
connections leads to a sharp decrease of the value of the
effective bandwidth.
The first observation supports the argument that pairs
of (s; t) can be assigned to periods of the day during which
the traffic mix remains relatively constant. However, the
third observation says that there are percentages of the
traffic mix where the effective bandwidth exhibits sharp
changes. If the link's operating point is close to such a
percentage, then we can estimate the average amount of
resources used by a source as a linear combination of the
effective bandwidth to the left and to the right of the sharp
change. The coefficients of the linear combination would be
determined by the percentage of the time the network was
operating on the left and on the right of the change.
The sharp decrease in the value of the effective band-width
identified above is due to the change of the time scale
that is important for buffer overflow. In particular, as indicated
above the curve for capacity 155 Mbps and buffer
size 4 msec in Figure 12(a), the time parameter t increases
(reaching, finally, 7 frames) for the same percentage
of voice connections at which the sharp decrease in
the value of the effective bandwidth occurs. The increase
of t produces an averaging effect (also discussed in Section
3.2.2) in the amount of workload X j [0; t] that appears
in the effective bandwidth formula (1); this averaging effect
results in a smaller effective bandwidth.
3.3.2 Traffic mix containing MPEG-1 traffic with different
contents
Up to now we investigated the case where the traffic mix
contains traffic of different nature. Next we investigate the
case where the traffic mix contains MPEG-1 video traffic
with the same encoding parameters but with different contents
Figures
13(a) and 13(b) show the effective bandwidth
of the Star Wars stream as a function of the percentage of
news and talk show streams, respectively. These figures
show that the content has a very small effect on the effective
bandwidth; this also implies that parameters s; t are
affected very little.
3.3.3 Traffic mix containing smoothed and unsmoothed
Star Wars traffic
Our final investigation deals with another important question
in traffic engineering: How does traffic smoothing affect
the network's operating point and the amount of resources
used by a source? We will see that parameter t shows the
time scale at which smoothing must be performed
in order for it to affect resource usage. Figure 14 shows
the effective bandwidth of the Star Wars stream for different
percentages of a traffic mix which consists of unsmoothed
and smoothed Star Wars traffic; the latter is
created by evenly spacing the cells belonging to two consecutive
frames. First, observe that the effects of the traffic
mix on the effective bandwidth decreases when the link capacity
and buffer size increases. Second, observe that there
are cases where increasing the buffer size has a very small
effect on the effective bandwidth, e.g., the curves for some of the
buffer sizes shown practically coincide.
Third, observe that for some buffer sizes, smoothing
has no effect on the effective bandwidth (and on the net-
work's operating point), e.g., some of the curves in Figure 14(a)
and in Figure 14(b) are flat. We explain this behavior next.
Figure
15 shows the effective bandwidth for both the
smoothed and the unsmoothed Star Wars stream. When
the percentage of smoothed traffic is small, the time parameter
smaller than the time interval
over which smoothing was performed (80 msec). For this
reason, the amount of workload X j [0; t] that appears in the
effective bandwidth formula (1) is smaller for the smoothed
stream than it is for the unsmoothed stream. Hence, the
effective bandwidth of the smoothed stream is smaller than
the effective bandwidth of the unsmoothed stream. For
some percentage of smoothed traffic (≈ 60%), the time parameter
t (= 80 msec) is no longer smaller than the time
interval over which smoothing is performed (80 msec). Because
of this, the amount of workload X j [0; t] is the same
for both the smoothed and the unsmoothed stream. Hence,
the effective bandwidth of both streams is the same.
Conclusions
In this paper we employ the recently developed theory of
effective bandwidths, and in particular the one based on the
many sources asymptotic, whereby the effective bandwidth
depends not only on the statistical characteristics of the
traffic stream, but also on a link's operating point. The
latter is summarized in two parameters: the space and time
parameters.
We have investigated the accuracy of the above frame-
work, and how it can provide important insight to the complex
phenomena that occur at a broadband link with a high
degree of multiplexing. In particular, we estimate and interpret
values of the space and time parameters for various
mixes of real traffic, demonstrating how these can be used to
clarify the effects on the link performance of the time scales
of burstiness of the traffic input, of the link parameters (ca-
pacity and buffer), and of traffic control mechanisms, such
as traffic smoothing.
Our approach is based on the off-line analysis of traffic
traces, the granularity of which can be determined by the
time parameter of the system. For the traffic mixes consid-
ered, the space and time parameters are, to a large extent,
insensitive to small variations of the traffic mix. This indicates
that particular pairs of these parameters can characterize
periods of the day during which the traffic mix
remains relatively constant. The above result has important
implications to charging, since simple charging schemes
which are linear in time and volume and have important incentive
properties can be created from tangents to bounds
of the effective bandwidth [3]. Furthermore, the above result
opens up some new possibilities for traffic modeling.
Rather than developing general models that try to emulate
real traffic in any operating environment, a new approach
would be to develop models that emulate real traffic according
to the particular operating point of the network. Such
models would be parameterized with the pair (s; t), would
be simple and efficient to implement, and can be the basis
for fast and flexible traffic generators.
The application of our approach for traffic engineering
and network dimensioning in a real multi-service network
environment that involves a large number of different source
types constitutes a promising area for further research. A
specific question is to investigate whether the number of
discontinuities of the time parameter, that we have identified
for a traffic mix containing two source types, increases
with the number of source types. A second area for further
work is to extend our investigations to links with priorities;
some results in this area are presented in [9].
Acknowledgements
The authors are particularly grateful to Frank P. Kelly for
his helpful discussions and insights, and thank the anonymous
reviewers for their constructive comments.
--R
Large deviations
On the effectiveness of effective bandwidths for admission control in ATM networks.
Buffer overflow asymptotics for a switch handling many traffic sources.
Effective bandwidth of general Markovian traffic sources and admission control of high speed networks.
Experimental queueing analysis with long-range dependent packet traf- fic
On the relevance of long-range dependence in network traffic
Notes on effective bandwidths.
Effective bandwidths for multiclass Markov fluids and other ATM sources.
On the self-similar nature of ethernet traffic
Cell loss asymptotics for buffers fed with a large number of independent stationary sources.
de Veciana.
Statistical properties of MPEG video traffic and their impact on traffic modeling in ATM systems.
The importance of the long-range dependence of VBR video traffic in ATM traffic engineering: Myths and realities
Large deviations approximations for fluid queues fed by a large number of on/off sources.
Smoothing, statistical multiplexing and call admission control for stored video.
--TR
Effective bandwidth of general Markovian traffic sources and admission control of high speed networks
On the self-similar nature of Ethernet traffic
Effective bandwidths for multiclass Markov fluids and other ATM sources
Analysis, modeling and generation of self-similar VBR video traffic
area traffic
Experimental queueing analysis with long-range dependent packet traffic
The importance of long-range dependence of VBR video traffic in ATM traffic engineering
On the relevance of long-range dependence in network traffic
--CTR
A. Courcoubetis , Antonis Dimakis , George D. Stamoulis, Traffic equivalence and substitution in a multiplexer with applications to dynamic available capacity estimation, IEEE/ACM Transactions on Networking (TON), v.10 n.2, April 2002
C. Courcoubetis , V. A. Siris , G. D. Stamoulis, Network control and usage-based charging: is charging for volume adequate?, Proceedings of the first international conference on Information and computation economies, p.77-82, October 25-28, 1998, Charleston, South Carolina, United States
Jun Jiang , Symeon Papavassiliou, Providing End-to-End Quality of Service with Optimal Least Weight Routing in Next-Generation Multiservice High-Speed Networks, Journal of Network and Systems Management, v.10 n.3, p.281-308, September 2002 | broadband networks;large deviations;effective bandwidths;traffic engineering;ATM |
277973 | Fast Multigrid Solution of the Advection Problem with Closed Characteristics. | The numerical solution of the advection-diffusion problem in the inviscid limit with closed characteristics is studied as a prelude to an efficient high Reynolds-number flow solver. It is demonstrated by a heuristic analysis and numerical calculations that using upstream discretization with downstream relaxation ordering in a multigrid cycle with appropriate residual weighting leads to an efficient solution process. Upstream finite-difference approximations to the advection operator are derived whose truncation terms approximate "physical" (Laplacian) viscosity, thus avoiding spurious solutions to the homogeneous problem when the artificial diffusivity dominates the physical viscosity [A. Brandt and I. Yavneh, J. Comput. Phys., 93 (1991), pp. 128--143]. | Introduction
Efficient multigrid algorithms for the numerical solution of partial differential
problems normally require good ellipticity measures on all scales of
the problem, which implies that nonsmooth solution components can be re-solved
by local processing [2]. But problems with small ellipticity measures
are marked either by indefiniteness or by anisotropies.
(Author affiliations: Technion-Israel Institute of Technology, Haifa, Israel;
University of Twente, Enschede, The Netherlands;
Weizmann Institute of Science, Rehovot, Israel.)
In the latter case,
there exist so-called characteristic directions of strong dependence. Some
nonsmooth components of the solution are then advected along these char-
acteristics, and hence they cannot be resolved locally [1]. A typical example
is steady flow at high Reynolds numbers (small viscosity).
When applied to such problems of small ellipticity the usual multigrid
algorithms often exhibit a severe degradation of performance compared to
that seen in elliptic problems. Indeed, most multigrid codes in use today
for solving steady flows at high Reynolds numbers, although yielding a great
improvement over previous single-grid solvers, fall far short of attaining the
so-called textbook multigrid efficiency for general (even smooth) flows. To regain
this efficiency the multigrid algorithm requires modifications that take
into account the anisotropic properties of the operator. For example, it
was shown in [4] and [9] that using upstream discretization and downstream
relaxation-ordering yields a fully efficient multigrid solver for flows whose
characteristics (streamlines) start at some part of the boundary and end at
another without recirculating (entering flows). To obtain efficient multigrid
solvers for flows with closed characteristics, however, different modifications
were proposed, such as defect-correction cycles and residual overweighting
[5]. The main drawbacks of the latter approaches are: (a) they are not likely
to generalize efficiently to orders of accuracy higher than one; (b) they require
W cycles, which may be substantially more expensive than simple V
cycles in parallel computation; (c) they suggest different treatment for different
types of flow, viz., recirculating versus entering flows. The upshot of
the present work is to obtain a unified approach for both types of flow by
employing upstream discretization and downstream relaxation ordering for
recirculating flows as well. In Section 2 we formulate the simple model problem
of advection-diffusion and present the First Differential Approximation
to its discretized form. In Section 3 we present the two-level cycle and use the
approximation of Section 2 in a heuristic analysis for a priori prediction of
the performance of this algorithm. In Section 4 new first-order upstream discretizations
for the advection operator are presented, whose first truncation
terms approximate isotropic diffusivity. These schemes are shown to eliminate
spurious solutions to the homogeneous (i.e., unforced) small-viscosity
advection-diffusion equation, such as those reported in [3]. Section 5 presents
numerical calculations testing the accuracy of the discretization and the efficiency
of the multigrid algorithm and how it compares to the predictions of
Section 3. Section 6 summarizes the main conclusions and further research
plans.
2 The Scalar Advection-Diffusion Equation
We study the scalar advection-diffusion equation with closed characteristics
as a prelude to the study of flow problems. This equation serves well as a
preliminary problem, since the advection part (i.e., momentum equations)
is responsible for the degraded performance observed in the solution of the
incompressible-flow equations by the usual multigrid algorithm [5]. Also,
as is shown for entering flows in [4], the solution-process of the advection
part of the system can be effectively decoupled from that of the elliptic
part that is due to the continuity equation. Hence, efficient solution of the
advection problem is a necessary stage in the development of a fully efficient
flow-equations solver, and [4] suggests that the resulting advection-problem
solver can indeed be used in designing the sought-after flow-solver.
The advection-diffusion equation in two dimensions is

    -\epsilon \Delta u + a u_x + b u_y = f \quad \text{in } \Omega, \qquad u = g \quad \text{on } \partial\Omega,    (1)

where ε is a positive constant and a, b, f, and g are given functions of x
and y. Equation (1) is discretized on a uniform grid of meshsize h, whose
gridlines lie parallel to the x and y coordinates. The characteristic direction
of the advection operator in (1) is given (locally) by

    (\cos\phi, \sin\phi) = \frac{(a, b)}{\sqrt{a^2 + b^2}},

where φ is the (local) angle of nonalignment between the x coordinate and
the characteristic direction. We will focus our attention on the particular
case where the characteristics defined by a and b form closed loops (as in
vortices), one of which may coincide with
∂Ω (as in internal flows).
Suppose that (1) is discretized by some stable finite-difference discretization
of first-order accuracy. The main aspects of the problem can be analyzed
by substituting for the discrete operator its First Differential Approximation
(FDA) [8] and also [1, 2, 9]. For the advection-diffusion equation with
positive but vanishingly small ε we need only consider the advection oper-
ator, since the tiny diffusion will be dominated by the artificial diffusivity
represented by the truncation terms (except at stagnation points). Let L h
denote a first-order accurate discrete approximation to the advection opera-
tor. Then, by a Taylor series expansion, we generally have

    L^h u^h = a u^h_x + b u^h_y - h \left( \tilde{T}_1 u^h_{xx} + \tilde{T}_2 u^h_{xy} + \tilde{T}_3 u^h_{yy} \right) + O(h^2),    (2)

where u^h denotes the discretized function, h being the meshsize of a uniform
grid. Here, \tilde{T}_1, \tilde{T}_2, \tilde{T}_3
are functions of x and y, the specific details of which are
determined by a and b and the discretization. The FDA is the approximation
of L h by the differential operator that remains in (2) after the O(h 2 ) terms
are neglected. (Hence, it applies only to sufficiently smooth u h , since the
neglected terms are higher derivatives).
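The content of (2) can be checked numerically in the simplest setting: in one dimension the first-order upstream difference of a smooth function agrees with u_x − (h/2) u_xx to O(h²), i.e., its leading truncation term acts as an artificial viscosity of size h/2. A small numpy check with an arbitrary test function (not code from the paper):

```python
import numpy as np

h = 1.0e-2
x = np.linspace(0.2, 0.8, 61)
u_x  = 3.0 * np.cos(3.0 * x)              # exact derivatives of u(x) = sin(3x)
u_xx = -9.0 * np.sin(3.0 * x)

upwind = (np.sin(3.0 * x) - np.sin(3.0 * (x - h))) / h   # backward (upstream) difference
fda    = u_x - 0.5 * h * u_xx                            # first differential approximation
print(np.max(np.abs(upwind - u_x)))   # O(h)   error vs. the advection term alone
print(np.max(np.abs(upwind - fda)))   # O(h^2) error vs. the FDA
```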
We now assume for simplicity of the discussion that the equation is normalized
such that a² + b² = 1, and introduce a (conformal) local coordinate
system, (ξ, η), where ξ denotes the local "streamwise" coordinate parallel to
the characteristic direction, while η denotes the "cross-stream" coordinate
that is perpendicular to the characteristic. Thus,

    \partial_\xi = a\, \partial_x + b\, \partial_y, \qquad \partial_\eta = -b\, \partial_x + a\, \partial_y,    (3)

and

    \partial_x = a\, \partial_\xi - b\, \partial_\eta, \qquad \partial_y = b\, \partial_\xi + a\, \partial_\eta.    (4)

The FDA of L^h in the local coordinate system is therefore

    \tilde{L}^h = \partial_\xi - h \left( T^h_1 \partial_{\eta\eta} + T^h_2 \partial_{\xi\eta} + T^h_3 \partial_{\xi\xi} \right),    (5)

where, by (2) and (4), the coefficients T^h_1, T^h_2, T^h_3 are determined by
\tilde{T}_1, \tilde{T}_2, \tilde{T}_3 and the local characteristic direction (a, b).
We assume a consistent and stable discretization, which requires that the
artificial viscosity operator represented by the first truncated term be elliptic,
implying that the O(h) part of the operator in (5) is of positive type. Under
special circumstances, such as consistent alignment of the characteristics with
the grid, T^h_1 may
vanish, and this property is marginally violated. In this
case the physical diffusion term becomes important, no matter how small ε
may be, and the analysis below does not apply. Accordingly, we will assume
below that T^h_1 is large compared to h, which is the usual case.
3 Two-Level Error-Reduction Analysis
We now analyze the error reduction attainable with a two-level cycle using
upstream discretization and downstream relaxation-ordering.
3.1 Two-level Cycle
The proposed two-level cycle for a given discrete problem L^h u^h = f^h is
defined as follows.
• Starting with some approximation to u^h, perform ν_1 (small integer)
relaxation sweeps.
• Calculate the residuals, r^h = f^h - L^h u^h, where u^h is the current approximation
to the solution, and transfer them to a twice-coarser grid
2h, multiplied by a globally-uniform weight W.
• Solve the coarse-grid problem, L^{2h} v^{2h} = W r^{2h}, for the correction.
Here, r^{2h} is the restriction of r^h to the coarse grid.
• Interpolate and add the correction v^{2h} to the fine-grid approximation.
• Perform ν_2 (small integer) fine-grid relaxation sweeps.
In studying the asymptotic performance of the two-level cycle, the number
of pre-relaxation sweeps ν_1 need not be distinguished from the number of
post-relaxation sweeps ν_2. (Recall that we associate asymptotic performance
with the spectral radius ρ of the iteration matrix, and that ρ(AB) = ρ(BA)
for any pair of square matrices A, B of the same dimension). We denote the
total number of sweeps by ν = ν_1 + ν_2.
In analyzing the two-level cycle we shall make many simplifying assump-
tions. The degree to which these assumptions are justified needs to be judged
by the degree to which numerical results match the predictions of the analyses
3.2 The Model Problem and Analysis
We analyze the two-level convergence for the discrete approximation to (1) in
the limit of vanishing ε for problems with closed characteristics by considering
the following model problem on grid h:

    L^h u^h = f^h,    (6)

where L^h is, as above, the discretization of the advection operator. For the
domain of solution and boundary conditions we require periodicity in ξ in
order to simulate closed characteristics, and we choose for simplicity of the
discussion ξ ∈ [0, 1]. The boundary conditions in the cross-stream direction
are not germane in the present context. For simplicity, we let
The main point of our approach is to use discretization that is purely
upstream, and to relax the equation in downstream ordering, starting at
ξ = 0. Downstream ordering means that we relax a variable only after relaxing all
other unknowns which participate in the equation which corresponds to this
variable (except, perforce, at ξ = 0). Thus, a full relaxation sweep results in
the elimination of all the residuals except at a narrow band (of O(h) width)
that stretches along the cross-stream direction at ξ = 0 (which coincides with ξ = 1
due to the periodicity). Neglecting the width of this band, we find that the
residual function, r^h, which remains after at least one full relaxation sweep
has been carried out, can be modeled by

    r^h(\xi, \eta) = R^h(\eta)\, \delta(\xi).    (7)
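The residual structure described above is easy to visualize in a one-dimensional caricature: with upstream (backward) differencing of ∂_ξ on a ξ-periodic grid, a single downstream Gauss-Seidel sweep leaves a nonzero residual only at the seam ξ = 0. The Python sketch below (with an arbitrary forcing) only illustrates this residual band; it is not the 2D model problem itself.

```python
import numpy as np

n = 64
h = 1.0 / n
xi = np.arange(n) * h
f = np.sin(2 * np.pi * xi) + 0.3          # arbitrary forcing
u = np.zeros(n)                           # initial guess

# upstream (backward) differencing of du/dxi on the xi-periodic grid:
#   (u[i] - u[i-1]) / h = f[i]
# one downstream Gauss-Seidel sweep, i = 1, ..., n-1 (i = 0 is the seam):
for i in range(1, n):
    u[i] = u[i - 1] + h * f[i]

residual = f - (u - np.roll(u, 1)) / h
print(np.count_nonzero(np.abs(residual) > 1e-12))   # 1: only the seam i = 0 remains
```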
The fine-grid error v h satisfies the residual equation. Wherever the residual
vanishes we now revert to FDA, obtaining

    \tilde{L}^h v^h = 0,    (8)

with \tilde{L}^h
defined in (5). We now add a further simplifying assumption that the T^h_i
in (5) are independent of η. Hence, we may expand (8) in a Fourier series
in η. For an error component \hat{v}^h_\omega(\xi) \exp(i\omega\eta) of frequency ω, Equations (5)
and (8) then yield
    \frac{d\hat{v}^h_\omega}{d\xi} + \omega^2 h T^h_1 \hat{v}^h_\omega
      - i\omega h T^h_2 \frac{d\hat{v}^h_\omega}{d\xi}
      - h T^h_3 \frac{d^2\hat{v}^h_\omega}{d\xi^2} = 0,    (9)

where all the O(h) terms but the first can be neglected in the homogeneous
equation, since they multiply derivatives and are therefore small compared
to d\hat{v}^h_\omega/d\xi. The solution to (9) in the interval (0, 1) is therefore given by

    \hat{v}^h_\omega(\xi) = A^h_\omega \exp\Big( -\omega^2 h \int_0^{\xi} T^h_1(s)\, ds \Big),    (10)

where A^h_\omega is the amplitude of \hat{v}^h_\omega at ξ = 0^+, which we shall determine shortly.
(Superscripts ± denote an infinitesimal positive (negative) incre-
ment). In particular, at ξ = 1^-,

    \hat{v}^h_\omega(1^-) = A^h_\omega D(h, \omega), \qquad D(h, \omega) = \exp\big( -\omega^2 h \bar{T}^h_1 \big),    (11)

where \bar{T}^h_1 is the average value of T^h_1 over the entire domain, under the assumption
that T^h_1 is η-independent (and recalling that the domain length
in the ξ direction is one).
It is important to note that D(h, ω) is approximately the factor by which
a single relaxation sweep amplifies (reduces) an error component that oscillates
at frequency ω in the η direction. This is due to the fact that, given
upstream differencing, the downstream relaxation ordering yields numerical
integration; and D(h, ω) is the factor by which this integration over the domain
reduces the error. Equation (11) implies that relaxation reduces error
components with large ω very efficiently, but components that are smooth
along η need to be corrected on the coarse grid.
At ξ = 0 the FDA is no longer useful. Instead, we have a jump
in \hat{v}^h_\omega that is proportional to the Fourier coefficient of R^h(η) corresponding to
frequency ω. We denote this jump by \delta^h_\omega. The periodicity in ξ now implies
by (10) and (11)

    A^h_\omega = \hat{v}^h_\omega(1^-) + \delta^h_\omega = A^h_\omega D(h,\omega) + \delta^h_\omega,
    \qquad \text{hence} \qquad A^h_\omega = \frac{\delta^h_\omega}{1 - D(h,\omega)}.    (12)
Now, following the two-level algorithm, we attempt to approximate the
(weighted) residual equation on the coarse grid 2h. We assume that the
same discretization stencil is used on the coarse grid as on the fine. (Note
this important assumption on which the entire method hinges). We also assume
that the restriction operator is such that the jump condition at ξ = 0
is approximated correctly on the coarse grid. In practice this holds provided
that a proper averaging is used, such as full-weighted residual transfers. Analogously
to (10), (11), and (12), respectively, we obtain

    \hat{v}^{2h}_\omega(\xi) = A^{2h}_\omega \exp\Big( -\omega^2\, 2h \int_0^{\xi} T^{2h}_1(s)\, ds \Big),    (13)

    \hat{v}^{2h}_\omega(1^-) = A^{2h}_\omega D(2h, \omega),    (14)

and

    A^{2h}_\omega = \frac{W \delta^{2h}_\omega}{1 - D(2h, \omega)},    (15)

where W is a constant weight to be chosen. Since the stencils of L^h and L^{2h}
are the same, we also assume \bar{T}^{2h}_1 = \bar{T}^h_1, hence

    D(2h, \omega) = D(h, \omega)^2.    (16)

Also, since we assume that the restriction operator transfers the jump condition
correctly, we have \delta^{2h}_\omega = \delta^h_\omega. Equations (12), (15), and (16) now yield

    \frac{A^{2h}_\omega}{A^h_\omega} = \frac{W \big(1 - D(h,\omega)\big)}{1 - D(h,\omega)^2} = \frac{W}{1 + D(h,\omega)}.    (17)
Neglecting again the effects of intergrid transfers and aliasing, we assume
that the remaining error after adding the coarse-grid correction is \hat{v}^h_\omega - \hat{v}^{2h}_\omega.
Hence, the fine-grid error is amplified by the factor (\hat{v}^h_\omega - \hat{v}^{2h}_\omega)/\hat{v}^h_\omega. In addition,
ν relaxation sweeps performed on the fine grid amplify the error by D(h, ω)^ν,
as noted above. The two-level error-amplification factor μ_tl is then given by
the absolute value of the product of these terms. Note, however, that the
determining values of \hat{v}^h_\omega and \hat{v}^{2h}_\omega in the coarse-grid correction term depend on
where one begins relaxing on the fine grid immediately following the coarse-grid
correction (since all other fine-grid values at the end of one or more
sweeps are determined solely by the values where relaxation begins, due to
the upstream differencing and downstream relaxation ordering). If the fine-
grid relaxation begins at ξ = 0^+, that is, at or shortly after the point where the
residual was nonzero, then we have

    \frac{\hat{v}^h_\omega - \hat{v}^{2h}_\omega}{\hat{v}^h_\omega}\Big|_{\xi=0^+}
      = 1 - \frac{A^{2h}_\omega}{A^h_\omega} = 1 - \frac{W}{1 + D(h,\omega)}.    (18)

However, if the fine-grid relaxation begins at ξ = 1^- (so that
there is some small overlap in the region being relaxed), then

    \frac{\hat{v}^h_\omega - \hat{v}^{2h}_\omega}{\hat{v}^h_\omega}\Big|_{\xi=1^-}
      = 1 - \frac{A^{2h}_\omega D(2h,\omega)}{A^h_\omega D(h,\omega)} = 1 - \frac{W D(h,\omega)}{1 + D(h,\omega)}.    (19)
3.3 Optimal Residual Weighting
By (11) we have 0 < D(h, ω) < 1, and in order to obtain an h-independent
analysis we assume now that D can take on any value in the interval (0, 1).
The optimal value of W is that which minimizes the supremum of μ_tl over
D ∈ (0, 1). For any fixed W, the supremum is evidently obtained either for
D → 1 or at an interior point D_m where ∂μ_tl/∂D = 0.
From (18) and (19) we can thus obtain D_m as a function of W and ν, from
which we can then calculate W_opt (the value of W which yields the fastest
convergence) for either of these cases. For ν = 1, W_opt can be computed
explicitly for (18), yielding the corresponding μ_tl.
For larger - the tedious calculations need to be carried out numerically. But
W opt tends to 2 for both cases rather quickly. This is expected, since this
value is the ratio of the Green's functions on the coarse and fine grids for
components that are very smooth in the cross-characteristic direction; other
components are reduced by relaxation. With we obtain for both (18)
and (19)
Equating the derivative of (20) with respect to D to zero, we get as the only
relevant root,
The two-level error-amplification factor is now obtained by substituting (21)
into (20). For sufficiently small ν^{-2} we may neglect this term, obtaining (22).
In fact, (22) gives an excellent approximation of the maximal two-level factor in (20)
for any ν, erring by less than 2% for ν >= 2. By curious coincidence, the
same asymptotic two-level error-amplification factor for large ν, (2νe)^{-1}, is
obtained for the Poisson equation on a rectangle using Gauss Seidel relaxation
in Red-Black order [7]! Thus, this analysis leads us to expect efficiency that
is comparable to that obtained for the Poisson problem.
Example 1. We apply our algorithm to the advection-diffusion problem
with closed characteristics used in [10] (originally in [6]): the domain is the unit
square (0, 1) × (0, 1), with u prescribed on the boundary ∂Ω, and the advection
coefficients define a recirculating flow.
For the advection term we use the same discretization as is used on the finest
grid in [6] and [10]: standard upstream (SU), defined by
where
We use this value only at the stagnation point, adding no viscosity elsewhere,
so as to maintain the upstream discretization. Since the "physical"
viscosity is dominated by the artificial viscosity elsewhere anyway, the difference
is small. (Alternatively, we could use a much smaller ε everywhere.)
Table 1: The number of cycles necessary to reduce the L2 residual norm by a
factor of 10^8 in Example 1 (columns: Levels, Grid, W = 1, W = 2, MGD9V; the
tabulated entries are not reproduced here). The problem is taken from [10], and
MGD9V is the automatic method of de Zeeuw which is used there.
As in [10], the initial guess for u in Ω is zero, and we cycle until the L2
norm of the residuals is reduced to at most 10^{-8} times its initial value. We
performed this test with 4, 5, 6, 7, and 8 levels, with the coarsest grid always
5 × 5, including boundary points (as in [10]). We used V(1,1) cycles
throughout, with the usual full-weighted residual transfers and bilinear in-
terpolation. (See Section 5 for details on implementation of the relaxation).
In Table 1 we compare our results with W = 1 and 2 to those reported for
MGD9V, the automatic method of de Zeeuw (available up to six levels), using
a so-called "sawtooth" cycle with one ILLU (Incomplete Line LU) relaxation
sweep per level. It must be stressed that the efficiency and robustness of this
method is convincingly demonstrated in [10], and all the results achieved on
ten other problems (including non-recirculating advection diffusion) were far
better than these.
Evidently, the present method performs very well, with efficiency comparable
to that of elliptic equations in this simple test problem. Clearly,
the downstream relaxation ordering itself is not sufficient for the recirculation
problem (nor is ILLU). Both MGD9V and the present method with no
residual overweighting show clear deterioration as the grid is refined, while
with proper overweighting the convergence rate remains excellent even for
very fine grids. We reiterate that our residual overweighting approach may
not apply to MGD9V, since there different stencils are used on the different
grids, requiring different overweighting.
3.4 Several Bands of Residuals
It may not always be easy to obtain a single band of residuals per vortex. This
happens when the relaxation is carried out in piecewise-downstream ordering,
as would be the case in a domain-decomposition setting for example. Our
analysis can be extended to the case where several bands of nonzero residuals
remain after relaxation. It is found that one then requires two coarse-grid
corrections, with optimal weights 1 and 2 approximately. This seems to
imply that a W cycle is required in this situation. Also, one must then use
upstream intergrid transfers, so as to avoid averaging over interfaces between
subdomains, which may actually cause divergence.
4 Discretization
Flows in which the streamlines do not start and end at boundaries, but
constitute closed curves, require special considerations in the discretization.
In such cases, even a very small viscosity plays an important role in determining
the main flow throughout the domain. The solution in the limit
of vanishing viscosity depends very strongly on how these coefficients tend
to zero. In effect, the advection terms determine the behavior of the solution
along streamlines, whereas the viscous terms determine its cross-stream
form. And since the boundary is often a streamline itself, the propagation
of information from the boundary into the domain is governed by the viscous
terms no matter how small they may be. This effect is discussed in
detail in [3, 9], where it is shown for both the advection-diffusion problem
and the incompressible Navier Stokes equations that solutions with schemes
in which the numerical viscosity is anisotropic (having different viscosity co-efficients
for the cross-stream and streamwise directions), such as standard
upstream-difference schemes, may be spurious.
In the most general case it can be shown that even isotropic viscosity is
not sufficient for convergence of the solution, and one must actually specify a
uniform viscosity. We do not know how to do this while retaining the purely-
upstream structure (but see remarks in Section 6). However, for the homogeneous
advection-diffusion problem there are several indications (though no
proof) that isotropy suffices. This is shown below and also in [3, 9], where
it is also shown (in a numerical example) to suffice for the incompressible
Navier Stokes equations. This is consistent with the fact that the vorticity
in the Navier Stokes equations satisfies a homogeneous advection-diffusion
equation.
To obtain a discretization scheme that exhibits the appropriate physical-
like behavior for vanishing viscosity we must thus either add sufficient explicit
isotropic viscosity that will dominate the artificial viscosity of the discrete
advection operator, or else derive a discretization of the advection operator
that satisfies the condition of isotropy in its lowest-order truncated terms.
Since we want our scheme to remain purely upstream, we follow the latter
approach.
Consider the standard upstream scheme of (23), and assume for simplicity
of discussion a >= b >= 0. From (2) we have by a Taylor expansion
Hence, in order to obtain isotropic artificial viscosity we may either add
some approximation of 0:5h(a \Gamma b)u xx , or else subtract some approximation
of 0:5h(a \Gamma b)u yy . In order to retain an upstream scheme we define this
additional viscosity at the point (i \Gamma
For general a and b we obtain in the first case the Isotropic-Viscosity
Upstream scheme IVU1, defined by
if ja i;j j ? jb i;j j, and
otherwise. In the second case we obtain scheme IVU2, defined by
if ja i;j j ? jb i;j j, and
otherwise. Here i1; j1 are defined as in (23), and similarly
The first truncated term in scheme IVU1 is thus \Gamma0:5 min(ja i;j j; jb i;j j)\Deltau,
while that of IVU2 is \Gamma0:5 max(ja i;j j; jb i;j j)\Deltau : Both schemes have isotropic
artificial viscosity, but that of IVU1 is smaller, and in fact it vanishes upon
alignment of the characteristic directions with the grid.
Both discretizations are stable in downstream-ordered Gauss-Seidel relax-
ation. The former is a nonnegative-weighted average of the standard first-order
and second-order upstream schemes, both of which are stable in this
relaxation. The latter produces an M-matrix. As expected, there were no
stability problems in any of our many numerical calculations.
5 Numerical Experiments
We first test numerically the discretizations derived in Section 4 on a model
problem for which the standard upstream scheme has been shown to yield
spurious solutions [3, 9]. Then, the asymptotic error reduction of two-level
and multilevel cycles are investigated for several problems.
5.1 Accuracy Test
The accuracy of the different discretizations is tested on the model problem:
a
with a and b given by
(These coefficients are the same as those of Example 2 below, and a picture
of the characteristics appears in Figure 1a). The domain of solution is the
unit square, centered at the origin, with a square of diagonal 0.5, whose
sides form a 45 degree angle with the axes, removed from its center. On the
outer boundary and on the inner boundary, Dirichlet values are prescribed. We
solve this problem with the three upstream schemes (SU, IVU1, IVU2) and
also with the (non-upstream) isotropic-viscosity scheme used in [3], denoted ISO,
in which an explicit isotropic viscosity is added. These solutions are compared to
that obtained with a standard second-order upstream scheme with physical
viscosity coefficient 0.001. The latter solution is obtained on a 257 × 257 grid.
Grid        SU      IVU1    IVU2    ISO
129 × 129   0.0750  0.0057  0.0163  0.0098
Table 2: L1 difference norms between the solutions obtained with several schemes
at different resolutions, and a high-accuracy solution obtained on grid 257 × 257
(see text for details); the rows for the coarser grids are not reproduced here. The
three isotropic-viscosity schemes are seen to yield convergent solutions, but not
the standard upstream (SU) scheme.
In Table 2 we present L1 norms of the differences between the test solutions
at various resolutions, and the second-order accurate solution (restricted to
the corresponding grid by injection). Since the solution is smooth, and the
physical viscosity dominates the second-order truncation terms at this
high resolution, the latter solution is assumed to be very accurate.
Evidently, the three schemes with isotropic artificial viscosity produce
convergent solutions but the SU scheme does not, despite the fact that its
average viscosity is smaller than that of IVU2 and ISO. IVU1, which has
the least artificial viscosity of the four schemes tested, produces the smallest
error.
5.2 Efficiency Tests
The remainder of our numerical calculations are aimed at testing the performance
of the algorithm in various configurations and comparing to the
analytical predictions of Section 3. We test the SU scheme (as this is the
most widely used first-order scheme) and the IVU1 scheme (which is more
accurate than IVU2 and also employs just a four-point stencil). In all these
tests we use first-order upstream residual restriction and bilinear interpolation
of the corrections. The restriction is performed as follows: for all
even i and j on the fine grid we define i1, j1 as in (23), and restrict to the
corresponding coarse-grid right-hand side at coarse-grid point (i=2; j=2) the
average of the fine-grid residuals at points (i; j), (i1; j), (i; j1), and (i1; j1).
This restriction gives slightly better results than standard full weighting in
multi-vortex problems, since residuals are less likely to be transferred from
one vortex to another.
The finest grid in all the tests is 129 by 129, and six levels are employed
except in the two-level tests. We include no "physical" viscosity except at
stagnation points, where it is required for well-posedness. We
calculate convergence factors as follows: the boundary conditions and right-hand
sides are chosen to be zero; (the choice is immaterial for linear problems,
but this allows us to normalize the solution by a constant factor every few
cycles in order to avoid roundoff errors). The initial solution field is pseudo-
random, and 100 cycles are performed. We calculate the convergence factor
as the (geometric) average error-convergence per cycle over the last 80 cycles.
The averaging is used because in some cases the convergence history is not
smooth, and the value corresponding to any particular cycle may not carry
much meaning. However, it should be noted that in all cases the convergence
factors in the vicinity of the optimal W were not sensitive to the exact choice
of W .
Example 2. The first test for efficiency is the problem used above to test
the discretization, but without the inner "island" (so as to have pure recirculation
everywhere). The characteristics of this problem, which form a single
clockwise-rotating vortex, are plotted in Figure 1a. A relaxation sweep is implemented
by sweeping four times over the domain, and in each such sweep
relaxing roughly one quarter of the variables as follows: in the first sweep
only variables at locations where both a(x; y) and b(x; y) are nonnegative are
relaxed (designated first quadrant); in the second sweep only variables corresponding
to locations where a(x; y) is nonnegative and b(x; y) is nonpositive
are relaxed (second quadrant); in the third sweep only variables at locations
where both a(x; y) and b(x; y) are nonpositive are relaxed (third quadrant);
in the fourth sweep only variables corresponding to locations where a(x; y)
is nonpositive and b(x; y) is nonnegative are relaxed (fourth quadrant). Fi-
nally, all stagnation points (in this case just one) are relaxed last. This entire
process comprises a single clockwise sweep. Of course, each quarter-sweep is
performed in downstream order i.e., x and y increasing in the first quarter-
increasing and y decreasing in the second, etc. It is efficient to store
the order of relaxation of the entire sweep during a setup-sweep (which costs
very little), so that from then on each full clockwise sweep costs nearly the
same as an ordinary lexicographic Gauss-Seidel sweep.
Note that we specify nonnegative and nonpositive in the above descrip-
tion. That is, it should be ensured that the boundaries of the quadrants are
included in the quadrant. This was important in some of the tests.
Such a clockwise sweep indeed eliminates all the residuals except along
a narrow band that extends from the center of the vortex to the boundary.
But if the vortex rotates counterclockwise, several such bands would remain.
Hence, in Examples 3-5 below, where both clockwise and counterclockwise
vortices exist (as would be the general case), we perform an analogous counterclockwise
sweep following each clockwise one. The quadrant where this
counterclockwise sweep starts has some bearing on performance. In our experiments
we use exactly the reverse order. This allows us to save some work
by performing the counterclockwise sweep over just three quadrants, since
the fourth quadrant (a(x; y) nonpositive and b(x; y) nonnegative) has just
been relaxed in the clockwise sweep. Thus, we begin with the third quadrant
(a and b nonpositive), then the second, and finally the first quadrant. (Note,
by the way, the difference from the usual symmetric Gauss-Seidel: here the
sweeps are all performed in downstream ordering within each quadrant. But
the quadrants are scanned symmetrically).
In Problems 3-5 there was also some sensitivity to which
quadrant one chooses to start the relaxation sweep. That is, a somewhat
different convergence rate and optimal overweighting were obtained if one
performed the clockwise sweep as above, than if one relaxed, say, the second
quadrant first, then the third, then the fourth, and finally the first. Hence,
in these tests we shifted the relaxation starting-point by one quadrant after
every cycle so as to obtain results that represent some average or "typical"
case.
We first performed two-level tests in order to compare the numerical
results with the analysis. We used the theoretically optimal W,
calculated from (19). This is the relevant value for this example, since there
is a one-line overlap between the quadrants. The results are given in Table 3.
Evidently, the analysis captures the main features of this problem very well,
despite the numerous simplifications, as seen especially in the SU results.
Calculations were also carried out with V(1,0), V(1,1), and V(2,1) cycles,
with W = 1, W = 2, and the optimal value of W, which was determined experimentally.
The numerical results are summarized in Table 4. We find that the experimental
V-cycle results also match the two-level predictions fairly well in the
Table 3: Comparison of error-amplification factors obtained by analytical
prediction and by two-level numerical calculations of Example 2 for W_opt
(the tabulated entries are not reproduced here). The optimal values of W were
obtained from (19).
vicinity of the optimal residual-weighting factors. But the optimal W 's are
somewhat higher than predicted, although they do show the expected dependence
on the number of relaxation sweeps performed. In other problems,
reported below, the optimal value varied, but it was always fairly close to 2.
The overall performance when the optimal W is used is very satisfactory.
As noted above, this performance is not sensitive to moderate changes in W
(see also below).
Example 3. This example features flow with four vortices rather than just
one. Here, a and b are given by
and the characteristics are plotted in Figure 1b. The domain, as in all the
examples except Example 5, is the unit square centered at the origin. The
numerical results using V(1,1) and V(2,1) cycles appear in Tables 5 and 6,
respectively. The convergence performance remains excellent, even though
some nonvanishing residuals remain after the relaxation at parts of the borderlines
between vortices (where the flow leads away from the borderline).
Recall, however, that here and below each full relaxation sweep consists of
one clockwise sweep followed by three quarters of a counterclockwise sweep
(see description of implementation above), in order to allow for the opposite-
sign vortices.
Cycle    W    SU     IVU1
V(1,0)   1    0.795  0.898
V(1,0)   2    0.280  0.440
V(1,1)   1    0.676  0.831
V(1,1)   2    0.143  0.302
V(2,1)   1    0.536  0.724
V(2,1)   2    0.069  0.133
Table 4: Error-amplification factors obtained in numerical calculations of
Example 2 (rows for the experimentally optimal W are not reproduced here).
The optimal values of W were obtained experimentally (see text).
Example 4. In order to test the effect of grid-alignment of the borderlines
between vortices we solve a problem in which this borderline is not aligned
with the grid. In this problem a and b are given by a 1
respectively, where
a
with
This represents a superposition of two opposite-sign vortices. The characteristics
are depicted in Figure 1c, and the numerical performance is shown in Tables 5
and 6. Here we see some loss of efficiency, but the performance is
still satisfactory and far better than is usually exhibited in such problems.
Example 5. Here we test a mixed problem, where the flow enters and
leaves through the boundary, but there also exists a large recirculation zone.
This is obtained by redoing Example 2, but in an extended domain:
Table 5: Error-amplification factors obtained with V(1,1) cycles for W = 2 and
W_opt (columns: Example, SU, IVU1; the tabulated entries are not reproduced
here). The latter were found experimentally.
Table 6: Error-amplification factors obtained with V(2,1) cycles for W = 2 and
W_opt (columns: Example, SU, IVU1; the tabulated entries are not reproduced
here). The latter were found experimentally.
The mesh is still uniform, and the finest grid is now 193 by 129. The characteristics
are shown in Figure 1d, and the numerical performance is given in Tables 5
and 6. As in the other examples, there was virtually no sensitivity to
the order in which the relaxation sweeps were performed (that is, clockwise
first and then counterclockwise or vice versa).
Summary
Often, one may not wish to search for optimal residual weighting
factors for every problem. Instead, one can simply use the nominal value
W = 2. In the "realistic" Examples 3-5, one saves at most 25% of the time
spent in relaxation by using the optimal value rather than 2, and usually
much less. This would also be the case in Example 1 if we were to employ
the "symmetric" relaxation. Even with this nominal value the convergence
rates are comparable to those of elliptic problems.
6 Conclusions and Further Research
An experimental approach that is hoped to eventually lead to a fully efficient
solver for general high-Reynolds flows has been introduced, analyzed and
tested on the advection-diffusion problem in the inviscid limit. The numerical
tests mostly match the predictions very well, indicating that the main cause
for slow convergence of the usual multigrid algorithms for recirculating flows
has indeed been understood, and a way to eliminate it has been found.
The multigrid V-cycle, using downstream relaxation and upstream dis-
cretization, was shown to yield an efficient solver for the tested problem
in several simple situations of closed characteristics and in a mixed en-
tering/recirculation problem. The tests were performed with the classical
standard first-order upstream discretization scheme and also with a novel
first-order upstream discretization, that was shown to preclude the spurious
solutions reported in [3].
The present approach is cheaper to implement than that developed in [5],
and can straightforwardly be applied to mixed entering/recirculating flows.
More important, there is potential of success with high-order discretization,
for which the approach of [5] yields an inadequate compromise. However,
the results obtained are still preliminary. The effect of the intergrid transfers
on the small band of residuals and its consequences in terms of error-
reduction efficiency should be investigated over a wide variety of cases, along
with a study of how to deal with (or avoid) situations where there remain
several bands of nonvanishing residuals per vortex. Then, further research
should be directed towards higher-order discretization. For this case too,
an effectively-upstream discretization needs to be developed, whose truncation
error represents isotropic artificial diffusivity. One approach is to use a
predictor-corrector type discretization, employing an upstream scheme as a
(local) driver and a possibly higher-order (not necessarily upstream) scheme
as a (local) corrector. Finally, the present approach has only been tested on
the advection problem. Experiments with the incompressible Navier Stokes
equations for flows with closed streamlines need to be performed, employing
distributive Gauss-Seidel relaxation, as shown in [4]. These will no doubt
raise further questions.
It is anticipated that the techniques investigated here will carry over to
three dimensions, although the implementation will be considerably more
complicated. This is supported by the fact that simple experiments performed
with the overweighting methods of [5] in three dimensions exhibited
the expected performance.
An obvious drawback of the entire approach is that it is inherently sequen-
tial, and efficient parallel implementations are hard to envisage. Some parallelization
might be achievable by performing a downstream line Gauss Seidel
relaxation. Also conceivable is a domain decomposition approach which
leaves several lines of residuals per vortex.
Another drawback of the present approach is that it is not directly applicable
for flows with significant additional viscosity, since this entails using
discretizations that are not purely upstream. Methods that deal with such
flows as well are presently being investigated.
Acknowledgment
This work was supported by The Royal Netherlands Academy of Arts and
Sciences and The Feinberg Graduate School, by the United States-Israel
Binational Science Foundation under grant no. 94-00250, and by The United
States Air Force Grant F49620-92-J-0439 and by the Carl F. Gauss Minerva
Center for Scientific Computation.
--R
"Multigrid Solvers for Non-Elliptic and Singular- Perturbation Steady State Problems,"
"1984 Multigrid Guide with Applications to Fluid Dynam- ics,"
Inadequacy of First-Order Upwind Difference Schemes for Some Recirculating Flows
On Multigrid Solution of High-Reynolds Incompressible Entering Flows
Accelerated Multigrid Convergence and High- Reynolds Recirculating Flows
Efficient Solution of Finite Difference and Finite Element Equations by Algebraic Multigrid
fundamental algo- rithms
On the Correctness of First Differential Approximation of Difference Schemes
Multigrid Techniques for Incompressible Flows
--TR | recirculating flow;upstream discretization;advection-diffusion;multigrid |
278097 | An Integral Invariance Principle for Differential Inclusions with Applications in Adaptive Control. | The Byrnes--Martin integral invariance principle for ordinary differential equations is extended to differential inclusions on {Bbb R}N. The extended result is applied in demonstrating the existence of adaptive stabilizers and servomechanisms for a variety of nonlinear system classes. | Introduction
. Suppose that ẋ = f(x) defines a semidynamical system on R^N with semiflow φ
and so, for each x0 ∈ R^N, φ(·, x0) is the unique maximal forwards-time solution of
the initial-value problem ẋ = f(x), x(0) = x0. In [2], Byrnes and Martin prove the
following integral invariance principle: if φ(·, x0) is bounded and ∫₀^∞ l(φ(s, x0)) ds < ∞
for some continuous function l : R^N → R+ := [0, ∞), then φ(t, x0) tends, as t → ∞,
to the largest invariant (with respect to the differential equation) set in l^{-1}(0),
the zero level set of l. This result has ramifications in adaptive
control, some of which are highlighted in the present paper. However, we wish to
consider the (adaptive) control problem in a fairly general setting that allows time-variation
in the underlying differential equations, possible non-uniqueness of solutions,
and discontinuous feedback strategies: each of these features places the problem outside
the scope of [2]. For this reason we develop, in Theorem 2.10, an integral invariance
principle for initial-value problems of the form -
the set-valued map X is defined on some open domain G ae R N and is assumed to
be upper semicontinuous with non-empty, convex and compact values. In the case
Theorem 2.10 contains the following generalization of the Byrnes-Martin
result: if x(\Delta) : R+ ! R N is a bounded solution and R 1l(x(s))ds ! 1 for some
lower semicontinuous l : R N ! R+ , then x(t) tends, as t !1, to the largest weakly-
invariant (with respect to the differential inclusion) set in l \Gamma1 (0). One particular
consequence of Theorem 2.10 is to facilitate the derivation of a nonsmooth extension,
to differential inclusions, of LaSalle's invariance principle for differential equations:
this extension may be of independent interest and is presented in Theorem 2.11. The
remainder of the paper is devoted to the application (in a collection of five lemmas) of
the generalized integral invariance principle to demonstrate, by construction and for
a variety of nonlinear system classes, the existence of a single adaptive controller that
achieves (without system identification, parameter estimation or injection of probing
signals) some prescribed objective for every system in the underlying class.
2. Differential inclusions. Some known facts (tailored 1 to our immediate pur-
pose) pertaining to differential inclusions are first assembled.
Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, United Kingdom.
1 Variants of Propositions 2.2, 2.4 and 2.8 can be found in, for example, [8], [19]: for general
treatments of differential inclusions and related topics in set-valued analysis, nonsmooth control and
optimization see [1], [4], [6], [7], [8] and [12].
2.1. Maximal solutions. Consider the non-autonomous initial-value problem

    ẋ(t) ∈ X(t, x(t)),   x(t0) = x0 ∈ G,                                   (1)

where G is an open subset of R^N. The set-valued map (t, x) ↦ X(t, x) ⊆ R^N
in (1) is assumed to be upper semicontinuous^2 on R × G, with non-empty, convex
and compact values. This is sufficient (see, for example, [1, Chapter 2, Theorem
3]) to ensure that, for each (t0, x0) ∈ R × G, (1) admits a solution: an X-arc^3
x ∈ AC(I; G), defined on some interval I = [t0, ω) with ω ∈ (t0, ∞], satisfying x(t0) = x0.
proper right extension which is also a solution of (1).
Proposition 2.2. Every solution of (1) can be extended to a maximal solution.
Definition 2.3. A solution x of (1) is precompact if it is
maximal and the closure cl(x([t 0 ; !))) of its trajectory is a compact subset of G.
Proposition 2.4. If x ∈ AC([t0, ω); G) is a precompact solution of (1), then ω = ∞.
2.2. Limit sets. Here, we specialize to the autonomous case of (1), rewritten as

    ẋ(t) ∈ X(x(t)),   x(0) = x0 ∈ G,                                       (2)

where, without loss of generality, t0 = 0 is assumed. The map x ↦ X(x) ⊆ R^N (with
domain G) is upper semicontinuous with non-empty, convex and compact values.
Definition 2.5. Let x ∈ AC([0, ω); G) be a maximal solution of (2). A point
x̄ ∈ R^N is an ω-limit point of x if there exists an increasing sequence (t_n) ⊂ [0, ω)
such that t_n → ω and x(t_n) → x̄ as n → ∞. The set Ω(x) of all ω-limit points of x
is the ω-limit set of x.
Definition 2.6. Let C ⊂ R^N be non-empty. A function x ∈ AC([0, ω); G) is
said to approach C if d_C(x(t)) → 0 as t → ω, where d_C is the (Euclidean) distance
function for C defined (on R^N) by d_C(v) := inf{‖v − c‖ : c ∈ C}.
Definition 2.7. Relative to (2), S ⊂ R^N is said to be a weakly-invariant set if,
for each x0 ∈ S, there exists at least one maximal solution x ∈ AC([0, ω); G) of
(2) with x(0) = x0 and x(t) ∈ S for all t ∈ [0, ω).
Proposition 2.8. If x is a precompact solution of (2), then its ω-limit set Ω(x)
is a non-empty, compact, connected subset of G. Moreover, Ω(x) is the smallest
closed set approached by x and is weakly invariant.
2.3. Invariance principles. For later use, the following fact (a specialization
of a more general result [4, Theorem 3.1.7]) is first recorded.
Proposition 2.9. Let I = [a, b] and let non-empty K ⊂ G be compact. If
(x_n) ⊂ AC(I; K) is a sequence of X-arcs and there exists a scalar Φ such that, for all n,
‖ẋ_n(t)‖ ≤ Φ for almost all t ∈ I, then (x_n) has a subsequence converging uniformly
to an X-arc x ∈ AC(I; K).
We now arrive at the main result, which generalizes [2, Theorem 1.2].
2 The set-valued map X is upper semicontinuous if it is upper semicontinuous at every point -
- of
its domain in the sense that, for each " ? 0 there exists
with
denotes the open unit ball centred at 0 in R N .
3 For an interval I ae R and S ae R N , AC(I ; S) denotes the space of functions I ! S that
are absolutely continuous on compact subintervals of I. For simplicity, we write AC(I) in place of
the same notational convention applies to other function spaces. A function x 2 AC(I; G)
is said to be an X-arc if it satisfies the differential inclusion in (1) almost everywhere.
Theorem 2.10. Let l : G → R be lower semicontinuous. Suppose that U ⊆ G is
non-empty and that l(z) ≥ 0 for all z ∈ U.
If x is a precompact solution of (2) with trajectory in U and l ∘ x ∈ L¹(R+), then
x approaches the largest weakly-invariant set in Σ := {z ∈ cl(U) ∩ G : l(z) = 0}.
Proof. By Proposition 2.4, x has maximal interval of existence
by Proposition 2.8, has non-empty !-limit
G. Let
and so there exists
x as n !1. Define
upper semicontinuity of X together with compactness of its
values, X(K) is compact and so -
xk1 . Write I = [0; 1]
and define a sequence Evidently,
x as n !1. By Proposition 2.9 (with
subsequence
- which we do not relabel - converging uniformly to an X-arc x 2 AC(I; K), with
x
and the sequence (- n
-n lower semicontinuity of l, together with continuity of x, x
and (uniform) convergence of (x n ) to x , it follows that - and -n , n 2 N, are lower
semicontinuous with lim inf n!1 -n
Z t- (s)ds -
Z tlim inf
-n ds - lim inf
Z t-n ds 8 t 2 I:
By the hypotheses,
defines a monotone function W 2 AC(R+ ) with W (t) # 0 as t !1. Hence,
Z t+tn
l(x(s))ds
Z t-n (s)ds -
Z t- (s)ds 8 t 2 I:
Seeking a contradiction, suppose ffl := - (0) ? 0. Then, by lower semicontinuity of
- , there exists t 2 (0; 1] such that -
the contradiction
Therefore, is arbitrary, we
have \Omega\Gamma x) ae \Sigma. By Proposition 2.8, x
and the latter is a weakly-
invariant set. Therefore, x approaches the largest weakly-invariant set in \Sigma.
The next result is a nonsmooth extension, to differential inclusions, of LaSalle's
theorem [11, Chapter 2, Theorem 6.4]: a smooth version (that is, restricted to smooth
functions V ) is given in [19, Theorem 1] and a nonsmooth version is proved in [23,
Theorem 3]; the alternative proof given below is considerably simpler, by virtue of
its use of the integral invariance principle. First, we give Clarke's [4] definition of a
generalized directional derivative V°(z; φ) of a locally Lipschitz function V : G → R
at z in direction φ:

    V°(z; φ) := lim sup_{y → z, t ↓ 0} [V(y + t φ) − V(y)] / t.

The map (z, φ) ↦ V°(z; φ) is upper semicontinuous (in the sense of real-valued
functions) and, for each z, the map φ ↦ V°(z; φ) is continuous.
Theorem 2.11. Let V : G → R be locally Lipschitz. Define
u(z) := max{V°(z; φ) : φ ∈ X(z)}. Suppose that U ⊆ G is non-empty and that
u(z) ≤ 0 for all z ∈ U.
If x is a precompact solution of (2) with trajectory in U, then, for some constant c,
x approaches the largest weakly-invariant set in {z ∈ cl(U) ∩ G : u(z) = 0} ∩ V^{-1}(c).
Proof. This result is essentially a corollary to Theorem 2.10 insofar as the essence
of the proof is to show that the hypotheses of Theorem 2.10 hold with l := \Gammau.
We first show that u is upper semicontinuous (and so l j \Gammau is lower semicon-
tinuous). Let z 2 G be arbitrary and let (z n ) ae G be such that z n ! z as n ! 1.
From (u(z n )) we extract a subsequence (u(z nk )) with u(z nk
1. For each k, let OE k be a maximizer of continuous V
X(znk ), and so u(z nk upper semicontinuity of X, we
have X(znk sufficiently large. Since OE k 2 X(znk ) and X(z) is
compact, contains a subsequence converging to OE
is arbitrary and X(z) is compact, OE 2 X(z). Thus, invoking upper semicontinuity of
\Delta), we may conclude that lim sup n!1 u(z n
semicontinuity of u.
Observe that, for all z 2 U ,
By Proposition 2.4, x has interval of existence R+ . Let O ae R+ denote the set of
measure zero on which the derivative -
x(t) fails to exist. Since V is locally Lipschitz,
for each t 2 R+nO there exists a constant L t such that, for all h ? 0 sufficiently
small,
x(t)j:
Therefore,
lim inf
Next, we prove that V is nonincreasing on R+ . This we do by
showing that V ffi x is nonincreasing on every compact subinterval. Let [ff; fi] ae R+ ,
and let K ae G be compact and such that x([ff; fi]) ae K. Since V is locally Lipschitz
on G, it is Lipschitz on K. Thus, the restriction of V ffi x to [ff; fi] is a composition of
a Lipschitz function and an absolutely continuous function and so is itself absolutely
continuous. It now follows from (3) and (4) that
u(x(s))ds
Therefore, t 7! V (x(t)) is nonincreasing on [ff; fi]. Since [ff; fi] ae R+ is arbitrary, we
conclude that V ffi x is nonincreasing on R+ with
u(x(s))ds
By continuity of V and precompactness of x, we conclude that V (x(\Delta)) is bounded.
Therefore, It follows that
An application of Theorem 2.10 completes the proof.
3. Adaptive control. Approaches to adaptive control may be classified into
methods that - either implicitly or explicitly - exhibit some aspect of identification
of the process to be controlled, and methods that seek only to control. The latter
approach, to be adopted here and sometimes referred to as universal control, has its
origins in the work of Byrnes and Willems [3, 27], Mårtensson [13, 14, 15], Morse [16],
Nussbaum [17] and others (see [9] for a survey and comprehensive bibliography). In
common with its above-cited precursors, this section of the present paper is concerned
with demonstrating the existence - under relatively weak assumptions - of a single
controller that achieves some prescribed objective for every system in the underlying
class. In contrast with its above-cited precursors (which deal mainly with classes of
linear systems, possibly subject to "mild" nonlinear perturbations, in a context of
adaptive linear feedback), the present paper considers strongly nonlinear systems and
nonsmooth feedback (for an overview of adaptive control of nonlinear systems in the
context of smooth feedback, see [18]). In essence, the ensuing two subsections provide
a unified analysis - unified through its use of the integral invariance principle - of
various problems in nonlinear adaptive control (some closely-related problems have
been individually investigated, via alternative analyses, in [5], [10], [20] and [21]).
3.1. Scalar systems. First, consider scalar systems of the form

    ẏ(t) = f(p(t), y(t)) + b u(t),                                          (6)

where parameters b ∈ R, P ∈ N and functions f, p are unknown. The state y(t) is
available for feedback. We will identify (6) with the quadruple (b, f, p, P).
For any function φ : R+ → R+ that is both continuous and positive definite
we denote, by N_φ, the set of system quadruples (b, f, p, P)
satisfying the following three assumptions.
Assumption A. b ≠ 0.
Assumption B. (p, y) ↦ f(p, y), R^P × R → R, is a continuous function and is
φ-bounded uniformly with respect to p in compact sets: precisely, for every compact
K ⊂ R^P, there exists a scalar κ_K such that |f(p, y)| ≤ κ_K φ(|y|) for all (p, y) ∈ K × R.
Assumption C. p(·) ∈ L^∞(R; R^P).
By virtue of Assumption C, without loss of generality t may be assumed in
this we will do, without further comment, throughout this subsection.
Examples 3.1.
(a) Let φ : |y| ↦ exp(|y|). Then all polynomial systems, of arbitrary degree, of
the form ẏ(t) = Σ_{i=1}^{P} p_i(t) y(t)^{i-1} + b u(t), with essentially bounded coefficient
functions p_1, ..., p_P, are of class N_φ.
(b) Suppose that Assumptions A, C hold and that the only a priori information on
continuous f is its behaviour "at infinity", captured in the following manner: for some
known continuous -
OE(jyj)), as jyj !1, uniformly
with respect to p in compact sets in the sense that, for every compact K ae R P , there
exist scalars c K and CK such that, for all
OE(jyj) for all jyj ? CK .
Assumption B holds with OE
OE, and so (b; f;
3.1.1. Adaptive stabilizer. Let φ : R+ → R+ be a continuous, positive-definite
function. Assuming only that the function φ and the instantaneous state y(t)
are available for control purposes, we will show that the following adaptive feedback
strategy (appropriately interpreted) is a N_φ-universal stabilizer in the sense that it
assures that the state of (6) approaches {0} for all quadruples (b, f, p, P) ∈ N_φ, whilst
maintaining boundedness of the controller function λ(·):

    u(t) = ν(λ(t)) φ(|y(t)|) sgn(y(t)),   λ̇(t) = φ(|y(t)|) |y(t)|,   λ(0) = λ0,       (7)

where ν is any continuous function R → R with the properties:

    (a) lim sup_{λ→∞} (1/λ) ∫₀^λ ν(s) ds = +∞,   (b) lim inf_{λ→∞} (1/λ) ∫₀^λ ν(s) ds = −∞.   (8)

For example, ν : λ ↦ λ² cos λ suffices.
In view of the discontinuous nature of the feedback (however, note that, if φ(0) =
0, then the feedback is continuous), we interpret the strategy (7) in the set-valued
sense

    u(t) ∈ ν(λ(t)) φ(|y(t)|) σ(y(t)),   λ̇(t) = φ(|y(t)|) |y(t)|,                       (9)

with y ↦ σ(y) ⊆ R given by

    σ(y) := {sgn(y)} for y ≠ 0,   σ(0) := [−1, 1].                                     (10)
Let (b, f, p, P) ∈ N_φ. By the properties of f(·, ·) and boundedness of p(·), there exists a
scalar κ such that |f(p(t), y)| ≤ κ φ(|y|) for all (t, y).
We embed the feedback-controlled system in a differential inclusion on R²,

    ẋ(t) ∈ X(x(t)),   x(0) = x0,                                            (11)

where x = (y, λ) ↦ X(x) ⊆ R² is given by

    X(y, λ) := { (v + b ν(λ) φ(|y|) w, φ(|y|) |y|) : |v| ≤ κ φ(|y|), w ∈ σ(y) }.

X is upper semicontinuous on R² with non-empty, convex and compact values. There-
fore, for each x0 ∈ R², the initial-value problem (11) has a solution and, by Proposition
2.2, every solution can be extended to a maximal solution.
Lemma 3.2. Let x0 ∈ R² be arbitrary and let x(·) = (y(·), λ(·)) be a maximal
solution of (11), defined on its maximal interval of existence [0, ω). Then (i) ω = ∞;
(ii) lim_{t→∞} λ(t) exists and is finite; (iii) y(t) → 0 as t → ∞.
Proof. The essence of the proof is to establish boundedness of x(·): whence, by
Proposition 2.4, assertion (i) and, by monotonicity, assertion (ii); state convergence
to zero (assertion (iii)) is then an immediate consequence of Theorem 2.10.
For almost all t ∈ [0, ω), we have

    (d/dt)(1/2) y²(t) = y(t) ẏ(t) ≤ κ φ(|y(t)|)|y(t)| + b ν(λ(t)) φ(|y(t)|)|y(t)| = (κ + b ν(λ(t))) λ̇(t),   (12)

which, on integration, yields

    (1/2) y²(t) − (1/2) y²(τ) ≤ κ (λ(t) − λ(τ)) + b ∫_{λ(τ)}^{λ(t)} ν(s) ds,             (13)

valid for all t, τ ∈ [0, ω) with t ≥ τ.
Seeking a contradiction, suppose that the solution component λ(·) (monotone increasing)
is unbounded. Fix τ such that λ(τ) ≥ 1. Dividing by λ(t) ≥ 1 in (13) gives

    0 ≤ (1/2) y²(τ)/λ(t) + κ + b (1/λ(t)) ∫_{λ(τ)}^{λ(t)} ν(s) ds.

Recalling that b ≠ 0 and taking the limit inferior (or superior) as t → ω leads to a contradiction
of one or the other of properties (8). Hence, λ(·) is bounded and so, by
(13), y(·) is also bounded. Therefore, x(·) = (y(·), λ(·)) is a precompact
solution of (11) and so, by Proposition 2.4, ω = ∞. It follows,
by boundedness of λ(·), that l ∘ x ∈ L¹(R+), where l : (y, λ) ↦ φ(|y|)|y|, and so,
by Theorem 2.10 (with U = R²), x(·) approaches the set {0} × R. In particular, y(t) → 0 as t → ∞ and,
by boundedness and monotonicity of λ(·), lim_{t→∞} λ(t) exists and is finite.
3.1.2. Adaptive servomechanism. We now turn attention to the servomechanism
problem for scalar systems (6): that is, the construction of controls that cause
the state to track, asymptotically, reference signals r(·) of some given class in the
sense that y(t) − r(t) → 0 as t → ∞. For the class of reference signals (previously
adopted in [20], [10], [21]) we take the (Sobolev) space R = W^{1,∞}(R) of functions
r ∈ L^∞(R) with essentially bounded derivative ṙ ∈ L^∞(R), equipped with
the norm ‖r‖_{1,∞} := ‖r‖_∞ + ‖ṙ‖_∞, where ‖·‖_∞ denotes the norm on L^∞(R).
We impose a stronger assumption on the function f by requiring that Assumption B
should hold for some known, continuous, positive-definite, nondecreasing function
φ with the additional property that, for each R ≥ 0, there exists a scalar ρ_R such that

    φ(s + R) ≤ ρ_R φ(s)   for all s ≥ 0.                                    (14)

Note that, by positive definiteness of φ together with property (14), φ(0) > 0.
Example 3.3. Let φ : |y| ↦ exp(|y|), which has the property (14), and so all
polynomial systems (of arbitrary degree and with coefficients in L^∞(R)), as cited
earlier in Example 3.1(a), remain admissible.
Let φ be a continuous, positive-definite, nondecreasing function with property
(14). We claim that, in order to assure that the tracking error e(t) := y(t) − r(t)
approaches {0} for all reference signals r ∈ R and all quadruples (b, f, p, P) ∈ N_φ, it
suffices to replace every occurrence of y(t) in (9) by e(t). Proof of this claim follows.
Let (b; f;
define the continuous function
~
s:
Let ~
K ae R ~
P be compact and so there exist compact K ae R P and R ? 0 such that
~
K ae K \Theta [\GammaR; R] 2 . By properties of f and OE, there exist constants -K and ae R such
that, for all (p;
with ~
f is OE-bounded uniformly with respect to ~
in
compact sets.
Let r 2 R be arbitrary. Then ~
(b; ~
. Expressed in terms of the tracking error the
system dynamics have the form
We are now in precisely the same context, modulo notation, as in the case of an
adaptive stabilizer and so, replacing all occurrences of y(t) in (9) by e(t) to yield

    u(t) ∈ ν(λ(t)) φ(|e(t)|) σ(e(t)),   λ̇(t) = φ(|e(t)|) |e(t)|,            (15)

the same argument (as used to establish Lemma 3.2) applies mutatis mutandis
to conclude that (15) is an (R, N_φ)-universal servomechanism: for each r(·) ∈ R
and (b, f, p, P) ∈ N_φ, every solution (e(·), λ(·)) of the controlled system has maximal
interval of existence R+ with e(t) → 0 as t → ∞, and lim_{t→∞} λ(t) exists and is finite.
3.1.3. Practical stabilization and tracking by continuous feedback. The
adaptive strategies outlined above are (generically) of a discontinuous feedback nature.
From a viewpoint of practical utility, this feature might be regarded as unpleasant.
Here, we investigate the possibility of adopting smooth approximations to the discontinuous
feedbacks. Of course, in so doing, one would expect to pay a price. It will be
shown that, if the objective of attractivity of the zero state (in the stabilization case)
or asymptotic tracking (in the case of a servomechanism) is weakened to requiring
global attractivity of any (arbitrarily small) prescribed neighbourhood of zero or, for
the servomechanism problem, tracking to within any prescribed (arbitrarily small but
non-zero) error margin, then such approximations are feasible. We will present this
result only in the context of the stabilization problem (imposing the additional property
(14), the corresponding result for the servomechanism problem is readily inferred
by analogy with Section 3.1.2). The naïve idea, as developed in [10] and [21], is to
inhibit the adaption whenever the state lies within the prescribed neighbourhood of
zero.
Let ε > 0 be arbitrary and let d_ε denote the distance function for the set [−ε, ε]:
thus, d_ε(x) := max{|x| − ε, 0}; d_ε is continuous and positive definite with respect
to [−ε, ε]. Let sat_ε : R → R be any continuous function (arbitrarily smooth) such
that (i) |sat_ε(x)| ≤ 1 for all x and (ii) sat_ε(x) = sgn(x) for all |x| ≥ ε. We will show
that the following strategy assures that the state of (6) approaches the interval
[−ε, ε] for all quadruples (b, f, p, P) ∈ N_φ:

    u(t) = ν(λ(t)) φ(|y(t)|) sat_ε(y(t)),   λ̇(t) = φ(|y(t)|) d_ε(y(t)),

where, as before, ν is any continuous function with properties (8).
Let (b, f, p, P) ∈ N_φ, and so there exists a constant κ such that |f(p(t), y)| ≤ κ φ(|y|)
for all (t, y). Define the set-valued map y ↦ σ_ε(y) by

    σ_ε(y) := {sgn(y)} for |y| > ε,   σ_ε(y) := [−1, 1] for |y| ≤ ε.
Evidently, sat ffl is a continuous selection from oe ffl . We now embed the smooth-feedback-
controlled system in the following differential inclusion on R
where x 7! X ffl (x) ae R 2 is given by
ffl is upper semicontinuous on R 2 with non-empty, convex and compact values. There-
fore, for each x 0 2 R 2 , the initial-value problem (16) has a solution and every solution
has a maximal extension.
Lemma 3.4. Let x0 ∈ R² be arbitrary and let x(·) = (y(·), λ(·)) be a maximal
solution of (16), defined on its maximal interval of existence [0, ω). Then (i) ω = ∞;
(ii) lim_{t→∞} λ(t) exists and is finite; (iii) d_ε(y(t)) → 0 as t → ∞.
Proof. For almost all t 2 [0; !),
which, on integration, yields
Z -(t)
valid for all t; - 2 [0; !), with t - . By precisely the same contradiction argument as
employed previously in the case of the discontinuous stabilizer, we may deduce that
precompact solution of (16) and so
by boundedness of -(\Delta), that l ffi x 2 L 1 (R+ )). Therefore, by Theorem 2.10 (with
approaches the set l 0g. In
particular, d ffl by monotonicity of bounded -(\Delta), lim t!1 -(t)
exists and is finite.
3.1.4. Dynamically perturbed scalar systems. Let OE : R+ ! R+ be continuous
and positive definite. Let \Sigma We wish to consider the
situation wherein \Sigma 1 is subject to perturbations generated through its interconnection
with a dynamical system \Sigma 2 .
Figure
System \Sigma 2 is assumed to correspond to a differential equation (driven by the state of
the scalar system \Sigma 1 ) on R N of the form
with input y(t), and scalar output w(t) perturbing \Sigma 1 . Notationally, we identify the
system \Sigma 2 with the triple (g; h; N ). The overall system has representation (on R\ThetaR N )!
We will define, via Assumption D below, a class P/ of admissible systems \Sigma
such that the N OE -universal stabilizer of Section 3.1.1 is readily modified to
yield (N OE ; P/ )-universal stabilizer. Before stating Assumption D, we cite Sontag's
concept of input-to-state stability [24, 25] (see also [26]) in the context of (18) with g
assumed to be locally Lipschitz and with y(\Delta) regarded as an independent input of class
loc (R+ ; R). System (18) is input-to-state stable (ISS) if there exist a continuous,
strictly increasing function and a continuous function
and having the properties that, for
each t - 0, fi(\Delta; t) is strictly increasing and, for each s - 0, fi(s; t) # 0 as t !1, such
that, for every i 0 2 R N and every y(\Delta) 2 L 1
loc (R+ ; R), the (unique) maximal solution
i(\Delta) of the initial-value problem (18) satisfies ki(t)k - fi(ki 0 k;
denotes the truncation of y at t, that is, y t
t. If (18) is ISS, then it is forward complete and has the convergent-
input, convergent-state property: for each
loc (R+ ; R), the unique
solution i(\Delta) of the initial-value problem has maximal interval of existence R+ and, if
For continuous by P/ the set of system triples \Sigma
satisfying the following:
Assumption D. (i) g : R \Theta R N ! R N is locally Lipschitz; (ii) system (18) is
input-to-state stable (ISS); (iii) h : R N ! R is continuous; (iv) there exist a function
loc (R+ ; R)),
the (unique) solution i(\Delta) of (18) satisfies
Z t/(jy(s)j)jy(s)jds 8 t - 0:
While Assumption D is rather restrictive, it is not difficult to identify non-trivial
classes of systems for which the assumption holds. Three such examples follow, the
first of which is easily seen, the second and third can be verified by arguments invoking
[22, Theorem 2].
Examples 3.5.
(a) Let / : jyj 7! jyj and suppose (g; h; N ) defines a linear system with
If A has spectrum, spec(A), in the open left half complex plane
(b) More generally, let
locally Lipschitz and h : R N ! R is continuous. In addition, assume g and h are
positively homogeneous of degree k. If f0g is an asymptotically stable equilibrium of
the unforced system -
. For example, with
systems (with with unknown real parameters a i ) of
the form -
are of class P/ ,
provided that a 1 ! 0.
(c)
. Assume
R \Theta R N ! R N is locally Lipschitz and that h : R N ! R is continuous and positively
homogeneous of degree k h . If, in addition, g is positively homogeneous of degree k g ,
and f0g is an asymptotically stable equilibrium of the
unforced system -
. For example, systems (with
unknown parameters ae, a i 2 R) of the form -
linear
output are of class P/ , provided that a 1 ! 0 and
Let φ : R+ → R+ be continuous and positive definite. Let ψ : R+ → R+
be continuous. We will show that, for (N_φ, P_ψ)-universal stabilization, it suffices to
replace both occurrences of φ in (9) by φ + ψ, to yield

    u(t) ∈ ν(λ(t)) (φ + ψ)(|y(t)|) σ(y(t)),   λ̇(t) = (φ + ψ)(|y(t)|) |y(t)|.           (20)

Let (b, f, p, P) ∈ N_φ and (g, h, N) ∈ P_ψ. Then there exists a constant κ such that
|f(p(t), y)| ≤ κ φ(|y|) for all (t, y). The feedback-controlled system (19)-(20) can be
embedded in a differential inclusion on R^{N+2}:
R N+2 is given by
\Theta fg(y; i)g \Theta f(OE
X is upper semicontinuous on R N+2 with non-empty, convex and compact values.
Therefore, for each x 0 2 R N+2 , the initial-value problem (21) has a solution and
every solution can be maximally extended.
Lemma 3.6. Let x0 ∈ R^{N+2} be arbitrary and let x(·) = (y(·), η(·), λ(·)) be a
maximal solution of (21), defined on its maximal interval of existence [0, ω). Then
(i) ω = ∞; (ii) lim_{t→∞} λ(t) exists and is finite; (iii) (y(t), η(t)) → (0, 0) as t → ∞.
Proof. For almost all t 2 [0; !), we have
which, on integration, yields
h(i(s))y(s)ds
Z -(t)
valid for all t; - 2 [0; !), with t - . Invoking Assumption D, we have
Z -(t)
-:
By the same contradiction argument as that employed in the proof of Lemma 3.2, we
may deduce that -(\Delta) is bounded. Boundedness of y(\Delta) then follows from (22). That
i(\Delta) is bounded is a consequence of the ISS property of \Sigma Therefore,
precompact solution of (21) and so
may conclude, by boundedness of -(\Delta), that l ffi x 2 L 1 (R+ ). Therefore, by Theorem
2.10 (with approaches the set f(y; In particular,
so, by the convergent-input, convergent-state property of the
ISS system \Sigma also conclude that i(t) ! 0 as t !1. Finally, by
boundedness and monotonicity of -(\Delta), lim t!1 -(t) exists and is finite.
Example 3.7. Linear, minimum-phase systems of relative degree one. This
class has played a central rôle in the development of universal adaptive control. In
appropriate coordinates, such systems have state space representations of the form
In the single-input, single-output case, we identify (23) and (19) with
By the relative-degree-one assumption, b ≠ 0 and, by the minimum-phase
assumption, the system matrix of the η-subsystem has spectrum in the open left
half complex plane (so that (18) is ISS). Taking φ = ψ : |y| ↦ (1/2)|y|, we see that
every single-input, single-output, linear, minimum-phase system of relative degree
one falls within the admissible class (the scalar subsystem is of class N_φ and the
perturbing subsystem of class P_ψ), and the control (20) reduces to the ubiquitous
Byrnes-Willems strategy:

    u(t) = ν(λ(t)) y(t),   λ̇(t) = y²(t).
Remarks 3.8. In considering the case of dynamically perturbed scalar systems,
we treated only the problem of adaptive stabilization. The adaptive servomechanism
of Section 3.1.2 can also be modified to incorporate dynamically perturbed sys-
tems, when the dynamic perturbations are generated by linear systems -
described in Example 3.5(a) above. For such per-
turbations, the (modified) servomechanism assures convergence to zero of the tracking
error, convergence to a finite limit of the adapting parameter, and boundedness of the
evolution t 7! i(t) of the perturbing system. We omit full details here.
3.2. Planar systems. In all applications of the integral invariance principle in
Section 3.1 above, the conclusion that x(t) tends, as t !1, to the zero level set l \Gamma1 (0)
proved sufficient for our purposes: the additional property that x(\Delta) approaches the
largest weakly-invariant subset of l \Gamma1 (0) was redundant. Here, we treat a class of
systems for which the latter property can be fruitfully exploited. We consider planar
systems (with scalar control u) described by a second-order differential equation:

    ÿ(t) = d(t) ẏ(t) + f(p(t), y(t)) + b u(t),                              (24)

where the parameters b ∈ R, P ∈ N and functions d, f, p are unknown. The variable
but not its derivative -
is available for feedback. We identify (24) with the
quintuple (b, d, f, p, P). For δ > 0 and positive-definite, nondecreasing φ : R+ → R+,
we denote, by N_{δ,φ}, the set of system quintuples (b, d, f, p, P)
for which Assumption A (that is, b ≠ 0) holds, together with the following three
assumptions.
Assumption E. d(·) ∈ L^∞(R) with d(t) ≤ −δ for almost all t.
Assumption F. (p, y) ↦ f(p, y), R^P × R → R, is continuous and is continuously
differentiable in its first argument. Both f and D₁f (:= ∂f/∂p) are φ-bounded
uniformly with respect to p in compact sets: precisely, for every compact K ⊂ R^P, there
exists a scalar κ_K such that |f(p, y)| + ‖D₁f(p, y)‖ ≤ κ_K φ(|y|) for all (p, y) ∈ K × R.
Assumption G. p(·) ∈ W^{1,∞}(R; R^P).
By virtue of Assumptions E and G, without loss of generality t may be
assumed in (24): this we will do, without further comment, throughout.
In Section 3.2.1 below, we will show that (b; d; f;
known continuous positive-definite nondecreasing function OE, is sufficient
a priori information for adaptive stabilizability of (24) by feedback of the variable
y(t) alone: in essence, Assumption E compensates for the inaccessibility of the velocity
variable -
y(t) by requiring that the system should exhibit dissipitive dependence
(loosely quantifiable by the known constant ffi ) on that variable.
Example 3.9. As motivation for (24), consider a single-degree-of-freedom mechanical
system with position, but not velocity, available for feedback and with some
constant (but unknown) natural damping d quantified by a known parameter δ in the
sense that d < −δ < 0. If we suppose that Assumption F holds with, for example,
φ : |y| ↦ 1 + |y|³, the following particular realizations are admissible.
Nonlinear pendulum with disturbed support (disturbance p(\Delta) 2 W 1;1 (R)):-
Duffing equation with extraneous disturbance (p(\Delta) 2 W 1;1 (R)):-
In the absence of control, such systems are potentially "chaotic". \Sigma
3.2.1. Adaptive stabilizer. Let R+ be a continuous,
positive-definite, nondecreasing function. Assuming only that ffi , OE and the instantaneous
state y(t) are available for control purposes, our goal is to demonstrate the
existence of an adaptive feedback strategy that provides N ffi;OE -universal stabilization
in the sense that it assures that the state of (24) approaches f0g for all quintuples
(b; d; f; whilst maintaining boundedness of the controller function -(\Delta).
Define the continuous function
its indefinite
fl. We claim that the following (formal)
strategy is a N ffi;OE -universal stabilizer:!
where - is any continuous function with properties (8).
Let (b; d; f; . Introducing the coordinate transformation
may express (24) in the form? ?
The feedback (25) is interpreted in the set-valued sense:!
with the map y 7! oe(y) defined as before in (10).
Writing -(t)), the overall adaptive feedback controlled system
may be embedded in the following differential inclusion on R 3
X is upper semicontinuous on R\ThetaR 3 and takes non-empty, convex and compact values
in R 3 . Therefore, for each x 0 2 R 3 , the initial-value problem (28) has a solution and
every solution can be extended into a maximal solution.
Lemma 3.10. Let x0 ∈ R³ be arbitrary and let x(·) = (y(·), z(·), λ(·)) be a
maximal solution of (28) defined on its maximal interval of existence [0, ω). Then
(i) ω = ∞; (ii) lim_{t→∞} λ(t) exists and is finite; (iii) (y(t), z(t)) → (0, 0) as t → ∞.
Proof. On R, define the locally Lipschitz function \Phi : r 7! \Gamma(jrj), with directional
derivative at r in direction s given by
fl(jrj)sgn(r)s; r 6= 0
denote the composition \Phi ffi y and let O 1 ae [0; !) be the set (of
measure zero) of points t at which the derivative -
fails to exist. A straightforward
argument (analogous to that yielding (4)) gives
By properties of f and p (Assumptions F and G), the function
is of class AC([0; !); R). Let O 2 denote the set (of measure zero) of points t at which
p(t) fails to exist. Again by properties of f and p, there exists - ? 0 such that
parameterized by c ? 0, as follows F
By Assumptions F, G and definition of \Gamma, F c is such that, for all c sufficiently large,
Moreover, by (30) and (31),
be the set of points t at which y(t) and -
are not both zero. Observe that (i) every point t of the subset O 0 :=
is an isolated point implying that O 0 is countable and so has measure zero, and (ii)
From these observations together with
(29), (30) and writing O := O 0 [O 1 [ O 2 (of measure zero), we deduce that
Invoking (33), (34) and Assumption
for almost all t 2 [0; !), wherein we have used the fact that
with sufficiently large so that \Delta 2 (4ffl)
and (32) holds, in which case, V c (t) - 1\Theta
cy
t)] for all t and
which, on integration, yields
cy
Z j(t)
for all t; - 2 [0; !), with t - .
We first show that the function j(\Delta) (and hence -(\Delta)) is bounded. By properties
(8) of -, there exist increasing sequences (-j n ) n2N and (~j n ) n2N , with -
~
jn !1 as n !1, such that
(a) 1
jn
jn
~
jn
Z ~
jn
~
as n !1. Without loss of generality, we may assume -
Seeking a contradic-
tion, suppose that j(\Delta) is unbounded. Now, j(\Delta) is bounded from below (in particular,
so, by the supposition, j(\Delta) is unbounded from above.
Therefore, there exist increasing sequences
!) such that, for all n,
jn and j( ~ t n
combine to yield the
contradiction
~
jn
Z ~
jn
~
combine to yield the contradiction
jn
jn
Therefore, j(\Delta) (and hence -(\Delta)) is bounded. Boundedness of j(\Delta) and -(\Delta), together
with (37), imply boundedness of y(\Delta) and z(\Delta). This establishes assertion (i) and
assertion (ii) follows by monotonicity of -(\Delta). It remains to prove assertion (iii).
By boundedness of d(\Delta), p(\Delta) and x(\Delta), there exists ae ? 0 such that X 2 (t; x(t)) ae ae -
is a precompact solution of the autonomous
initial-value problem
Moreover, by boundedness of -(\Delta),
Z 1-
Therefore, by Theorem 2.10, x(\Delta) approaches the largest weakly-invariant (relative
to the autonomous differential inclusion (39)) set W in f(y; z; -)j
. By definition of weak invariance, the initial-value problem
has at least one solution maximal interval of existence
R+ and with trajectory in W ae f(y; z; -)j
Therefore, we conclude
that the largest weakly-invariant set in W is contained in the set f(y; z; -)j
and so the solution x(\Delta) approaches the set f(0; 0)g \Theta R. In particular, (y(t);
(0;
3.2.2. Adaptive servomechanism. We now turn attention to the servomechanism problem for planar systems (26): that is, the construction of controls that cause the system to track, asymptotically, any reference signal r(·) of some given class, in the sense that both the tracking error e(t) = y(t) − r(t) and its derivative ė(t) = ẏ(t) − ṙ(t) tend to zero as t → ∞. For the class of reference signals we take the (Sobolev) space R = W^{3,∞}(R) of functions r ∈ L^∞(R) whose first, second and third derivatives also lie in L^∞(R), equipped with its usual norm.
For the servomechanism problem, we restrict the underlying class of systems by
imposing a stronger assumption on the function f : Assumption F below should hold
for some known, continuous, positive-definite, nondecreasing function OE having the
additional property (14). Specifically, for real ffi ? 0 and continuous, positive-definite,
nondecreasing function OE : R+ ! R+ with property (14), we denote, by N
ffi;OE , the set
of quintuples (b; d; f; Assumptions A, E and F following.
Assumption F. (p, y) ↦ f(p, y) : R^P × R → R is continuously differentiable. Both f and its gradient function Df (:= (∂f/∂p, ∂f/∂y)) are φ-bounded uniformly with respect to p in compact sets: precisely, for every compact K ⊂ R^P, there exists a scalar c_K (depending only on K) such that |f(p, y)| + ‖Df(p, y)‖ ≤ c_K φ(|y|) for all (p, y) ∈ K × R.
Example 3.11. The function φ : |y| ↦ 1 + |y|³ has property (14), and the systems described in Example 3.9 are admissible.
Let δ > 0 and let φ be a continuous, positive-definite, nondecreasing function with property (14). We claim that, in order to assure convergence to zero of both the tracking error e(t) and its derivative ė(t) for all reference signals r ∈ R and all quintuples (b, d, f, ...) of class N_{δ,φ}, it suffices to replace every occurrence of y(t) in (25) by e(t). Proof of this claim follows.
Let (b; d; f;
ffi;OE . Write ~
define the continuous function
~
By properties of f , ~
f is continuously differentiable with respect to its first argument
(~p), with D 1
~
f given by
~
Let ~
K ae R ~
P be compact and so there exist compact K ae R P and R ? 0 such that
~
K ae K \Theta [\GammaR; R] 3 . By properties of f and OE, there exist constants -K and ae R such
that, for all (p;
~
with ~
Therefore, ~
f satisfies Assumption F.
P is of class
f; ~
. Expressed in terms of the tracking error
adopting the coordinate transformation
the
underlying dynamics have the
We are now in precisely the same context, modulo notation, as in the case of an
adaptive stabilizer and so, replacing all occurrences of y(t) in (9) by e(t), viz.!
then the same argument, as used to establish Lemma 3.10, applies mutatis mutandis
to conclude that (41) is an (R; N
)-universal servomechanism: for each r(\Delta) 2 R
and (b; d; f;
ffi;OE , every solution (e(\Delta); z(\Delta); -(\Delta)) of the feedback controlled
system has maximal interval of existence R+ with (e(t);
moreover, lim t!1 -(t) exists and is finite.
3.2.3. Dynamically perturbed planar systems. Let ffi ? 0 and let the function
positive definite and nondecreasing. Let \Sigma
(b; d; f; . Here, we consider the case where \Sigma 1 is subject to perturbations
generated through its interconnection with a dynamical system \Sigma 2 (as depicted in
Figure
1).
The system \Sigma 2 is assumed to correspond to a differential equation (driven by the
variable y(t) of system \Sigma 1 ) on R N of the form (18) with input y(t), and scalar output
w(t) perturbing \Sigma 1 . As before, we identify the system \Sigma 2 with the triple (g; h; N ).
Writing
the overall system has representation (on R \Theta R \Theta R N
We will define, via Assumption H below, a class P/ of admissible systems \Sigma
such that the N ffi;OE -universal stabilizer of Section 3.2.1 is readily modified to
yield a (N ffi;OE ; P/ )-universal stabilizer.
For continuous by P/ the set of system triples \Sigma
satisfying the following:
Assumption H. (i) g : R \Theta R N ! R N is locally Lipschitz; (ii) system (18)
is input-to-state stable there exist
a function ff and a scalar ff 1 ? 0, such that, for each (i
R N \Theta L 1
loc (R+ ), the (unique) solution i(\Delta) of (18) satisfies
Z th 2 (i(s))ds - ff 0 (ki
Z t/(jy(s)j)jy(s)jds:
Examples 3.12.
(a) Assumption H holds for the class of linear systems in Example 3.5(a).
(b) More generally, assume g : R \Theta R N ! R N is locally Lipschitz and h : R N
is continuous. Assume further that h is positively homogeneous of degree k - 1 and
that g is positively homogeneous of degree k
f0g is an asymptotically stable equilibrium of the unforced system -
then it can be shown (by an argument invoking [22, Theorem 2]) that (g; h; N
For example, if systems (with real
parameters a i ) of the form -
are of class P/ , provided that
positive definite and nondecreas-
ing. As before, let
continous
with indefinite integral
R -/.
We will show that, for (N ffi;OE ; P/ )-universal stabilization, it suffices to replace both
occurrences of fl in (25) by fl +/ and to replace the single occurrence of \Gamma by
to yield!
Let (b; d; f; . The feedback-controlled system (42-43)
can be embedded in a differential inclusion on R N+3 :!
where the set-valued map (t; x) j (t; N+3 is given by
X is upper semicontinuous on R \Theta R N+3 and takes non-empty, convex and compact
values in R N+3 . Therefore, for each x 0 2 R N+3 , the initial-value problem (44) has a
solution and every solution can be maximally extended.
Lemma 3.13. Let x 0 2 R N+3 be arbitrary and let
a maximal solution of (44) defined on its maximal interval of existence [0; !). Then
exists and is finite; (iii) (y(t); z(t);
Proof. Let F c and V c , parameterized by c ? 0, be defined as in the proof of
Lemma 3.10. By an argument essentially the same as that adopted in the proof of
Lemma 3.10 and choosing c sufficiently large, we arrive at a counterpart to (35):
for almost all t 2 [0; !). Invoking the inequality jh(i)jjzj - 1
integrating and invoking Assumption H, we have (for c sufficiently large)
Z j(t)
for all t; - 2 [0; !), with t - . A straightforward modification of the contradiction
argument previously used in the proof of Lemma 3.10, establishes boundedness of
j(\Delta) (and hence of -(\Delta)). Boundedness of j(\Delta) and -(\Delta), together with (46), imply
boundedness of y(\Delta) and z(\Delta). That i(\Delta) is bounded is a consequence of the ISS
property of \Sigma This establishes assertion (i) and assertion (ii) follows by
monotonicity of -(\Delta). It remains to prove assertion (iii). With minor modification, the
argument used in the proof of Lemma 3.10 applies to conclude that x(\Delta) approaches
the set f(y; z; zg. In particular, (y(t);
by the convergent-input, convergent-state property of the ISS system \Sigma
we may also conclude that i(t) ! 0 as t !1.
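The qualitative conclusions of Lemmas 3.10 and 3.13 (the output converges to zero while the monotone gain λ(·) settles to a finite limit) can be illustrated numerically. The sketch below simulates the classical first-order high-gain adaptive law u = −λy, λ̇ = y², applied to ẏ = ay + u with a unknown to the controller. This is only a simple relative-degree-one analogue of that behaviour, not the relative-degree-two feedback (25) analysed in this paper, and the plant parameter and step size are arbitrary illustrative choices.

# Toy simulation: high-gain adaptive stabilization of y' = a*y + u with
# u = -lambda*y and lambda' = y^2 (classical relative-degree-one analogue).
import numpy as np

a = 2.0                       # plant parameter, unknown to the controller
y, lam = 1.0, 0.0             # initial output and adaptation parameter
dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):  # explicit Euler integration of the closed loop
    u = -lam * y
    y += dt * (a * y + u)
    lam += dt * y ** 2
print(y, lam)                 # y is near zero; lam has settled to a finite value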
Example 3.14. Linear Minimum-Phase Systems of Relative Degree Two. Let
define a linear, single-input u, single-output y minimum-phase system on
R N+2 of relative degree two. Denoting its Markov parameters by m k := CA
and the system has a representation (on R 2 \Theta R N ) of the form
If we assume that \Gammam 3 (that is, the system exhibits natural damping
quantified by known ffi ), then we may identify (47) and (42) by setting
By the relative-degree-two assumption, and, by the minimum-phase
assumption,
2 jyj, we see that every relative-
degree-two, minimum-phase system with m 3 \Gammaffi is of class (N ffi;OE ; P/ ) and we
recover (modulo notation) the adaptive stabilizer proposed previously in [5]:
Remarks 3.15. We conclude with some observations on the servomechanism
problem for dynamically perturbed planar systems. Akin to Remarks 3.8, the adaptive
servomechanism of Section 3.2.2 can also be modified to incorporate dynamically
perturbed systems, when the dynamic perturbations are generated by linear systems
. For such perturbations, the (modified)
servomechanism assures convergence to zero of the tracking error and its derivative,
convergence to a finite limit of the adapting parameter, and boundedness of the evolution
t 7! i(t) of the perturbing system. For brevity, we omit full details here.
--R
An integral-invariance principle for nonlinear systems
Adaptive stabilization of multivariable linear systems
Optimization and Nonsmooth Analysis
Adaptive control of a class of nonlinearly perturbed linear systems of relative degree two
Nonlinear Functional Analysis
New York
Differential Equations with Discontinuous Righthand Sides
The Stability of Dynamical Systems
Optimal Control via Nonsmooth Analysis
Adaptive stabilization of multivariable linear systems
The order of a stabilizing regulator is sufficient a priori information for adaptive stabi- lization
New directions in parameter adaptive control
Some remarks on a conjecture in parameter adaptive control
Adaptive stabilization of nonlinear systems
in Control of Uncertain Systems
Universal
A nonlinear universal servomechanism
Universal stabilization of a class of nonlinear systems with homogeneous vector fields
Adaptive stabilization of multi-input nonlinear systems
Smooth stabilization implies coprime factorization
On characterizations of the input-to-state stability property
Global adaptive stabilization in the absence of information on the sign of the high frequency gain
--TR | differential inclusions;universal servomechanisms;invariance principles;nonlinear systems;adaptive control |
278098 | Optimal Boundary Control of the Stokes Fluids with Point Velocity Observations. | This paper studies constrained linear-quadratic regulator (LQR) problems in distributed boundary control systems governed by the Stokes equation with point velocity observations. Although the objective function is not well defined, we are able to use hydrostatic potential theory and a variational inequality in a Banach space setting to derive a first-order optimality condition and then a characterization formula of the optimal control. Since matrix-valued singularities appear in the optimal control, a singularity decomposition formula is also established, with which the nature of the singularities is clearly exhibited. It is found that in general, the optimal control is not defined at observation points. A necessary and sufficient condition that the optimal control is defined at observation points is then proved. | Introduction
.
In this paper, we are concerned with the problems in boundary control of fluid
flows. We consider the following constrained optimal boundary control problems in
the systems governed by the Stokes equation with point velocity observations.
Let Ω ⊂ R³ be a bounded domain with smooth boundary Γ, let Γ₁ be an open subset of Γ, and let Γ₀ denote the remaining portion of the boundary. The problem is to
minimize a quadratic cost J(~u), which penalizes the deviations of the point velocity observations ~w(P_k) from the prescribed target values together with the L² cost of the control ~u on Γ₁,
subject to the stationary Stokes system (1.1),
−ν∆~w + ∇p = 0, div ~w = 0 in Ω,
with the surface-stress (Neumann) boundary conditions T(~w, p) = ~g on Γ₀ and T(~w, p) = ~u on Γ₁, where
~w(x) is the velocity vector of the fluid at x ∈ Ω;
p(x) is the pressure of the fluid at x ∈ Ω;
T(~w, p)(x) is the surface stress of the fluid along Γ (the stress tensor applied to the unit outer normal);
Received by the editors XX XX, 19XX; accepted by the editors XXXX XX, 19XX.
y Department of Mathematics, Texas A&M University, College Station, 77843. Supported in
part by NSF Grant DMS-9404380 and by an IRI Award of Texas A&M University.
z Department of Aerospace Engineering, Texas A&M University, College Station, Current
address: Department of Mathematical Sciences, University of Nevada-Las Vegas, Las Vegas, NV
89154-4020.
~n(x) is the unit outnormal vector of \Gamma at x;
~g is a given (surface stress) Neumann boundary data (B.D.) on
U is the (surface stress) Neumann boundary control on the surface
U is the admissible control set to be defined later for well-posedness of the
problem and for applications;
are given weighting factors;
are prescribed "observation points";
are prescribed "target values" at
-, a positive quantity, is the kinematic viscosity of the fluid. For simplicity,
throughout this paper we assume that and the density of the fluid is the
constant one.
Let M₀ = { ~a + ~b × ~x : ~a, ~b ∈ R³ },
which is the subspace of the rigid body motions in R³. Multiplying the Stokes equation by ~a + ~b × ~x and integrating by parts yields the compatibility condition of the Stokes system, namely
∫_Γ (~a + ~b × ~x) · T(~w, p) dσ_x = 0 for all ~a, ~b ∈ R³,
i.e., the Neumann boundary data must be orthogonal in L²(Γ) to M₀.
For q ≥ 1, let A be a subspace of (L^q(Γ))³ and denote by A_{⊥M₀} the set of all ~f ∈ A orthogonal to M₀, i.e., ∫_Γ ~f · ~v dσ = 0 for every ~v ∈ M₀.
The Stokes equation (1.1) describes the steady state of an incompressible viscous
fluid with low velocity in R 3 . It is a frequently used model in fluid mechanics. It
is also an interesting model in linear elastostatics due to its similarities. During the
past years, considerable attention has been given to the problem of active control of
fluid flows (see [1, 2, 7, 18, 19] and references therein). This interest is motivated by
a number of potential applications such as control of separation, combustion, fluid-structure
interaction, and super maneuverable aircraft. In the study of those control
problems and Navier-Stokes equations, the Stokes equations, which describe the slow
steady flow of a viscous fluid, play an important role because of the needs in stability
analysis, iterative computation of numerical solutions, boundary control and etc. The
theoretical and numerical discussion of the Stokes equations on smooth or Lipschitz
domains can be found from [14, 16, 17, 22, 25, 26, 27].
Our objective of this paper is to find the optimal surface stress ~u(x) on \Gamma 1 , which
yields a desired velocity distribution ~
w(x), s.t. at observation points
the observation values ~
are as close as possible to the target values Z k with
a least possible control cost
Z
which arise from the contemporary fluid
control problems in the fluid mechanics.
Notice that point observations are assumed in the problem setting, because they
are much easier to be realized in applications than distributed observations. They
can be used in modeling contemporary "smart sensors".
Sensors can be used in boundary control systems (BCS) governed by partial differential
equations (PDE) to provide information on the state as a feedback to the
systems. According to the space-measure of the data that sensors can detect, sensors
can be divided into two types, point sensors and distributed sensors. Point sensors are
much more realistic and easier to design than distributed sensors. In contemporary
"smart materials", piezoelectric or fiber-optic sensors (called smart sensors) can be
embedded to measure deformation, temperature, strain, pressure,.,etc. Each smart
sensor detects only the average of the data in between the sensor and its size can
be less than in any sense, they should be treated as point
sensors. As a matter of fact, so far distributed sensors have not been used in any
real applications, to the best of our knowledge. However, once point observations on
the boundary are used in a BCS, singularities will appear and very often the system
becomes ill-posed. Mathematically and numerically, it becomes very tough to handle.
On the other hand, when point observations are used in the problem setting, the
state variable has to be continuous, so the regularity of the state variable stronger
than the one in the case of distributed observations is required. The fact is that in
the literature of related optimal control theory, starting from the classic book [23] by
J.L. Lions until recent papers [3],[4] by E. Casas and others, distributed observations
are always assumed and the optimal controls are characterized by an adjoint system.
The system is then solved numerically by typically a finite-element method, which
cannot efficiently tackle the singularity in the optimal control along the boundary.
On the other hand, since it is important in the optimal control theory to obtain a
state-feedback characterization of the optimal control, with the bound constraints in
the system, the Lagrange-Kuhn-Tucher approach is not desirable because theoretically
it cannot provide us with a state-feedback characterization of the optimal control
which is important in our regularity/singularity analysis of the optimal control and
numerically it leads to a numerical algorithm to solve an optimization problem with a
huge number of inequality constraints. A refinement of the boundary will double the
number of the inequality constraints, so the numerical algorithm will be sensitive to
the partition number of the boundary. Since the BCS is governed by a PDE system in
R 3 , the partition number of the boundary can be very large, any numerical algorithm
sensitive to the partition number of the boundary may fail to carry out numerical
computation or provide reliable numerical solutions.
Recently in the study of a linear quadratic BCS governed by the Laplace equation
with point observations, the potential theory and boundary integral equations (BIE)
have been applied in [20],[10],[11], [12] to derive a characterization of the optimal
control in terms of the optimal state directly and therefore bypass the adjoint system.
This approach shows certain important advantages over others. It provides rather
explicit information on the control and the state, and it is amenable to direct numerical
computation through a boundary element method (BEM), which can efficiently tackle
the singularities in the optimal control along the boundary.
In [10],[11],[9] several regularity results are obtained. The optimal control is characterized
directly in terms of the optimal state. The exact nature of the singularities
in the optimal control is exhibited through a decomposition formula. Based on the
characterization formula, numerical algorithms are also developed to approximate the
optimal control. Their insensitivity to the discretization of the boundary and fast
uniform convergences are mathematically verified in [12],[31].
The case with the Stokes system is much more complicate than the one with the
Laplace equation due to the fact that the fundamental solution of the Stokes system
is matrix-valued and has rougher singular behaviors. In this paper, we assume that
the control is active on a part of the surface and the control variable is bounded by
two vector-valued functions. A Banach space setting has been used in our approach,
we first prove a necessary and sufficient condition for a variational inequality problem
which leads to a first order optimality condition of our original optimization
problem. A characterization of the optimal control and its singularity decomposition
formula are then established. Our approach can be easily adopted to handle
other cases and it shows the essence of the characterization of the optimal control,
through which gradient related numerical algorithms can be designed to approximate
the optimal control.
The organization of this paper is as follows: In the rest of Section 1, we introduce
some basic definitions and known regularity results that are required in the later
development; In Section 2, we first prove an existence theorem for an orthogonal
projection, next we derive a characterization result for a variational inequality which
serves as a first order optimality condition to our LQR problem; then a state-feedback
characterization of the optimal control is established. Section 3 will be devoted to
study regularity/singularity of the optimal control. Since the optimal control contains
a singular term, we first derive a singularity decomposition formula for the optimal
control, with which we find that in general the optimal control is not defined at observation
points. A necessary and sufficient condition that the optimal control is defined
at observation points is then established. Some other regularities of the optimal control
will also be studied in this section. Based upon our characterization formulas a
numerical algorithm, in a subsequent paper, we design a Conditioned Gradient Projection
Method (CGPM)) to approximated the optimal control. Numerical analysis
for its (uniform) convergence and (uniform) convergence rate are presented there. We
show that CGPM converges uniformly sub-exponentially, i.e., faster than any integer
power of 1
n . Therefore CGPM is insensitive to discretization of the boundary.
The insensitivity of our numerical algorithm to discretization of boundary is a significant
advantage over other numerical algorithms. Since the fundamental solution of
the Stokes system is matrix valued with a very rough singular behavior, numerical
analysis is also much more complicated than the case with scalar-valued fundamental
solution, e.g., the Laplacian equation.
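The overall structure of such a gradient-based scheme can be sketched as follows: take a step along the negative gradient of the cost and truncate the result back into the box [Bl, Bu]. This is only a generic gradient-projection outline under the stated bound constraints; the actual Conditioned Gradient Projection Method and its convergence analysis are developed in the subsequent paper, and grad_J below is a hypothetical callable returning a discretized gradient of the cost.

# Generic projected-gradient sketch for a box-constrained control problem.
import numpy as np

def projected_gradient(grad_J, u0, Bl, Bu, step=1e-2, iters=500):
    u = np.clip(u0, Bl, Bu)
    for _ in range(iters):
        u = np.clip(u - step * grad_J(u), Bl, Bu)   # gradient step + truncation
    return u

# toy usage with a model quadratic cost J(u) = 0.5*||u - target||^2
target = np.array([2.0, -3.0, 0.5])
Bl, Bu = -np.ones(3), np.ones(3)
u_opt = projected_gradient(lambda u: u - target, np.zeros(3), Bl, Bu)
print(u_opt)                                        # approximately [1., -1., 0.5]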
Let us now briefly recall some hydrostatic potential theory, BEM and some known
regularity results. Throughout of this paper, for a sequence of elements in R n , we
use superscript to denote sequential index and subscript to denote components, e.g.,
We may also use ~x k to emphasize that x k is
a vector. We may write ~
w(x; ~u) to indicate that the velocity ~
w depends also on ~u.
Unless stated otherwise, we assume p > 2 and let q denote its conjugate exponent, 1/p + 1/q = 1; | · | is the Euclidean norm in R^n and ‖ · ‖ is the norm in (L^h(Γ))^n (h ≥ 1).
Let fE(x; -); ~e(x;
be the fundamental solution
of the Stokes systems, i.e.
ae
div x E(x;
is the unit Dirac delta function at I 3 is the 3 \Theta 3 identity
matrix. It is known [22] that
where ffi i;j is the Kronecker symbol.
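For orientation, the classical free-space fundamental solution of the Stokes system in R³ (the Stokeslet) with kinematic viscosity ν is usually written in the form below; this is the standard formula from the hydrodynamic potential theory literature (cf. [22]) and is quoted here only as a reference point, since normalizations may differ:

E_{ij}(x,\xi) = \frac{1}{8\pi\nu}\left(\frac{\delta_{ij}}{|x-\xi|} + \frac{(x_i-\xi_i)(x_j-\xi_j)}{|x-\xi|^3}\right), \qquad
e_j(x,\xi) = \frac{1}{4\pi}\,\frac{x_j-\xi_j}{|x-\xi|^3}, \qquad i,j = 1,2,3.

In particular, each entry of E(x, ξ) is of order O(1/|x − ξ|) as x → ξ, which is the singular behaviour discussed in Remark 1 below.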
Remark 1. The significant difference between the case with point observations and the case with distributed observations is as follows: for a given vector ~d ∈ R³, the function
x ↦ E(P_k, x) ~d
has a singularity of order O(1/|x − P_k|) at x = P_k; moreover, its entries may oscillate between −∞ and +∞ as x → P_k, so it is very tough to deal with. In contrast, the averaged function
x ↦ ∫ E(ξ, x) ~d(ξ) dξ
(an average over a region of observation, corresponding to distributed observations) is well-defined and continuous.
On the other hand, if E(P_k, x) in (1.4) and (1.5) is replaced by the fundamental solution of the Laplace equation, in which case E(P_k, x) becomes scalar-valued, then (1.4) has the same order O(1/|x − P_k|) of singularity at x = P_k, but the limit as x → P_k exists (possibly −∞ or +∞). So the singularity can be easily handled.
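The direction-dependence described in Remark 1 can be seen already on the model function x₁x₂/|x|², which captures the angular part of the matrix-valued kernel. A small numerical illustration (the function and the two approach directions are illustrative choices):

import numpy as np

def g(x):
    # model for an off-diagonal angular factor of the kernel: x1*x2/|x|^2
    return x[0] * x[1] / np.dot(x, x)

d1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # approach along (1,1,0)
d2 = np.array([1.0, 0.0, 0.0])                # approach along (1,0,0)
for t in [1e-1, 1e-3, 1e-6]:
    print(t, g(t * d1), g(t * d2))            # values stay at 0.5 and 0.0

The value along each ray is independent of t but differs between rays, so the limit as x → 0 depends on the direction of approach and hence does not exist.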
It is then known that the solution (~w, p) of the Stokes equation (1.1) admits a simple-layer representation: the velocity ~w is the simple-layer velocity potential of a density ~η plus a rigid body motion ~a + ~b × ~x, and the pressure p is the corresponding simple-layer pressure potential plus a constant a, for some constants ~a, ~b ∈ R³ and a ∈ R (see (1.6) and (1.7)). Here ~η is called the layer density and ~a + ~b × ~x represents a rigid body motion. By the jump property of the layer potentials, we obtain the boundary integral equation (1.8) relating the layer density to the prescribed surface stress on Γ, whose matrix-valued kernel is built from the surface stress of the fundamental solution.
With a given Neumann B.D., the layer density ~η can be solved from the above BIE (1.8). Once the layer density is known, the solution (~w(x), p(x)) can be computed from (1.6) and (1.7). The velocity solution ~w(x) is unique only up to a rigid body motion and the pressure solution p(x) is unique up to a constant.
In BEM, the boundary Γ is divided into N elements with nodal points x¹, ..., x^N. Assume that the layer density ~η(x) is piecewise smooth, e.g. piecewise constant; then the BIE (1.8) becomes a linear algebraic system. This system can be solved for ~η(x^i), and then (~w(x), p(x)) can be computed from a discretized version of (1.6) and (1.7) for any x ∈ Ω.
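As an illustration of this discretization step, the sketch below sets up a Nystrom/collocation approximation of a second-kind integral equation of the same form as (1.8) on a closed curve, with piecewise constant densities and nodal collocation; the kernel k and the data g are simple scalar illustrative choices, not the matrix-valued hydrostatic kernels of the paper.

# Schematic Nystrom discretization of a scalar second-kind boundary
# integral equation (a toy analogue of the BIE (1.8)).
import numpy as np

N = 200                                   # number of boundary elements/nodes
t = 2 * np.pi * np.arange(N) / N          # parameterization of the unit circle
x = np.stack([np.cos(t), np.sin(t)], 1)   # nodal points on the boundary
h = 2 * np.pi / N                         # quadrature weight (trapezoid rule)

def k(p, q):
    # smooth model kernel; in the paper this would be the traction kernel
    return np.exp(-np.sum((p - q) ** 2, axis=-1))

g = 1.0 + np.cos(t)                       # model Neumann-type data at the nodes

# collocation: (1/2) eta(x_i) + sum_j h * k(x_i, x_j) eta(x_j) = g(x_i)
A = 0.5 * np.eye(N) + h * k(x[:, None, :], x[None, :, :])
eta = np.linalg.solve(A, g)               # discrete layer density

# once eta is known, the field is recovered by evaluating the layer potential
# at interior points (a discretized analogue of (1.6)-(1.7))
x0 = np.array([0.3, 0.1])
print(h * np.sum(k(x0, x) * eta))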
For each ~
we define the simple layer potential of velocity
f) by
Z
E(x; -) ~
For each ~
we define the boundary operators K and K by
K( ~
Z
Z
Z
Z
where
Next we collect some regularity results on S v ; K and K into a lemma. Let
which represents the set of all layer densities corresponding to the zero Neumann
B.D., with which the Stokes system has only a rigid body motion. Hence we have
I
Lemma 1.1.
Let\Omega ae R 3 be a bounded simply connected domain with smooth
boundary \Gamma.
(a) (R 3 linear operator for p ? 2 and
(b) For any 1 - is a bounded linear operator
and K (K ) is the adjoint of K (K);
(c) For
is a bounded linear
(d) For
?M0 is invertible,
?N is invertible.
is a bounded linear operator.
Therefore K
?N is invertible.
Proof. (a)-(d) can be found from [5],[8], [13], [14] and [22].
To prove (e), since \Gamma ae R 3 is a compact set, it suffices to prove (e) for
Then we have 1
q \Gamma1. There exists an " 2 (0; 1), s.t. 1
q \Gamma1.
are the conjugates of
It can be verified that
s
ff
r
s
;ff
s
s
r
Note
and
'Z
where M is a constant independent of x 2 \Gamma. Let
f)(x). Applying H-older's
inequality twice, we get
'Z
'Z
'Z
''Z
ff doe -
'Z
''Z
r
'Z
'Z
Thus
'Z
s
s
'Z
Z
s
s
This proves the first part of (e). The second part follows from (c).
To prove (f), by (1.10), Q ij (x; -) is weakly singular for 1 - 3. Thus K
is an integral operator with weakly singular kernel. By Theorem 2.22 in [21], K is
a compact operator from (C (\Gamma)) 3 to (C (\Gamma)) 3 . The rest follows from the Fredholm
alternative (see [21], p.44).
For a given Neumann B.D. ~g 2 (L p (\Gamma 0 our control bound constraints
to the entire boundary \Gamma by
ae
and
ae Bu(x) x
with
where ~
vector depending on ~g and will be specified later. Define
the feasible control set
stands for the compatibility condition of the Neumann B.D. in the
Stokes system (1.1). It is clear that U is a closed bounded convex set in (L p (\Gamma)) 3 .
According to Lemma 1.1 (a), for each given Neumann B.D. ~u ∈ U, the Stokes system (1.1) has a velocity solution ~w, continuous up to the boundary, which is unique only up to a rigid body motion; that is, any two velocity solutions differ by an element ~a + ~b × ~x of M₀. Thus, for each given ~u, the velocity state variable ~w is multiple-valued, and the objective function J(~u) is not well-defined. However, among all these velocity solutions there is a unique one, obtained by minimizing over ~h ∈ M₀, and a direct calculation yields that this particular solution must be orthogonal to M₀. Since such a solution is unique and continuous, the point observations at P_k in our LQR problem setting make sense and the LQR problem is well-posed.
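A discrete analogue of this normalization step (removing the rigid-body component from a sampled velocity field) can be sketched as follows; it uses a plain least-squares fit of ~a + ~b × ~x to the samples, which is a simplified stand-in for the exact orthogonality to M₀ used in the paper.

# Remove the best-fit rigid body motion a + b x x from sampled velocities.
import numpy as np

def remove_rigid_body_motion(points, velocities):
    """points, velocities: arrays of shape (N, 3)."""
    N = points.shape[0]
    A = np.zeros((3 * N, 6))              # w_i ~ a + b x x_i is linear in (a, b)
    for i, xi in enumerate(points):
        A[3*i:3*i+3, 0:3] = np.eye(3)
        # b x xi = -[xi]_x b, with [xi]_x the cross-product matrix of xi
        A[3*i:3*i+3, 3:6] = -np.array([[0.0, -xi[2], xi[1]],
                                       [xi[2], 0.0, -xi[0]],
                                       [-xi[1], xi[0], 0.0]])
    ab, *_ = np.linalg.lstsq(A, velocities.reshape(-1), rcond=None)
    rigid = (A @ ab).reshape(N, 3)
    return velocities - rigid             # component with the rigid part removed

pts = np.random.rand(50, 3)
vel = np.cross(np.array([0.0, 0.0, 1.0]), pts) + 0.01 * np.random.rand(50, 3)
print(np.linalg.norm(remove_rigid_body_motion(pts, vel)))   # small residual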
From (1.14) and Lemma 1.1, we know
where C is a constant depending only on \Gamma. Let us observe (1.16). If we notice that
~
is linear in ~u, then we have
Lemma 1.2. Let ~a 3 be the unique solution to
Then for
~
and
where C is a constant depending only on \Gamma.
2. Characterization of the Optimal Control.
We establish an optimality condition of the LQR problem through a variational
inequality problem (VIP). The characterization of the optimal control is then derived
from the optimality condition.
In optimal control theory it is important to obtain a state-feedback characterization
of the optimal control, i.e., the optimal control is stated as an explicit function
of the optimal state. So the optimal control can be determined by a physical measurement
of the optimal state. Our efforts are devoted to derive such a result.
For each ~f ∈ (L^q(Γ))³ we define the vector-valued truncation function [~f]_{Bl}^{Bu} componentwise by
([~f]_{Bl}^{Bu})_i(x) = min{ Bu_i(x), max{ Bl_i(x), f_i(x) } }, i = 1, 2, 3, x ∈ Γ,
i.e., each component of ~f is cut off at the corresponding bounds.
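Pointwise, the truncation is ordinary clipping; assuming the componentwise form written above, a minimal sketch is:

# Componentwise truncation onto the bounds, as in [f]_{Bl}^{Bu}.
import numpy as np

def truncate(f, Bl, Bu):
    # f, Bl, Bu: arrays of sampled boundary values with Bl <= Bu
    return np.minimum(np.maximum(f, Bl), Bu)   # equivalently np.clip(f, Bl, Bu)

f  = np.array([-2.0, 0.3, 5.0])
Bl = np.array([-1.0, -1.0, -1.0])
Bu = np.array([ 1.0,  1.0,  1.0])
print(truncate(f, Bl, Bu))                     # [-1.   0.3  1. ]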
Let ⟨·, ·⟩ be the pairing between (L^q(Γ))³ and (L^p(Γ))³. Since our feasible control set U defined in (1.11) is a convex, closed, bounded set in (L^p(Γ))³, it is known that ~u* is an optimal control of the LQR problem if
⟨∇J(~u*), ~u − ~u*⟩ ≥ 0 for all ~u ∈ U. (2.1)
For any α > 0, (2.1) is equivalent to
⟨~u* − (~u* − α∇J(~u*)), ~u − ~u*⟩ ≥ 0 for all ~u ∈ U. (2.2)
To derive an optimality condition, we need to find a characterization of a solution to the above variational inequality.
Theorem 2.1. For each f ∈ (L^q(Γ))³, u_f is a solution to the variational inequality
⟨u_f − f, u − u_f⟩ ≥ 0 for all u ∈ U
if and only if
u_f = [f + z_f]_{Bl}^{Bu}, (2.3)
where z_f ∈ M₀ is such that [f + z_f]_{Bl}^{Bu} ⊥ M₀ (see Theorem 2.2 for the existence of such a z_f).
Moreover, (2.3) is well-defined in the sense that if z¹ and z² are two vectors in M₀ with [f + z¹]_{Bl}^{Bu} ⊥ M₀ and [f + z²]_{Bl}^{Bu} ⊥ M₀, then
[f + z¹]_{Bl}^{Bu}(x) = [f + z²]_{Bl}^{Bu}(x) a.e. x ∈ Γ. (2.4)
Proof. By Theorem 2.2, there exists z
Bl . We have for each u 2 U ,
Z
on
doe x
where the last inequality holds since each integrand, the product of two terms, is
nonnegative.
Next we assume that u f is a solution to the VIP, i.e.,
Take
Bl , which is in U , we obtain
By the first part, we have
Taking
Combining (2.5) with (2.7) gives us
Thus
The proof of the second part of the theorem follows directly from taking z
Bl in (2.8).
In a Hilbert space setting, the above theorem is called a characterization of pro-
jection. When U is a convex closed subset of a Hilbert space H , for each f 2 H , u f
is a solution to the VIP if and only if
i.e., u f is the projection of f on U . This characterization is used to derive a first
order optimality condition for convex inequality constrained optimal control prob-
lems. However, this result is not valid in general Banach spaces. Instead we prove a
characterization of truncation, which is a special case of a projection. Note that in a
Hilbert space setting, a projection maps a point in the space into a subset of the same
space. However our truncation is a projection that maps a point in (L q (\Gamma)) 3 into a
subset of (L p
1). It crosses spaces. This characterization gives a
connection between the truncation and the solution to VIP, in our case, an optimality
condition in terms of the gradient. That is, by our characterization of truncation,
~u 2 U is a solution to the VIP (2.2) if and only if
where ~z 2 M 0 is defined in Theorem 2.2 s.t.
To prove the existence of a rigid body motion z f in (2.3), we establish the following
existence theorem for an orthogonal projection, which is given in a very general case
and plays a key role in establishing the optimality condition. It can be used to
solve LQR problems governed by PDE's, e.g., the Laplacian, the Stokes, the linear
elastostatics, .,etc. where the PDE has multiple solutions for a given a Neumann
type boundary data satisfying certain orthogonality condition.
Theorem 2.2. Let \Gamma be a bounded closed set in R n and \Gamma be a subset s.t.
be given s.t.
~
where ~
is given by (2.17) and
~
Assume that M 0 is an m-dimensional subspace in (L q (\Gamma)) n (q - 2; 1
and
g, then a necessary and sufficient condition that for each
~
Bl
is that
Moreover the set of all solutions ~z f in (2.10) is locally uniformly bounded in the sense
that for each given ~
exist
with k ~
Bl
we have
Proof. Case 1: dim
be an
orthonormal basis in M 1 (in M 0 as well). To prove the first part of the theorem, we
have to show that for each ~
~
Bl
For each ~
by
~
Bl
Then to prove the first part, it suffices to show that for each ~
exists
It is easy to check that for any ~
exist two
constants depending only on \Gamma and the basis y s.t.
is a bounded (depends on Bl and Bu) Lipschitz continuous map.
To show that T f has a zero, we prove that there exists a constant R ? 0 s.t. when
Once (2.15) is verified, we have
By Altman's fixed point theorem [15], the map C has a fixed point
(BR is the ball of radius R at the origin), i.e.,
it remains to verify (2.15). Define
It suffices to show that there exists R ? 0 s.t. for t ? R,
In the following, we prove that for each given ~
exist
have
So the second part of the theorem also follows. For each C 2 D, we denote
~y C
It is obvious that
Z
j~y C (x)jdoe x
is continuous in C and positive on the compact set D, hence
C2D
f
Z
and we set
R
For any given " ? 0, we assume
For each C 2 D; t ? 0,
Bl
Z
Bl
Z
~
Bl
Z
I C
Z
where for
I C
Z
Let
We have
lim
I C
Z
Z
Z
jy C
Thus
lim
Z
jy C
Z
Z
jy C (x)jdoe x
Z
where m y given by (2.16) is independent of C. From (2.14), we see that T f (C) \Delta C is
continuous in both ~
f and C, therefore there exist R C
Since D is compact, there exist C
Let
So we only need to take
~
and
~
Bu; a.e. on
Case 2: be an orthonormal
basis in M 0 , where (~y
By the proof in Case 1, for each ~
s.t.
Bl
Then for any c f
by (2.18), we have
~
Bl
~
Bl
On the other hand, when by (2.18), we have
Bl
Therefore
~
Bl
if and only if
i.e., (2.11) is satisfied. The proof is complete.
Remark 2. In the above theorem,
(1) when rigid body motion is considered,
we have dim(M all the conditions in the theorem are satis-
fied. So for each ~
there is ~a f such that
Bl
(2) if
the conclusion still holds for each ~
an m-dimensional
subspace of (L q (\Gamma)) n where q - 1, 1
(3) the vector C in (2.13) represents the rigid body motion in our case. From the
above theorem, we can see that the solution C f such that T f (C f
unique.
The following error estimate contains an uniqueness result, which will also be
used in proving the uniform convergence rate in a subsequent paper.
Theorem 2.3. Let us maintain all the assumptions in Theorem 2.2. Let ~
given in (L 1 respectively two zeros of T f and T h defined by (2.13). If
where
meas
c h
then
where the constant fl is independent of C f and C h .
Proof. We may assume that
For T f (C), we denote
where
y C
Write
Since T f (C) is Lipschitz continuous in C, a direct calculation leads to the Frechet
derivative
hy k
m\Thetam
a.e. C
a Gram-matrix, which is symmetric positive semi-definite, i.e., for any nonzero vector
(b
where "?" holds strictly if
because f~y is linearly independent.
On the other hand, we have
hy k
m\Thetam
hy k
m\Thetam
hy k
m\Thetam
where the Gram-matrix
hy k
m\Thetam
is also symmetric positive semi-definite. Therefore
where "!" holds strictly in the first inequality if meas (\Gamma C holds strictly
in the second inequality if meas and two
respectively, we let
Since T f (C) is Lipschitz continuous in C, once meas (\Gamma C f
It follows that T 0
positive definite matrix with
Therefore
defines a symmetric positive definite matrix with
For any 0 ! - ! 1, we have
for some
into account, we arrive at
Consequently, we have
and the proof is complete.
As a direct consequence of Theorem 2.3, we obtain the following uniqueness result.
Corollary 2.4. Let us maintain all the assumptions in Theorem 2.2. For given
~
is a zero of T f with
then C f is the unique zero of T f .
Now we present a state-feedback characterization of the optimal control.
Theorem 2.5.
Let\Omega ae R 3 be a bounded domain with smooth boundary \Gamma. The
LQR problem has a unique optimal control ~u 2 U and a unique optimal velocity
state ~
and
Bl
\Theta ~x is defined in Theorem 2.2 s.t. ~u ? M 0 and M 0 is given in (1.2).
Proof. Let
?M0 . Since our objective function J(~u) is strictly convex
and differentiable, and the feasible control set U is a closed bounded convex subset
in the reflexive Banach space X , the existence and uniqueness of the optimal control
are well-established. Equation (2.20) is just a copy of (1.16). By our characterization
of truncation, Theorem 2.1 with
Bl
is defined in Theorem 2.2 s.t.
Bl
To prove (2.21), we only need to show
Applying (1.9), i.e., M
and then
Since rJ(~u) defines a bounded linear functional on X , for any ~ h 2 X , take (1.12)
into account, we have
Z
I +K )
Z
So (2.22) is verified and the proof is complete.
3. Regularities of the Optimal Control.
It is clear that (2.21) is a feedback characterization of the optimal control. To
obtain such a characterization,
in (2.9) is crucial. Later on we will see that
is also crucial in proving the uniform convergence of our numerical algorithms
in a subsequent paper. Observe that when corresponds
to the LQR problem without constraints on the control variable, the optimal solution,
if it exists, becomes
\Theta ~x is defined in Theorem 2.2 s.t. ~u ? M 0 (see Remark 2). But according
to Lemma 1.1(d) such a solution ~u is only in (L q (\Gamma)) 3 (q ! 2), since E(P k ; \Delta) is only
in (L q (\Gamma)) 3 . So it is reasonable to apply bound constraints Bl and Bu on the control
variable ~u. However we notice that the optimal control still contains a singular term
which is not computable at . In order to carry out the truncation by Bl and
Bu, we have to know the sign of this singular term. Hence we derive a singularity
decomposition formula of (2.21), in which the singular term is expressed as continuous
bounded terms plus a simple dominant singular term and a lower order singular
term. With the simple dominant singular term, the nature of the singularity is clearly
exposed.
Theorem 3.1. For the optimal control ~u given in (2.21), let
~
f
Then
f
where in the singular part, the second term 4K ~
f (x) is dominated by the first term
f (x) whose nature of singularity can be determined at each P k and the regular term
f (x) is continuous on \Gamma.
Proof. For given ~g 2 (L q (\Gamma)) 3
?N with q
I +K)
I \Theta ~x:
Let
~
f
By (2.23), ~
?N for every q ! 2, thus (3.1) follows. The first part of
Lemma 1.1 (e) states that the singularity in 2 ~
f dominates the one in 4K ~
f . While
the second part of Lemma 1.1 (e) and (f) imply that ( 1I +K) \Gamma1 ffiK ffiK ~
f is continuous.
The above singularity decomposition formula plays an important role in our singularity
analysis and also in our numerical computation. It is used to prove the
uniform convergence and to estimate the uniform convergence rate of our numerical
algorithms in a subsequent paper.
Note that the fundamental velocity solution
E(-;
is not defined when in the sense that when x some of
the entries may oscillate between \Gamma1 and +1. So if we look at the simple dominant
singular term in the singularity decomposition formula of the optimal control, we
can see that in general, the optimal control ~u (x) is not defined at P k even with
the truncation by Bl and Bu. This is a significant difference between systems with
scalar valued fundamental solution and with matrix-valued fundamental solution. For
the formal case, e.g., the Laplacian, the optimal control is continuous at every point
where Bl and Bu are continuous. Of course, if Bl(P k
which means the control is not active at P k , then trivially ~u
prescribed value. This is the case when a sensor is placed at P k , then a control device
can not be put at the same point P k . However, in general point observation case, the
control may still be active at P k . The above analysis then states that the optimal
control is not defined at P k unless some other conditions are posed. This is the nature
of point observations. Notice that a distributed parameter control is assumed in our
problem setting, theoretically the values of the control variable at finite points will not
affect the system. But, in numerical computation we can only evaluate the optimal
control ~u at finite number of points. The observation points P k 's usually are of the
most interest. On the other hand, the optimal velocity state ~
w is well-defined and
continuous at P k , no matter ~u (P k ) is defined or not. So if one does want the optimal
control ~u to be defined at P k , when Bl(P k m, it is clear that
is defined at each P k . When Bl(P m, then we
have the following necessary and sufficient condition.
Theorem 3.2. Let Bl(P m, then the optimal
control ~u is well-defined at the observation points P k if and only if
where for each fixed k and i, the equality holds for at most one j 6= i unless
~
When ~u is well-defined at P k , we have
ae Bl i
Proof. If we observe the fundamental velocity solution, we can see that the proof
follows from the following argument. For
lim
exists (including \Sigma1) if and only if
where at most one equality can hold unless c that when (3.5)
holds, and two equalities hold in (3.5), then
We can make the limit either equal to zero by taking x
to sign (c i )1 by taking x i 6= \Upsilonx j or x i 6= \Upsilonx k and x ! 0. So the limit will not exist.
When lim x!0 - e i (x) exists and
lim
With the above result and the singularity decomposition formula for the optimal
control, the following continuous result can be easily verified.
Theorem 3.3. Let Bu and Bl be continuous on \Gamma 1 . If for each
either or the condition (3.3) holds strictly with ( ~
0, then the optimal control ~u is continuous on \Gamma 1 . So the equality in (2.4) holds for
every point on \Gamma.
From the state-feedback characterization (2.20), the control can be determined
by a physical measurement of the state at finite number of observation points
m. The question is then asked, will a small error in the measurement of the state
cause a large deviation in the control? Due to the appearance of the singular term
in (2.20), in general the answer is yes, i.e., the state-feedback system is not stable.
However under certain conditions, we can prove that the state-feedback system is
uniformly stable.
Theorem 3.4. Let ~
be the exact velocity state at observation points and
~u p be the control determined from (2.20) in terms of ~
w(P k ). If for each
either Bl and Bu are continuous and equal at P k or Bu and Bl are locally bounded
at P k , the condition (3.3) holds strictly with ( ~
then the state-feedback
system (2.20) is uniformly stable in the sense that for any " ? 0, there is
such that for any measurement ~
where ~u 0 is the control determined from (2.20) in terms of ~
Proof. For each " ? 0. For each fixed and Bu are continuous
and equal at P k , there is d 0
Since the control variable is bounded by Bl and Bu,
If instead the condition (3.3) holds strictly with ( ~
chosen so that when j ~
holds strictly with
Due to the singular term in (2.20) and since Bu and Bl are
locally bounded at P k , there is d k ? 0 such that when x
some m, we have
\Gammafl
either ? Bu(x) i
or
After the truncation by Bu and Bl, it follows that
So if we define
then in either case we have
Denote
~
I +K) \Gamma1
~
and
meas
implies that
there is nothing to prove. So we assume that meas (\Gamma CF
Theorem 2.3 can be applied. For x compact set, by using (2.20)
and triangle inequality, we obtain
Bl
Bl
Since the operator linear and bounded, and the function E(P k ; \Delta) is
continuous and bounded on the compact set
As for I 2 (x), Theorem 2.3 yields
where the constant fl depends only on \Gamma. Since there is constant C 0 independent of
~
there is
Finally for
The proof is complete.
As a final comment, it is worth while indicating that though in the problem
setting, the governing differential equation, the Stokes, is linear, the bound constraint
on the control variable introduces a nontrivial nonlinearity into the system. This can
be clearly seen in Theorem 2.2. Also our approach can be adopted to deal with certain
nonlinear boundary control problems.
--R
Structural actuator control of fluid/structure interactions
Feedback control of the driven cavity problem using LQR designs
Control of an elliptic problem with pointwise state constraints
Boundary control of semilinear elliptic equations with pointwise state constraints
Lectures on Singular Integral Operators
L'int'egral de Cauchy definit un operateur born'e sur L 2 pour les curbs Lipschitziennes
"New Developments in Differential Equations"
Boundary value problems for the systems of elastostatics in Lipschitz domains
Topics on Potential Theory on Lipschitz Domains and Boundary Control Problems
Constrained LQR problems in elliptic distributed control systems with point observations
Constrained LQR problems governed by the potential equation on Lipschitz domain with point observations
Constrained LQR problems in elliptic distributed control systems with point observations - convergence results
The Dirichlet problems for the Stokes system on Lipschitz domains
Fixed Point Theory
Finite Element Methods for
Finite Element Methods for Viscous Incompressible Flows: A Guide to Theory
Boundary velocity control of incompressible flow with an application to viscous drag reduction
A dissipative feedback control synthesis for systems arising in fluid dynamics
The Mathematical Theory of Viscous Incompressible Flow
Analysis IV: Linear and Boundary Integral Equations
Layer potentials and boundary value problems for Laplace's equation on Lipschitz domains
A fiber-optic combustion pressure sensor system for automotive engine control
"Constrained LQR problems in elliptic distributed control systems with point observations- on convergence rates"
--TR
--CTR
Zhonghai Ding, Optimal Boundary Controls of Linear Elastostatics on Lipschitz Domains with Point Observations, Journal of Dynamical and Control Systems, v.7 n.2, p.181-207, April 2001 | hydrostatic potential;boundary integral equation;stokes fluid;singularity decomposition;distributed boundary control;point observation;LQR |
278124 | Lipschitzian Stability for State Constrained Nonlinear Optimal Control. | For a nonlinear optimal control problem with state constraints, we give conditions under which the optimal control depends Lipschitz continuously in the L2 norm on a parameter. These conditions involve smoothness of the problem data, uniform independence of active constraint gradients, and a coercivity condition for the integral functional. Under these same conditions, we obtain a new nonoptimal stability result for the optimal control in the $L^\infty$ norm. And under an additional assumption concerning the regularity of the state constraints, a new tight $L^\infty$ estimate is obtained. Our approach is based on an abstract implicit function theorem in nonlinear spaces. | Introduction
We consider the following optimal control problem involving a parameter:
minimize
subject to
where the state x(t) 2 R
dt x, the control u(t) 2 R m , the parameter p lies
in a metric space, the functions h
Throughout the paper, L ff (J denotes the usual Lebesgue space
of functions equipped with its standard norm
Z
is the Euclidean norm. Of course, corresponds to the space of
essentially bounded functions. Let W m;ff (J ; R n ) be the usual Sobolev space consisting
of vector-valued functions whose j-th derivative lies in L ff for all its norm
is
When either the domain J or the range R n is clear from context, it is omitted. We let
denote the space W m;2 , and Lip denote W 1;1 , the space of Lipschitz continuous
functions. Subscripts on spaces are used to indicate bounds on norms; in particular,
- denotes the set of functions in W m;ff with the property that the L ff norm of the
m-th derivative is bounded by -, and Lip - denotes the space of Lipschitz continuous
functions with Lipschitz constant -. Throughout, c is a generic constant, independent
of the parameter p and time t, and B a (x) is the closed ball centered at x with radius
a. The L 2 inner product is denoted h\Delta; \Deltai, the complement of a set A is A c , and the
transpose of a matrix B is B T . Given a vector y
yA denotes the subvector consisting of components associated with indices in A. And
then YA is the submatrix consisting of rows associated with indices in
A.
We wish to study how a solution to either (1), or to the associated variational
system representing the first-order necessary condition, depends on the parameter p.
We assume that the problem (1) has a local minimizer (x; corresponding
to a reference value of the parameter, and the following smoothness condition
holds:
Smoothness. The local minimizer (x ; u ) of (1) lies in W 2;1 \Theta Lip. There exists a
closed set \Delta ae R n \Theta R m and a
The function values and first two derivatives of f p (x; u), g p (x; u), and h p (x; u), and the
third derivatives of g p (x), with respect to x and u, are uniformly continuous relative
to p near p and (x; u) 2 \Delta. And when either the first two derivatives of f p (x; u) and
or the first three derivatives of g p (x), with respect to x and u, are evaluated
at resulting expression is differentiable in t and the L 1 norm of the time
derivative is uniformly bounded relative to p near p .
Let A, B, and K be the matrices defined by
A(t) = ∇_x f^{p*}(x*(t), u*(t)), B(t) = ∇_u f^{p*}(x*(t), u*(t)), K(t) = ∇_x g^{p*}(x*(t)).
Here and elsewhere the * subscript is always associated with p*. Let A(t) be the set of indices of the active constraints at (x*(t), p*); that is,
A(t) = { i : g^{p*}_i(x*(t)) = 0 }.
Uniform Independence at A. The set A(0) is empty and there exists a scalar ff ? 0
such that
for each t 2 [0; 1] where A(t) 6= ; and for each choice of v.
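A practical way to probe this condition on a time grid is to monitor the smallest singular value of the matrix of active-constraint gradients multiplied by B(t); the sketch below assumes hypothetical user-supplied callables K_of_t, B_of_t and active_set standing in for K(t), B(t) and A(t).

# Numerical check of uniform row independence of K_A(t) B(t) on a grid.
import numpy as np

def uniform_independence_margin(K_of_t, B_of_t, active_set, ts):
    margin = np.inf
    for t in ts:                           # e.g. ts = np.linspace(0, 1, 1001)
        A = active_set(t)                  # indices of active constraints at t
        if len(A) == 0:
            continue
        M = K_of_t(t)[A, :] @ B_of_t(t)    # rows of K restricted to A, times B
        margin = min(margin, np.linalg.svd(M, compute_uv=False)[-1])
    return margin                          # should stay bounded away from zero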
Uniform Independence implies that the state constraints are first-order (see [12]
for the definition of the order of a state constraint). This condition can be generalized
to higher order state constraints (see Maurer [17]), however, the generalization of the
stability results in this paper to higher order state constraints is not immediate.
It is known (see, for instance, Theorem 7.1 of the recent survey [12] and the regularity
analysis in [8]), that under appropriate assumptions, the first-order necessary
conditions (Pontryagin's minimum principle) associated with a solution (x ; u ) of (1)
can be written in the following way: There exist / 2 W 2;1 and - 2 Lip such that
are a solution at of the variational system:
Here H p is the Hamiltonian defined by
and the set-valued map N is defined in the following way: Given a nondecreasing
Lipschitz continuous function -, a continuous function y lies in N(-) if and only if
Defining
where w be the quadratic form
and let L be the linear and continuous operator from H 1 \Theta L 2 to L 2 defined by
We introduce the following growth assumption for the
quadratic form:
Coercivity. There exists a constant ff ? 0 such that
where
In the terminology of [12], the form of the minimum principle we employ is the
"indirect adjoining approach with continuous adjoint function." A different approach,
found in [13] for example, involves a different choice for the multipliers and for the
Hamiltonian. The multipliers in these two approaches are related in a linear fashion
as shown in [11]. Normally, the multiplier -, associated with the state constraint,
and the derivative of / have bounded variation. In our statement of the minimum
principle above, we are implicitly assuming some additional regularity so that - and
/ are not only of bounded variation, but Lipschitz continuous. This regularity can
be proved under the Uniform Independence and Coercivity conditions (see [8]).
In Section 3 we establish the following result:
Theorem 1.1. Suppose that the problem (1) with p = p* has a local minimizer (x*, u*) and that the Smoothness and the Uniform Independence conditions hold. Let ψ* and ν* be the associated multipliers satisfying the variational system (2)-(5). If the Coercivity condition holds, then there exist a constant λ and neighborhoods V of p* and U of w* := (x*, ψ*, u*, ν*) such that for every p ∈ V there is a unique solution w_p = (x_p, ψ_p, u_p, ν_p) ∈ U to the first-order necessary conditions (2)-(5), with the property that (x_p, u_p) is a local minimizer of the problem (1) associated with p. Moreover, for every p₁, p₂ ∈ V, if w_{p_i} is the corresponding solution of (2)-(5), then the L² distance between w_{p₁} and w_{p₂} is bounded by λ times a measure of the distance between the corresponding problem data evaluated along the reference solution. In addition, we obtain a corresponding (nonoptimal) estimate in the L^∞ norm.
The proof of Theorem 1.1 is based on an abstract implicit function theorem appearing
in Section 2. In Section 4 we show that the L 1 estimate of Theorem 1.1
can be sharpened if the points where the state constraints change between active and
inactive are separated. In Section 5 we comment briefly on related work.
2. An implicit function theorem in nonlinear spaces
The following lemma provides a generalization of the implicit function theorem that can be applied to nonlinear spaces. To simplify the notation, we let ρ(x, y) denote the distance between the elements x and y of the metric space X.
Lemma 2.1. Let X and \Pi be metric spaces with X complete, let Y be a subset
of \Pi, and let P be a set. Given w 2 X and r ? 0, let W denote the ball B r (w ) in
X and suppose that T (the subsets of \Pi) have the
following properties:
restricted to Y is single-valued and Lipschitz continuous, with Lipschitz
constant -.
there exists a unique w 2 W
such that T (w; p) 2 F (w). Moreover, for every denotes the w
associated with p i , then we have
Proof. Fix . Observe that
for each w a contraction on W with contraction constant
-ffl. Let w 2 W . Since w
Thus \Phi maps W into itself. By the Banach contraction mapping principle, there
exists a unique w 2 W such that is equivalent to
we conclude that for each there is a unique
have
Rearranging this inequality, the proof is complete.
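The contraction step at the heart of this proof is the usual Banach fixed-point iteration; a toy numerical illustration (with an arbitrary scalar contraction Φ) is given below.

# Banach fixed-point iteration for a contraction Phi on a complete metric space.
import numpy as np

def fixed_point(Phi, w0, tol=1e-12, max_iter=1000):
    w = w0
    for _ in range(max_iter):
        w_new = Phi(w)
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

kappa = 0.4                                # contraction constant < 1
Phi = lambda w: kappa * np.cos(w) + 1.0    # a contraction on R (|Phi'| <= kappa)
w_star = fixed_point(Phi, 0.0)
print(w_star, abs(Phi(w_star) - w_star))   # fixed point and residual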
Let X, Y, and P be metric spaces and let w* ∈ X. Using the terminology of [3], a map T : X × P → Y is strictly stationary at w*, uniformly in p near p*, if for each ε > 0 there exist neighborhoods W of w* and P of p* with the property that
ρ(T(w₁, p), T(w₂, p)) ≤ ε ρ(w₁, w₂)
for all w₁, w₂ ∈ W and all p ∈ P.
let Y be a subset of \Pi, and let P be a metric space. Suppose that F :
continuous, and that for some w 2 X and
is continuous at p .
strictly stationary at uniformly in p near p .
restricted to Y is single-valued and Lipschitz continuous, with Lipschitz
constant -.
maps a neighborhood of (w ; p ) into Y .
Then for each -, there exist neighborhoods W of w and P of p such that for
each moreover, for every
denotes the w 2 W associated with p i , then we have
Proof. By (Q5) there exist neighborhoods U 0 of w and P 0 of p such that
We apply Lemma 2.1 with
the following identifications: X, Y , and \Pi are as defined in the statement of the
follow immediately from (Q1) and (Q4), respectively. Choose ffl ? 0 such that ffl !
that for this choice of ffl, we have ffl-
and By (Q3) and the identity T (w
exist neighborhoods
of w such that (P3) of Lemma 2.1 holds. Let fi satisfy -fi=(1 \Gamma ffl- r and by (Q2),
choose P smaller if necessary so that (P2) holds. By Lemma 2.1, for each
exists a unique w 2 W such that T (w; p) 2 F (w), and the estimate (8) holds. Since
T (w; p) 2 F (w) if and only if T (w; p) 2 F(w), the proof is complete.
A particular case of Theorem 2.2 corresponds to the well-known Robinson implicit
function theorem [20] in which X is a Banach space, Y is its dual X ,
N\Omega (w),
\Omega is a closed, convex set in X,
N\Omega (w) is the normal cone to the
set\Omega at the point
differentiable with respect to w, both T and its derivative rwT are continuous
in a neighborhood of (w ; p ), and
is the linearization of T . The Robinson framework is applicable to control problems
with control constraints after the range space X is replaced by a general Banach
space Y (see the discussion in Section 5). However, for problems with state con-
straints, there are difficulties in applying Robinson's theory since stability results for
state constrained quadratic problems, analogous to the results for control constrained
problems, have not been established. In our previous paper [3], we extend Robinson's
work in several different directions. For the solution map of a generalized equation
in a linear metric space, we showed that Aubin's pseudo-Lipschitz property, that
the existence of a Lipschitzian selection, and that local Lipschitzian invertibility are
"robust" under nonlinear perturbations that are strictly stationary at the reference
point. In Theorem 2.2, we focus on the latter property, giving an extension of our
earlier result to nonlinear spaces. In this nonlinear setting, we are able to analyze the
state constrained problem, obtaining a Lipschitzian stability result for the solution.
3. Lipschitzian stability in L 2
To prove Theorem 1.1, we apply Theorem 2.2 using the following identifications.
First, we define
where
- (with the H 1 norm),
(with the L 2 norm), -(1) - 0 and -
An appropriate value for - is chosen later in the analysis. The space X consists of the
collection of functions x, /, u, and - satisfying (10) and (11) with the norm defined
in (10) and (11). Observe that the norms we use are not the natural norms. For
example, the u and - components of elements in X lie in W 1;1 , but we use the L 2
norm to measure distance. Despite the apparent mismatch of space and norm, X is
complete by Lemma 3.2 below.
The functions T and F of Theorem 2.2 are selected in the following way:
r
The continuous operator L is obtained by linearizing the map T (\Delta; p ) in L 1 at the
reference point w In particular,
denote the components of - :
a
r
The space \Pi is the product L 2 \Theta L 2 \Theta L 2 \Theta H 1 while the elements - in Y have the
(with the L 2 norm), b 2 W 2;1 (with the H 1 norm),
ks
where - is a small positive constant chosen so that two related quadratic programs,
(37) and (41), introduced later have the same solution. As we will see, the constant -
associated with the space X must be chosen sufficiently large relative to -. Note that
the inverse is the solution (x; /; u; -) of the linear variational system:
Referring to the assumptions of Theorem 2.2, (Q1) holds by the definition of
X and by the minimum principle, (Q2) follows immediately from the Smoothness
condition. In Lemma 3.3, we deduce (Q3) from the Smoothness condition and a Taylor
expansion. In Lemma 3.6, (Q5) is obtained by showing that for w near w and p near
its associated derivatives are near those of -
L(w ). Finally, in a series of lemmas, (Q4) is established through manipulations of
quadratic programs associated with (15)-(18).
To start the analysis, we show that X is complete using the following lemma:
Lemma 3.1. If u ∈ Lip_κ([0, 1]; R¹), then we have
‖u‖_{L^∞} ≤ max{ √3 ‖u‖_{L²}, (3κ)^{1/3} ‖u‖_{L²}^{2/3} }.
Proof. Since u is continuous, its maximum absolute value is achieved at some
on the interval [0; 1]. Let um the associated value of u. We
consider two cases.
Case 1: u_m > λ. Let us examine the maximum ratio between the L∞-norm and the L2-norm:
maximize { ‖u‖_{L∞} / ‖u‖_{L2} : u ∈ Lip_λ, ‖u‖_{L∞} = u_m }.
Since u_m > λ, the maximum is attained by the linear function v satisfying v(t_m) = u_m and v̇ = −λ. The 2-norm of this function is readily evaluated: on the interval [0, 1] we have ‖v‖_{L2}^2 ≥ u_m^2/3. Taking square roots gives u_m ≤ √3 ‖v‖_{L2} ≤ √3 ‖u‖_{L2},
which establishes the lemma in Case 1.
Case 2: u_m ≤ λ. In this case, let us examine the maximum ratio between the L∞-norm and the 2-norm to the 2/3-power:
maximize { ‖u‖_{L∞} / ‖u‖_{L2}^{2/3} : u ∈ Lip_λ, ‖u‖_{L∞} = u_m }.
The maximum is attained by the piecewise linear function v satisfying v(t_m) = u_m and |v̇| = λ wherever v is nonzero; evaluating its 2-norm, it follows that u_m ≤ (3λ)^{1/3} ‖u‖_{L2}^{2/3},
which completes the proof of Case 2.
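Combining the two cases, the bound of Lemma 3.1 has the form sketched below (the constants are the ones produced by the worst-case functions above; the normalization in the original statement may differ):
\[
\|u\|_{L^\infty} \;\le\; \max\Bigl( \sqrt{3}\,\|u\|_{L^2},\; (3\lambda)^{1/3}\,\|u\|_{L^2}^{2/3} \Bigr)
\qquad \text{for all } u \in \operatorname{Lip}_\lambda([0,1];\mathbb{R}).
\]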
Lemma 3.2. The space X of functions w satisfying (9), (10), and (11), is
complete.
Proof. Suppose that w^k is a Cauchy sequence in X. We analyze the multiplier component of w^k. This component forms a Cauchy sequence in L∞ by Lemma 3.1. Since L∞ is complete, there exists a limit point in L∞. Since the multipliers converge pointwise to this limit and since each of them is Lipschitz continuous with the Lipschitz constant associated with X, the limit is Lipschitz continuous with the same constant. Since each of the multipliers is nondecreasing, it follows from the pointwise convergence that the limit is nondecreasing; moreover, since the remaining conditions defining X hold for each k, the pointwise convergence implies that they hold for the limit as well. This shows that the multiplier component of X is complete. The other components can be analyzed in a similar fashion.
Lemma 3.3. If the Smoothness condition holds, then for T and L defined in (12) and (13), T − L is strictly stationary at w*, uniformly in p near p*.
Proof. Only the first component of T (w; p) \Gamma L(w) is analyzed since the other
components are treated in a similar manner. To establish strict stationarity for the
first component, we need to show that for any given ε > 0,
for p near p and for (x; u) and (y; v) 2 W 2;1
- \Theta Lip - near (x ; u ) in the norm of
(y; v) are also near (x ; u ) in L 1 . After writing the difference f p (x;
an integral over the line segment connecting (x; u) and (y; v), we have
where is the average of the gradient of f p along the line segment connecting
(x; u) and (y; v). By the Smoothness condition,
as p approaches p and as both (x; u) and (y; v) approach (x ; u ) in L 1 . This
completes the proof.
Lemma 3.4. If the Smoothness condition holds, then for T and L defined in (12)
and (13) respectively, and for any choice of the parameter - ? 0 in (14), there exists
Proof. Again, we focus on the first component of T \Gamma L since the other components
are treated in a similar manner. Referring to the definition of Y , we should
show that
for p near p and for (x; u) 2 W 2;1
- \Theta Lip - near (x ; u ) in the norm of H 1 \Theta L 2 . The
W 1;1 norm in (20) is composed of two norms, the L 1 norm of the function values,
and the L 1 norm of the time derivative. By the same expansion used in Lemma 3.3,
we obtain the bound
for p near p* and for (x, u) near (x*, u*). Differentiating the expression within the
norm of (20) gives
d
By the Smoothness condition, the derivatives Ȧ and Ḃ lie in L∞, and by the definition of X, we have
By the triangle inequality and by Lemma 3.1,
for x near x*. Moreover, by Lemma 3.1 and by the Smoothness condition, ∇_x f_p(x, u) approaches A and ∇_u f_p(x, u) approaches B in L∞ as p approaches p* and (x, u) approaches (x*, u*). Hence, the time derivative term in the norm of (20) is bounded as required. Analyzing each of the components of T − L in this same way, the proof is complete.
We now begin a series of lemmas aimed at verifying (Q4). After a technical
result (Lemma 3.5) related to the constraints, a surjectivity property (Lemma 3.6) is
established for the linearized constraint mapping. Then we study a quadratic program
corresponding to the linear variational system (15)-(18). We show that the solution
(Lemma 3.9) and the multipliers (Lemma 3.10) depend Lipschitz continuously on the
parameters. Finally, utilizing the solution regularity derived in [8], we show that the solution and the multipliers lie in X when the Lipschitz constant defining X is sufficiently large.
To begin, let I be any map from [0, 1] to the subsets of {1, 2, . . .} with the property that the following sets I_i are closed for every i:
I_i = {t ∈ [0, 1] : i ∈ I(t)}.
We establish the following decomposition property for the interval [0, 1]:
Lemma 3.5. If Uniform Independence at I holds, then for every α there exist sets J_1, J_2, . . . , J_l, corresponding points τ_1 < τ_2 < · · · < τ_l, and a positive constant ρ < min_i(τ_{i+1} − τ_i) such that for each t ∈ [τ_i − ρ, τ_{i+1}] we have I(t) ⊆ J_i, and if J_i is nonempty, then the independence condition (21) holds for every choice of v. The set J_1 can always be chosen empty.
Proof. For each t ∈ (0, 1) with I(t)^c ≠ ∅, there exists an open interval O centered at t with O ⊆ ∩_{i ∈ I(t)^c} I_i^c. If t = 0 or t = 1, then we can choose a half-open interval O, with t the closed end of the interval, such that O ⊆ ∩_{i ∈ I(t)^c} I_i^c. If I(t)^c is empty, take O to be any such interval containing t. For fixed t ∈ [0, 1] with I(t) ≠ ∅, choose O smaller if necessary so that (22) holds for each s ∈ O and for each choice of v. Since B and K are continuous, it is possible to choose O in this way. Observe that by the construction of O, we have I(s) ⊆ I(t) for each s ∈ O and (22) holds if I(t) is nonempty. Given any interval O on (0, 1), let
O 1=2 denote the open interval with the same center, but with half the length; for the
open intervals associated with denote the half-open interval with
the same endpoint, 0 or 1, but with half the length. The sets O 1=2 form an open cover
of [0, 1]. Let O_1, O_2, . . . , O_l be a finite subcover of [0, 1] and let t_1, . . . , t_l be the associated centers of the interior intervals, or the closed endpoints of the intervals associated with t = 0 or 1. It can be arranged so that no O_i is contained in the
union of other elements of the subcover (by discarding these extra sets if necessary).
Arrange the indices of the O i so that the left side of O i is to the left of the left side
of O_{i+1} for each i. Let τ_1 < τ_2 < · · · < τ_l denote the successive left sides of the O_i, and let ρ be 1/4 of the length of the smallest O_i. Defining J_i = I(t_i), it follows from the construction of the O_i that I(t) ⊆ J_i and (22) holds for each t in an interval associated with t_i and with length twice that of O_i. Since no O_i is contained in the union of the others, the points τ_i are distinct, and by taking ρ smaller if necessary, we can enforce the condition ρ < min_i(τ_{i+1} − τ_i).
Lemma 3.6. If Uniform Independence at I holds, then for each a 2 L 1 and
there exist x 2 W 1;1 and u 2 L 1 such that L(x; u)
and
This (x; u) pair is an affine function of (a; b), and for each ff - 1, there exists a
constant c ? 0 such that
for every (a is the pair associated with
Proof. We use the decomposition provided by Lemma 3.5 to enforce the equations
holds trivially on [-
that i ? 1, and let us consider (23) on the interval [-
we conclude that any j 2 I(t) is contained in either J
then by (27), (23) holds. If j 2 J i n J then by the construction of
the implies that (23) holds.
Suppose that j 2 J i and let oe j be any given Lipschitz continuous function. Observe
that if
d
dt
then K Carrying out the differentiation in the
second relation of (28) and substituting for -
x using the state equation (25), we obtain
a linear equation for u. By Lemma 3.5, this equation has a solution, and for fixed t
and x, the minimum norm solution can be written:
where
In the special case where J i is empty, we simply set u(t;
These observations show how to construct x and u in order to satisfy (26) and (27).
On the initial interval [0; - 2 ], u is simply 0 and x is obtained from (25). Assuming
x and u have been determined on the interval [0; - i ], their values on [- are
obtained in the following way: The control is given in feedback form by (29), where
For is linear on [-
With this choice for oe, the first equation in (28) is satisfied, and with x and u given
by (25) and (29) respectively, the second equation in (28) is satisfied. Also, by the
choice of oe,
for each
Hence, (26) and (27) hold, which yields (23).
For it follows from the definition of oe that
When u in (29) is inserted in (25) and this bound on j -
oe j (t)j is taken into account,
we obtain by induction that x 2 W 1;1 and u 2 L 1 . By the equations (25) for the
state, (29) for the control, and (31)-(32) for oe, (x; u) is an affine function of (a; b).
Moreover, the change (ffix; ffiu) in the state and control associated with the change
(ffia; ffib) in the parameters satisfies:
for each i where oe is specified in (31)-(32).
To complete the proof, we need to relate the oe term of (33) to the b term of (24).
For
Consequently, for almost every t 2 [-
us proceed by induction and assume that
Combining this with (34) and (33) for
Since jffix(- j+1 )j - kffixk W 1;ff [0;- j+1 ] , the induction step is complete.
In the following lemma, we prove a pointwise coercivity result for the quadratic
form B. See [4] and [7] for more general results of this nature.
Lemma 3.7. If Coercivity holds, then there exists a scalar ff ? 0 such that
xi] for all (x; u) 2 M; (35)
and
Proof. If Hence, the L 2 norm of x
and -
x are bounded in terms of the L 2 norm of u, and (35) follows directly from the
coercivity condition. To establish (36), we consider the control u ffl defined by
Let the state x ffl be the solution to
have
lim ffl!0
Combining this with the coercivity condition gives (36).
Consider the following linear-quadratic problem involving the parameters a, s, r, and b:
minimize
subject to
If the feasible set for (37) is nonempty, then Coercivity implies the existence of a
unique minimizer over H 1 \Theta L 2 . Using the following lemma, we show that this minimizer
lies in W 1;1 \Theta L 1 , and that it exhibits stability relative to the L 2 norm.
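Problems of the type (37) have the generic form sketched below, written with the matrices A, B, K and the quadratic weights Q, R that appear in the Smoothness and Coercivity conditions; the exact cost terms and the way the parameters a, b, s, r enter (37) may differ from this sketch:
\begin{align*}
\text{minimize}\quad & \tfrac{1}{2}\int_0^1 \bigl( x(t)^{\mathsf T} Q(t)\,x(t) + u(t)^{\mathsf T} R(t)\,u(t) \bigr)\,dt
   + \int_0^1 \bigl( s(t)^{\mathsf T} x(t) + r(t)^{\mathsf T} u(t) \bigr)\,dt \\
\text{subject to}\quad & \dot x(t) = A(t)\,x(t) + B(t)\,u(t) + a(t), \qquad x(0) \text{ given}, \\
& K_i(t)\,x(t) + b_i(t) \le 0 \quad \text{for all } i \in I(t),\ t \in [0,1].
\end{align*}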
Lemma 3.8. If Coercivity and Uniform Independence at I hold, then (37) has a
unique solution for every a; Moreover, the change (ffix; ffiu)
in the solution to (37) corresponding to a change (ffia; ffib; ffis; ffir) in the parameters
satisfies the estimate
Proof. By Lemma 3.6, Uniform Independence at I implies that the feasible set
for (37) is nonempty while the Coercivity condition implies the existence of a unique
solution From duality theory (for example, see [10]), there exists
with the property that is the minimum with respect to u of the
expression
h-
over all u 2 L 1 . It follows that
and by (36), u (t) is uniformly bounded in t. From the equations L(x ; u
x
The estimate (38) can be obtained, as in Lemma 5 in [2], by eliminating the
perturbation in the constraints. Let be the affine map in Lemma 3.6 relating the
feasible pair (x; u) to the parameters (a; b). By making the substitution (x;
to an equivalent problem of the form
minimize
subject to
Here oe and ae are affine functions of a; b; s and r. Utilizing the Coercivity condition
and the analysis of [9, Sect. 2], we obtain the following estimate for the change
corresponding to the change (ffioe; ffiae):
2:
Hence,
Taking into account the relations between (x; u), (y; v), (oe; ae), and (a; b; s; r), the
proof is complete.
Now let us consider the full linear-quadratic problem where the subscript I on the
state constraint has been removed:
minimize
subject to
The first-order necessary conditions for this problem are precisely (15)-(18). Observe
that x , u , / , and - satisfy (15)-(18) when - . Since the first-order necessary
conditions are sufficient for optimality when Coercivity holds, (x ; u ) is the unique
solution to (41) at - . In addition, if Uniform Independence holds, we now show
that the multipliers / and - satisfying (16)-(18) are unique; hence, x , u , / , and
- are the unique solution to (15)-(18) for - .
To establish this uniqueness property for the multipliers, we apply Lemma 3.5
to the active constraint map A of Section 1. Let J i be the index sets associated
with the complementary
slackness condition - (1) T g associated with the condition (5) of the minimum
principle, implies that (- ) J c
l
along with (16) and (17)
imply that (- ) J l
and / are uniquely determined on [- l ; 1]. Proceeding by induction,
suppose that / and - are uniquely determined on the interval [-
is constant on [- it is uniquely determined by the continuity of - , while (-
and / on [- are uniquely determined by (21), (16), and (17). This completes
the induction step.
We now use Lemma 3.8 to show that the solution to (41) depends Lipschitz continuously
on the parameters when Coercivity and Uniform Independence at A hold.
We do this by making a special choice for the map I. Again, let J i be the index sets
associated with I = A by Lemma 3.5. Since A(t) ae J i for each t 2 [- the
parameter
is strictly positive for each i. Setting in the case I = A ffl
where A ffl (t) is the index set associated with the ffl-active constraints for the linearized
problem:
Since A ffl (t) ae J i for each t 2 [- implies that Uniform Independence
at A ffl holds.
We now observe that the solution (x ; u ) of (41) at - is the solution of
(37) for I = A ffl and - . First, (x ; u ) is feasible in (37) since there are fewer
constraints than in (41). By the choice I = A ffl , all feasible pairs for (37) near
are also feasible in (41). Since (x ; u ) is optimal in (41), it is locally optimal in (37) as
well, and by the Coercivity condition and Lemma 3.7, (x ; u ) is the unique minimizer
of (37) for - . By Lemma 3.8, we have an estimate for the change in the solution
to (37) corresponding to a change in the parameters. Since kffixk L 1 - kffixk H 1, it
follows that for small perturbations in the data, the solution to (37) is feasible, and
hence optimal, for (41). Hence, our previous stability analysis for (37) provides us
with a local stability analysis for (41). We summarize this result in the following way:
Lemma 3.9. If Coercivity and Uniform Independence at A hold, then for s, r,
and a in an L 1 neighborhood of s , r , and a respectively, and for b in a W 1;1
neighborhood of b , there exists a unique minimizer of (41), and the estimate (38)
holds. Moreover, taking I = A ffl with defined in (42), the
solutions to (37) and (41) are identical in these neighborhoods.
Now let us consider the multipliers associated with (41):
Lemma 3.10 If Coercivity and Uniform Independence at A hold, then for s, r,
and a in an L 1 neighborhood of s , r , and a respectively, and for b in a W 1;1
neighborhood of b , there exists a unique minimizer of (41) and associated unique
multipliers satisfying the estimate:
Proof. Let A ffl be the ffl-active constraints defined by (43), where
Let J i be the index sets and let ae be the positive number associated with
by Lemma 3.5. Consider - small enough that the active
constraint set for (41) is a subset of A ffl (t) for each t. By the same analysis used
to establish uniqueness of (/ ; - ), there exists unique Lagrange multipliers (/;
corresponding to - + ffi-. We will show that
Combining this with Lemma 3.9 yields Lemma 3.10.
We prove (45) by induction. Let us start with the interval [- l \Gamma ae; 1]. If i 2 J c
l ,
l
Multiplying (17)
by KB, we can solve for ffi- J l
and substitute in (16) to eliminate -. Since
it follows that
for
Proceeding by induction, suppose that (46) holds for we wish to show
that it holds for
is constant on [- and we have
ae
Combining this with (46) for
for multiplying (17) by KB, we solve for ffi- J j and substitute in (16).
the induction bound (46) for coupled with
the bound already established for ffi- i ,
This completes
the induction.
Lemma 3.11. Suppose that Smoothness, Coercivity, and Uniform Independence
at A hold and let - be small enough that Y is contained in the neighborhoods of
Lemmas 3.9 and 3.10. Then for some - ? 0 and for each - 2 Y , there exists a unique
solution (x; u) to (41) and associated multipliers (/; -) satisfying the estimates (38)
and (44), (x; /; u;
Proof. If the first-order
necessary conditions (15)-(18) associated with (41). Lemmas 3.9 and 3.10 tell us
that the unique solution and multipliers for (41) satisfy the estimates (38) and (44)
for - near - . Since the first-order necessary conditions are sufficient for optimality
when Coercivity holds, the variational system (15)-(18) has a unique solution, for -
near - , that is identical to the solution and multipliers for (41), and the estimates
(38) and (44) are satisfied.
To complete the proof, we need to show that -
This follows from the regularity results of [8], where it is shown that the
solution to a constant coefficient, linear-quadratic problem satisfying the Uniform
Independence condition and with R positive definite, Q positive semidefinite, and
has the property that the optimal u and associated - are Lipschitz continuous
in time while the derivatives of x and / are Lipschitz continuous in time. Moreover,
the Lipschitz constant in time is bounded in terms of the constant ff in the Uniform
Independence condition and the smallest eigenvalue of R. Exactly the same analysis
applies to a linear-quadratic problem with time-varying coefficients, however, the
bound for the Lipschitz constant of the solution depends on the Lipschitz constant
of the matrices of the problem and of the parameters a, r, s, and -
b, as well as on a
uniform bound for the smallest eigenvalue of R(t) on [0; 1] and for the parameter ff
in the Uniform Independence condition. By Lemma 3.9, and with the choice for I
given in the statement of the lemma, the quadratic programs (37) and (41) have the
same solution for s, r, and a in an L 1 neighborhood of s , r , and a , and for b in
a W 1;1 neighborhood of b . Hence, for parameters in this neighborhood of - , the
indices of the active constraints are contained in I(t) for each t, and the independence
condition (21) holds. Lemma 3.7 provides a lower bound for the eigenvalues of R(t).
If (a; s; then the Lipschitz constants for a, s, r, and - b are bounded by those
for a , s , r , and -
b plus -. Hence, taking - sufficiently large, the proof is complete.
Proof of Theorem 1.1. We apply Theorem 2.2 with the identifications given
at the beginning of this section, and with - chosen sufficiently large in accordance
with Lemma 3.11. The completeness of X is established in Lemma 3.2, (Q1) is
immediate, (Q2) follows from Smoothness, (Q3) is proved in Lemma 3.3, (Q4) follows
from Lemma 3.11, and (Q5) is established in Lemma 3.4. Applying Theorem 2.2, the
estimate (7) is established. Under the Uniform Independence condition, Coercivity is
a second-order sufficient condition for local optimality (see [4], Theorem 1) which is
stable under small changes in either the parameters or the solution of the first-order
optimality conditions. Finally, we apply Lemma 3.1 to obtain the L 1 estimate of
Theorem 1.1.
We note that the Coercivity condition we use here is a strong form of a second-order
sufficient optimality condition; it not only provides optimality, but also guarantees
Lipschitz continuity of the optimal solution and multipliers when Uniform
Independence holds. As recently proved in [6] for finite-dimensional optimization
problems, Lipschitzian stability of the solution and multipliers necessarily requires a
coercivity condition stronger than the usual second-order condition. For the treatment
of second-order sufficient optimality under conditions equivalent to Coercivity,
see [18] and [21]. These sufficient conditions can be applied to state constraints of
arbitrary order. For recent work concerning the treatment of second-order sufficient
optimality in state constrained optimal control, see [16], [19], and [22].
4. Lipschitzian stability in L 1
One way to sharpen the L 1 estimate of Theorem 1.1 involves an assumption
concerning the regularity of the solution to the linear-quadratic problem (41). The
time t is a contact point for the i-th constraint of Kx+ b - 0 if (K(t)x(t)
and there exists a sequence ft k g converging to t with (K(t k )x(t k
each k.
Contact Separation: There exists a finite set I 1 I N of disjoint, closed intervals
contained in (0; 1) and neighborhoods of (a ; r ; s ) in W 1;1 and of b in W 2;1
with the property that for each a, r, s, and b in these neighborhoods, and for each
solution to (41), all contact points are contained in the union of the intervals I i with
exactly one contact point in each interval and with exactly one constraint changing
between active and inactive at this point.
Observe that if, for (1) with p = p*, there are a finite number of contact points,
at each contact point exactly one constraint changes between active and inactive,
and each contact point in the linear-quadratic problem (41) depends continuously
on the parameters, then Contact Separation holds. The finiteness of the contact set
is a natural condition in optimal control; for example, in [5] it is proved that for a
linear-quadratic problem with time invariant matrices and one state constraint, the
contact set is finite when Uniform Independence and Coercivity hold.
Theorem 4.1. Suppose that the problem (1) with p = p* has a local minimizer (x*, u*), and that Smoothness, Contact Separation, and Uniform Independence at A
hold. Let / and - be the associated multipliers satisfying the first-order necessary
conditions (2)-(5). If the Coercivity condition holds, then there exist neighborhoods
V of p and U of w such that for every
there exists a unique solution U to the first-order necessary
conditions (2)-(5) and (x; u) is a local minimizer of the problem (1) associated with
p. Moreover, for every is the corresponding
solution of (2)-(5), the following estimate holds:
To prove this result, we need to supplement the 2-norm perturbation estimates
provided by Lemmas 3.9 and 3.10 with analogous 1-norm estimates.
Lemma 4.2. If Coercivity, Uniform Independence at A, and Contact Separation
hold, then there exist neighborhoods of (a ; r ; s ) in W 1;1 and of b in W 2;1 such that
for each a in these neighborhoods, the associated solutions
c(kffiak
Proof. Letting A ffl denote the ffl-active set defined in (43), we again choose
defined in (42). We consider parameters a, r, s, and b chosen
within the neighborhoods of the Contact Separation condition, and sufficiently close
to a , r , s , and b that the active constraint set for the solution of the perturbed
linear-quadratic problem (41) is contained in A ffl (t) for each t. By eliminating the
perturbations in the constraints, as we did in the proof of Lemma 3.8, there is no
loss of generality in assuming that a = a* and b = b*. We refer to the quadratic program corresponding to the parameters (r*, s*) as Problem 1 and to the program corresponding to (r, s) as Problem 2.
Let (x; u) be either is a time for which K i
for some i, then d
Substituting for -
x using the state equation
for u using the necessary condition (17) yields:
This equation has the form
for suitable choices of the row vectors N i , S i , T i , and U i . Hence, at any time t where
the change in solution and multipliers corresponding to
a change in parameters satisfies the equation
By the Contact Separation condition, Problems 1 and 2 have the same active
set near 1. Since the components of - corresponding to inactive constraints
are constant and since - i
The relation (49) combined with Uniform
Independence, with the L 2 estimates provided in Lemmas 3.9 and 3.10, and with a
bound for the L 1 norm in terms of the H 1 norm, gives
Using the bound (36) of Lemma 3.7 in (17) and applying Gronwall's lemma to (16),
we have
for all t ! 1 in some neighborhood of As t decreases, this estimate is valid until
the first contact point is reached for either Problem 1 or Problem 2. Proceeding by
induction, suppose that we have established (51) up to some contact point; we now
wish to show that (51) holds up to the next contact point.
Again, by the Contact Separation condition, there is precisely one constraint,
say constraint j, that makes a transition between active and inactive at the current
contact point. Suppose that on the interval (ff; fi), the active sets for Problems 1 and
differ by the element j, and let - for the first contact point to the left of ff for either
Problem 1 or Problem 2. If there is no such point, we take By the Contact
Separation condition, the difference ff \Gamma - is uniformly bounded away from zero for
all choices of the parameters s and r near s and r . There are essentially two cases
to consider.
Case 1: Constraint j is active in Problem 2 to the left of
active in Problem 1 to the left of
Case 2: Constraint j is active in Problem 2 to the right of
is active in Problem 1 to the right of
Case 1. Since constraint j is active in both Problem 1 and 2 at
from (49) and from the Uniform Independence condition that
is the set of indices of active constraints at
on (ff; fi), the induction hypothesis yields
Hence, we have
is constant in Problem 1 on (ff; fi), and since it is monotone in Problem 2,
the bound (53) coupled with the bound (51) at implies that
Since ffi- i is constant on (ff; fi) for it follow from (51) that
Relation (49), for along with (54) and (55) yield
Combining (54)-(56) gives
On the interval from to the next contact point - , precisely the same
constraints are active in both Problems 1 and 2. Again, the relation (49) combined
with Uniform Independence, with the L 2 estimates provided in Lemmas 3.9 and 3.10,
and with a bound for the L 1 norm in terms of the H 1 norm gives
Relation (50) for along with (57) and (58), give
And combining this with (15)-(17) gives (51) for This completes the induction
step in Case 1.
Case 2. The mean value theorem implies that for some fl 2 (-; ff), we have
d
dt
Hence, even though the derivative of K j x i may not vanish on (-; ff), the derivative
of the change K j ffix is still bound by the perturbation in the parameters at some
d
dt
Since ff and - lie in disjoint closed sets I k associated with the Contact Separation
bounded away from zero by the distance between the closest pair
of sets. Focusing on the left side of (59), we substitute ffi -
substitute for ffiu using (17) to obtain the relation
where denote the set of indices of the
active constraints at Combining (60) with (49) for
The analysis for Case 1 can now be applied, starting with (52), but with ff replaced
by fl.
Remark 4.3. In the proof of Lemma 4.2, we needed to ensure that the difference
appearing in case 2, was bounded away from zero. The Contact Separation
condition ensures that this difference is bounded away from zero since ff and - lie in
disjoint closed intervals I k . On the other hand, any condition that ensures a positive
separation for the contact points ff and - in case 2 can be used in place of the Contact
Separation assumption of Theorem 4.1 and Lemma 4.2.
Proof of Theorem 4.1. The functions T , F , and L and the sets X, \Pi, and Y are
the same as in the proof of Theorem 1.1 except that L 2 is replaced by L 1 and H 1 is
replaced by W 1;1 everywhere. Except for this change in norms, and the replacement
of the L 2 estimates (38) and (44) referred to in Lemma 3.11 by the corresponding
estimate (47) of Lemma 4.2, the same proof used for Theorem 1.1 can be used to
establish Theorem 4.1.
5. Remarks
As mentioned in Section 2, Theorem 2.2 is a generalization of Robinson's implicit
function theorem [20] to nonlinear spaces. His theorem assumes that the nonlinear
term is strictly differentiable and that the inverse of the linearized map is Lipschitz
continuous. In optimal control, the latter condition amounts to Lipschitz continuity in
1 of the solution-multiplier vector associated with the linear-quadratic approxima-
tion. For problems with control constraints, this property for the solution is obtained,
for example, in [1] or [4].
In this paper, we obtain Lipschitzian stability results for state constrained problems
utilizing a new form of the implicit function theorem applicable to nonlinear
spaces. We obtain optimal Lipschitzian stability results in L 2 and nonoptimal stability
results in L 1 under the Uniform Independence and the Coercivity conditions.
And with an additional Contact Separation condition, we obtain a tight L 1 stability
result. These are the first L 1 stability results that have been established for state
constrained control problems.
The Uniform Independence condition was introduced in [8] where it was shown
that this condition together with the Coercivity condition yield Lipschitz continuity
in time of the solution and the Lagrange multipliers of a convex state and control
constrained optimal control problem. Using Hager's regularity result, Dontchev [1]
proved that the solution of this problem has a Lipschitz-type property with respect
to perturbations. Various extensions of these results have been proposed by several
authors. A survey of earlier results is given in [2].
In a series of papers (see [14], [15], and the references therein), Malanowski studied
the stability of optimal control problems with constraints. In [15] he considers an
optimal control problem with state and control constraints. His approach differs
from ours in the following ways: He uses an implicit function theorem in linear spaces
and a compactness argument, and the second-order sufficient condition he uses is
different from our coercivity condition. Although there are some similar steps in the
analysis of L 2 stability, the two approaches mainly differ in their abstract framework.
A prototype of Lemma 3.5 is given in [1], Lemma 2.5. Lemma 3.6 is related to
Lemma 3 in [2], although the analysis in Lemma 3.6 is much simpler since we ignore
indices outside of A(t). In the analysis of the linear-quadratic problem (37), we follow
the approach in [4].
Acknowledgement
. The authors wish to thank both Kazimierz Malanowski
for his comments on an earlier version of this paper, and the reviewers for their
constructive suggestions.
--R
Lipschitzian stability in nonlinear control and optimization
An inverse function theorem for set-valued maps
On regularity of optimal control
Characterizations of strong regularity for variational inequalities over polyhedral convex sets
Variants of the Kuhn-Tucker sufficient conditions in cones of nonnegative functions
Lipschitz continuity for constrained processes
Multiplier methods for nonlinear optimal control
Dual approximations in optimal control
Lagrange duality theory for convex control problems
A survey of the maximum principles for optimal control problems with state constraints
Theory of Extremal Problems
Stability and sensitivity of solutions to nonlinear optimal control problems
Sufficient optimality conditions in optimal control
On the minimum principle for optimal control problems with state constraints
First and second order sufficient optimality conditions in mathematical programming and optimal control
Second order sufficient conditions for optimal control problems with control-state constraints
Strongly regular generalized equations
Sufficient conditions for nonconvex control problems with state constraints
The Riccati equation for optimal control problems with mixed state- control constraints
--TR
--CTR
Stephen J. Wright, Superlinear Convergence of a Stabilized SQP Method to a Degenerate Solution, Computational Optimization and Applications, v.11 n.3, p.253-275, Dec. 1998
Olga Kostyukova , Ekaterina Kostina, Analysis of Properties of the Solutions to Parametric Time-Optimal Problems, Computational Optimization and Applications, v.26 n.3, p.285-326, December
W. Hager, Stabilized Sequential Quadratic Programming, Computational Optimization and Applications, v.12 n.1-3, p.253-273, Jan. 1999
D. Goldfarb , R. Polyak , K. Scheinberg , I. Yuzefovich, A Modified Barrier-Augmented Lagrangian Method for Constrained Minimization, Computational Optimization and Applications, v.14 n.1, p.55-74, July 1999 | optimal control;state constraints;lipschitzian stability;implicit function theorem |
278885 | Connectors for Mobile Programs. | Software Architecture has put forward the concept of connector to express complex relationships between system components, thus facilitating the separation of coordination from computation. This separation is especially important in mobile computing due to the dynamic nature of the interactions among participating processes. In this paper, we present connector patterns, inspired by Mobile UNITY, that describe three basic kinds of transient interactions: action inhibition, action synchronization, and message passing. The connectors are given in COMMUNITY, a UNITY-like program design language which has a semantics in Category Theory. We show how the categorical framework can be used for applying the proposed connectors to specific components and how the resulting architecture can be visualized by a diagram showing the components and the connectors. | Introduction
As the complexity of software systems grows, the role of Software Architecture is increasingly seen as the unifying
infrastructural concept/model on which to analyse and validate the overall system structure in various phases of the
software life cycle. In consequence, the study of Software Architecture has emerged, in recent years, as an autonomous
discipline which requires its own concepts, formalisms, methods, and tools [1], [2]. The concept of connector has been put
forward to express complex relationships between system components, thus facilitating the separation of coordination
from computation. This is especially important in mobile computing due to the transient nature of the interconnections
that may exist between system components. In this paper we propose an architectural approach to mobility that
encapsulates this dynamic nature of interaction in well-defined connectors.
More precisely, we present connector patterns for three fundamental kinds of transient interaction: action inhibition,
action synchronization, and message passing. Each pattern is parameterized by the condition that expresses the transient
nature of the interaction. The overall architecture is then obtained by applying the instantiated connectors to the mobile
system components. To illustrate our proposal, components and connectors will be written in COMMUNITY [3], [4], a
program design language based on UNITY [5] and IP [6].
The nature of the connectors proposed in the paper was motivated and inspired by Mobile UNITY [9], [10], an extension
of UNITY that allows transient interactions among programs. However, our approaches are somewhat different. Mobile
UNITY suggests the use of an interaction section to define coordination within a system of components. We advocate
an approach based on explicitly identified connectors, in order to make the architecture of the system more explicit and
promote interactions to first-class entities (like programs). Moreover, while we base our approach on the modification of
the superposition relation between programs, Mobile UNITY introduces new special programming constructs, leading
to profound changes in UNITY's syntax and computational model. However, we should point out that some of these
syntactic and semantic modifications (like naming of program actions and locality of variables) were already included
in COMMUNITY.
To make it easier for interested readers to compare our approach with Mobile UNITY we use the same example as
in [9]: a luggage distribution system. It consists of carts moving on a closed track transporting bags from loaders to
unloaders that are along the track. Due to space limitations we have omitted many details which, while making the
example more realistic, are not necessary to illustrate the main ideas.
In this paper we follow the approach proposed in [7] and give the semantics of connectors in a categorical framework.
In this approach, programs are objects of a category in which the morphisms show how programs can be superposed.
Because in Category Theory [8] objects are not characterized by their internal structure but by their morphisms (i.e.,
relationships) to other objects, by changing the definition of the morphisms we can obtain different kinds of relationships
between the programs, without having to change the syntax or semantics of the programming language. In fact, the
core of the work to be presented in the remainder of this paper is an illustration of that principle: by changing program
morphisms in a small way such that actions can be "ramified", transient action synchronization becomes possible.
This work was partially supported by JNICT through contract PRAXIS XXI 2/2.1/MAT/46/94 (ESCOLA) and by project ARTS under
contract to EQUITEL SA.
Michel Wermelinger is with the Departamento de Informática, Universidade Nova de Lisboa, 2825 Monte da Caparica, Portugal. E-mail:
mw@di.fct.unl.pt.
José Luiz Fiadeiro is with the Departamento de Informática, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, 1700 Lisboa, Portugal.
II. Mobile Community
The framework to be used consists of programs and their morphisms. This section introduces just the necessary
definitions. For a more thorough formal treatment, the interested reader should consult [4].
A COMMUNITY program is basically a set of named, guarded actions. Action names act as rendez-vous points for
program synchronization. At each step, one or more actions whose guards are true execute in parallel. Each action
consists of one or more assignments to execute simultaneously. Each attribute used by a program is either external-its
value is provided by the environment and may change at any time-or local-its value is initialized by the program
and modified only by its actions. Attributes are typed by a fixed algebraic data type specification (S, Ω, Φ), where S is a set of sort symbols, Ω is an S*×S-indexed family of function symbols, and Φ is a set of first-order axioms defining
the properties of the operations. We do not present the specification of the sorts and predefined functions used in this
paper.
A COMMUNITY program has the following structure
program P is
var V
read R
init I
do []_{g ∈ Γ} g: [B(g) → ‖_{a ∈ D(g)} a := F(g, a)]
where
• V is the set of local attributes, i.e., the program "variables";
• R is the set of external attributes used by the program, i.e., read-only attributes that are to be instantiated with local attributes of other components in the environment;
• each attribute is typed by a data sort in S;
• I is the initialisation condition, a proposition on the local attributes;
• Γ is the set of action names, each one having an associated statement (see below);
• for every action g ∈ Γ, the guard B(g) is a proposition on the attributes stating the necessary conditions to execute g;
• for every action g ∈ Γ, its domain D(g) is the set of local attributes that g can change;
• for every action g ∈ Γ and local attribute a ∈ D(g), F(g, a) is a term denoting the value to be assigned to a each time g is executed.
Formally, the signature of a program defines its vocabulary (i.e., its attributes and action names).
Definition 1 A program signature is a tuple ⟨V, R, Γ⟩ where
• V = ∪_{s∈S} V_s is an S-indexed set of local attributes;
• R = ∪_{s∈S} R_s is an S-indexed set of external attributes;
• Γ = ∪_{d⊆V} Γ_d is a set of actions.
The sets V_s, R_s and Γ_d are finite and mutually disjoint. The domain of an action is the set d ⊆ V such that
Notation. The program attributes are A = S
The sort of attribute a will be denoted
by s a . The domain of action g is denoted by D(g). Inversely, for each a 2 V the set of actions that can change a is
A program's body defines the initial values of its local attributes and also when and how the actions modify them.
For that purpose the body uses propositions and terms built from the program's attributes and the predefined function
symbols.
Definition 2 A program is a pair ⟨θ, Δ⟩ where θ = ⟨V, R, Γ⟩ is a program signature and Δ = ⟨I, F, B⟩ is a program body, where
• I is a proposition over V;
• F assigns to every g ∈ Γ and to every a ∈ D(g) a term of sort s_a;
• B assigns to every g ∈ Γ a proposition over A.
Notation. If D(g) is empty, then F will be denoted by skip. 2
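To make the preceding definitions concrete, here is a small Python sketch (purely our own illustration with hypothetical names, not part of any COMMUNITY tool) that represents signatures and programs as plain data structures and transcribes the Cart program given below; its initialisation condition (bag = 0) and the assignment bag := 0 in unload are filled in as assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Signature:
    local: dict       # local attribute name -> sort, e.g. {"bag": "int"}
    external: dict    # external attribute name -> sort
    actions: dict     # action name -> frozenset of local attributes it may change (its domain)

@dataclass(frozen=True)
class Program:
    sig: Signature
    init: str         # initialisation condition, a proposition over the local attributes
    guard: dict       # action name -> guard, a proposition over all attributes
    assign: dict      # (action, attribute) -> term, defined exactly for the attributes in the domain

# Transcription of the Cart program; the location attribute is written "lam".
cart_sig = Signature(
    local={"bag": "int", "lam": "int", "dest": "int"},
    external={"id": "int", "nbag": "int"},
    actions={"slow": frozenset({"lam"}), "fast": frozenset({"lam"}),
             "load": frozenset({"bag", "dest"}), "unload": frozenset({"bag", "dest"})})
cart = Program(
    sig=cart_sig,
    init="bag = 0",                                   # assumption: the cart starts empty
    guard={"slow": "lam != dest", "fast": "lam != dest",
           "load": "lam = dest and bag = 0", "unload": "lam = dest and bag != 0"},
    assign={("slow", "lam"): "lam + 1", ("fast", "lam"): "lam + 2",
            ("load", "bag"): "nbag", ("load", "dest"): "Dest(nbag, dest)",
            ("unload", "bag"): "0", ("unload", "dest"): "Next(dest)"})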
Locations are an important aspect of mobility [11]. We take the same approach as Mobile UNITY and represent
location by a distinguished attribute. However, our framework allows us to handle locations in a more flexible way.
We can distinguish whether the program controls its own motion or if it is moved by the environment by declaring the
location attribute as local or external, respectively.
The formal treatment of locations is the same as for any attribute because they have no special properties at the
abstract level we are working at. However, any implementation of COMMUNITY will have to handle them in a special
way, because a change in the system's location implies a change in the value of the location attribute and vice-versa.
We assume therefore some special syntactic convention for location attributes such that a compiler can distinguish them
from other attributes. Following the notation proposed by Mobile UNITY, in this paper location attributes start with
-.
To give an example of a COMMUNITY program, we present the specification of a cart. Like bags and (un)loaders,
carts have unique identifiers, which are represented by external integer attributes, so that a cart cannot change its own
identity. A cart can transport at most one bag at a time from a source loader to a destination unloader. Initially, the
cart's destination is the loader from which it should fetch its first bag. The unloader at which a bag must be delivered
depends on the bag's identifier. After delivering a bag, or if a loader is empty, the cart proceeds to the next loader.
Absence of a bag will be denoted by the identifier zero.
The track is divided into segments, each further divided into ten units. The location of a cart is therefore given by
an integer. Carts can move at two different speeds: slow (one length unit per time unit) and fast (two length units). A
cart stops when it reaches its destination. The action to be performed at the destination depends on whether the cart
is empty or full.
program Cart is
dest : int;
read id, nbag : int;
do slow: [-6= dest ! -+ 1]
[] fast: [-6= dest ! -+ 2]
[] load: [-= dest nbag k dest := Dest(nbag, dest)]
[] unload: [-= dest - bag dest := Next(dest)]
We now turn to program morphisms, the categorical notion that expresses relationships between (certain) pairs of
programs. In the previous definitions of COMMUNITY [4], [7], a morphism between two programs P and P 0 is just a
mapping from P 's attributes and actions to those of P 0 , stating in which way P is a component of P 0 . It is therefore
called a superposition morphism, since it captures the notion of superposition of [5], P being the underlying program
and P 0 the transformed one.
In this paper we keep the basic intuition but introduce a small although fundamental change. In a mobile setting,
a program may synchronize each of its actions with different actions from different programs at different times. To
allow this, a program morphism may associate an action g of the base program P with a set of actions fg of
the superposed program P 0 . The intuition is that those actions correspond to the behaviour of g when synchronizing
with other actions of other components of P 0 . Each action g i must preserve the basic functionality of g, adding the
functionality of the action that has been synchronized with g. The morphism is quite general: the set fg may
be empty. In that case, action g has been effectively removed from P 0 . Put in other words, it has been permanently
inhibited, as if the guard had been made false. Due to technical reasons the mapping between actions of P and sets of
actions of P 0 is formalised as a partial function from P 0 to P . However, in examples and informal discussions we use
the "set version" of the action mapping.
Morphisms must preserve the types, the locality, and the domain of attributes. Preserving locality means that local
attributes are mapped to local attributes, and preserving domains means that new actions of the system are not allowed
to change local attributes of the components.
Definition 3 Given program signatures θ = ⟨V, R, Γ⟩ and θ' = ⟨V', R', Γ'⟩, a morphism σ: θ → θ' consists of a total function σ_α: A → A' and a partial function σ_γ: Γ' → Γ.
Notation. In the following, the indices α and γ are omitted. We denote the pre-image of σ_γ by σ^{-1}. Also, if x is a term (or proposition) of θ, then σ(x) is the term (resp. proposition) of θ' obtained by replacing each attribute a of x by σ(a).
Notice that through the choice of an appropriate morphism, it is possible to state whether a given component and a
given system are co-located (i.e., whenever one moves, so does the other) or if the component can move independently
within the system. This can be modeled by a morphism that maps (or not) the location attribute of the component to
the location attribute of the system.
Our first result is that signatures and their morphisms constitute a category. This basically asserts that morphisms
can be composed. In other words, the "component-of" relation is transitive (and reflexive, of course).
Proposition 1 Program signatures and signature morphisms constitute a category SIG.
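Continuing the Python sketch above (hypothetical names again), a signature morphism is just a total map on attributes paired with a partial map from the actions of the target signature back to those of the source; Proposition 1 amounts to the observation that such pairs compose and have identities.

from dataclasses import dataclass

@dataclass(frozen=True)
class SigMorphism:
    src: "Signature"   # Signature as in the sketch of Section II
    tgt: "Signature"
    attr_map: dict     # total: every attribute of src -> an attribute of tgt
    act_map: dict      # partial: action of tgt -> action of src (note the reversed direction)

def compose(f: SigMorphism, g: SigMorphism) -> SigMorphism:
    """Composite morphism from f.src to g.tgt, assuming f.tgt == g.src."""
    assert f.tgt == g.src
    attr_map = {a: g.attr_map[b] for a, b in f.attr_map.items()}
    # Action maps point backwards, so they are composed the other way around.
    act_map = {a3: f.act_map[a2] for a3, a2 in g.act_map.items() if a2 in f.act_map}
    return SigMorphism(f.src, g.tgt, attr_map, act_map)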
Superposition of a program P 0 on a base program P is captured by a morphism between their signatures that obeys
the following conditions:
• the initialization condition is not weakened;
• the assignments are equivalent;
• the guards are not weakened;
where validity is understood in the first-order sense.
The category of signatures extends to programs.
Proposition 2 Programs and superposition morphisms constitute a category PROG.
To give an example of a program morphism, consider the need to prevent carts from colliding at intersections. We
achieve that goal in two steps, the second of which to be presented in subsection IV-B. When two carts enter two
segments that intersect, due to the semantics of COMMUNITY allowing only one cart to move at each step, one of the
carts will be further away from the intersection. The first step to avoid collisions is to force that cart to move slowly.
In other words, its fast action is inhibited. Notice that in this case the inhibition depends on the presence of another
cart, and therefore a second (external) location attribute - 2 is needed. The Cart program is thus transformed into an
InhibitedCart as given by the diagram
program Cart is
dest : int;
read id, nbag : int;
do slow: [-6= dest ! -+1]
[] fast: [-6= dest ! -+2]
[] load: [-=dest - bag=0
[] unload: [-=dest - bag 6= 0
program InhibitedCart is
dest : int;
read id, nbag,
do slow: [-6= dest ! -+1]
[] fast: [-6= dest - :I ! -+2]
[] load: [-=dest - bag=0
[-=dest - bag6= 0
where the inhibition condition is I DistanceToCrossing(-). The
morphism is an injection: - 7! -, fast 7! fast, etc. The next section shows how the InhibitedCart program can be
obtained by composition of two components.
III. The Architecture
The configuration of a system is described by a diagram of components and channels. The components are programs,
and the channels are given by signatures that specify how the programs are interconnected. Given programs P and P 0 ,
the signature S is constructed as follows: for each pair of attributes (or actions) a 2 P and a that are to be shared
(resp. synchronized), the signature contains one attribute (resp. action) b; the morphism from S to P maps b to a and
the morphism from S to P 0 maps b to a 0 . We have morphisms only between signatures or only between programs, but
a signature θ can be seen as a program F(θ) with an "empty" body [7]. In categorical terms, the operator F is a functor (i.e., a morphism between categories).
As a simple example consider the following diagram, which connects (through a channel that represents attribute
sharing) the generic cart program with a program that initializes an integer attribute with the value 2.
program Init 2 is
signature Share is
program Cart is
var .
init .
do .
The program that describes the whole system is given by the colimit of the diagram, which can be obtained by
computing the pushouts of pairs of components with a common channel. The program P resulting from the pushout of
obtained as follows. The initialization condition is the conjunction of the initialization conditions of the
components, and the attributes of P are the union of the attributes of P 1 and P 2 , renaming them such that only those
that are to be shared will have the same name. An attribute of P is local only if it is local in at least one component.
For the above example, the resulting pushout will represent the cart with identifier 2.
program Cart 2 is
var bag, -, dest, id : int
read nbag : int
do .
As for the actions of P, they are basically a subset of all pairs of actions of P_1 and P_2. Only those pairs such that g_1 and g_2 are mapped to the same action of the channel may appear in P. If an action of P_1 (or P_2) is not mapped to any action of the channel, i.e., it is not synchronized with any action of P_2 (resp. P_1), then it appears "unpaired" in P. Synchronizing two actions g_1 and g_2 (i.e., joining them into a single one g_1 g_2) means taking the union of their domains, the conjunction of their guards, and the parallel composition of their assignments. If the actions have a common attribute a in their domains then the resulting assignment is a := F(g_1, a) and the guard is strengthened by F(g_1, a) = F(g_2, a). If the actions are "incompatible" (i.e., the terms denote different values for a) then the equality is false and therefore the synchronized action will never execute, as expected.
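Operationally, the pushout just described can be sketched as follows, continuing the Python illustration (attribute renaming inside terms and the resolution of accidental name clashes are elided, so this is a sketch of the construction rather than a faithful implementation):

def pushout(p1, p2, shared_actions):
    """Amalgamate two Program values over a channel that synchronises the given pairs
    of actions (action of p1, action of p2). Unpaired actions are copied unchanged;
    each pair is merged by taking the union of the domains, the conjunction of the
    guards and the union of the assignments, with an extra equality guard when both
    actions write the same attribute."""
    sync = dict(shared_actions)
    actions, guard, assign = {}, {}, {}

    for g1, d in p1.sig.actions.items():              # unpaired actions of p1
        if g1 not in sync:
            actions[g1], guard[g1] = d, p1.guard[g1]
            assign.update({k: v for k, v in p1.assign.items() if k[0] == g1})
    for g2, d in p2.sig.actions.items():              # unpaired actions of p2
        if g2 not in sync.values():
            actions[g2], guard[g2] = d, p2.guard[g2]
            assign.update({k: v for k, v in p2.assign.items() if k[0] == g2})

    for g1, g2 in shared_actions:                     # synchronised pairs
        name = g1 + g2
        d1, d2 = p1.sig.actions[g1], p2.sig.actions[g2]
        actions[name] = d1 | d2
        guard[name] = "(%s) and (%s)" % (p1.guard[g1], p2.guard[g2])
        for a in d1:
            assign[(name, a)] = p1.assign[(g1, a)]
        for a in d2 - d1:
            assign[(name, a)] = p2.assign[(g2, a)]
        for a in d1 & d2:                             # both write a: add the equality guard
            guard[name] += " and (%s) = (%s)" % (p1.assign[(g1, a)], p2.assign[(g2, a)])

    local = {**p1.sig.local, **p2.sig.local}
    external = {a: s for d in (p1.sig.external, p2.sig.external)
                for a, s in d.items() if a not in local}
    return Program(Signature(local, external, actions),
                   "(%s) and (%s)" % (p1.init, p2.init), guard, assign)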
As an illustration, the pushout of the diagram
program Cart is
var bag, -, dest : int
read id, nbag : int
do slow: [-6= dest ! -+1]
[] fast: [-6= dest ! -+2]
[] load: [-=dest - bag=0
[] unload: [-=dest - bag 6= 0
signature S is
do i
i7!fast
program Inhibitor is
read -: int
do i: [:I ! skip]
is program InhibitedCart shown in the previous section: actions fast and i were paired together, joining their guards
and assignments. Notice that attribute - of the Inhibitor program has been renamed to - 0 because names are local.
The next result states that every finite diagram has a colimit.
Proposition 3 Category PROG is finitely cocomplete.
Channels (i.e., signatures) only allow us to express simple static connections between programs. To express more
complex or transient interactions, we use connectors, a basic concept of Software Architecture [2]. A connector consists
of a glue linked to one or more roles through channels. The roles constrain what objects the connector can be applied to.
In a categorical framework, the connectors (and therefore the architectures) that can be built depend on the categories
used to represent glues, roles, and channels, and on the relationships between those categories. It is possible to use three
different categories for the three parts of a connector (e.g., [7] proposes roles to be specifications written in temporal
logic) but for simplicity we assume that roles and glues are members of the same category. We therefore adopt only the
basic definitions of [7].
Definition 4 A connection is a tuple ⟨C, G, R, γ, ρ⟩ where
• C is the channel;
• G is the glue;
• R is the role;
• γ: C → G and ρ: C → R are morphisms in PROG.
Definition 5 A connector is a finite set of connections with the same glue.
The semantics of a connector is given by the colimit of the connections diagram. By definition, there are superposition
morphisms from each object in the diagram to the colimit. Therefore superposition becomes in a sense "symmetric", a
necessary property to capture interaction [10].
A connector can be applied only to programs which are instantiations of the roles. In categorical terms, there must
exist morphisms from the roles to the programs.
Definition 6 A correct instantiation of a connector {⟨C_i, G, R_i, γ_i, ρ_i⟩ : i ∈ I} is a set of morphisms from the roles R_i to programs P_i in PROG. The resulting system is the colimit of the diagram formed by these morphisms together with the morphisms γ_i and ρ_i.
As an illustration, an instantiated connector with two roles gives rise to the diagram P_1 ← R_1 ← C_1 → G ← C_2 → R_2 → P_2.
IV. Interactions
An interaction between two programs involves conditions and computations. Therefore it cannot be specified just by
a signature; we must use a connector, where the programs are instances of the roles, the interaction is the glue, and
each channel states exactly what is the part of each program in the interaction.
A distributed system may consist of many components, but usually it can be classified into a relatively small set of
different types. Since interaction patterns normally do not depend on the individual components but on their types, it is
only necessary to define connectors for the existing component types. To obtain the resulting system, the connectors will
be instantiated with the actual components. Therefore, in the following we only consider the programs that correspond
to component types. In the luggage distribution example there are only three different program types: carts, loaders,
and unloaders. The programs for the individual components only differ in the initialization condition for the identifier
attribute.
In a mobile setting one of the important aspects of interactions is their temporary nature. This is represented by
conditions: an interaction takes place only while some proposition is true. Usually that proposition is based on the
location of the interacting parties. We consider three kinds of interactions:
inhibition An action may not execute. 1
synchronization Two actions are executed simultaneously.
communication The values of some local attributes of one program are passed to corresponding external attributes of
the other program.
For each kind of interaction we develop a connector template which is parameterized by the interaction conditions.
This means that, given the interacting programs (i.e., the roles) and the conditions under which they interact, the
appropriate connector can be instantiated.
Given the set of components that will form the overall system, the possible interactions are specified as follows:
• An inhibition interaction states that an action g of some program P will not be executed whenever the interaction condition I is true.
• A synchronization interaction states that action g of program P will execute simultaneously with action g' of program P' while I is true.
• A communication interaction states that the value of the local attributes M (the "message") of program P can be written into the external attributes M' of program P' if I is true. The sets M and M' must be compatible. Moreover, each program must indicate which action is immediately executed after sending (resp. receiving) the message.
Definition 7 Given a set P of programs, a transient interaction is either one of the following:
• a transient inhibition ⟨g, P, I⟩;
• a transient synchronization ⟨g, P, g', P', I⟩;
• a transient communication ⟨g, M, P, g', M', P', I⟩;
where
• P and P' are programs in P, g is an action of P, and g' is an action of P';
• M is a set of local attributes of P, M' is a set of external attributes of P', and there is a bijection between M and M' that preserves sorts;
• I is a proposition over attributes of P.
The following subsections present the connector patterns corresponding to the above interactions. The glue of a
connector only needs to include the attributes that occur in the interaction condition. However, to make the formal
definitions easier, the glue patterns will include all the attributes of all the roles. Due to the locality of names, attributes
from different roles must be put together with the disjoint union operator (written ⊎) to avoid name clashes.
For further simplification, we assume that the interaction condition only uses attributes from the interacting programs; thus only those roles are presented in the patterns. If this is not the case, the instantiated connector
must have further roles that provide the remaining attributes. The next subsection provides an example.
A. Inhibition
Inhibition is easy and elegant to express: if an action is not to be executed while I is true, then it can be executed
only while :I is true.
Definition 8 The inhibition connector pattern corresponding to the inhibition interaction ⟨g, P, I⟩ is
1 In this case the interaction is between the program and its environment.
program P is
var V
read R
init .
do g: [B(g) → . ]
[] .
signature Target is
do g
program Inhibitor is
read V ∪ R
init true
do g: [¬I → skip]
For illustration, the action inhibition example of Sections II and III can be achieved through the following connector.
signature Context is
read -: int
program InhibitCrossing is
read
init true
do fast: [:I ! skip]
signature Target is
read -: int
do fast
fast7!fast
program Cart is. program Cart is.
Again, the inhibition condition is I
Notice that the connector has two roles, one for the cart whose action is to be temporarily inhibited, the other for the
cart that provides the context for the inhibition to occur.
An application of this connector and the resulting colimit will be presented in the next subsection.
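With the pushout sketch of Section III, applying an instantiated inhibition connector amounts to pairing the inhibited action with the glue action guarded by the negation of the interaction condition, so that the resulting guard is B(g) ∧ ¬I, exactly as in InhibitedCart. In the hypothetical Python illustration (the interaction condition is kept as the opaque string "I"):

# Glue of the instantiated connector: a single action, guarded by the negation of the
# interaction condition, changing nothing.
inhibit_crossing = Program(
    sig=Signature(local={}, external={"lam": "int", "lam2": "int"},
                  actions={"fast": frozenset()}),
    init="true", guard={"fast": "not I"}, assign={})

inhibited_cart = pushout(cart, inhibit_crossing, shared_actions=[("fast", "fast")])
print(inhibited_cart.guard["fastfast"])    # -> (lam != dest) and (not I)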
B. Synchronization
Synchronizing two actions g and g 0 of two different components can be seen as merging them into a single action gg 0 of
the system, the only difference between the static and the mobile case being that in the latter the merging is only done
while some condition is true. When gg 0 executes, it corresponds to the simultaneous execution of g and g 0 . Therefore,
if g would be executed by a component, the system will in fact execute gg 0 which means that it is also executing g 0 , and
vice-versa. To sum it up, when two actions synchronize either both execute simultaneously or none is executed.
This contrasts with the approach taken by Mobile UNITY which allows two kinds of synchronization: coexecution
and coselection [10]. The former corresponds to the notion exposed above, while the latter forces the two actions to be
selected simultaneously but if one of them is inhibited or its guard is false then only the other action executes. This
extends the basic semantics of UNITY where only one action can be selected at a time. Since COMMUNITY already
allows (but does not impose) simultaneous selection of multiple actions, and because we believe that the intuitive notion
of synchronization corresponds to coexecution, we will not handle coselection.
The key to represent synchronization of two actions subject to condition I is to ramify each action in two, one
corresponding to its execution when I is false and the other one when I is true. Put in other words, each action has two
"sub-actions", one for the normal execution and the other for synchronized execution. As the normal sub-action can only
execute when the condition is false, it is inhibited when I is true, and the opposite happens with the synchronization
sub-action. Therefore we can use the same technique as for inhibition. Since there are two actions to be synchronized,
and the synchronization sub-action must be shared by both, there will be three (instead of four) sub-actions. To facilitate
understanding, the name of a sub-action will be the set of the names of the actions it is part of.
Definition 9 The synchronization connector pattern corresponding to synchronization interaction ⟨g; . . .⟩ is
signature C is
  do g
program Synchroniser is
  read . . .
  init true
  do g: [¬I → skip]
signature C 0 is
  read . . .
  do . . .
program P is
  read R
  init . . .
  do g: [B(g) → . . . ]
  [] . . .
program P 0 is
  read R 0
  init . . .
  do . . .
  [] . . .
In the colimit, the action gg 0 will have the guards and the assignments of g and g 0 . Therefore, if either B(g) or B(g 0 )
is false, or if the assignments are incompatible, then gg 0 will not get executed.
This connector describes what is called "non-exclusive coexecution" in [10]: outside the interaction period the actions
execute as normal. It is also possible to simulate exclusive coexecution which means that the actions are only executed
(synchronously) when the interaction condition is true. To that end, simply eliminate actions g and g 0 from the inhibition
connector shown above, just keeping the synchronized action gg 0 .
Continuing with the example, the second step to avoid collisions at crossings is to force the nearest cart to move fast
whenever the most distant one moves. Since the latter can only move slowly, the nearest cart is guaranteed to pass the
crossing first. Using the same interaction condition as in the previous section one gets the diagram
signature C 1 is
  read -: int
  do fast
program SynchCrossing is
  read . . .
  init true
  do fast: [¬I → skip]
signature C 2 is
  read -: int
  do slow
slow ↦ {slow; fastslow}
program Cart is . . .    program Cart is . . .
To prevent collisions between Cart 1 and Cart 2 (obtained as shown in Section III) one must consider two symmetrical
cases, depending on which cart is nearer to the intersection. Let us assume that Cart 1 is nearer. Thus we must block
the fast action of Cart 2 with the inhibitor shown in the previous section and synchronize its slow action with the fast
action of Cart 1 using the connector above. The diagram is
(Diagram: the InhibitCrossing connector, through its roles Context and Target, and the SynchCrossing connector, through its roles C 1 and C 2 , are instantiated with Cart 1 and Cart 2 .)
with the following colimit (where i ranges over 1 and 2 to abbreviate code duplication)
program System is
read nbag
dest
do slow dest
dest
dest
fast 1 slow dest dest
dest
dest dest i := Dest(nbag i , dest i )]
dest dest i := Next(dest i )]
To see that synchronization is transitive, consider the following example where action g 0 is synchronized with two
other actions g and g 00 whenever I 1 and I 2 are true, respectively. The resulting system must provide actions for all four
combinations of the truth values of the interaction conditions. For example, if I 1 ∧ I 2 is true then all actions must occur
simultaneously, but if I 1 ∨ I 2 is false, then any subset of the actions can occur. This happens indeed because the pushout
of two morphisms
m g is basically given by the pairs {g 1 g 0 , . . .}
with morphism σ(g i ) = g. Putting into words: if an action g "ramifies" into
actions g 1 , . . ., g n , it means that whenever g would be executed, any subset of σ(g) executes in the superposed
program, and vice-versa, the execution of any g i implies that g is executed in the base program. Therefore, if g can be
ramified in two distinct ways, in the pushout any combination of the sub-actions can occur whenever g executes. The
pushout morphisms just state to which combinations each sub-action belongs.
(Diagram: the two pushouts of the ramifications of g 0 , showing the sub-actions and their combinations for all truth values of I 1 and I 2 .)
As one can see, for all combinations of I 1 and I 2 the correct actions are executed. The colimit includes the combination
of all actions that share the same name: actions g 0 and gg 0 of the left middle pushout are synchronized with g 0 and g 0 g 00 on
the right in the four possible ways.
C. Communication
In Mobile UNITY communication is achieved through variable sharing. The interaction x - y when C engage I
disengage F x k F y states the sharing condition C, the (shared) initial value I of both variables, and the final value F x
and F y of each variable. The operational semantics states that whenever a program changes x, y gets the same value,
and vice-versa. This approach violates the locality principle. Furthermore, as pointed out in [10], several restrictions
have to be imposed in order to avoid problems like, e.g., simultaneous assignments of different values to shared variables.
We also feel that communication is a more appropriate concept than sharing for the setting we are considering, namely
mobile agents that engage into transient interactions over some kind of network. In the framework of COMMUNITY
programs, communication can be seen as some kind of sharing of local and external attributes, which keeps the locality
principle. We say "some kind" because we cannot use the same mechanism as in the static case, in which sharing meant
to map two different attributes of the components into a single one of the system obtained by the colimit. In the mobile
case the same local attribute may be shared with different external attributes at different times, and vice-versa. If we
were to apply the usual construction, all those attributes would become a single one in the resulting system, which is
clearly unintended.
We therefore will obtain the same effect as transient sharing using a communication perspective. To be more precise,
we assume program P wants to send a message M , which is a set of local attributes. If P 0 wants to receive the message,
it must provide external attributes M 0 which correspond in number and type to those of M . Program P produces the
values, stores them in M , and waits for the message to be read by P 0 . Since COMMUNITY programs are not sequential,
"waiting" has to be understood in a restricted sense. We only assume that P will not produce another message before
the previous one has been read (i.e., messages are not lost); it may however be executing other unrelated actions. To
put it in another way, after producing M , program P is expecting an acknowledge to produce the new values for the
attributes in M . For that purpose, we assume P has an action g which must be executed before the new message is
produced. Similarly, program P 0 must be informed when a new message has arrived, so that it may start processing it.
For that purpose we assume that P 0 has a single action g 0 which is the first action to be executed upon the receipt of a
new message 2 . That action may simply start using M 0 directly or it may copy it to local attributes of P 0 .
To sum up, communication is established via one single action for each program 3 : the action g of P is waiting for M
to be read, the action g 0 of P 0 reads M (i.e., starts using the values in M 0 ). As expected, it is up to the glue of the
interaction connector to transfer the values from M to M 0 and to notify the programs.
The solution is to explicitly model the message transmission as the parallel assignment of the message attributes,
which we abbreviate as M 0 := M . For this to be possible, the local attributes M of P must be external attributes of the
glue, and the external attributes M 0 of P 0 must be local attributes of the glue. The assignment can be done in parallel
with the notification of P . Moreover, the programs may only communicate when proposition I is true. Therefore the
glue contains an action wait : [I → M 0 := M ] to be synchronized with the "waiting" action g of P . The "reading" action
g 0 of P 0 can only be executed after the message has been transmitted. The solution is to have another action read in
the glue that is synchronized with g 0 . To make sure that read is executed after wait we use a boolean attribute. Thus
g 0 is inhibited while no new values have been transferred to M 0 . Again, this is like a blocking read primitive, except
that P 0 may execute actions unrelated to M 0 .
2 It is always possible to write P 0 in such a way.
3 This is similar to pointed processes in the π-calculus, or to ports in distributed systems.
Since a receiver may get messages from different senders (at different times or not), there will be several
possible assignments M 0 := M i . Due to the locality principle, all assignments to an attribute must be in a single
program. Therefore for each message type a receiver might get, there will be a single glue connecting it to all possible
senders. On the other hand, a message might be sent to different receivers j = 1, . . ., m. Therefore there will be several
possible assignments M 0 j := M
associated with the same wait action of the sender of message M . So there must be a
single glue to connect a sender with all its possible recipients. To sum up, for each message type there will be a single
glue acting like a "demultiplexer": it synchronizes sender i with receiver j when interaction condition I ij is true. This
assumes that the possible communication patterns are known in advance.
The communication connector pattern corresponding to communication interactions
(for each sender i and each receiver j = 1, . . ., m) is
signature Sender i is
  read . . .
  do wait i
program Communicator is
  read . . .
  init ¬new j
  do . . .
signature Receiver j is
  read M 0
  do read j
program P i is
  read R i
  init . . .
  do . . .
  [] . . .
program P 0 j is
  read M 0
  init . . .
  do read . . .
  [] . . .
Notice that several actions wait ij may occur simultaneously, in particular for the same receiver j if the messages sent
have the same value. To distinguish messages sent by different senders, even if their content is the same, one can add
a local integer attribute s to the glue and add the assignment s := i to each action wait ij . This prevents two different
senders from sending their messages simultaneously.
In the luggage delivery example, communication takes place when a cart arrives at a station (i.e., a loader or an
unloader), the bag being the exchanged message. Loaders are senders, unloaders are receivers, and carts have both
roles. The bags held by a station will be stored in an attribute of type queue of integers. Although the locations
of stations are fixed they must be represented explicitly in order to represent the communication condition, namely
that cart and station are co-located. Since it is up to the connector to describe the interaction, the programs for
the stations just describe the basic computations: loaders remove bags from their queues, unloaders put bags on their
queues. The loader program must have separate actions to produce the message (i.e., the computation of the value of
the bag attribute) and to send the message (i.e., the bag has been loaded onto the cart).
The c carts are connected to the l loaders through a connector with c identical roles (each one being the Cart program
of Section I) and l identical roles, each being the Loader program. We only show the roles and respective morphisms
for the i-th loader (sender) and the j-th cart (receiver).
signature Sender is
  read . . .
  do load
load ↦ {wait i1 , . . ., wait ic }
program Load is
  init ¬new j
  do . . .: [. . . nbag . . .
    new j := true]
  [] . . .: [. . .
    new j := false]
signature Receiver is
  read -, nbag : int
  do load
load ↦ read j
program Loader is
  var . . ., loaded
  init loaded ∧ - = InitLoc(id)
  do newbag: [q . . . ∧ loaded → . . .
    loaded := false]
  [] load: [¬loaded → . . .
    loaded := true]
program Cart is
  var -, dest, bag : int
  read id, nbag : int
  init - = InitLoc(id)
  do slow: [- ≠ dest → - := - + 1]
  [] fast: [- ≠ dest → - := - + 2]
  [] load: [- = dest ∧ bag = 0 →
    bag := nbag . . . ]
  [] . . .: [- = dest ∧ bag ≠ 0 → . . . ]
Similarly, there is a connector with u roles for the unloaders and c roles for the carts. The i-th cart (sender) is
connected to the j-th unloader (receiver) as follows.
signature Sender is
  read . . .
  do load
load ↦ {wait i1 , . . ., wait iu }
program Unload is
  read . . .
  init ¬new j
  do . . .
signature Receiver j is
  read . . .
  do unload
program Cart is
  var -, dest, bag : int
  read id, nbag : int
  init - = InitLoc(id)
  do slow: [- ≠ dest → - := - + 1]
  [] fast: [- ≠ dest → - := - + 2]
  [] load: [- = dest ∧ bag = 0 →
    bag := nbag . . . ]
  [] . . .: [- = dest ∧ bag ≠ 0 → . . . ]
program Unloader is
  read . . .
  do unload: [true → . . . ]
Let X i be the program obtained by the pushout of programs Init i (of Section III) and X . Then the program
corresponding to a system consisting of two carts, one loader, and one unloader is obtained by computing the colimit of
the following diagram, which only shows the role instantiation morphisms between the connectors (which have the same
name as their glues) and the components.
(Diagram: Cart 1 and Cart 2 are connected to each other through the SynchCrossing and InhibitCrossing connectors (via their fast and slow actions), to Loader 3 through the Load connector, and to Unloader 4 through the Unload connector.)
Notice that the binary connectors dealing with crossings are not symmetric; they distinguish which cart is supposed
to be nearer to the crossing. Therefore one must apply those connectors twice to each pair of carts.
V. Concluding Remarks
We have shown how some fundamental kinds of transient interactions, inspired by Mobile UNITY [9], [10], can be
represented using architectural connectors. The semantics has been given within a categorical framework, and the
approach has been illustrated with a UNITY-like program design language [3], [4].
As argued in [3], [12], the general benefits of working within a categorical framework are:
• mechanisms for interconnecting components into complex systems can be formalized through universal constructs
(e.g., colimits);
• extra-logical design principles are internalized through properties of universal constructs (e.g., the locality of names);
• different levels of design (e.g., signatures and programs) can be related through functors.
For this work in particular, the synergy between Software Architecture and Category Theory resulted in several conceptual
and practical advantages.
First, systems are constructed in a principled way: for each interaction kind there is a connector template to be
instantiated with the actual interaction conditions; the instantiated connectors are applied to the interacting programs
thus forming the system architecture, which can be visualized by a diagram; the program corresponding to the overall
system is obtained by "compiling" (i.e., computing the colimit of) the diagram.
Second, separation between computation and coordination, which is already supported by Software Architecture, has
been reinforced by two facts. On the one hand, the glue of a connector uses only the signatures of the interacting
programs, not their bodies. On the other hand, the superposition morphisms impose the locality principle.
Third, to capture transient interactions, only the morphism between program actions had to be changed; the syntax
and semantics of the language remained the same.
There are two ways of dealing with architectures of mobile components. In a system with limited mobility or with
a limited number of different component types, all possible interaction patterns can be foreseen, and thus a static
architecture with all possible interconnections can represent such a system. To cope with systems having a greater
degree of mobility, one must have evolving architectures, where components and connectors can be added and removed
unpredictably. This paper, being inspired by Mobile UNITY, follows the first approach. Our future work will address
the second approach.
One of the ideas we wish to explore is to remove the interaction condition from the glue's actions and instead associate
it to the application of the whole connector. The diagram of the system architecture thus becomes dynamic, at each
moment including only the connectors whose conditions are true. Another possibility is to apply graph rewriting
techniques to the system diagrams. A third avenue is to change (again) the definition of morphism to represent the
notion of "changes-to" instead of "component-of". In other words, a morphism from P to P 0 indicates that P may
become P 0 . For the moment, these are just some of our ideas to capture software architecture evolution in a categorical
setting. Their suitability and validity must be investigated.
Acknowledgements
We would like to thank Antónia Lopes for many fruitful discussions and the anonymous referees for suggestions on
how to improve the presentation.
References
"Special issue on software architecture,"
Perspectives on an Emerging Discipline
Parallel Program Design-A Foundation
"Semantics of architectural connectors,"
Basic Category Theory for Computer Scientists
"Mobile UNITY: Reasoning and specification in mobile computing,"
"Mobile UNITY: A language and logic for concurrent mobile systems,"
"Towards a general location service for mobile environments,"
Keywords: transient interactions, connectors, UNITY, software architecture
278970 | A Framework-Based Approach to the Development of Network-Aware Applications
Abstract: Modern networks provide a QoS (quality of service) model to go beyond best-effort services, but current QoS models are oriented towards low-level network parameters (e.g., bandwidth, latency, jitter). Application developers, on the other hand, are interested in quality models that are meaningful to the end-user and, therefore, struggle to bridge the gap between network and application QoS models. Examples of application quality models are response time, predictability, or a budget (for transmission costs). Applications that can deal with changes in the network environment are called network-aware. A network-aware application attempts to adjust its resource demands in response to network performance variations. This paper presents a framework-based approach to the construction of network-aware programs. At the core of the framework is a feedback loop that controls the adjustment of the application to network properties. The framework provides the skeleton to address two fundamental challenges for the construction of network-aware applications: 1) how to find out about dynamic changes in network service quality and 2) how to map application-centric quality measures (e.g., predictability) to network-centric quality measures (e.g., QoS models that focus on bandwidth or latency). Our preliminary experience with a prototype network-aware image retrieval system demonstrates the feasibility of our approach. The prototype illustrates that there is more to network-awareness than just taking network resources and protocols into account and raises questions that need to be addressed (from a software engineering point of view) to make a general approach to network-aware applications useful.
I. INTRODUCTION
Distributed applications use networks to provide access to remote
services and resources. However, in today's net-
works, users experience large variations in performance; e.g.,
bandwidth or latency may change by several orders of magnitude
during a session.
Such dramatic changes are observed in mobile environments
(where a user moves from one location to another) as well as in
stationary environments (where other network users cause con-
gestion). Variations in network performance are a problem for
applications since they result in unpredictable application be-
havior. Such unpredictability is annoying-e.g., if a user looks
through an on-line catalogue, a certain bandwidth must be continuously
available if the system wants to display images at the
speed expected by the user, or when congestion frustrates the
user to the point that the software becomes unusable.
J. Bolliger is with the Department of Computer Science, Swiss Federal Institute of Technology (ETH), Zürich, Switzerland. E-mail: bolliger@inf.ethz.ch. Effort sponsored in part by ETH Polyprojekt 41-2641.5.
T. Gross is with the Department of Computer Science, ETH, Zürich, Switzerland, and with the School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. E-mail: thomas.gross@cs.cmu.edu. Effort sponsored in part by the Advanced Research Projects Agency and Rome Laboratory, Air Force Materiel Command, USAF, under agreement number F30602-96-1-0287. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Advanced Research Projects Agency, Rome Laboratory, or the U.S. Government.
To bridge the gap between network reality and application
expectation, i.e. to cope with the performance variations and
to provide for a certain predictability of the application behav-
ior, a number of researchers have proposed the development of
network-aware applications. The basic idea is to allow an application
to adapt to its network environment, e.g., by trading off
the volume (and with it the quality) of the data to be transferred
and the time needed for the transfer. That is, the application responds
to a drop in bandwidth by reducing its demands on the
networks, and increases its demands when there are additional
resources.
To develop a meaningful approach to such adaptation, we
must understand the realities of today's network architectures
and the dynamics of the services provided. There can be many
reasons for the variation in network performance. Some of the
reasons are inherent (e.g., for mobile wireless communication),
others are caused by the tremendous demand that always seems
to outgrow any capacity improvement. In response to this sit-
uation, modern networks are beginning to move away from the
best-effort service model to QoS models that allow the definition
of quality metrics based on a variety of parameters. Un-
fortunately, current QoS models are oriented towards low-level
network parameters (e.g., bandwidth, latency, jitter). Application
developers, on the other hand, are interested in quality models
that are meaningful to the end-user, such as response time.
Thus, network awareness includes mapping application-centric
quality measures (e.g., predictability) to network-centric quality
measures and vice versa.
Another motivation for network awareness is to avoid the
distinction between different application modes. For example,
some image retrieval systems distinguish between a preview
(or browse) mode, where only thumbnails are provided, and a
mode of higher quality image delivery. Avoiding the concept
of a mode simplifies implementation of the application components
and allows the system to dynamically take advantage of
available resources. A user on a high-bandwidth local area net-work
does not have to live with a thumbnail-sized view that is
statically defined and optimized for users accessing the image
server across a (slow) wide-area network. A time limit parameter
that controls how long a client is willing to wait provides
enough flexibility to toggle implicitly between the browsing and
the high-quality mode. Applications may need to adapt either
at start-up time or dynamically during the course of a session or
both.
There exist a number of network-aware applications, in particular
from the realm of multimedia [22], [2]. However the
solutions to the problem of network variability adopted by this
class of applications are often tailored to the specific needs of
an individual application or a specific programming model [41],
and there exists no general approach to develop network-aware
applications for other application domains. As network awareness
continues to be an important aspect of application develop-
ment, the need arises to identify and provide a general approach
to build network-aware systems.
We propose to use frameworks as an approach that encapsulates
(and integrates solutions to) the problems of adapting an
application's behavior to the availability of network resources.
A framework provides a basic solution to a class of problems;
clients of the framework employ the basic structure by exten-
sion, i.e. they provide concrete methods where the framework
relies on abstract methods [39]. So to build network-aware applications
by extending a framework, we must develop the over-all
structure, which is the foundation for a framework, as well
as the specific extensions that result in a real system, as has
been done in other application domains where frameworks have
proven useful.
The paper is organized as follows: Section II discusses issues
related to the problem of network-awareness. Section III
introduces the basic structure of our framework and the service
model supported; Sections IV and V provide a detailed description
of the methods employed to obtained information about net-work
resource availability and the strategies used to adapt to
changes in service quality respectively. After presenting performance
measurements in Section VI we summarize related work
and present our conclusions.
II. NETWORK- AND SYSTEM-AWARENESS
Networks are just one of the many resources employed by an
application. The model of a network-aware application emphasizes
the crucial role of the network connection: in many cases,
the network is on the critical path, and performance problems in
the network are the cause of the degradation of application per-
formance. However in other cases, a system is bottlenecked by
other components, e.g., the transfers across a local bus or from
the disks, or the amount of computation. (Some experimental
systems support a QoS model for internal transfers [15], [10],
[9].) If application performance is limited by parts other than the
network, then such an application should not be network-aware
but system-aware, i.e. it should be able to adjust its behavior in
response to other aspects of the system (response time, disk I/O
latency, bus bandwidth, etc.). In the context of this paper we
focus on the concept of network-awareness implying that an ap-
plication's behavior is primarily controlled by the availability of
network resources, but we point out where an application must
go beyond network issues. System-awareness is especially important
if an application wants to trade off communication and
computation, i.e. an application may adjust to network changes
by computing, e.g., compression. In such cases it is important to
make sure that the computation overhead is not worse than the
network overload.
For our discussion of network service quality awareness we
concentrate on unicast request-response type communication
between clients and servers, where the traffic in at least one direction
can be described as bulk-transfer type network traffic.
This traffic pattern makes up a large fraction of application traffic
patterns observable in today's networks [5], [29].
A. Reservation vs. adaptation
Another approach to couple a service quality-aware application
to a network is to allow the application to reserve network
services in advance [40]. We do not discuss the relative benefits
of either approach since in practice, both of them coexist
(and continue to do so for a long time). Some network architectures
(or their implementations) may not support reservations
at all or may support them only to a limited degree (either by
choice or due to implementation faults), and although future
versions of popular protocol suites may support reservations,
not all sites will run the most recent software. Furthermore,
as network providers attempt to develop usage-based charging
schemes, there will be financial incentives to restrain applications
from uncontrolled use of network resources. (Today's networks
have really two aspects that make adaptivity unattractive:
there is almost no usage-based charging, and what is worse, the
most aggressive applications are often rewarded with the largest
share of the bandwidth pie [12].)
In a reservation-based approach an application must address
the two issues of (i) how to find out what and how much to reserve
(e.g., given some limit on the costs) and (ii) how to adjust
to meet the confirmed reservation, which may be less than the
application has asked for.
From a software engineering point of view, however, both
techniques require the same software technology: an application
must be able to adjust its resource demands, either to meet
a limit imposed by a reservation or to meet some constraints imposed
by the network. In either case the application must be
adaptive.
B. Quality
The objective of network-awareness is to allow an application
to be sensitive to changes in the network environment with the
goal of maximizing user-perceived quality. In our context, quality
means "conformance to a standard or a specification". Our
focus on system-awareness means that we are interested in "the
totality of features and characteristics of a product or service
that bears on its ability to satisfy given needs" [17].
Only the application (developer) knows what "quality" is. So
we build an infrastructure for those applications that are interested
in a quality-time tradeoff, i.e. applications that are willing
to sacrifice some degree of quality in return for faster response
time (or are willing to wait a little longer to get better results).
So the central issue is that we must find a software structure
that allows the application developer to specify what "quality"
means in the context of a specific application.
III. FRAMEWORK FOR NETWORK-AWARE APPLICATIONS
Before we can discuss a specific framework, we first want
to lay out a roadmap for the kind of interaction that is possible
or profitable between an application and the network. The
framework then provides, for some class of applications, a way
to structure their interaction with the network through extending
the framework. We start with principles of application-network
these principles stem from our experience with various
application projects and reflect study and rework involved
while factoring possible framework structures. We illustrate
the general principles with examples from a specific project,
the Chariot (Swiss Analysis and Retrieval of Image ObjecTs)
project [1], which is described in more detail in Section V-A.
The objective of the Chariot project is to allow networked clients
to search a remote image database. The Chariot system contains
an adaptive image server that serves as proof-of-concept for the
general ideas presented in the remainder of this paper.
A. Service model
Many networked applications using request-response type
communication include a user (client) that requests a set of objects
(images, texts, videos, byte code, etc.) from a remote site
(server), which is responsible for retrieving the requested objects
(from secondary storage) and delivering them to the client.
The response usually has a larger volume than the request and
dominates the transmission costs. In the following, we sometimes
refer to such servers and clients as sender and receiver of
the bulk-transfer, respectively.
A server accepts and acts upon request messages containing
a list of objects to be retrieved (or computed) and some QoS-
restrictions, where QoS-restrictions characterize the minimum
quality tolerable for the objects delivered, the maximum quality
that is beneficial for the user, and a limit T on the time allowed
for processing the request and transmitting the response. The
bounds on the quality may be (implicitly) imposed by the re-
quester's processing or display capabilities. The application decides
what kind of objects can be requested; quality is a property
of a requestable object and must also be defined by the application
Example (Chariot): Requestable objects are images or image sequences. The
quality of an image is defined by the resolution, color depth, the image format
(e.g., JPEG, GIF), a format-specific parameter, such as JPEG's compression factor
[18], and a user-defined weighting of these image characteristics.
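To make the service model concrete, the following C++ sketch shows one possible shape of such a request message. The type and field names (ImageQuality, ObjectRequest, Request, and so on) are illustrative assumptions, not the interface of the Chariot implementation.

// Hypothetical data types for a request under the service model above.
// Names and fields are assumptions for illustration only.
#include <string>
#include <vector>

struct ImageQuality {             // application-defined quality of one image
  int width, height, depth;       // resolution and color depth
  std::string format;             // e.g., "JPEG", "GIF"
  int comprFactor;                // format-specific parameter (JPEG compression factor)
};

struct ObjectRequest {
  std::string objectId;           // which image to retrieve
  double weight;                  // relative importance (e.g., similarity value)
  ImageQuality minQuality;        // lowest quality the client will tolerate
  ImageQuality maxQuality;        // highest quality that is still beneficial
};

struct Request {
  std::vector<ObjectRequest> objects;  // list of requested objects
  double timeLimitT;                   // limit T on processing and transmission time
};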
The server's task is to deliver all the requested objects to the
client within time T , attempting to maximize the overall quality
of the objects transmitted while respecting the QoS-restrictions.
The range for dynamic adaptation to bandwidth availability is
bounded by the minimal and maximal quality specified by the
client. To quantify the task of the server, a quality metric must
be defined by the application, e.g., as a weighted sum of the individual
object qualities to be delivered. Weights for the quality
metric may include the relative importance of an object in comparison
to the other objects in the request list.
Example (Chariot): The weight of an image in the image request list is determined
by a value for the similarity of the image with respect to a query image.
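Written out, the example metric is a weighted sum over the n requested objects; the symbols q(obj_i) for the application-defined quality of object i and w_i for its weight are notation introduced here only for illustration:

  Q = \sum_{i=1}^{n} w_i \, q(obj_i), \qquad q_{\min}(obj_i) \le q(obj_i) \le q_{\max}(obj_i)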
Such a network-aware server need not only dynamically adapt
due to network service degradation (e.g., a drop in bandwidth),
but should also try to opportunistically exploit extra bandwidth
to deliver as many high quality objects as possible within time
T .
Network-aware applications adhering to the service model
above must address the following questions: (i) how to find out
about dynamic changes in network service quality on the path
from the sender to the receiver, and (ii) how to adapt the delivery
process to such dynamic changes such that the objectives of
the service model are met. Before we turn to each question in
detail in Sections IV and V, we present a general structure for
the type of network-aware application under consideration.
B. Application structure: software feedback control loop
A useful structure for network-aware applications using
request-response type communication is a software feedback
control loop, where the time left for the response-initially set
to T -constitutes the command variable of the closed-loop con-
trol. The feedback driving the sender adaptation comprises information
about the currently available bandwidth as obtained
by mechanisms described in Section IV.
We focus on closed-loop control systems because they are in
a position to deal with bursty applications. Other applications,
e.g., those that deal with continuous media streams, may use a
different control structure [2], [22]. We model sender-initiated
adaptation in a closed-loop control system with the three phases
monitor and react (P mr ), prepare (P prep ), and transmit (P trans ),
as depicted in Fig. 1. The three phases work independently and
share the list L of requested but not yet transmitted objects. P mr
is responsible for obtaining information (or feedback) about the
available bandwidth, determining whether the amount of data to
transmit must be reduced or whether it may be increased. In
case adaptation is needed, the P mr phase must decide which objects
to adapt, which transformations to apply, and must then
set the quality state of the objects according to these decisions.
The term "transformation" refers here to any activity, including
transfers, conversions, or computation. Once a (final) decision
on the quality of an object to be delivered has been made, P prep
must transform the object to the quality assigned by P mr . P trans
delivers completely prepared objects to the client.
Note that P mr does not invoke transformations directly, but
defers their execution to forthcoming phases P prep to allow for
"last-minute" adaptation. Furthermore, note that while P mr may
need to change the quality state of several objects at the same
time, P prep makes only one object ready for transmission at a
time (on a uniprocessor).
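A minimal C++ sketch of this loop structure is given below; it only illustrates the division of labor between the three phases and is not the actual framework code. The helper names (ControlLoop, ObjectList, secondsSince, and the stubbed phase methods) are assumptions.

// Skeleton of the closed-loop control; all names are illustrative assumptions.
#include <chrono>
#include <deque>

struct Object { double assignedQuality = 1.0; /* quality state set by P_mr */ };
using ObjectList = std::deque<Object>;

class ControlLoop {
public:
  void run(ObjectList& pending, double timeLimitT) {
    auto start = std::chrono::steady_clock::now();
    while (!pending.empty()) {
      double tLeft = timeLimitT - secondsSince(start);   // command variable
      monitorAndReact(pending, tLeft);   // P_mr: poll bandwidth, adjust qualities
      prepare(pending.front());          // P_prep: transform to the assigned quality
      transmit(pending.front());         // P_trans: deliver the object to the client
      pending.pop_front();
    }
  }
private:
  static double secondsSince(std::chrono::steady_clock::time_point t0) {
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
  }
  void monitorAndReact(ObjectList&, double /*tLeft*/) { /* compare t_needed with t_left */ }
  void prepare(Object&) { /* apply the transformation(s) chosen by P_mr */ }
  void transmit(Object&) { /* write the prepared data to the connection */ }
};

The adaptation decision itself is deliberately left abstract here; Sections IV and V discuss how the feedback and the quality mapping fill in these stubs.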
IV. FEEDBACK FROM THE NETWORK
A central issue that determines the effectiveness of the control
loop (and the frameworks built on this loop) is how it obtains
information about the state of the network.
A. What does a network-aware application want to know?
With the aim to provide predictable service, i.e. response delivery
within a specified amount of time T , an application ideally
wants to know the network service quality, and in particular the
bandwidth available for the time T . With a best-effort network
service model, such as IP's and ATM ABR's [7], there is no way
of getting such information in advance. Thus, all we can do is
gather as much QoS-information about the past behavior as possible
(and useful) and extrapolate future network behavior from
the observed QoS-values.
We can distinguish two different application-relevant characteristics
as far as bandwidth feedback is concerned: bottleneck
bandwidth and available bandwidth [30]. The former gives an
upper bound on how fast and how much an application may possibly
transmit, while the latter gives an estimate on how fast the
connection should transmit to preserve network stability, which
is an issue of primary concern to congestion control mechanisms.
Fig. 1. Control-loop consisting of three phases monitor and react (P mr ), prepare (P prep ), and transmit (P trans ).
While knowledge about the bottleneck bandwidth is useful in
bounding the approximations for the bandwidth estimates used
by a network-aware sender, information about the dynamics of
the available bandwidth on the end-to-end network path is indispensable
to enable timely adaptation of the volume of data to be
transmitted.
B. Three approaches to obtaining feedback
This section discusses three approaches to obtain feedback
about the characteristics and the dynamic behavior of an end-
to-end network path. The distinction is based on the layering
of the ISO/OSI-protocol stacks. The higher the layer providing
the feedback, the less cooperation is required from network
protocols on one side, but the less accurate and frequent will
the feedback information be on the other side. Feedback about
network service quality may be provided by:
Application-level QoS monitoring: A monitor assesses the
dynamics of network service quality by measuring sender
and receiver network quality parameters (e.g., packet inter-arrival
times, bandwidth, etc.) and repeatedly exchanges
the QoS-state between the peers, similar to the model proposed
in RTP [35]. The timeliness and accuracy of the information
depends on the averaging interval used for the
computation of the QoS-values and the frequency of the
QoS-state exchange. The monitoring approach provides
only a black box view of the network and transport ser-
vices. Therefore the sender has difficulties in distinguishing
between service degradation caused by the network and
degradation caused by the application or the end-system.
E.g., a (temporarily) slow receiver may lead the sender to
wrongly assume a network service degradation.
End-to-end transport-level congestion control: The goal of a
congestion control algorithm is to operate at a connection's
fair share of the bandwidth. To do so it must deploy mechanisms
to find the bottleneck bandwidth and detect incipient
congestion or network under-utilization. The implicit feed-back
that drives the adaptation of the sending rate may include
the fraction of packets lost or measurements of delay
variations, interarrival times of packet-pairs, etc. Several
benefits can be gained from making such transport-level
feedback information transparent to a network-aware appli-
cation: the feedback-loop is shortened and queuing unnecessary
data for transmission can be avoided in times of con-
gestion. Such information may help in bringing the appli-
cation's behavior in line with the protocol's behavior, since
the application has the same view of the network resources
as the protocol. Furthermore, if the congestion control
algorithm can make transparent its conclusions about the
available bandwidth, an even tighter coupling between application
and network can be achieved.
Network-level traffic management: Routers are most suited
to fairly allocate resources among competing connections.
Routers are the only authority capable of identifying and
isolating misbehaving senders. Furthermore, routers are
able to provide explicit feedback about their congestion
state to the end-systems. Each router on an end-to-end
path may generate feedback messages (either in binary
form [31] or as an explicit rate information [7]). The feed-back
must be processed in the end-systems to find the available
bandwidth used to control the sending rate.
It is important to note that the different layers may have different
perceptions of the current network status since they employ
different mechanisms to deal with exceptions such as loss
events. However, as far as the estimation of available resources
is concerned they all strive for a view as accurate as possible as
it helps them avoid exception situations. Therefore, each layer
may provide the information needed by a network-aware appli-
cation, however, the lower the layer the more timely and accurate
the information will be.
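As an illustration of the first, purely application-level approach, the sketch below computes a bandwidth estimate from the bytes reported as received over a fixed averaging interval. It is a simplified stand-in for such a monitor, assuming a hypothetical onBytesReceived() notification derived from the peer's QoS-state messages; it is not the mechanism of any particular layer.

// Simplified application-level throughput monitor (illustrative only).
#include <chrono>
#include <cstddef>

class ThroughputMonitor {
public:
  using Clock = std::chrono::steady_clock;
  explicit ThroughputMonitor(double intervalSec = 1.0) : interval_(intervalSec) {}

  // Called whenever the peer's QoS-state message reports 'bytes' more data received.
  void onBytesReceived(std::size_t bytes) {
    bytesInWindow_ += bytes;
    double elapsed = std::chrono::duration<double>(Clock::now() - windowStart_).count();
    if (elapsed >= interval_) {                 // end of the averaging interval
      lastEstimate_ = bytesInWindow_ / elapsed; // bytes per second
      bytesInWindow_ = 0;
      windowStart_ = Clock::now();
    }
  }
  double bandwidthEstimate() const { return lastEstimate_; }

private:
  double interval_;                             // averaging interval in seconds
  Clock::time_point windowStart_ = Clock::now();
  std::size_t bytesInWindow_ = 0;
  double lastEstimate_ = 0.0;
};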
Fig. 2. Layered architecture of network-aware applications with the adaptation layer implementing the closed-loop control of Fig. 1.
C. Unified API
Although all three layers employ different feedback mecha-
nisms, they all aim at finding the available network service quality
to control an application's sending rate. Therefore, we devise
a unified API for network service quality feedback in general
and bandwidth feedback in particular that provides a network-aware
application with the required information. As the application
is interested in obtaining predictions about the band-width
to be expected (or any other QoS-value) and an estimation
for the reliability of the prediction, we extend a common transport
protocol API, the socket API [37], by a function get bw()
that returns bw(t) predicting the bandwidth for t > now, and
prob bw (t), an estimate for the stability of the prediction 1 . Note
that to ease framework development we provide the same QoS-
interface at each layer in Fig. 2. Note also that both the monitor-
and the adaptation-layer are logically part of the application.
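One conceivable C++ rendering of this extended interface is sketched below. Apart from get_bw(), all names (QosSocket, BandwidthEstimate) are assumptions chosen for illustration; the predictions bw(t) and prob_bw(t) are represented as callable objects.

// Sketch of a socket-like interface extended with bandwidth feedback.
// Except for get_bw(), the names are illustrative assumptions.
#include <cstddef>
#include <functional>

struct BandwidthEstimate {
  std::function<double(double)> bw;       // bw(t): predicted bandwidth for t > now
  std::function<double(double)> prob_bw;  // prob_bw(t): stability of the prediction
};

class QosSocket {
public:
  virtual ~QosSocket() = default;
  virtual std::size_t send(const void* buf, std::size_t len) = 0;  // as in the socket API
  virtual std::size_t recv(void* buf, std::size_t len) = 0;
  virtual BandwidthEstimate get_bw() = 0;   // QoS feedback added by the unified API
};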
As provision of dynamic network QoS-information is not the
main topic of this paper we restrict our discussion to exemplifying
how end-to-end congestion control information can be
made transparent to an application through the API described.
Our implementation of a (TCP-based) user-level transport protocol
[4] distinguishes three high-level sender (congestion) states:
start-up (slow start), congestion avoidance, and congestion recovery
[19]. Each of the three state-classes (see State pattern
in [14]) provides the function get bw():
(i) The slow-start phase uses packet-pair probing to estimate
the bottleneck bandwidth 2 bw max and returns the function
bw(t) = min(bw max , (cwnd/RTT) · 2^(t/RTT)), where cwnd denotes the
current congestion window and RTT stands for the (mea-
sured) round-trip time. In this phase, bw(t) reflects slow-
start's doubling of the bandwidth occupied every round-trip
time, which is represented by the ratio of cwnd and RTT .
The exponential increase continues up to (at most) the net-work
path's bottleneck capacity bw max .
(ii) When the protocol is in the congestion avoidance state,
i.e. when operating at the bandwidth effectively available,
we deploy TCP Vegas-style network-path adaptation [3],
and can therefore approximate bw(t) ≈ cwnd/RTT, as changes to
cwnd are supposed to happen on a fairly large time-scale
(multiples of the round-trip time).
(iii) In the congestion recovery state, which effects a rate halving, bw(t) is modeled according to [21].
1 For the sake of brevity we only discuss bandwidth-related functions. Similar
API-extensions exist for other QoS-parameters, such as delay or loss.
2 This process is known as initial slow-start threshold (ssthresh) estimation [3],
[16]. Note that standard TCP uses a statically defined ssthresh of 64 KBytes.
Congestion control's use-it-or-lose-it property [11] requires
the sender to be almost constantly sending, otherwise the feed-back
may not be useful. Moreover, the issue of dynamically
assessing the stability of end-to-end network-path characteristics
is an open research question, which is why we refrain from
discussing how to compute prob bw (t) here and refer to off-line
studies on this topic [30].
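A sketch of this use of the State pattern, with the formulas from items (i) and (ii) above and a crude placeholder for the rate-halving model of [21], might look as follows (cwnd in bytes, RTT in seconds; the class and field names are assumptions):

// State-pattern sketch of per-state bandwidth prediction (illustrative).
#include <algorithm>
#include <cmath>

struct ConnState {
  double cwnd;   // current congestion window (bytes)
  double rtt;    // measured round-trip time (seconds)
  double bwMax;  // bottleneck bandwidth from packet-pair probing (bytes/s)
};

class SenderState {
public:
  virtual ~SenderState() = default;
  virtual double bw(const ConnState& c, double t) const = 0;  // prediction for t > now
};

class SlowStart : public SenderState {        // (i) doubling every RTT, capped at bwMax
public:
  double bw(const ConnState& c, double t) const override {
    return std::min(c.bwMax, (c.cwnd / c.rtt) * std::pow(2.0, t / c.rtt));
  }
};

class CongestionAvoidance : public SenderState {  // (ii) bw(t) roughly cwnd/RTT
public:
  double bw(const ConnState& c, double) const override { return c.cwnd / c.rtt; }
};

class CongestionRecovery : public SenderState {   // (iii) crude stand-in for the rate
public:                                           //       halving model of [21]
  double bw(const ConnState& c, double) const override { return 0.5 * (c.cwnd / c.rtt); }
};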
V. FEEDBACK LOOP AND ADAPTATION
As stated in the previous sections, the goal of a network-aware
sender is to meet a user-specified bound on the delivery
time by adapting the quality of the objects delivered to the mea-
sured/available network capacity. The adaptation process' objective
must be to utilize the available resources as efficiently as
possible and therefore to maximize the user-perceived quality
within the bounds (time, bandwidth, and boundary conditions
on quality) given. The following sections discuss in more detail
the mechanisms deployed in our prototype network-aware system
and elaborate on where and how application-specific information
can/must be factored out of the software control system
described to provide a reusable framework. However, before we
turn to the framework structure and its interaction with an appli-
cation, we briefly introduce the Chariot system as an example of
the type of application that can be based upon this framework.
A. Chariot: sample framework instantiation
The objective of the Chariot project is to allow networked
clients to search a remote image database. The Chariot system
uses query-by-example to let a user formulate a query for similar
images [1]. The low-level content (e.g., color and texture)
of each image in the repository is extracted to define feature
vectors, which are organized in a database index at the search
engine. The core of the system (as depicted in Fig. 3) consists
of a client (to handle user access to the image library),
a search engine to identify matching images, and one or more
network-aware servers, which deliver the images in the best possible
quality, considering network performance, server load, and
a client-specified delivery time. Physical separation of the image
library index (in the search engine) from the image repository
(in the server) facilitates distribution and mirroring of the
library. The core components are connected by a coordination
layer that isolates the details of network access and adaptation
and gives each component a maximum of flexibility to take advantage
of future developments.
Fig. 3. Chariot architecture
It is the subsystem comprising client and the adaptive image
repository that is relevant to our discussion of network-
awareness and which serves as proof-of-concept for the ideas
presented in this paper.
B. Monitor- and react-phase
Table
I summarizes the terms and abbreviations introduced
in the next sections. The discussion of application-specific information
that is factored out of the control loop framework
and which must be provided by the application (developer) always
refers to the OMT-style [33] class hierarchy depicted in
Fig. 4. The name of abstract classes, which are part of the frame-
work, and abstract methods is shown in italics. Concrete classes
provided by the application that instantiates the framework-
Chariot in our example-are shaded. In the text we use "func-
tional" notation. E.g., foo(ob j) or bar(class) indicate that the
method foo is invoked on object ob j (ob j: foo() in our C++ im-
plementation) or that the method bar is invoked from class class
(class :: bar()), respectively.
The monitor- and react-phase (P mr ) is the key phase in our
framework. It is responsible for repeatedly obtaining feedback
from lower protocol layers and deciding whether adaptation is
required or not. The software control loop is part of the application
and may be layered on top of a network monitor (see Fig. 2),
from which it extracts feedback information about the available
network service quality (e.g., bandwidth).
As P mr is primarily interested in feedback about "relevant"
changes in service quality, it either must deploy a polling policy
to obtain feedback about the available bandwidth and assess
the significance of a QoS change on its own, or it has to register
with the monitor layer for asynchronous notification of QoS
change events. Whether a change in network service quality is
relevant is application-specific and depends on the granularity
of the adaptation possible, the cost incurred by the adaptation
mechanisms as well as on the bandwidth and processing power
available (Section V-C).
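These two options could be surfaced through an interface along the following lines; the callback signature and the relative-change threshold are illustrative assumptions and merely show how the application, not the monitor, defines what a relevant change is.

// Illustrative interface for obtaining bandwidth feedback in P_mr.
#include <functional>

using QosCallback = std::function<void(double newBandwidth)>;

class MonitorLayer {
public:
  virtual ~MonitorLayer() = default;
  // Polling: P_mr asks for the current estimate and judges significance itself.
  virtual double currentBandwidth() const = 0;
  // Asynchronous notification: the callback fires only when the estimate changes
  // by more than relativeThreshold (e.g., 0.2 for a 20% change), so the
  // application decides what counts as a relevant change.
  virtual void onSignificantChange(double relativeThreshold, QosCallback cb) = 0;
};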
In both cases, P mr is executed repeatedly to establish whether
adaptation (e.g., data reduction) is required to account for a net-work
service degradation or whether adaptation is beneficial to
prevent network under-utilization. To do so the application-level
quality must be mapped down to network-level quality parameters
such as the bandwidth required or the amount of data remaining
to be shipped (d left ). d left , together with the feedback
on the available bandwidth, can be used to compute the time
needed (t needed ) for the transfer. Corrective action must be taken
if t needed and the time left (t left ) differ "significantly". (Signifi-
cance depends also on the size of the objects as well as network
and application properties.)
B.1 Application-to-network QoS-mapping
The kind of QoS-mapping that enables the comparison between
t needed and t left requires the application to provide a function
data(quality) that computes the amount of data necessary
for a given object quality (see member function Quality ::
data() in Fig. 4). d left is then determined by the sum of
data(quality(obj)) of the objects obj not yet delivered.
Given d left and get bw(), which estimates the band-width
available at time t in the future, we can compute t needed
by integrating (i.e. by summing up piecewise continuous parts
of) the function bw(t) over time t until an area (i.e. data vol-
ume) is covered which exceeds d left . Thus, t needed represents the
time needed to transfer d left given bandwidth bw(t). This fairly
general statement must be qualified to avoid misinterpretations:
with a best-effort network service model bw(t) can hardly be
predicted for more than a few round-trip times with a reasonably
high probability at the transport level (Section IV). Therefore,
bw(t) approximates the available bandwidth after these first few
round-trip times with simple constant or linear functions based
on past measurements. This approximation simplifies the computation
of t needed ; knowledge about the bottleneck bandwidth
is used to bound the approximation.
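For a piecewise-constant approximation of bw(t), the integration reduces to a simple accumulation, as in the following sketch; the step width and the representation of bw(t) as a callable are assumptions made for illustration.

// Sketch: time needed to ship dLeft bytes, given a bandwidth prediction bw(t).
// bw is any callable returning the predicted bandwidth (bytes/s) at time t;
// the piecewise-constant step width dt is an arbitrary choice for illustration.
#include <functional>
#include <limits>

double timeNeeded(double dLeft, const std::function<double(double)>& bw,
                  double dt = 0.1 /* seconds */) {
  double t = 0.0;
  double shipped = 0.0;
  while (shipped < dLeft) {
    double rate = bw(t);
    if (rate <= 0.0)                       // stalled connection: no finite estimate
      return std::numeric_limits<double>::infinity();
    shipped += rate * dt;                  // area under bw(t) over [t, t + dt)
    t += dt;
  }
  return t;                                // first t at which the area exceeds dLeft
}

Comparing the returned value against t left in P mr then yields the error signal that triggers quality reduction or expansion.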
Note that it is only for the time needed to prepare and transmit
the next object that P mr needs to estimate future network behavior
to be able to satisfy the user's request within the time limit-
the reason is that the control loop gets an opportunity to take
corrective action during the next iteration of P mr , if required. In
case we do not have such estimates, or if the conditions above
cannot be met, e.g., because we are dealing with large objects,
for which transmission takes longer than the system can reliably
predict bw(t), the situation is more complicated. Either the
control loop gets a chance to take corrective action (because the
time limit did not expire), or the data cannot be sent in the allot-
TABLE I
ABBREVIATIONS USED IN THE PAPER
P mr         monitor and react-phase
P prep       prepare phase
P trans      transmit phase
T            user specified time limit for response delivery
t left       time left to deliver response, initialized to T
c prep       CPU resources used to prepare objects for transmission
t prep       time needed to prepare objects (given c prep , load(t))
t trans      time needed to transmit objects (given bw(t))
t needed     time needed to deliver response (given t trans , t prep )
t diff       error variable of control loop (t needed - t left )
d left       data remaining to be transmitted
d reduction  reduction potential of an object
bw(t)        bandwidth estimation/prediction, t > now
load(t)      system load estimation/prediction, t > now
Fig. 4. Application-specific part of the class hierarchy (OMT-notation [33]): the abstract framework classes Request, Object, Quality, and Algorithm (with abstract methods such as data(), quality(obj, p), prepare_costs(), prepare(), and algorithm_iter()) and the concrete Chariot classes ImageObject, ImageQuality, ImageScaling, and ImageCompr.
ted time. In the latter case, the application must be able to deal
with the breakdown of the service model (Section V-E).
B.2 Network-to-application QoS-mapping
The goal of P mr is to bring t needed in line with t left by either reducing
or increasing the (overall) quality of the objects remaining
to be delivered; these actions thereby reduce or increase d left .
The following questions must be considered while the sender
tries to compensate for the difference t diff = t needed - t left by
adapting the quality of the data awaiting delivery:
(i) Which object(s) should be chosen for adaptation (victim
choice)?
(ii) How should the amount of quality adaptation be distributed
among the chosen objects? Most importantly,
how does the sender find the amount of quality adaptation
needed given the volume of data adaptation required (d dif f )
(quality distribution)?
(iii) Which algorithms should be used to accomplish a desired
adaptation (algorithm selection)?
To a certain extent most of the questions above are
application-specific and therefore cannot be answered in gen-
eral. Thus, a framework for network-aware applications must
provide flexibility in replacing, refining, or extending strategies
as described in the following paragraphs.
(i) Victim choice: One strategy to choose objects for adaptation
(i.e. victims) proceeds along the following idea: if quality
reduction is required, choose the objects with the lowest weight-
quality product, because they influence the overall quality the
least. In case expansion is needed, the objects with the highest
weight-quality product should be chosen for analogous reasons.
Note that the metric used for the victim choice depends on the
application, e.g., in an image retrieval system it may be better
to only decide according to the weights, which are based on
similarity measures, because they reflect which images the user
is really interested in.
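A minimal sketch of this strategy, assuming each object exposes weight and quality attributes (the attribute names are illustrative, not the framework's):

def choose_victims(objects, n, reduce_quality=True):
    # For reduction, objects with the lowest weight*quality product influence
    # the overall quality the least; for expansion, pick the highest products.
    ranked = sorted(objects, key=lambda o: o.weight * o.quality,
                    reverse=not reduce_quality)
    return ranked[:n]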
(ii) Quality distribution: Given a set of victims to be reduced
(or expanded), ideally the individual objects are reduced
in quality inversely proportional to their weight-quality product
(or their weight respectively). However, a problem arises
here because the system must satisfy two objectives at two different
levels: On one hand, it aims to balance the application-level
quality reduction according to the relative importance of
the objects (i.e. their weight) and on the other hand, it needs to
achieve a data reduction of a certain amount (d dif f ) at the net-work
level. The problem is that the mapping from network to
application quality measures is generally ambiguous (in contrast
to application-to-network mapping).
Example (Chariot): to effect an image size reduction of a factor N, the server may either scale the image down by N, reduce the color depth by a factor of N, find a JPEG quality factor that achieves such a compression ratio, or use any combination of the image transformation algorithms mentioned.
Although we find transformations that achieve a certain data
reduction, the direct effect on the quality reduction is not known
since the application-level quality depends on the user-specified
weighting of the individual quality attributes (e.g., resolution,
color depth). This ambiguity makes it hard to guarantee balanced
quality reduction and to find the required quality reduction
efficiently in general. A straightforward but inefficient solution
simply computes and compares the data and quality reduction
of all the algorithms. Unless the application provides additional
hints such as the continuity of the data(quality) function,
there is not much chance to improve upon such an approach.
Finding efficient and generally applicable approaches to this aspect
of application-controlled QoS-mapping is still an area of
ongoing research. For our prototype system we make the simplification
that data(quality) is a linear function of quality; this
assumption implies that the system must find only a "fair" distribution
of d dif f that respects the weights of the individual objects
(see Section V-D.2).
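Under this linear simplification, the distribution step reduces to splitting d dif f in inverse proportion to the victims' relative weights; a sketch of one pass (our own illustration, clamping to d reduction handled separately):

def distribute_reduction(victims, d_diff):
    # Lighter (less important) objects absorb a larger share of the required
    # data reduction d_diff.
    inv = [1.0 / v.weight for v in victims]
    total = sum(inv)
    return {v: d_diff * w / total for v, w in zip(victims, inv)}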
The problem of network-to-application QoS-mapping is further
complicated since (a) the adaptation potential of an object
(limited by the boundary conditions on min/max quality) must
be taken into account, and (b) the transformations applied on
the objects consume host resources and time. Therefore, the
transformations indirectly impact t needed . We address the issues
related to (b) in Section V-C.
(iii) Algorithm selection: The choice of the (transformation)
algorithm to accomplish a given quality adaptation is closely related
to the issue of how much quality adaptation is required for
each victim. There is usually an application-dependent choice
as indicated in the example above. In our prototype framework
we require the application to specify a list of transformation algorithms
for each class of objects that can be part of a request.
Each algorithm must provide a list of parameter values appli-
cable. In addition, the application must provide functions that
help the adaptation process estimate the data and quality reduction
potential of an algorithm on a per-object basis.
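Schematically, such a per-class registration could look as follows; the class and method names are our own illustration of the roles described above, not the actual framework interface:

class TransformAlgorithm:
    # Base class an application would subclass per transformation algorithm.
    params = []                        # admissible parameter values

    def prepare_costs(self, obj, param, cpu):
        raise NotImplementedError      # estimated c_prep on the given cpu

    def data_reduction(self, obj, param):
        raise NotImplementedError      # estimated size-reduction potential

    def quality_reduction(self, obj, param):
        raise NotImplementedError      # estimated quality-reduction potential

# one list of registered algorithms per class of requestable objects
registry = {"ImageObject": []}         # e.g., scaling, recompression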
C. System-awareness
Quality adaptations (e.g., by means of transformations, such
as compression) cost CPU-resources and take a non-negligible
amount of time to be completed. On one hand, a reduction in object
quality may result in the desired reduction of transmission
time, on the other hand, the transformations necessarily imply
higher CPU-costs than simply retrieving an object (or image)
from disk. Obviously, we want to avoid situations where a reduction
of object quality in an attempt to reduce the error variable
incurs prepare costs (t prep ) that are higher than
the gain in transmission time (i.e., t prep > t dif f ).
Therefore, our resource model also includes t prep , the time
needed for the phases P prep . The adaptation process is still
driven by network resource availability but additionally controlled
by host resource consumption and availability. Each
transformation algorithm registered for the requested objects
must provide a function prepare_costs(obj, param, cpu) returning
an estimate for the costs (c prep ) to transform obj from its
original quality state to the one currently assigned on a given
cpu. c prep denotes the costs in terms of resources used, e.g.,
as given by system and user CPU time on Unix systems. c prep
is used to compute an estimate of the effective t prep needed for
a transformation by using an operating system dependent function
prepare_time(c prep , load), where load denotes the average
length of the process run-queue for example. (For most Unix
systems the time needed for a given task using c prep CPU time
at a system load of load can be approximated by c prep · load, up
to a certain maximum load-level).
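A sketch of this cost-to-time mapping; the cap on the load level is our own assumption, made explicit as a parameter:

def prepare_time(c_prep, load, max_load=8.0):
    # Wall-clock estimate for a task needing c_prep CPU seconds when the
    # average run-queue length is `load` (c_prep * load, up to a maximum).
    return c_prep * min(max(load, 1.0), max_load)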
The effectiveness of the adaptation process and the reliability
of the server to meet the QoS-constraints depend on the accuracy
of all the models and estimates introduced in the last sec-
tions: bw(t), data(quality), prepare_costs(obj, param, cpu),
prepare_time(cost, load), etc. The more accurate the estimates
used in the decision-making, the higher the probability that the
sender is able to meet the time constraints.
Example (Chariot): The server computes c prep as a function of image size
and param used for the transformation algorithm. In contrast to approaches
typically found in real-time systems, which rely on worst-case predictions for
c prep , our server bases its estimates on statistical data gained during past measurements
of request processing. We derived regression models for both an al-
gorithm's cost and its reduction potential. The regression models are regularly
updated with new measurements.
C.1 Practical considerations: communication latency hiding
In a simple implementation of the software control loop, the
phases of the framework execute sequentially. The adaptation
produces stable results if t prep + t trans for the adapted object
is smaller than t trans of the original object. However, sequential
operation wastes bandwidth while the host is busy preparing
the next object for transmission and wastes CPU resources
while transmitting objects over a slow end-to-end path. With
a slow connection, the sender is almost constantly congestion-
controlled, and there are ample CPU cycles. An improved control
loop tries to keep P trans constantly sending and uses threaded
prepare and transmit phases to hide the latency of the object de-
livery. Communication latency hiding calls for a different cost
model: t needed is no longer computed as t prep + t trans , but is approximated
by t trans plus the fraction of t prep that is not available for latency hiding
[38].
Although the intrinsics of the various resource models are
outside the scope of this paper, the discussion above emphasizes
the need for suitable abstractions. To allow for
future refinements and extensions we encapsulate the computation
and communication model deployed by a function
overall_time(t prep , t trans ) that can be used to compute t needed .
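A possible encapsulation of the two cost models (sequential versus latency-hiding operation); the function name follows the text, while the unhidden fraction of t prep is an assumed parameter:

def overall_time(t_prep, t_trans, latency_hiding=True, unhidden_fraction=0.1):
    # Sequential operation: t_prep + t_trans.
    # With threaded prepare/transmit phases, only the fraction of t_prep that
    # cannot be overlapped with transmission adds to t_trans.
    if latency_hiding:
        return t_trans + unhidden_fraction * t_prep
    return t_prep + t_trans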
D. Using the framework to implement an adaptive system
Fig. 5 summarizes the steps involved in computing the error
variable t dif f that drives the adaptation process. The function
compute t dif f () takes the request, i.e. the list of objects
not yet transmitted, and the functions bw(t), load(t) as argu-
ments. In addition, it uses the global variables t left and cpu.
compute t dif f () is then used by the function adapt(), which
is sketched in Fig. 6, and is invoked repeatedly by P mr after
obtaining new bandwidth feedback bw(t). If t dif f exceeds an
application-specific threshold e limiting oscillation, the remaining
objects in the request are subject to the adaptation process
described in the next sections.
To accomplish the adaptation, the sender must find objects
to transform. Given the list of objects that must be transmitted,
there are several possible approaches to identify the victims, distribute
the quality reduction, and select the transformation algo-
rithms. We discuss here two such approaches.
D.1 General exhaustive search
To avoid congestion and network under-utilization the adaptation
process should aim to find a combination of objects to adapt
and transformations to apply such that |t dif f | is minimized and
the overall quality metric is maximized. Unfortunately, an exhaustive
search for the global minimum of |t dif f | in the whole
solution space is not attractive, as we illustrate in the next paragraphs.
Given a request consisting of N objects and given M transformation
algorithms each taking m different parameter values
on average, there are n ≈ M · m possible transformations
applicable to each of the objects. If we assume that all the
possible combinations fulfill the QoS-restrictions, there are approximately
N^n possibilities to adapt the request to the currently
available bandwidth. In each iteration of P mr , the sender must
compute t dif f for each of the N^n points in the solution space and,
e.g., find the combination with the smallest |t dif f |. As an alternative,
the sender can try to find the combinations with |t dif f | within the tolerance
and choose the one that maximizes the overall weight · quality metric.
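For illustration, the brute-force variant can be sketched as follows (our own sketch; it enumerates one transformation choice per object and is only practical for very small requests, which is precisely the point made above):

from itertools import product

def exhaustive_search(objects, transforms, t_diff_of):
    # transforms maps an object to its applicable (algorithm, parameter)
    # pairs; t_diff_of evaluates t_diff for a complete assignment.
    best, best_err = None, float("inf")
    for combo in product(*(transforms[o] for o in objects)):
        err = abs(t_diff_of(dict(zip(objects, combo))))
        if err < best_err:
            best, best_err = combo, err
    return best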
As long as there is no additional information about the functions
used to compute t dif f (e.g., gradients), or as long as the
quality boundaries are not very restrictive, the size of the solution
space cannot be reduced, and hence the complexity is too
high to make this approach feasible in the general case. There-
fore, we cannot include a generic method to perform exhaustive
search in the framework, since we expect the methods of
the framework to provide a solution for all possible extensions.
However, we can provide the application with several strategies
[14] for the adaptation process (one being exhaustive search
for example) and leave it to the application developer to decide
on the most appropriate strategy to use in the context of the application.
D.2 A practical approximative search
If N or n are large, the sender must either employ some approximations
or introduce simplifications in the adaptation process
to reduce the complexity of the adaptation process, otherwise
the search is so expensive that the resource consumption
of P mr must be included in the cost models. For the sake of
simplicity we restrict our discussion to the former case.
The idea that forms the basis of the currently implemented
adaptation process is to approximate the search for a minimal |t dif f | by
iteratively trying to apply the possible transformation
algorithms with their respective parameters with the objective to
find a local minimum that is within the tolerance. If one algorithm
does not achieve the desired result, the next algorithm is
chosen [24]. The adaptation phase, i.e. the reduce() function in
Fig. 6, then proceeds along the following steps (see Section V-B.2):
(i) the victims are chosen as the first n objects from the request
list, which is ordered by increasing weights, such that the sum of their
reduction potentials d reduction (obj) covers d dif f , where d dif f represents the amount
of data reduction that is required to compensate for t dif f .
d reduction (obj) denotes the reduction potential of the current
quality state of object obj, which is bounded by the
minimal quality tolerated by the user. If no such set of n
objects exists with which the necessary data reduction can
be achieved an exception is thrown, which is caught and
handled in adapt() (Fig. 6).
(ii) With the simplifying assumption data ∼ quality, d dif f is
distributed among the victims by assigning the reduction
needed for each object to a fraction of d dif f inversely proportional
to the object's relative weight (in the request list),
unless d reduction poses a limit on the reduction attainable.
In such a case, the distribution step is repeated, as long as
there are objects whose reduction is limited by d reduction
and as long as t dif f has not been fully compensated for.
(iii) The transformation algorithm selection is done by iterating
over the algorithms, the objects, and the parameters.
In each step t dif f is computed, and the iteration terminates as soon as
|t dif f | falls within the tolerance; a schematic version of these steps is sketched below.
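A compact sketch of these three steps (our own illustration of reduce() from Fig. 6, renamed reduce_request to avoid clashing with the Python built-in; the object attributes and apply_best_transform are assumed helpers):

class NoAdaptationPossible(Exception):
    pass

def reduce_request(request, d_diff):
    # (i) victim choice: lightest objects first, until their combined
    #     reduction potential covers d_diff
    victims, potential = [], 0.0
    for obj in sorted(request, key=lambda o: o.weight):
        victims.append(obj)
        potential += obj.d_reduction
        if potential >= d_diff:
            break
    if potential < d_diff:
        raise NoAdaptationPossible()
    # (ii) quality distribution: split d_diff inversely proportional to the
    #      victims' weights, clamped by each object's reduction potential
    inv = [1.0 / v.weight for v in victims]
    total = sum(inv)
    targets = [min(d_diff * w / total, v.d_reduction)
               for w, v in zip(inv, victims)]
    # (iii) algorithm selection: per victim, pick the registered algorithm
    #       and parameter that best approach the target reduction
    for v, target in zip(victims, targets):
        v.apply_best_transform(target)
    return victims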
Note that the adaptation process outlined makes heavy use of
the iterators shown in Fig. 4. Use of iterators facilitates experimentation
with different priorities of the transformation algorithms
used. Based on our experience with Chariot, we found
that being able to cleverly apply application-knowledge to set
priorities is essential for the effectiveness of the approximative
search.
E. Problems with feedback control
This paper describes the overall structure of a framework for
network-aware applications. Several practical issues have not
been mentioned or discussed in detail:
Start-up behavior: Special care must be applied to find the
optimal operating point of the control loop as soon as possible
while avoiding overshooting and an excessively conservative
(i.e. slow) start-up. For a network-aware sender
this requirement means that the server ought to start delivering
objects as soon as possible to get early feedback.
Furthermore, the sender should refrain from sending too
large an object at the start, in case bandwidth turns out
to be unexpectedly low. These requirements impact application
design as follows: an application should either
(i) allow the list of requested objects to be reordered, such
that objects obj with small data(obj) that need not or cannot
be adapted are sent first; (ii) be able to cope with an
interrupted object delivery that may be restarted in lower
d left = sum over obj in request of data(quality(obj))
t trans = time needed to transmit d left given bw(t)
c prep = sum over obj in request of prepare_costs(obj, algorithm(obj), param(obj), cpu)
t prep = prepare_time(c prep , load(t))
t needed = overall_time(t prep , t trans )
return t needed - t left
Fig. 5. Function compute t dif f (request, bw(t), load(t)) returning t dif f
try {
  t dif f = compute t dif f (request, bw(t), load(t))
  if (t dif f > e) reduce(. . .)        // prevent congestion
  else if (t dif f < -e) expand(. . .)  // prevent under-utilization
} catch (NoAdaptationPossible exception) {
  handle (exception); // application specific handler
}
Fig. 6. Function adapt(request, bw(t), load(t))
quality; or (iii) support hierarchical encoding and progressive
delivery of objects, such that the transmission can be
stopped at any time.
Bandwidth probing by the lower layers of the communication
system makes it possible to estimate the expected bandwidth
after just a few RTTs (e.g., packet-pair probing [20], [30])
and can also help to alleviate the problems with start-up
behavior.
Communication idle time: Gaps in the sequence of object
transmissions should not only be avoided because of the
transmission opportunities lost at the application level, but
also because many congestion control mechanisms exhibit
a use-it-or-lose-it property [11]. That is, communication
idle time results in loss of the fair share of the bottleneck
bandwidth previously held by the connection and consequently
results in repeated start-up behavior.
Latency of prepare and transmit activities: With our model
of dynamic adaptation to network service quality, a
network-aware sender must rely on either good bandwidth
estimates or on the expectation that network service does
not degrade more during t prep + t trans of the next object
than there is data reduction potential inherent to the remaining
objects in the request list. Due to the nature of
best-effort network service these assumptions may not be
fulfilled. Such a situation results in the breakdown of the
service model.
Ill-specified boundary conditions are another cause of failure
that requires application-specific reaction. No application
should set T and then require a high minimal quality
such that even sending at minimal quality exceeds the time
limit. However, the appropriate settings of the boundary
conditions cannot always be anticipated. Therefore, an application
must be able to deal with such situations. Possible
reactions include delivery of objects at minimal quality (de-
sirable in an image retrieval system), a user-application dialogue
to renegotiate the boundary parameters, or termination
of transfers altogether. This last option is attractive if
it allows an overloaded server to catch up. The application-provided
exception handler in Fig. 6 deals with such situations.
VI. EVALUATION
This section presents results from the Chariot system, which
is an extension of the framework presented here. We concentrate
here on assessing the ability of the (adaptive) server to respond
to bandwidth fluctuations, i.e. its network-awareness. Note that
the examples presented here serve the purposes of validating the
approach as well as pointing out areas of further research. The
restricted nature of selected examples can by no means replace
an extensive evaluation and quantification of the adaptation potential
in practice. However, such a study is beyond the scope of
this paper.
A. Evaluation Methodology
Our approach to evaluate the system's network-awareness
proceeds in two steps: First, we subject the system to synthetic
reference bandwidth waveforms (the example presented here is
the Step-Down waveform shown in Fig. 7a) to characterize its
ability to adapt in general and in accordance with the (well-
established) principles for measuring dynamic response from
the field of control systems [32]. Second, field tests in the Internet
with its high bandwidth dynamics enable us to assess the
system's agility with respect to real-world network traffic.
Since ensuring reliable and reproducible experiments on real
networks is extremely difficult, we follow the approach of
other researchers and resort to a technique called trace modulation
[28]. Trace modulation performs an application-transparent
emulation of a slower target network on a faster, wired LAN.
Each application's network traffic is delayed according to delay
and bandwidth parameters read from a so-called replay trace,
which is gathered from monitored transfers.
B. Experimental Setup
In our experiments the Chariot server runs on a 150 MHz
MIPS R4400 SGI Challenge S with 128 MB of memory. A
134 MHz MIPS R4600 SGI Indy with 64MB of memory serves
as the platform for the client. For both of the experiments
shown below, the client requests transmission of 90 JPEG images
stored at the server in a resolution of 380 × 250 pixels and
a JPEG quality factor of 95-97. The 90 images total 5.2 MB of
data to be transmitted. The images are assumed to be equally
relevant, which means that equal weights are assigned to the 90
images. The user-imposed time limit for request processing is
arbitrarily chosen to be 60 seconds with a tolerance interval of
[-2, 2] seconds.
The bandwidth replay traces used for the two experiments
conducted are depicted in Fig. 7. The Step-Down waveform of
Fig. 7a is an idealization of real network scenarios; it approximates
possible situations in an overlay network for instance,
where a mobile client may seamlessly switch between different
network interfaces. Fig. 7b shows the monitor layer's perception
of the available bandwidth during a transfer between the
ETH Zürich (Switzerland) and the University of Linz (Austria).
This bandwidth curve has been smoothed using a two second
averaging interval. Hence the system under test does not deal
with the problems of start-up behavior.
The Chariot server operates using the "approximative search"
adaptation process described in Section V-D.2. Chariot's reduction
algorithms registered with the framework are image scaling
(with factors 1/2 and 1/4) and image compression (with quality
factors 75, 50, and 25 [18]). The server performs communication
latency hiding by means of a separate thread for P prep . As
a consequence, P trans for image i of the sequentially processed
request list operates concurrently to P prep for image i + 1.
C. Experimental Results
C.1 Step-Down waveform
Fig. 8, a data vs. time plot as introduced in [19], shows
that Chariot is able to both adapt the amount of data transmitted
(curve named "actual") to the amount of data transmittable
("possible") and deliver the 90 images within the 60-second
time limit. The Step-Down waveform of the available band-width
in Fig. 7a represents the derivative of the curve named
"possible". The sharp drop in bandwidth at seconds is
absorbed almost without loss of transmission possibilities. Loss
of transmission possibilities, which is characterized by the vertical
difference between the curve showing the data theoretically
transmittable ("possible") and the data actually transmitted ("ac-
tual"), can be caused by prepare or control loop overhead. The
curve depicting the control loop's estimate of the total amount
of data transmittable within the time limit ("estimated") shows
that the adaptation took place swiftly (within a small
fraction of a second). The estimate is based on the amount of
data already transmitted, the monitor's estimate of the available
bandwidth bw(t) and t left .
Fig. 9 plots the control loop's error variable t dif f that drives
Chariot's adaptation. The two horizontal lines at t dif f = ±2 s mark the
tolerance interval specified. The "time
difference" plot shows that in fact three different (major) adaptation
events occurred (adaptation is necessary when |t dif f | > 2
s). First, adaptation steps are necessary to
reduce the 5.2 MB to the 4.7 MB estimated to be transferable.
Second, due to the sharp bandwidth drop, t needed and
hence t dif f increase by approx. 33 seconds; this drop is compensated
in subsequent reduction steps. Third, t dif f exceeds the 2
second tolerance twice at t ≈ 33 s although no change in bandwidth
could be observed. This fact may be attributed to inaccuracies
in the estimates of c prep and the reduction potential of
images. Although provision of inaccurate estimates by the application
can have a detrimental impact on the overall performance
(i.e. the quality deliverable), the example shows that our
control loop mechanism is flexible enough to even cope with
such situations.
C.2 Internet traffic
Fig. 10 shows that Chariot is even capable of dealing with
frequent oscillations in the available bandwidth as present on to-
day's wide-area network paths. Note, however, that the penalty
in terms of transmission possibilities lost is higher than in the
previous case. The curve depicting the data volume transmittable
("possible") relates to the bandwidth waveform shown in
Fig. 7b. Careful examination of the curve plotting the data effectively
transmitted reveals two cases (at t ≈ 3 s and t ≈ 20 s)
where transmission lulls had to be accepted. The reason is that
in these cases P trans for image i finished before the concurrently
executed phase P prep for image i +1 and thus had to wait before
starting transmission of image i + 1. The causes for this behavior
can be twofold: Either c prep (img i+1 ) is too high, in which
case the adaptation process could try to reorder the images in
the request list to avoid communication idle time, or the server's
load is too high, such that t prep (img i+1 ) >
t trans (img i ). The latter problem calls for host resource reservation
by the operating system as other researchers have suggested
[25], [26].
Keep in mind that, although the examples presented show that
adaptation to meet the given time limit works, the whole process
of adaptation is quite sensitive to the choice of the "boundary
conditions", such as the time limit. Since the adaptation potential
is limited by the reduction potential of the objects/images
to be transmitted and the cost incurred for their transformation,
unrealistic expectations from the user may simply result in the
break-down of the service model.
VII. RELATED WORK
We can divide approaches to provide predictability of service
quality to the application/user into two categories: those that are
Fig. 7. Bandwidth replay traces used: (a) Step-Down waveform; (b) bandwidth of Internet image transfer (bandwidth vs. time [sec]).
Fig. 8. Data volume transmitted in Step-Down scenario (curves: Possible, Actual, Estimated; data vs. time [sec]).
Fig. 9. Time difference (t dif f ) plot for Step-Down example (t dif f vs. time [sec], with lower and upper tolerance bounds).
Fig. 10. Data volume transmitted from Zürich to Linz (curves: Possible, Actual, Estimated; data vs. time [sec]).
based on reservations and those that are based on adaptation (see
Section II-A).
A. Reservation
There exists a long tradition of research into reservation of
network resources, with a trend towards integrating multiple service
models in a single cell- or packet-switched network [8],
[40]. It has been recognized that to support end-to-end QoS
guarantees not only network aspects must be considered, but
the end-system and OS-resources must also be taken into account
[26]. This requirement holds especially for continuous
media applications as they have the most stringent resource requirements
[36], [34]. In step with advances in resource guarantee
provision in both fields, researchers identified the need
for resource orchestration and developed methods that allow
for meeting the user's QoS requirements on an end-to-end basis
[25], [6]. Most methods involve QoS-negotiation procedures
mainly based on application-to-network QoS-mapping.
B. Adaptation
Adaptation is an effective way of enhancing the user's perception
of service quality in environments where resource reservation
is not possible, or in situations where it is impossible for
an application to specify its resource requirements in advance.
Recent adaptive systems such as RLM [22], [23] or IVS [2]
have shown that even continuous media applications can benefit
from adaptation in environments lacking reservation capabili-
ties. Their feedback-driven adaptation scales back quality and
hence resource consumption when application performance is
poor, and they attempt to discover additional resources by optimistically
scaling up usage from time to time. While IVS
employs sender-based bandwidth adaptation, RLM pioneered
receiver-based adaptation in a multicast environment. Also, both
systems continuously adapt their play-out point to account for
variations in the transmission latency.
In contrast to these systems, Odyssey [27] seeks to provide
a more general approach to the construction of resource-aware
applications by modifying the interface between applications
and the operating system. Their measurement-based approach
employs receiver-driven adaptation and concentrates on orchestrating
multiple concurrent resource-aware applications on the
client rather than on the server. In contrast, our framework uses
sender-based adaptation and identifies a wide range of methods
that can be customized by the user.
Fox et al. [13] propose a proxy-based architecture employing
so-called distillation services to adapt the quality of the service
for the client to the variations in network resource availabil-
ity. Their system-in addition to being network-aware-also
accounts for variability in client software and hardware sophistication
VIII. CONCLUDING REMARKS
This paper presents a simple framework for the construction
of network-aware applications. Given the framework, the application
developer must specify functions to determine the relationships
between quality and size as well as provide estimates
on the effectiveness of various transformations to reduce size.
Fig. 5 summarizes the functions required.
Undoubtedly, further work is required to find more elaborate
solutions to the problems discussed in this paper. However, the
abstractions identified in the adaptation process allow for experimentation
with various methods for information collection and
with methods providing better estimates, such that tradeoffs can
be found between the accuracy achieved, the efforts involved in
providing the estimates, and their effect on the bandwidth adap-
tation. As it is not always possible to provide good estimates for
network behavior (or for the application's resource demands) it
is important that systems are designed for adaptivity. Such systems
can observe the actions involved with a decision and can
take corrective action if necessary. A framework provides the
context for such experimentation by application developers, and
frees the developer from the need to acquire a detailed understanding
of the monitoring system, network protocols, the net-work
interface or router capabilities. Our experience with the
development of an adaptive image server has demonstrated the
practicability and benefits of this approach.
The development of network-aware applications requires considerable
effort, and no amount of adaptation can accomplish
the impossible: satisfying unrealistic expectations by an application
or a user. However, with adaptation, applications can push
the envelope of acceptable network performance, and we expect
increased use of adaptation techniques both in stationary and
mobile network applications. The framework outlined here provides
an approach that shelters the application developers from
many details of adaptivity and thus helps to reduce the effort
involved in the development of network-aware applications.
ACKNOWLEDGEMENTS
We thank S. Blott, P. Brandt, A. Dimai, R. Karrer, M. Näf,
M. Stricker, P. Walther and R. Weber for their contributions to
the design and implementation of the Chariot system. We appreciate
the feedback of the referees which improved the paper
considerably. Finally, we acknowledge the discussions during
the workshop on network-aware and mobile applications held in
conjunction with ESEC/FSE '97 in Zürich.
--R
Architecture of a networked image search and retrieval system.
Scalable feedback control for multicast video distribution in the Internet.
Vegas: New techniques for congestion detection and avoidance.
Adaptives Transportprotokoll (in German).
Characteristics of wide-area TCP/IP conversations
A continuous media transport and orchestration service.
The available bit rate service for data in ATM networks.
Supporting real-time applications in an integrated services packet network: Architecture and mechanism
A workstation interconnect supporting time-dependent data transmission
A QoS communication architecture for workstation clusters.
Evolution of controls for the available bit rate service.
Router mechanisms to support end-to-end congestion control
Adapting to network and client variability via on-demand dynamic distillation
Design Patterns
The desk-area network
Improving the start-up behavior of a congestion control scheme for TCP
Independent JPEG Group.
Congestion avoidance and control.
A control-theoretic approach to flow control
Forward acknowledgement: Refining TCP congestion control.
vic: A flexible framework for packet video.
Ein adaptives Bildtransferprotokoll fuer Chariot (in German).
The QoS broker.
Resource management in networked multimedia systems.
Agile application-aware adaptation for mobility
Measurements and Analysis of End-to-End Internet Dynamics
The Design of Automatic Control Systems.
Operating system issues for continuous media.
RFC 1889: RTP: A transport protocol for real-time applications
Analyzing the multimedia operating system.
UNIX Network Programming.
The Network Machine.
RSVP: A new resource reservation protocol.
Architectural support for quality of service for CORBA objects.
| frameworks;software construction;network-aware computing;adaptive applications
279011 | Unsupervised Segmentation of Markov Random Field Modeled Textured Images Using Selectionist Relaxation. | AbstractAmong the existing texture segmentation methods, those relying on Markov random fields have retained substantial interest and have proved to be very efficient in supervised mode. The use of Markov random fields in unsupervised mode is, however, hampered by the parameter estimation problem. The recent solutions proposed to overcome this difficulty rely on assumptions about the shapes of the textured regions or about the number of textures in the input image that may not be satisfied in practice. In this paper, an evolutionary approach, selectionist relaxation, is proposed as a solution to the problem of segmenting Markov random field modeled textures in unsupervised mode. In selectionist relaxation, the computation is distributed among a population of units that iteratively evolves according to simple and local evolutionary rules. A unit is an association between a label and a texture parameter vector. The units whose likelihood is high are allowed to spread over the image and to replace the units that receive lower support from the data. Consequently, some labels are growing while others are eliminated. Starting with an initial random population, this evolutionary process eventually results in a stable labelization of the image, which is taken as the segmentation. In this work, the generalized Ising model is used to represent textured data. Because of the awkward nature of the partition function in this model, a high-temperature approximation is introduced to allow the evaluation of unit likelihoods. Experimental results on images containing various synthetic and natural textures are reported. | INTRODUCTION
Textured image segmentation consists in partitioning an image into regions that are
homogeneous with regards to some texture measure. Texture description is an important
issue with respect to this task. Existing texture segmentation methods are
commonly classified according to the texture description they rely on. In structural
methods, textures are assumed to consist of structural elements obeying placement
rules. In feature based methods, a vector of texture features is computed for each
pixel. In stochastic model-based methods, textures are assumed to be realizations of
two-dimensional stochastic processes such as, for example, Markov random fields.
Among the existing texture segmentation methods [30], those based on Markov
random fields [23, 15] have retained substantial attention. Markov random fields are
attractive because they yield local and parsimonious texture descriptions. Past studies
have also shown the efficiency of Markov random fields in texture modeling, com-
pression, and classification [12, 6, 5]. Besides, the use of texture models presents a
methodological advantage. Textured images can be generated according to the specified
model so that the segmentation method can be evaluated independently of the
adequacy of the underlying texture characterization.
Texture segmentation using Markov random fields can be achieved through maximum
likelihood labelization [8]. Besides their ability to model textured data, Markov
random fields can also be used to incorporate a priori knowledge concerning the properties
of the labels themselves [15, 13, 14, 8, 27]. Segmentation is then achieved by
searching the labelization that maximizes the posterior probability of the labeling conditioned
on the data. This optimization problem can be solved using the Gibbs Sampler
combined with simulated annealing [16].
However, the parameter estimation problem is a crucial issue for methods based
on Markov random fields, and their performance depends on the availability of correct
parameter estimates [15]. These methods work well in supervised mode, wherein
the number of textures and their associated parameters are known, or can be esti-
mated, beforehand. In unsupervised mode, when such knowledge is not available, a
circular problem arises [26, 22]: parameter estimates are needed to segment the image,
while homogeneous texture samples, which can be provided in the form of an already
segmented image, are needed to compute these estimates.
Different solutions to this problem have been proposed. A first approach consists
in assuming that the shapes of the textured regions are such that the image can be
divided into a number of homogeneously textured blocks. Each such block provides
a parameter estimate. The number of textures and their associated parameters are
determined by applying a clustering algorithm on the parameter estimate set. These
final estimates are used to compute the segmentation that optimizes a criterion such as
the likelihood criterion [9, 29], the a posteriori probability criterion [26, 22, 29], or the
classification error criterion [26]. Conversely, a second approach consists in iterating
an estimation/segmentation cycle [24, 33]. Given a candidate number of textures and
an initial random set of texture parameters, a first segmentation is computed. Texture
parameters are then recomputed using the current segmentation. This cycle is repeated
several times until convergence. The whole procedure is repeated with different candidate
numbers of textures, and the number that optimizes a model fitting criterion is
retained as the true number of regions [33].
These solutions are feasible for images that contain large textured regions or that
contain only a limited number of textured regions. In practice, these conditions may not
be satisfied. Moreover, besides the parameter estimation problem, relaxation methods
based on Markov random fields are often computationally expensive. The problem
of segmenting textures modeled using Markov random fields indeed represents a large
combinatorial search problem.
In this work, a genetic algorithm based approach is adopted to overcome some of
the aforementioned difficulties of existing methods. Genetic algorithms [21, 18] are
stochastic search methods inspired by the conception that natural evolution is an optimization
procedure, which, if simulated, can be applied to solve artificial optimization
problems. In a genetic algorithm, a population of candidate solutions, initially generated
at random, undergoes a simulated evolutionary process, whereby solutions of
increasing quality progressively appear. Each generation, a new population is computed
from the previous one through a two-step cycle. During the first step, good
solutions are selected and duplicated to replace bad ones. During the second step,
new solutions are generated by recombining and mutating the solutions that have been
selected. Consequently, good solutions are progressively spreading within the popu-
lation, while being permanently exploited to build possibly better solutions. Genetic
algorithms are attractive in combinatorial problems because they achieve an efficient,
parallel exploration of the search space (which in particular may avoid being stuck in
local optima), while requiring minimum information on the function to be optimized
(in particular, the derivatives are not required) [18].
In the standard genetic algorithm [18], the population is panmictic, i.e., each individual
can compete or recombine with any other one in the population. Alternatively, in
distributed genetic algorithms, each individual is constrained to interact with a limited
number of other individuals. In coarse-grained distributed genetic algorithms [32, 10],
the population is divided into several subpopulations submitted to their own genetic
algorithm, and periodically exchanging some of their individuals. In fine-grained distributed
genetic algorithms [28, 25, 31, 11], the population is mapped onto a grid
whereupon a neighborhood system is defined to constrain interactions among individ-
uals. The purpose of distributed genetic algorithms is to increase the quality of the
obtained solutions, in particular by avoiding premature convergence to non-optimal
solutions, and to reduce the time needed to obtain the solutions.
The method presented herein is an unsupervised segmentation method whereby the
transformation of an input image into an output segmented image is computed by a
population of units that are mapped onto the image. Initially generated at random,
the population is iteratively updated and reorganizes through a fine-grained distributed
genetic algorithm. Consequently, a sequence of segmented images is generated. This
sequence progressively converges to a stable labelization, which is taken as the resulting
segmented image. This method, called selectionist relaxation to emphasize the role of
selection, has been previously applied to the unsupervised gray-level image segmentation
problem [1]. It is shown here how selectionist relaxation can be generalized to the
unsupervised textured image segmentation problem.
In this work, textures are represented using the generalized Ising model [16], also
known as the Derin-Elliott model [14]. With this model, the likelihood of a texture
window, the evaluation of which is required in selectionist relaxation, is not computable
in practice because of the intractability of the partition function. An approximation
of the partition function is therefore introduced to overcome this problem.
The organization of the paper is as follows. Background on the Markov random field
approach to texture modeling is given in Section 2. The approximation of the partition
function is set out in Section 3. Selectionist relaxation is presented in Section 4. Results
on synthesized texture patches are reported in Section 5. Section 6 is devoted to a final
discussion and conclusion.
Figure 1: Neighborhoods and cliques. A. The spatial extent of the neighborhood of
a site s depends on the order of the model. At order n, the neighborhood contains
all the sites that are assigned a digit less than or equal to n. B. The 10 clique types
associated to a second-order model. Cliques with non-zero potential in the model used
in this paper are shown in gray.
The input image is assumed to contain several textures, each of which is considered as
a realization of a Markov random field [23]. Further, the same model is used for all
textures, because it is assumed that these textures are different instances of the same
texture model.
2.1 Markov/Gibbs random fields
Consider the two-dimensional set of sites S = {(i, j) : 1 ≤ i ≤ NR, 1 ≤ j ≤ NC},
wherein NR and NC are the numbers of rows and columns of the texture image,
respectively. A collection N = {N s , s ∈ S} of subsets of sites defines a
neighborhood system if it satisfies the following two conditions: (1) ∀s ∈ S, s ∉ N s ;
and (2) ∀(s, r) ∈ S × S, s ∈ N r if and only if r ∈ N s . A clique is either a single site
or a set of mutually neighboring sites. The set of cliques is C = C 1 ∪ · · · ∪ C K ,
K being the number of different clique types.
Neighborhood structures and clique types are illustrated in Fig. 1.
A texture sample is considered herein as a realization of a random field X = {X s , s ∈ S},
where X s is a random variable taking values in a discrete set Λ = {0, 1, . . . , g - 1},
g being the number of gray levels in the image. A realization x of
X is called a configuration. The state space of X is the set Ω of all possible
configurations. The restriction of a configuration to a subset R of S is noted x R .
A collection V = {V R , R ⊆ S} of real-valued functions defined on Ω and such that
V R (x) only depends on x R is called a potential. Further, V is a neighbor potential if,
for all x, V R (x) = 0 whenever R is not a clique.
According to the Hammersley-Clifford theorem [2], the random field X is a Markov
random field on S with respect to a neighborhood system N if and only if its distribution
on Ω is a Gibbs distribution induced by a neighbor potential, that is to say

P(X = x) = (1/Z) exp{-E(x)},   ∀x ∈ Ω,   (1)

wherein E(x) = Σ C ∈ C V C (x) is the energy of configuration x and the normalizing
constant Z = Σ y ∈ Ω exp{-E(y)} is the partition function.
2.2 Generalized Ising model
Various Markov random field texture models have been proposed, each of which being
defined by its associated potential: the autologistic model [20], the autobinomial model
[12], the autonormal (Gaussian Markov random field) model [4] and the generalized
Ising model [16, 14, 17].
Despite its simplicity, we have retained this last model to work with in this work.
Indeed, the first step in our work is to test the feasibility of applying selectionist
relaxation to segment images that contain Markov random field samples, regardless of
whether the model is able or not to capture the complexity of natural textures.
The generalized Ising model is a pairwise interaction model [15]: only those cliques
that contain no more than 2 sites have non zero potentials. Singleton potentials are
set to zero so that the first-order histogram is uniform. Because we use a second-order
model, the effective number of clique types is 4 (these clique types are shown in
gray in Fig. 1-B). For a pair clique C = {s, r} of type i, the potential function is given by

V C (x) = β i (1 - 2 δ(x s , x r )),

wherein β i is the parameter associated to clique type i, and δ(x s , x r ) = 0 unless
x s = x r , in which case δ(x s , x r ) = 1. Letting B = (β 1 , . . . , β 4 ) the vector of model
parameters, the energy of any configuration x can be written as

E(x; B) = Σ i β i Φ i (x),   (2)

wherein the vector Φ(x) = (Φ 1 (x), . . . , Φ 4 (x)) is defined by
Φ i (x) = Σ {s,r} ∈ C i (1 - 2 δ(x s , x r )).
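To make the energy computation concrete, the following sketch (our own illustration, not code from the paper) evaluates E(x; B) for a gray-level array under the four pair-clique types of the second-order model, using the potential given above and free boundary conditions:

import numpy as np

# horizontal, vertical and the two diagonal pair-clique offsets
CLIQUE_OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def ising_energy(x, B):
    # Energy of configuration x (2-D integer array) for parameters
    # B = (beta_1, ..., beta_4), one component per pair-clique type.
    x = np.asarray(x)
    rows, cols = x.shape
    energy = 0.0
    for beta, (dr, dc) in zip(B, CLIQUE_OFFSETS):
        for r in range(rows):
            for c in range(cols):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    energy += beta * (1.0 - 2.0 * (x[r, c] == x[rr, cc]))
    return energy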
3 PARTITION FUNCTION APPROXIMATION
As described in the next section, selectionist relaxation requires that, given any w × w
window W and any candidate vector B of model parameters, the
likelihood of a configuration x W is practically computable. Letting
Ω W denote the state space of X W , this likelihood is given by:

P(X W = x W ; B) = exp{-E(x W ; B)} / Z W (B),   with Z W (B) = Σ y ∈ Ω W exp{-E(y; B)}.
Considering the generalized Ising model, the exact computation of the likelihood cannot
be achieved in practice: the partition function ZW (B) neither has a simple analytical
form, nor can be computed. This would involve calculating the energy of all possible
configurations over W , which is computationally intractable because of the huge
number of such configurations.
We therefore propose an approximation of the partition function. It consists in
approximating each of the terms in the expression of Z W (B) using its second-order
expansion:

exp{-E(y; B)} ≈ 1 - E(y; B) + E(y; B)² / 2.   (3)

Figure 2: Plot of the relative approximation error as a function of temperature. The
error was numerically determined with the following conditions: W is a 3 × 3 window;
the number of gray levels and the parameter set B are held fixed.

The approximated terms are then summed up over y ∈ Ω W . It is shown in the Appendix
how, assuming the window W has a toroidal structure, the resulting expression
of the partition function can be rearranged and simplified. This eventually leads to the
approximated partition function given in (4),
wherein g is the number of gray levels and w is the number of sites in the
window W . It should be noted that the approximation is not only valid in the second-order
model case, but stands for any order of the model.
This approximation can be interpreted as a high-temperature approximation of the
partition function. Up to now, the temperature T of the Gibbs distribution (1) has
indeed been considered as incorporated in the energy, but if we define

β i = θ i / T,   (5)

wherein the θ i denote the unscaled model parameters, then, from (2), the energy can
be rewritten as

E(y; B) = (1/T) Σ i θ i Φ i (y).   (6)

The error due to the approximation (3) vanishes as E(y; B) → 0. From (6), this clearly
happens when T → ∞. The dependency on T of the resulting approximation error is
illustrated in Fig. 2, which plots the relative error defined by the difference between
the approximated and the exact partition functions, relative to the exact one.
T has been made explicit here only for the sake of the demonstration. In the
remainder of the paper, we return to the use of B (instead of the θ i and T ), considering
T as an implicit scaling factor: the condition β i ≪ 1, which will be imposed to keep
the approximation error small enough, will be interpreted as an absorption of T within
the parameters themselves, according to (5).
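The behavior of the expansion can be checked numerically on a window that is small enough for exhaustive enumeration; the sketch below (our own) compares the exact partition function with the sum of the second-order terms of (3), before the toroidal simplification leading to (4):

from itertools import product
import numpy as np

def partition_function(shape, g, B, energy_fn):
    # Exact Z_W and its second-order approximation on a tiny window.
    exact, approx = 0.0, 0.0
    for flat in product(range(g), repeat=shape[0] * shape[1]):
        x = np.array(flat).reshape(shape)
        e = energy_fn(x, B)
        exact += np.exp(-e)
        approx += 1.0 - e + 0.5 * e * e
    return exact, approx

# e.g., with the ising_energy sketch above and small parameters (high T),
# the two values nearly coincide:
# exact, approx = partition_function((2, 2), 2, [0.05] * 4, ising_energy)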
Figure 3: Selectionist relaxation. The unit U s assigned to site s consists of a feature
vector B s and of a label L s . The fitness of U s depends on how well B s matches the
data in the input window W s . The genetic algorithm applied to the population of
units results in a relaxation process, whereby highly fitted units spread over the image,
replacing badly fitted ones. In this process, unit U s primarily interacts with the units
located within its neighborhood N s . At the end of the process, the resulting segmented
image is built by attributing to each site s the corresponding label L s .

4 SELECTIONIST RELAXATION
4.1 Outline of the method
Selectionist relaxation is an unsupervised segmentation method whereby the transformation
of an input image into an output image is computed by a population of units
that iteratively evolves through a distributed genetic algorithm (Fig. 3).
Each unit is an association between a candidate feature vector and a label. The
latter is used to label the unit pixel. The former is used to assign a fitness value
to the unit. The fitness of a unit is a measure of the matching between its feature
vector and the data in the image window whereupon the unit is centered. The features
that compose unit feature vectors are arbitrarily chosen on the basis of the desired
segmentation type. For example, pixel matrices were used as feature vectors for grey-level
image segmentation [1]. Here, it is proposed that texture segmentation can be
achieved using texture model parameters as feature vectors.
The population of units iteratively evolves through the application of genetic operators
[18]: the units whose feature vectors find good support from image data are
selected, recombined and mutated. These mechanisms allow units with high fitness
values to spread over their neighborhood by replacing the neighboring units that do
not fit the local image data. Additionally, some units can jump over large distances to
invade distant regions with similar characteristics. This results in a mixed local/distant
Figure 4: Unit U s is made of a vector B s of texture model parameters
and of a label L s .
label spreading process that eventually leads to a stable label configuration, which is
taken as the resulting segmentation.
4.2 Units
As illustrated in Fig. 3, each site s of the input image has an associated unit U s . U s
is a couple U s = (B s , L s ), wherein B s is a candidate vector of texture
model parameters and L s is a label (Fig. 4).
A collection U = {U s , s ∈ S} of units is called a population (because selectionist
relaxation is an iterative method, units or population of units will be indexed with
time whenever this is necessary). L s is the label assigned to site s. The output of the
algorithm, the segmented image, is the label configuration {L s (t stop ), s ∈ S}, wherein
t stop stands for the stopping time step.
Each unit U s is assigned a fitness value f(U s ), which quantifies how well the unit
matches, according to the texture model, the w × w texture window W s centered on site
s. The likelihood P(X Ws = x Ws ; B s ) would be a natural measure of this match. However, with the
generalized Ising model, this criterion cannot be retained because of the aforementioned
awkwardness of the partition function. Instead, f(U s ) is defined as the approximated
likelihood:

f(U s ) = exp{-E(x Ws ; B s )} / Z Ws (B s ),

wherein Z Ws (B s ) is the approximated partition function given in (4).
Using this approximation constrains the domain wherein unit parameters may be
reliably searched for by the genetic algorithm. Unsatisfactory segmentation results are
indeed expected if the fitness function is unreliable, due to a large approximation error.
As explained in Section 3, this error vanishes as the parameters go to zero. Thus unit
parameters must be initially close to zero (constraint on initialization), and they must
stay close to zero during the whole run (constraint on mutation). How close to zero the
parameters must be to yield good segmentation results is determined experimentally.
How these constraints are taken into account in the algorithm is explained in the
following subsection.
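Schematically, a unit and its fitness evaluation can be written as follows (our own illustration; approx_partition_function stands for the closed-form approximation of (4), derived in the Appendix, and is left here as an assumed helper):

import numpy as np

class Unit:
    def __init__(self, params, label):
        self.params = np.asarray(params, dtype=float)  # B_s, one beta per clique type
        self.label = label                             # L_s

def fitness(unit, window, energy_fn, approx_partition_function):
    # Approximated likelihood of the w x w data window under the unit's parameters.
    e = energy_fn(window, unit.params)
    return np.exp(-e) / approx_partition_function(window.shape, unit.params)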
Selectionist relaxation implements a fine-grained distributed genetic algorithm.
This means that the population is spatially organized, each unit primarily interacting
with its neighboring units. For the unit at site s, these are the units located within
the j × j window N s centered on site s (Fig. 3). Units located on the borders of the
image have fewer neighbors than interior units.
Figure 5: Selection, crossover and mutation are state-dependent. The population is
here in a state with three coexisting labels, corresponding to three subpopulations
of units. Their boundaries are shown as thin broken lines. Left. The sites whose
neighborhood (dark gray) crosses a label boundary have state 1 (light gray); the others
have state 0 (white). Middle. Selection at sites with state 0 only involves neighboring
units, while it additionally involves a randomly picked unit at sites with state 1. Right.
Only units with state 0 undergo crossover with a neighbor and mutation.
Each unit is attributed a binary value, called its state, which depends on the labels of its neighbors. The state S_s of unit U_s is defined as
    S_s = 1 if there exists a site r ∈ N_s such that L_r ≠ L_s, and S_s = 0 otherwise.
This variable makes it possible to distinguish units according to their distance from units with different labels. As relaxation proceeds, some units spread over the image by being copied from site to site, and so do their associated labels. Consequently, homogeneously labeled regions grow. As illustrated in Fig. 5 (Left), units located in the neighborhood of a boundary between two or more such regions have state 1, while units located in the interior of these regions have state 0. As explained in the next subsection, in selectionist relaxation, the genetic operators (selection, crossover and mutation) are state-dependent.
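A minimal sketch of this state computation follows; border windows are simply clipped, which is one way to honour the remark that border units have fewer neighbors than interior units.

import numpy as np

def unit_states(labels, j=3):
    """State S_s for every site: 1 if the j x j neighbourhood N_s contains a
    unit with a different label, 0 otherwise."""
    h, w = labels.shape
    r = j // 2
    states = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            nb = labels[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            states[y, x] = int(np.any(nb != labels[y, x]))
    return states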
4.3 Algorithm
Initialization. The first step in selectionist relaxation consists in creating the initial population U(0) as follows. For each unit U_s, each component β_{s,i} of its parameter vector B_s is assigned a value sampled from the uniform distribution over the interval [−δ, δ]. As explained in the previous subsection, the parameter initialization domain is constrained because the fitness function relies on the approximation of the partition function. Accordingly, δ must be chosen small enough so that the approximation error is acceptable. In the experiments reported in the next section, a simple rule relating δ to the window size w has been used with success.
The unit label L_s is chosen as the raster-scan index of site s. Consequently, there are initially as many labels as there are pixels in the image, and all sites are in state 1.
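A sketch of this initialization step, assuming four clique-type parameters per unit; the value of δ is left as an argument because the exact rule linking it to the window size w does not survive in this text.

import numpy as np

def initialize_population(height, width, delta, n_params=4, seed=0):
    """Initial population U(0): each beta_{s,i} ~ Uniform[-delta, delta] and
    one distinct label per site (its raster-scan index)."""
    rng = np.random.default_rng(seed)
    params = rng.uniform(-delta, delta, size=(height, width, n_params))
    labels = np.arange(height * width).reshape(height, width)
    return params, labels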
Relaxation cycle. After the population of units has been initialized, selectionist relaxation consists in repeating a two-step relaxation cycle until the stopping criterion is met. At each time step t, the population U(t + 1) is computed by first applying selection to the population U(t) and then applying crossover and mutation to the selected population.
Figure 6: Crossover and mutation. Top: crossover between unit U_s and a neighboring unit U_r; the crossover position k is chosen at random. Bottom: mutation of unit U_s; the mutation position l and the mutation amount m are chosen at random. Unit labels are not shown because they are affected neither by crossover nor by mutation.
The operators are state-dependent (Fig. 5). During selection at a site with state 0,
competition only involves the neighboring units. At a site with state 1, it additionally
involves a remote unit. This mechanism allows spatially distant units to interact, and was introduced so that the same texture can be assigned a unique label even when it appears in several disconnected regions. Without this mechanism, such a texture would be assigned as many labels as it forms separate regions, because labels would only be propagated from one site to the next. During the second step of the
cycle, crossover and mutation are applied only to units with state 0. This prevents
the label boundaries that are formed as relaxation proceeds from being perturbed and
disrupted by sudden fitness changes.
Each of the three operators synchronously affects all image sites. They are described
in detail below for a generic site s.
• Selection. The selection scheme implemented in selectionist relaxation is local tournament selection [31], a variant of tournament selection [19]: the unit whose fitness is the highest in a subpopulation of the population U is selected to replace the unit at site s. As said before, selection is state-dependent (Fig. 5, Middle): the subpopulation consists of the units in the neighborhood N_s of site s and, if S_s = 1, of an additional unit U_r, where r is a randomly picked site. Once the subpopulation is built, the unit with the greatest fitness value in it is selected to replace the unit at site s (a sketch of the three operators is given after this list).
• Crossover. If S_s = 1, unit U_s does not undergo crossover (Fig. 5, Right). Otherwise, a neighboring unit U_r, r ∈ N_s, is randomly picked. Then, one component in the parameter vector B_s is chosen and is assigned the corresponding value of the parameter vector B_r (Fig. 6, Top).
• Mutation. As for crossover, unit U_s does not undergo mutation whenever S_s = 1. Otherwise, a parameter index l ∈ {1, ..., 4} is randomly chosen and a value m, sampled from the uniform distribution over the interval [−μ_s^l, μ_s^l], is added to the corresponding parameter β_{s,l} of U_s (Fig. 6, Bottom). Two constraints are imposed on mutation through μ_s^l. First, the mutation amplitude must be small compared to the initial parameter range δ. Second, preliminary experiments have shown that a texture-dependent mutation scheme leads to better results than a texture-independent one. These constraints are taken into account by letting μ_s^l be the sum of two terms. The first term, εδ, enforces the first constraint, provided ε is small. In the experiments reported in the next section, ε is set to 0.02. The second term makes the mutation amplitude depend on the local texture configuration by allowing a greater mutation amplitude when |χ_l| is small. This occurs when there is some ambiguity in the texture along clique type l, since, by definition, |χ_l| small means that about half of the cliques of type l contribute +1 while the other half contribute −1. It experimentally appeared that a greater mutation amplitude was beneficial to overcome such ambiguities.
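Putting the three operators together for a single site, a sketch might look as follows. It assumes that each unit's fitness has been precomputed on its own window, that all sites are updated synchronously by the caller, that the remote site used at boundary units is drawn uniformly over the image, and that the mutation ranges are supplied as an array; none of these details is spelled out exactly in the text above.

import numpy as np

def neighbours(y, x, h, w, r=1):
    """Sites of the (2r+1) x (2r+1) window N_s around (y, x), excluding (y, x)."""
    return [(yy, xx) for yy in range(max(0, y - r), min(h, y + r + 1))
                     for xx in range(max(0, x - r), min(w, x + r + 1))
                     if (yy, xx) != (y, x)]

def relax_site(y, x, params, labels, states, fit, mu, rng):
    """One relaxation step at site (y, x): state-dependent tournament selection,
    then crossover and mutation for state-0 sites.
    fit[y, x]   -- current fitness of the unit at each site (precomputed);
    mu[y, x, l] -- mutation amplitude for clique type l at site (y, x)."""
    h, w, n_params = params.shape
    cands = neighbours(y, x, h, w) + [(y, x)]
    if states[y, x] == 1:                                  # boundary site: add a remote unit
        cands.append((int(rng.integers(h)), int(rng.integers(w))))
    winner = max(cands, key=lambda s: fit[s])              # local tournament selection
    new_beta, new_label = params[winner].copy(), labels[winner]
    if states[y, x] == 0:                                  # interior site: crossover + mutation
        r_site = cands[int(rng.integers(len(cands) - 1))]  # a random neighbour of s
        k = int(rng.integers(n_params))                    # crossover position
        new_beta[k] = params[r_site][k]
        l = int(rng.integers(n_params))                    # mutation position
        new_beta[l] += rng.uniform(-mu[y, x, l], mu[y, x, l])
    return new_beta, new_label                             # labels unchanged by crossover/mutation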
5.1 Experimental setup
The results reported in this section illustrate selectionist relaxation segmentation of
images containing textures that are realizations of the generalized Ising model presented
in Section 2. The images contain 8 gray levels and are 256 × 256 pixels wide. Texture
samples were synthesized using the Gibbs Sampler [16] for 100 steps.
For each test image, selectionist relaxation was run for 300 time steps. Each unit has
8 neighbors (j = 3). The only externally tuned parameter is w, which defines the size of
the texture window that is used to compute each unit fitness. As explained in Section 4,
w automatically determines the initial parameter range as well as the mutation range.
The number of textures and their associated parameters are automatically determined
by the algorithm through simulated evolution among the population of units.
Segmentation results are evaluated by visual examination and by computing the
error rate. Misclassified pixels are determined as follows: for each region of the true
segmentation, the label which is the most represented over that region in the segmented
image is determined. The pixels that are assigned another label are considered as
misclassified. The error rate is the percentage of misclassified pixels over the whole
image.
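A sketch of these two error measures follows; the "total length of region boundaries" is counted here as the number of true-segmentation pixels having a 4-connected neighbor with a different true label, which is an assumed definition since the text does not make it precise.

import numpy as np

def error_rates(true_labels, result_labels):
    """Return (error rate in %, relative error rate) as defined above."""
    misclassified = np.zeros(true_labels.shape, dtype=bool)
    for region in np.unique(true_labels):
        mask = true_labels == region
        dominant = np.bincount(result_labels[mask]).argmax()   # most represented label
        misclassified |= mask & (result_labels != dominant)
    t = true_labels
    boundary = np.zeros(t.shape, dtype=bool)                   # assumed boundary definition
    boundary[:-1, :] |= t[:-1, :] != t[1:, :]
    boundary[1:, :] |= t[:-1, :] != t[1:, :]
    boundary[:, :-1] |= t[:, :-1] != t[:, 1:]
    boundary[:, 1:] |= t[:, :-1] != t[:, 1:]
    error_rate = 100.0 * misclassified.sum() / t.size
    relative_error_rate = misclassified.sum() / max(boundary.sum(), 1)
    return error_rate, relative_error_rate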
5.2 Segmentation results
Fig. 7 displays the segmentation result for an image that contains two textures spatially
arranged according to a simple geometry. Texture windows used to evaluate the fitness of the units are w × w pixels wide. The segmentation of an image containing the same
two textures arranged in a more complex fashion is illustrated on Fig. 8. Though
more complex, the two textures still form connected regions. Fig. 9 shows that the
same textures can also be correctly segmented when they form spatially disconnected
regions.
Figure 7: Segmentation of Wave, a two-texture image. A. True segmentation. B. Input image. C. Selectionist relaxation segmentation. D. Misclassified pixels.
This example shows that, though selectionist relaxation mainly proceeds by propagating labels over nearest neighboring sites, spatially separated blobs of the same texture can be assigned the same label. This is because a randomly chosen unit is systematically involved in the selection process at those sites s such that S_s = 1.
Consequently, some units can literally jump over large spatial distances. As illustrated
in Fig. 10, images that contain a larger number of textures can be segmented as well.
In this last case, it was necessary to compute the fitness of the units over larger (w × w = 11 × 11) texture windows, to take into account the coarseness of the different textures.
Error rates are given in Table 1 (Middle). These are reasonably low, and, as can
be seen in Fig. 7, 8, 9 and 10, errors exclusively occur at the boundaries between
the textured regions. This suggests that comparing error rates among the four cases
is misleading, because the total length of texture boundaries differs among the four
cases. A relative error rate was defined as the number of misclassified pixels divided
by the total length of region boundaries in the true segmentation. According to the
relative error rate (last column in Table 1), it appears that, in spite of the varying
region shape, connectivity, and number, the performance of selectionist relaxation is
relatively constant among the four cases, and, in particular, the relative error rate is
always less than 1. However, using a larger window size (fourth case) seems to result
in an increased number of errors at region boundaries, which is not unexpected.
Figure 8: Segmentation of Spiral, a two-texture image. A. True segmentation. B. Input image. C. Selectionist relaxation segmentation. D. Misclassified pixels.
5.3 Estimated parameters
The issue naturally arises of the extent to which the parameters of the units that are
found through selectionist relaxation on a given texture match the true parameters
of that texture (i.e., the parameters that were used to synthesize the original tex-
ture). These parameter sets will be respectively referred to as B units and B true . Any
attempt to assess the correctness of B units with regards to B true is however hampered
by the constraints imposed on B units because of the partition function approximation.
As previously mentioned, B_units must be considered as containing both the estimated parameters of the texture (B_estim) and an implicit temperature T. B_estim can thus be determined from B_units (and subsequently compared to B_true) only if T is known. The problem is that T is only implicit and, consequently, unknown. However, under the assumption that the estimated parameters are correct (i.e., assuming B_estim = B_true), T can be determined using criterion (7).
Figure 9: Segmentation of Blobs, a two-texture image. A. True segmentation. B. Input image. C. Selectionist relaxation segmentation. D. Misclassified pixels.
Using this criterion, a value of T can be computed and B estim can be determined
from B units . The textures can also be resynthesized using B estim and compared to the
originals.
This has been done for the Wave experiment reported in Fig. 7. For each texture,
the vector B units is computed by averaging unit parameters over all the units whose
label is the most represented label on that texture. Table 2 gives theoretical, unit, and
estimated parameters for each texture. Comparing B estim with B true shows that the
relative parameter values are acceptable for texture L, but are far from the original for
texture U. The textures resynthesized using estimated parameters are shown in Fig. 11.
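The averaging of unit parameters described above can be computed along these lines; the majority rule mirrors the one used for the error-rate computation.

import numpy as np

def units_parameters(true_labels, result_labels, params):
    """B_units per texture: average the parameter vectors of all units carrying
    the label that is most represented on that texture's true region."""
    b_units = {}
    for texture in np.unique(true_labels):
        mask = true_labels == texture
        dominant = np.bincount(result_labels[mask]).argmax()
        b_units[texture] = params[result_labels == dominant].mean(axis=0)
    return b_units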
In this work, selectionist relaxation is proposed as a new method for segmenting images
that contain textures modeled using Markov random fields. Using a high temperature
approximation of the partition function, the ability of selectionist relaxation to segment
samples of the generalized Ising model has been demonstrated. Selectionist relaxation
is unsupervised in so far as knowledge concerning the number of textures and their
associated parameters is not required beforehand. Estimation of these unknowns and
Figure 10: Segmentation of Rose, an 8-texture image. A. True segmentation. B. Input image. C. Selectionist relaxation segmentation. D. Misclassified pixels.
segmentation are achieved simultaneously. The algorithmic complexity of the method depends neither on the number of textures nor on the number of gray levels.
Several solutions to the problem of segmenting Markov modeled textured images in
unsupervised mode have been proposed [26, 22, 9, 33, 29]. These methods rest upon
the assumptions that the image contains only a limited number of textured regions or
that the shapes of the textured regions are such that the image can be divided into non-overlapping blocks, the majority of which are homogeneous, so that texture parameters can be estimated on each such block. Selectionist relaxation loosens these constraints since no assumption is made on either the number of regions or their shapes.
Relaxing these assumptions results in an increased difficulty. The problem can be
decomposed into three subproblems to be solved simultaneously. They respectively
consist in the determination of: (1) the number of different textured regions (which is
naturally bounded by the number of pixels in the input image); (2) the corresponding
set of model parameter vectors; (3) the optimal labeling of the data. It is noted
that the trivial solution, which consists in assigning a different label to each pixel,
is not observed. Partial suboptimal solutions, in which several labels are assigned
to the same region, are not observed either.

Table 1: Segmentation errors. Middle: error rate, the percentage of misclassified pixels. Right: relative error rate, the number of misclassified pixels divided by the total length of region boundaries in the true segmentation.
    Image     Error rate (%)   Relative error rate
    Wave          0.26               0.47
    Spiral        3.40               0.52
    Blobs         3.07               0.49
    Rose          2.09               0.87

Table 2: Comparison between actual texture parameters and unit parameters found by selectionist relaxation (case of the Wave experiment reported in Fig. 7). Left: U and L refer to the upper and to the lower textures in Fig. 7, respectively. Middle: parameters used to synthesize the original textures (B_true), unit parameters (B_units), and estimated parameters (B_estim). Right: temperature determined using criterion (7) and used to compute B_estim from B_units.
    Texture    Parameters (B_true)              Temperature
    U           1.000  -1.000  -1.400  -1.400
    L          -1.000  -1.000  -1.400  -1.400

On the contrary, the number of regions
has been correctly identified in all our experiments. This suggests that, though no
priors are imposed on the labels, regularizing constraints are implicitly incorporated
into the algorithm. The issue of identifying these constraints is a matter for further investigation.
Though it is unsupervised, the method is not, however, fully data-driven, since the
size w of the texture window used to compute the fitness of the units has to be specified
by the user. Tuning this parameter is easy because it is in natural correspondence with
the coarseness of the textures. The coarser the textures are, the greater the size of the window should be, to ensure that it contains enough information to yield reliable fitness values. On the other hand, this may affect segmentation accuracy, since large errors
are expected to occur at region boundaries when using large windows. The results
reported here show that in most cases, the spatial extent of boundary misclassifications
is unexpectedly small compared to the size of the window.
A comparison between actual texture parameters and parameters estimated through
selectionist relaxation has been done. For the first texture (texture L in Table 2),
estimated parameters agree with actual parameters. For the second texture (texture U),
the estimation is less satisfactory. Though correct signs and roughly correct absolute
values are obtained, pairwise parameter ratios within the estimated parameter set
and within the true parameter set largely differ. We propose that this may result
from several, possibly non-exclusive, causes. First, it can be argued that error in
parameter estimation results from the high temperature approximation of the partition
function. However, if this were systematically true, then correct parameters would not
Figure 11: Textures resynthesized using estimated parameters. A. Original patch (same as Fig. 7-A). B. Reconstructed patch: the textures have been resynthesized using the estimated parameters (given in Table 2).
be obtained for texture L. Second, the size w of window W may be too small, and
texture U may be more sensitive to this parameter because it is coarser than texture L.
Third, the computation of B estim relies on the assumption that all the parameters are at
the same temperature. This is certainly far from being a correct assumption, because
the mutation range is texture- and parameter-dependent, so that the parameters do not
evolve at the same rate. Texture U is such that the mutation range on β_1 is, on average,
larger than the mutation range on the other parameters, while texture L is such that
the mutation range is roughly parameter-independent, because this texture is isotropic.
Fourth, it is also likely that sampling bias in the procedure used to synthesize the
textures is stronger for texture U than for texture L. This is confirmed by experiments
in which parameters were estimated (using maximum pseudo-likelihood estimation [3])
on 100 × 100 homogeneous texture samples (data not shown). Pairwise parameter ratios
were in good agreement between estimated and actual parameter sets for texture L,
but not for texture U.
It has been suggested that the performance of the generalized Ising model on a
texture classification task was poorer than the performance of the autobinomial and
autonormal models [7]. It has also been argued that using either of these two other
models was better for natural texture segmentation [22]. We are currently trying to
apply selectionist relaxation to natural texture segmentation using more appropriate
models than the generalized Ising model. If the assumption that was made here is
valid, i.e., if the ability of the method to segment Markov random field texture samples
is independent of any particular model, then selectionist relaxation will appear as a
promising approach towards unsupervised texture image segmentation.
Appendix A
Given a window W and a set of texture parameters B = (β_1, ..., β_4), the problem is to obtain an approximated, manageable expression of the partition function Z_W(B). For simplicity, and without loss of generality, the case W = S is considered here. The partition function to be approximated is thus
    Z(B) = Σ_{x ∈ Ω} exp{−E(x; B)},
where Ω = Λ^n is the set of all possible configurations over S, n being the number of sites in S and Λ being the set of gray level values. Remember that C = C_1 ∪ ... ∪ C_4 is the set of all cliques in S, C_i being the set of type-i cliques. Under the assumption that the grid S is toroidal, the number of cliques of each type equals the number of sites: |C_i| = n.
For the generalized Ising model, the energy of a configuration can be written as E(x; B) = −Σ_{i=1}^{4} β_i χ_i(x), the vector χ(x) = (χ_1(x), ..., χ_4(x)) being defined by χ_i(x) = Σ_{c ∈ C_i} φ_c(x), with, for any clique c = {s, r}, φ_c(x) = +1 if x_s = x_r and φ_c(x) = −1 otherwise.
Approximating each term in the sum Z(B) by its second-order expansion (3) yields the approximated partition function
    Z̃(B) = Z_0 − Z_1(B) + (1/2) Z_2(B),                          (8)
with
    Z_0 = Σ_{x ∈ Ω} 1 = g^n,   Z_1(B) = Σ_{x ∈ Ω} E(x; B),   Z_2(B) = Σ_{x ∈ Ω} E(x; B)².
The problem is now to rearrange Z_1 and Z_2 as functions of the β_i (Z_0 is a constant equal to g^n). These calculations rely on some preliminary results that are given in the next subsection.
A.1 Preliminary results
For any clique c ∈ C, and for any two cliques c_1 ≠ c_2, one can show that the sums Σ_{x ∈ Ω} φ_c(x), Σ_{x ∈ Ω} φ_c(x)² and Σ_{x ∈ Ω} φ_{c_1}(x) φ_{c_2}(x) can be evaluated in closed form ((9), (10) and (11)).
A.2 Calculation of Z_1(B)
Using the definitions of Z_1(B), E(x; B) and χ_i(x) leads to an expression of Z_1(B) in terms of the sums Σ_{x ∈ Ω} φ_c(x), which, together with (9), yields (12).
A.3 Calculation of Z_2(B)
Proceeding as for Z_1(B) leads to an expression of Z_2(B) as a quadratic form in the β_i. The problem is now to compute its coefficients w_ij. These coefficients are calculated by distinguishing, for each clique c_1, those cliques c_2 that are equal to c_1 from those that differ from c_1, and then using (10) and (11). In the expression of w_ii, there will be only one clique c_2 equal to c_1. In the expression of w_ij (i ≠ j), cliques c_2 necessarily differ from c_1, since they belong to different clique types. This gives expressions involving the factor n g^{n−2} for these coefficients. With these coefficients, the expression (13) of Z_2(B) is finally obtained. Collecting (8), (12) and (13) leads to the expression of the approximated partition function given in (4).
ACKNOWLEDGMENTS
The authors would like to thank Evelyne Lutton; her comments on an earlier version of this work greatly contributed to improving the presentation of the paper and were very much appreciated.
--R
Unsupervised image segmentation using a distributed genetic algorithm.
Spatial interaction and the statistical analysis of lattice systems.
On the statistical analysis of dirty pictures.
Classification of textures using Gaussian Markov random fields.
Texture synthesis and compression using Gaussian-Markov random field models
Markov random fields for texture classification.
Simple parallel hierarchical and relaxation algorithms for segmenting noncausal Markovian random fields.
Maximum likelihood unsupervised textured image segmentation.
Distributed genetic algorithms for the floorplan design problem.
Studies in Artificial Evolution.
Markov random field texture models.
Segmentation of textured images using Gibbs random fields.
Modeling and segmentation of noisy and textured images using Gibbs random fields.
Random field models in image analysis.
Stochastic relaxation
Probabilistic models of digital region maps based on Markov random fields with short- and long-range interaction
Genetic Algorithms in Search
A comparative analysis of selection schemes used in genetic algorithms.
The use of Markov random fields as models of texture.
Adaptation in Natural and Artifical Systems: An Introductory Analysis with Applications to Biology
Texture segmentation based on a hierarchical Markov random field model.
Markov Random Fields and Their Applications
Simultaneous parameter estimation and segmentation of Gibbs random fields using simulated annealing.
Unsupervised texture segmentation using Markov random field models.
Stochastic and deterministic networks for texture segmentation.
Parallel genetic algorithms
Gibbs random fields
du Buf.
A massively parallel genetic algorithm.
Distributed genetic algorithms.
Unsupervised segmentation of noisy and textured images using Markov random fields.
--TR
--CTR
C.-T. Li, Multiresolution image segmentation integrating Gibbs sampler and region merging algorithm, Signal Processing, v.83 n.1, p.67-78, January
Eun Yi Kim , Se Hyun Park , Sang Won Hwang , Hang Joon Kim, Video sequence segmentation using genetic algorithms, Pattern Recognition Letters, v.23 n.7, p.843-863, May 2002
J. Veenman , Marcel J. T. Reinders , Eric Backer, A Maximum Variance Cluster Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.9, p.1273-1280, September 2002 | genetic algorithms;unsupervised texture segmentation;selectionist relaxation;partition function approximation;markov/gibbs random fields |
279017 | A Hierarchical Latent Variable Model for Data Visualization. | AbstractVisualization has proven to be a powerful and widely-applicable tool for the analysis and interpretation of multivariate data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space, it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and subclusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach on a toy data set, and we then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multiphase flows in oil pipelines, and to data in 36 dimensions derived from satellite images. A Matlab software implementation of the algorithm is publicly available from the World Wide Web. | Introduction
Many algorithms for data visualization have been proposed by both the neural computing
and statistics communities, most of which are based on a projection of the data onto a two-dimensional
visualization space. While such algorithms can usefully display the structure
of simple data sets, they often prove inadequate in the face of data sets which are more
complex. A single two-dimensional projection, even if it is non-linear, may be insufficient
to capture all of the interesting aspects of the data set. For example, the projection which
best separates two clusters may not be the best for revealing internal structure within
one of the clusters. This motivates the consideration of a hierarchical model involving
multiple two-dimensional visualization spaces. The goal is that the top-level projection
should display the entire data set, perhaps revealing the presence of clusters, while lower-level
projections display internal structure within individual clusters, such as the presence
of sub-clusters, which might not be apparent in the higher-level projections.
Once we allow the possibility of many complementary visualization projections, we can
consider each projection model to be relatively simple, for example based on a linear
projection, and compensate for the lack of flexibility of individual models by the overall
flexibility of the complete hierarchy. The use of a hierarchy of relatively simple models
offers greater ease of interpretation as well as the benefits of analytical and computational
simplification. This philosophy for modelling complexity is similar in spirit to the "mixture
of experts" approach for solving regression problems [1].
The algorithm discussed in this paper is based on a form of latent variable model which
is closely related to both principal component analysis (PCA) and factor analysis. At
the top level of the hierarchy we have a single visualization plot corresponding to one
such model. By considering a probabilistic mixture of latent variable models we obtain
a soft partitioning of the data set into 'clusters', corresponding to the second level of the
hierarchy. Subsequent levels, obtained using nested mixture representations, provide successively
refined models of the data set. The construction of the hierarchical tree proceeds
top down, and can be driven interactively by the user. At each stage of the algorithm
the relevant model parameters are determined using the expectation-maximization (EM)
algorithm.
In the next section we review the latent-variable model, and in Section 3 we discuss the
extension to mixtures of such models. This is further extended to hierarchical mixtures in
Section 4, and is then used to formulate an interactive visualization algorithm in Section 5.
We illustrate the operation of the algorithm in Section 6 using a simple toy data set. Then
we apply the algorithm to a problem involving the monitoring of multi-phase flows along
oil pipes in Section 7 and to the interpretation of satellite image data in Section 8. Finally,
extensions to the algorithm, and the relationships to other approaches, are discussed in
Section 9.
2 Latent Variables
We begin by introducing a simple form of linear latent variable model and discuss its
application to data analysis. Here we give an overview of the key concepts, and leave the
detailed mathematical discussion to Appendix A. The aim is to find a representation of
a multi-dimensional data set in terms of two latent (or 'hidden') variables. Suppose the data space is d-dimensional with coordinates y = (y_1, ..., y_d) and that the data set consists of a set of d-dimensional vectors {t_n} where n = 1, ..., N. Now consider a two-dimensional latent space x = (x_1, x_2) together with a linear function which maps the latent space into the data space,
    y = W x + μ,                                                 (1)
where W is a d × 2 matrix and μ is a d-dimensional vector. The mapping (1) defines a two-dimensional planar surface in the data space. If we introduce a prior probability distribution p(x) over the latent space given by a zero-mean Gaussian with a unit covariance matrix, then (1) defines a singular Gaussian distribution in data space with mean μ and covariance matrix ⟨(y − μ)(y − μ)^T⟩ = W W^T. Finally, since we do not expect the data to be confined exactly to a two-dimensional sheet, we convolve this distribution with an isotropic Gaussian distribution p(t|x, σ²) in data space having a mean of zero and covariance σ²I, where I is the unit matrix. Using the rules of probability, the final density model is obtained from the convolution of the noise model with the prior distribution over latent space in the form
    p(t) = ∫ p(t|x, σ²) p(x) dx.                                 (2)
Since this represents the convolution of two Gaussians, the integral can be evaluated analytically, resulting in a distribution p(t) which corresponds to a d-dimensional Gaussian with mean μ and covariance matrix W W^T + σ²I.
If we had considered a more general model in which the conditional distribution p(t|x) is given by a Gaussian with a general diagonal covariance matrix (having d independent parameters) then
we would obtain standard linear factor analysis [2, 3]. In fact our model is more closely
related to principal component analysis, as we now discuss.
The log likelihood function for this model is given by
    L = Σ_{n=1}^{N} ln p(t_n).
Maximization of this likelihood can be used to fit the model to the data and hence determine values for the parameters μ, W and σ². The solution for μ is just given by the sample mean. In the
case of the factor analysis model, the determination of W and σ² corresponds to a non-linear
optimization which must be performed iteratively. For the isotropic noise covariance
matrix, however, it was shown by Tipping and Bishop [4, 5] that there is an exact closed
form solution as follows. If we introduce the sample covariance matrix given by
    S = (1/N) Σ_{n=1}^{N} (t_n − μ)(t_n − μ)^T,
then the only non-zero stationary points of the likelihood occur for
    W = U (Λ − σ²I)^{1/2} R,                                     (4)
where the two columns of the matrix U are eigenvectors of S, with corresponding eigenvalues in the diagonal matrix Λ, and R is an arbitrary 2 × 2 orthogonal rotation matrix.
Furthermore, it was shown that the stationary point corresponding to the global maximum
of the likelihood occurs when the columns of U comprise the two principal eigenvectors
of S (i.e. the eigenvectors corresponding to the two largest eigenvalues) and that all other
combinations of eigenvectors represent saddle-points of the likelihood surface. It was also
shown that the maximum-likelihood estimator of σ² is given by
    σ² = (1/(d − 2)) Σ_{j=3}^{d} λ_j,                            (5)
which has a clear interpretation as the variance 'lost' in the projection, averaged over the
lost dimensions.
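For concreteness, a minimal sketch of this closed-form solution (with the arbitrary rotation R set to the identity):

import numpy as np

def fit_latent_model(T):
    """Closed-form maximum-likelihood fit of the 2-D latent variable model:
    mu = sample mean, sigma2 from (5), W from (4) with R = I."""
    mu = T.mean(axis=0)
    S = np.cov(T, rowvar=False, bias=True)           # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(S)               # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # reorder to descending
    sigma2 = eigval[2:].mean()                       # variance 'lost' in the projection
    W = eigvec[:, :2] * np.sqrt(np.maximum(eigval[:2] - sigma2, 0.0))
    return mu, W, sigma2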
Unlike conventional PCA, however, our model defines a probability density in data space,
and this is important for the subsequent hierarchical development of the model. The choice
of a radially symmetric rather than a more general diagonal covariance matrix for p(t|x) is motivated by the desire for greater ease of interpretability of the visualization results, since the projections of the data points onto the latent plane in data space correspond (for small values of σ²) to an orthogonal projection as discussed in Appendix A.
Although we have an explicit solution for the maximum-likelihood parameter values, it was
shown by Tipping and Bishop [4, 5] that significant computational savings can sometimes
be achieved by using the following EM (expectation-maximization) algorithm [6, 7, 8].
Using (2) we can write the log likelihood function in the form
    L = Σ_{n=1}^{N} ln ∫ p(t_n|x_n, σ²) p(x_n) dx_n,
in which we can regard the quantities x_n as missing variables. The posterior distribution of the x_n, given the observed t_n and the model parameters, is obtained using Bayes' theorem and again consists of a Gaussian distribution. The E-step then involves the use of 'old' parameter values to evaluate the sufficient statistics of this distribution in the form
    ⟨x_n⟩ = M^{-1} W^T (t_n − μ),                                (7)
    ⟨x_n x_n^T⟩ = σ² M^{-1} + ⟨x_n⟩⟨x_n⟩^T,                      (8)
where M = W^T W + σ²I is a 2 × 2 matrix, and ⟨·⟩ denotes the expectation computed with respect to the posterior distribution of x. The M-step then maximizes the expectation of the complete-data log likelihood to give
    W̃ = [ Σ_n (t_n − μ)⟨x_n⟩^T ] [ Σ_n ⟨x_n x_n^T⟩ ]^{-1},       (9)
    σ̃² = (1/(Nd)) Σ_n { ||t_n − μ||² − 2⟨x_n⟩^T W̃^T (t_n − μ) + Tr(⟨x_n x_n^T⟩ W̃^T W̃) },    (10)
in which the tilde denotes 'new' quantities. Note that the new value for W̃ obtained from (9) is used in the evaluation of σ̃² in (10). The model is trained by alternately evaluating the sufficient statistics of the latent-space posterior distribution using (7) and (8) for given σ² and W (the E-step), and re-evaluating σ² and W using (9) and (10) for given ⟨x_n⟩ and ⟨x_n x_n^T⟩ (the M-step). It can be shown that, at each stage of the EM algorithm, the likelihood is increased unless it is already at a local maximum, as demonstrated in the appendix.
For N data points in d dimensions, evaluation of the sample covariance matrix requires O(Nd²) computation, and so any approach to finding the principal eigenvectors based on an
explicit evaluation of the covariance matrix must have at least this order of computational
complexity. By contrast, the EM algorithm involves steps which are only O(Nd). This
saving of computational cost is a consequence of having a latent space whose dimensionality
(which, for the purposes of our visualization algorithm, is fixed at 2) does not scale with
d.
If we substitute the expressions for the expectations given by the E-step equations (7) and (8) into the M-step equations we obtain the following re-estimation formulae
    W̃ = S W (σ²I + M^{-1} W^T S W)^{-1},                         (11)
    σ̃² = (1/d) Tr( S − S W M^{-1} W̃^T ),                        (12)
which show that all of the dependence on the data occurs through the sample covariance matrix S. Thus the EM algorithm can be expressed as alternate evaluations of (11) and (12). (Note that (12) involves a combination of 'old' and 'new' quantities.) This form of the EM algorithm has been introduced for illustrative purposes only, and would involve O(Nd²) cost due to the evaluation of the covariance matrix.
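A sketch of one such EM iteration, following the reconstructed O(Nd) form (7)-(10) above:

import numpy as np

def em_step(T, mu, W, sigma2):
    """One EM update of W and sigma2 for the single latent variable model."""
    N, d = T.shape
    Tc = T - mu
    M = W.T @ W + sigma2 * np.eye(2)
    Minv = np.linalg.inv(M)
    X = Tc @ W @ Minv                               # posterior means <x_n>, E-step (7)
    Sxx = N * sigma2 * Minv + X.T @ X               # sum_n <x_n x_n^T>, from (8)
    W_new = (Tc.T @ X) @ np.linalg.inv(Sxx)         # M-step (9)
    sigma2_new = (np.sum(Tc ** 2)
                  - 2.0 * np.sum((Tc @ W_new) * X)
                  + np.trace(Sxx @ W_new.T @ W_new)) / (N * d)   # M-step (10)
    return W_new, sigma2_new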
We have seen that each data point t_n induces a Gaussian posterior distribution p(x_n|t_n) in the latent space. For the purposes of visualization, however, it is convenient to summarize each such distribution by its mean, given by ⟨x_n⟩, as illustrated in Figure 1. Note that these quantities are obtained directly from the output of the E-step (7). Thus a set of data points {t_n} is projected onto a corresponding set of points {⟨x_n⟩} in the two-dimensional latent space.
Figure 1: Illustration of the projection of a data point onto the mean of the posterior distribution in latent space.
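This projection step is a one-liner given the fitted parameters; a sketch:

import numpy as np

def latent_projection(T, mu, W, sigma2):
    """Posterior means <x_n> of (7), used as 2-D plotting coordinates."""
    M = W.T @ W + sigma2 * np.eye(2)
    return (T - mu) @ W @ np.linalg.inv(M)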
3 Mixtures of Latent Variable Models
We can perform an automatic soft clustering of the data set, and at the same time obtain
multiple visualization plots corresponding to the clusters, by modelling the data with a
mixture of latent variable models of the kind described in Section 2. The corresponding
density model takes the form
    p(t) = Σ_{i=1}^{M_0} π_i p(t|i),                             (13)
where M_0 is the number of components in the mixture, and the parameters π_i are the mixing coefficients, or prior probabilities, corresponding to the mixture components p(t|i). Each component is an independent latent variable model with parameters μ_i, W_i and σ²_i. This mixture distribution will form the second level in our hierarchical model.
The EM algorithm can be extended to allow a mixture of the form (13) to be fitted to the data (see Appendix B for details). To derive the EM algorithm we note that, in addition to the {x_n}, the missing data now also includes labels which specify which component is responsible for each data point. It is convenient to denote this missing data by a set of variables z_ni, where z_ni = 1 if t_n was generated by model i (and zero otherwise). The prior expectations for these variables are given by the π_i and the corresponding posterior probabilities, or responsibilities, are evaluated in the extended E-step using Bayes' theorem in the form
    R_ni ≡ p(i|t_n) = π_i p(t_n|i) / Σ_{i'} π_{i'} p(t_n|i').     (14)
Although a standard EM algorithm can be derived by treating the {x_n} and the z_ni
jointly as missing data, a more efficient algorithm can be obtained by considering a two-stage
form of EM. At each complete cycle of the algorithm we commence with an 'old'
set of parameter values π_i, μ_i, W_i and σ²_i. We first use these parameters to evaluate the posterior probabilities R_ni using (14). These posterior probabilities are then used to obtain 'new' values π̃_i and μ̃_i using the following re-estimation formulae
    π̃_i = (1/N) Σ_n R_ni,                                       (15)
    μ̃_i = Σ_n R_ni t_n / Σ_n R_ni.                               (16)
The new values μ̃_i are then used in evaluation of the sufficient statistics for the posterior distribution for x_ni,
    ⟨x_ni⟩ = M_i^{-1} W_i^T (t_n − μ̃_i),                         (17)
    ⟨x_ni x_ni^T⟩ = σ²_i M_i^{-1} + ⟨x_ni⟩⟨x_ni⟩^T,               (18)
where M_i = W_i^T W_i + σ²_i I. Finally, these statistics are used to evaluate 'new' values W̃_i and σ̃²_i using
    W̃_i = [ Σ_n R_ni (t_n − μ̃_i)⟨x_ni⟩^T ] [ Σ_n R_ni ⟨x_ni x_ni^T⟩ ]^{-1},                  (19)
    σ̃²_i = (1/(d Σ_n R_ni)) Σ_n R_ni { ||t_n − μ̃_i||² − 2⟨x_ni⟩^T W̃_i^T (t_n − μ̃_i) + Tr(⟨x_ni x_ni^T⟩ W̃_i^T W̃_i) },   (20)
which are derived in Appendix B.
As for the single latent variable model, we can substitute the expressions for ⟨x_ni⟩ and ⟨x_ni x_ni^T⟩, given by (17) and (18) respectively, into (19) and (20). We then see that the re-estimation formulae for W̃_i and σ̃²_i take the form
    W̃_i = S_i W_i (σ²_i I + M_i^{-1} W_i^T S_i W_i)^{-1},        (21)
    σ̃²_i = (1/d) Tr( S_i − S_i W_i M_i^{-1} W̃_i^T ),            (22)
which show that all of the data dependence has been expressed in terms of the quantities
    S_i = Σ_n R_ni (t_n − μ̃_i)(t_n − μ̃_i)^T / Σ_n R_ni.          (23)
The matrix S_i can clearly be interpreted as a
responsibility-weighted covariance matrix. Again, for reasons of computational efficiency,
the form of EM algorithm given by (17) to (20) is to be preferred if d is large.
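A sketch of one such two-stage EM cycle, following the reconstructed equations (14)-(20); the per-component densities are the d-dimensional Gaussians N(μ_i, W_i W_i^T + σ²_i I), and a simple max-shift is used when normalizing the responsibilities for numerical safety.

import numpy as np

def log_component_density(T, mu, W, sigma2):
    """log p(t_n | i) for one component: Gaussian N(mu, W W^T + sigma2 I)."""
    N, d = T.shape
    C = W @ W.T + sigma2 * np.eye(d)
    Tc = T - mu
    _, logdet = np.linalg.slogdet(C)
    maha = np.sum(Tc @ np.linalg.inv(C) * Tc, axis=1)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + maha)

def em_cycle(T, pis, mus, Ws, sigma2s):
    """One two-stage EM cycle for the mixture (13)."""
    N, d = T.shape
    logp = np.stack([np.log(p) + log_component_density(T, m, W, s2)
                     for p, m, W, s2 in zip(pis, mus, Ws, sigma2s)], axis=1)
    R = np.exp(logp - logp.max(axis=1, keepdims=True))
    R /= R.sum(axis=1, keepdims=True)                              # responsibilities (14)
    updated = []
    for i in range(len(pis)):
        r = R[:, i]
        pi_new = r.mean()                                          # (15)
        mu_new = (r[:, None] * T).sum(axis=0) / r.sum()            # (16)
        Tc = T - mu_new
        Minv = np.linalg.inv(Ws[i].T @ Ws[i] + sigma2s[i] * np.eye(2))
        X = Tc @ Ws[i] @ Minv                                      # <x_ni>, (17)
        Sxx = sigma2s[i] * r.sum() * Minv + (r[:, None] * X).T @ X # sum_n R_ni <x x^T>, (18)
        W_new = ((r[:, None] * Tc).T @ X) @ np.linalg.inv(Sxx)     # (19)
        s2_new = (np.sum(r * np.sum(Tc ** 2, axis=1))
                  - 2.0 * np.sum(r * np.sum((Tc @ W_new) * X, axis=1))
                  + np.trace(Sxx @ W_new.T @ W_new)) / (d * r.sum())   # (20)
        updated.append((pi_new, mu_new, W_new, s2_new))
    return R, updated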
4 Hierarchical Mixture Models
We now extend the mixture representation of Section 3 to give a hierarchical mixture
model. Our formulation will be quite general and can be applied to mixtures of any
parametric density model.
So far we have considered a two-level system consisting of a single latent variable model
at the top level and a mixture of M 0 such models at the second level. We can now extend
the hierarchy to a third level by associating a group G i of latent variable models with each
model i in the second level. The corresponding probability density can be written in the form
    p(t) = Σ_{i=1}^{M_0} π_i Σ_{j ∈ G_i} π_{j|i} p(t|i, j),       (24)
where the p(t|i, j) again represent independent latent variable models, and the π_{j|i} correspond to sets of mixing coefficients, one for each i, which satisfy Σ_j π_{j|i} = 1. Thus each level of
the hierarchy corresponds to a generative model, with lower levels giving more refined and
detailed representations. This model is illustrated in Figure 2.
Determination of the parameters of the models at the third level can again be viewed as
a missing data problem in which the missing information corresponds to labels specifying
which model generated each data point. When no information about the labels is provided
Figure 2: The structure of the hierarchical model.
the log likelihood for the model (24) would take the form
    L = Σ_n ln { Σ_i π_i Σ_{j ∈ G_i} π_{j|i} p(t_n|i, j) }.       (25)
If, however, we were given a set of indicator variables z_ni specifying which model i at the second level generated each data point t_n then the log likelihood would become
    L = Σ_n Σ_i z_ni ln { π_i Σ_{j ∈ G_i} π_{j|i} p(t_n|i, j) }.  (26)
In fact we only have partial, probabilistic, information in the form of the posterior responsibilities R_ni for each model i having generated the data points t_n, obtained from the second level of the hierarchy. Taking the expectation of (26) we then obtain the log likelihood for the third level of the hierarchy in the form
    L = Σ_n Σ_i R_ni ln { π_i Σ_{j ∈ G_i} π_{j|i} p(t_n|i, j) },  (27)
in which the R ni are constants. In the particular case in which the R ni are all 0 or 1,
corresponding to complete certainty about which model in the second level is responsible
for each data point, the log likelihood (27) reduces to the form (26).
Maximization of (27) can again be performed using the EM algorithm, as discussed in Appendix C. This has the same form as the EM algorithm for a simple mixture, discussed in Section 3, except that in the E-step, the posterior probability that model (i, j) generated data point t_n is given by
    R_ni,j = R_ni R_nj|i,                                        (28)
in which
    R_nj|i = π_{j|i} p(t_n|i, j) / Σ_{j'} π_{j'|i} p(t_n|i, j').  (29)
Note that the R_ni are constants determined from the second level of the hierarchy, and the R_nj|i are functions of the 'old' parameter values in the EM algorithm. The expression (29) automatically satisfies the relation
    Σ_{j ∈ G_i} R_ni,j = R_ni,                                   (30)
so that the responsibility of each model at the second level for a given data point n is
shared by a partition of unity between the corresponding group of offspring models at the
third level.
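A sketch of (28)-(30): the parent responsibilities R_ni are held fixed and simply multiply the within-group responsibilities.

import numpy as np

def offspring_responsibilities(R_parent, log_pi_j_given_i, log_p_tij):
    """R_parent:         (N,)    fixed responsibilities R_ni of parent model i.
    log_pi_j_given_i: (Mi,)   log mixing coefficients of the group G_i.
    log_p_tij:        (N, Mi) log densities log p(t_n | i, j).
    Returns R_{ni,j} = R_ni * R_{n j|i}."""
    a = log_p_tij + log_pi_j_given_i
    a -= a.max(axis=1, keepdims=True)
    R_j_given_i = np.exp(a) / np.exp(a).sum(axis=1, keepdims=True)   # (29)
    return R_parent[:, None] * R_j_given_i                           # rows sum to R_ni, cf. (30)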
The corresponding EM algorithm can be derived by a straightforward extension of the
discussion given in Section 3 and Appendix B, and is outlined in Appendix C. This shows
that the M-step equations for the mixing coefficients and the means are given by
    π̃_{j|i} = Σ_n R_ni,j / Σ_n R_ni,
    μ̃_{i,j} = Σ_n R_ni,j t_n / Σ_n R_ni,j.
The posterior expectations for the missing variables z_ni,j are then given by R_ni,j, as in (28) and (29). Finally, the W_{i,j} and σ²_{i,j} are updated using the M-step equations
    W̃_{i,j} = [ Σ_n R_ni,j (t_n − μ̃_{i,j})⟨x_ni,j⟩^T ] [ Σ_n R_ni,j ⟨x_ni,j x_ni,j^T⟩ ]^{-1},
    σ̃²_{i,j} = (1/(d Σ_n R_ni,j)) Σ_n R_ni,j { ||t_n − μ̃_{i,j}||² − 2⟨x_ni,j⟩^T W̃_{i,j}^T (t_n − μ̃_{i,j}) + Tr(⟨x_ni,j x_ni,j^T⟩ W̃_{i,j}^T W̃_{i,j}) }.
Again, we can substitute the E-step equations into the M-step equations to obtain a set of update formulae of the form
    W̃_{i,j} = S_{i,j} W_{i,j} (σ²_{i,j} I + M_{i,j}^{-1} W_{i,j}^T S_{i,j} W_{i,j})^{-1},
    σ̃²_{i,j} = (1/d) Tr( S_{i,j} − S_{i,j} W_{i,j} M_{i,j}^{-1} W̃_{i,j}^T ),
where all of the summations over n have been expressed in terms of the quantities
    S_{i,j} = Σ_n R_ni,j (t_n − μ̃_{i,j})(t_n − μ̃_{i,j})^T / Σ_n R_ni,j,
in which we have defined M_{i,j} = W_{i,j}^T W_{i,j} + σ²_{i,j} I. The S_{i,j} can again be interpreted as responsibility-weighted covariance matrices.
It is straightforward to extend this hierarchical modelling technique to any desired number
of levels, for any parametric family of component distributions.
5 The Visualization Algorithm
So far we have described the theory behind hierarchical mixtures of latent variable models,
and have illustrated the overall form of the visualization hierarchy in Figure 2. We now
complete the description of our algorithm by considering the construction of the hierarchy,
and its application to data visualization.
Although the tree structure of the hierarchy can be pre-defined, a more interesting possibility, with greater practical applicability, is to build the tree interactively. Our multi-level visualization algorithm begins by fitting a single latent variable model to the data set, in which the value of μ is given by the sample mean. For low values of the data space dimensionality d we can find W and σ² directly by evaluating the covariance matrix and applying (4) and (5). However, for larger values of d it may be computationally more efficient to apply the EM algorithm, and a scheme for initializing W and σ² is given in Appendix D. Once the EM algorithm has converged, the visualization plot is generated by plotting each data point t_n at the corresponding posterior mean ⟨x_n⟩ in latent space.
On the basis of this plot the user then decides on a suitable number of models to fit at
the next level down, and selects points x (i) on the plot corresponding, for example, to
the centres of apparent clusters. The resulting points y (i) in data space, obtained from
(1), are then used to initialize the means μ_i of the respective sub-models. To initialize the remaining parameters of the mixture model we first assign the data points to their nearest mean vector μ_i and then either compute the corresponding sample covariance
matrices and apply a direct eigenvector decomposition, or use the initialization scheme of
Appendix
D followed by the EM algorithm.
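A sketch of this initialization (the interactive selection itself is not shown): the user-selected latent points x^(i) are mapped into data space through (1), data points are hard-assigned to the nearest centre, and each sub-model is fitted by eigendecomposition of the covariance taken about the chosen mean, which is one possible reading of the text. Mixing coefficients could additionally be initialized to the fractions of points assigned to each centre, although the text does not say so explicitly.

import numpy as np

def init_submodels(T, mu, W, latent_centres):
    """Initialize second-level models from user-selected latent points x^(i)."""
    centres = np.array([W @ np.asarray(x) + mu for x in latent_centres])   # y^(i) = W x^(i) + mu
    assign = ((T[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    models = []
    for i, mu_i in enumerate(centres):
        Tc = T[assign == i] - mu_i                 # covariance about the chosen mean (assumption)
        S = Tc.T @ Tc / max(len(Tc), 1)
        eigval, eigvec = np.linalg.eigh(S)
        eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
        sigma2 = eigval[2:].mean()
        W_i = eigvec[:, :2] * np.sqrt(np.maximum(eigval[:2] - sigma2, 0.0))
        models.append((mu_i, W_i, sigma2))
    return assign, models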
Having determined the parameters of the mixture model at the second level we can then
obtain the corresponding set of visualization plots, in which the posterior means ⟨x_ni⟩ are
again used to plot the data points. For these it is useful to plot all of the data points
on every plot, but to modify the density of 'ink' in proportion to the responsibility which
each plot has for that particular data point. Thus, if one particular component takes most
of the responsibility for a particular point, then that point will effectively be visible only
on the corresponding plot. The projection of a data point onto the latent spaces for a
mixture of two latent variable models is illustrated schematically in Figure 3.
The resulting visualization plots are then used to select further sub-models, if desired,
Figure 3: Illustration of the projection of a data point onto the latent spaces of a mixture of two latent variable models.
with the responsibility weighting of (28) being incorporated at this stage. If it is decided
not to partition a particular model at some level, then it is easily seen from (30) that the
result of training is equivalent to copying the model down unchanged to the next level.
Equation (30) further ensures that the combination of such copied models with those
generated through further sub-modelling defines a consistent probability model, such as
that represented by the lower three models in Figure 2. The initialization of the model
parameters is by direct analogy with the second-level scheme, with the covariance matrices
now also involving the responsibilities R ni as weighting coefficients, as in (23). Again, each
data point is in principle plotted on every model at a given level, with a density of 'ink'
proportional to the corresponding posterior probability, given for example by (28) in the
case of the third level of the hierarchy.
Deeper levels of the hierarchy involve greater numbers of parameters and it is therefore
important to avoid over-fitting and to ensure that the parameter values are well-determined
by the data. If we consider principal component analysis then we see that three (non-colinear) data points are sufficient to ensure that the covariance matrix has rank two and
hence that the first two principal components are defined, irrespective of the dimensionality
of the data set. In the case of our latent variable model, four data points are sufficient
to determine both W and oe 2 . From this we see that we do not need excessive numbers
of data points in each leaf of the tree, and that the dimensionality of the space is largely
irrelevant.
Finally, it is often also useful to be able to visualize the spatial relationship between a
group of models at one level and their parent at the previous level. This can be done
by considering the orthogonal projection of the latent plane in data space onto the corresponding
plane of the parent model, as illustrated in Figure 4. For each model in the
hierarchy (except those at the lowest level) we can plot the projections of the associated
models from the level below.
In the next section, we illustrate the operation of this algorithm when applied to a simple
toy data set, before presenting results from the study of more realistic data in Sections 7
and 8.
Figure 4: Illustration of the projection of one of the latent planes onto its parent plane.
6 Illustration using Toy Data
We first consider a toy data set consisting of 450 data points generated from a mixture of
three Gaussians in a three-dimensional space. Each Gaussian is relatively flat (has small
variance) in one dimension, and all have the same covariance but differ in their means.
Two of these pancake-like clusters are closely spaced, while the third is well separated
from the first two. The structure of this data set has been chosen in order to illustrate the
interactive construction of the hierarchical model.
To visualize the data, we first generate a single top-level latent variable model, and plot
the posterior mean of each data point in the latent space. This plot is shown at the top of
Figure
5, and clearly suggests the presence of two distinct clusters within the data. The
user then selects two initial cluster centres within the plot, which initialize the second-
level. This leads to a mixture of two latent variable models, the latent spaces of which
are plotted at the second level in Figure 5. Of these two plots, that on the right shows
evidence of further structure, and so a sub-model is generated, again based on a mixture
of two latent variable models, which illustrates that there are indeed two further distinct
clusters.
At this third step of the data exploration, the hierarchical nature of the approach is evident
as the latter two models only attempt to account for the data points which have already
been modelled by their immediate ancestor. Indeed, a group of offspring models may be
combined with the siblings of the parent and still define a consistent density model. This
is illustrated in Figure 5, in which one of the second level plots has been 'copied down'
(shown by the dotted line) and combined with the other third-level models. When offspring
plots are generated from a parent, the extent of each offspring latent space (i.e. the axis
limits shown on the plot) is indicated by a projected rectangle within the parent space,
using the approach illustrated in Figure 4, and these rectangles are numbered sequentially
such that the leftmost sub-model is '1'. In order to display the relative orientations of the
latent planes, this number is plotted on the side of the rectangle which corresponds to the
top of the corresponding offspring plot. The original three clusters have been individually
coloured and it can be seen that the red, yellow and blue data points have been almost
perfectly separated in the third level.
Figure 5: A summary of the final results from the toy data set. Each data point is plotted on every model at a given level, but with a density of ink which is proportional to the posterior probability of that model for the given data point.
7 Oil Flow Data
As an example of a more complex problem we consider a data set arising from a non-invasive
monitoring system used to determine the quantity of oil in a multi-phase pipeline
containing a mixture of oil, water and gas [9]. The diagnostic data is collected from
a set of three horizontal and three vertical beam-lines along which gamma rays at two
different energies are passed. By measuring the degree of attenuation of the gammas, the
fractional path length through oil and water (and hence gas) can readily be determined,
giving 12 diagnostic measurements in total. In practice the aim is to solve the inverse
problem of determining the fraction of oil in the pipe. The complexity of the problem
arises from the possibility of the multi-phase mixture adopting one of a number of different
geometrical configurations. Our goal is to visualize the structure of the data in the original
12-dimensional space. A data set consisting of 1000 points is obtained synthetically by
simulating the physical processes in the pipe, including the presence of noise dominated
by photon statistics. Locally, the data is expected to have an intrinsic dimensionality
of 2 corresponding to the two degrees of freedom given by the fraction of oil and the
fraction of water (the fraction of gas being redundant). However, the presence of different
flow configurations, as well as the geometrical interaction between phase boundaries and
the beam paths, leads to numerous distinct clusters. It would appear that a hierarchical
approach of the kind discussed here should be capable of discovering this structure. Results
from fitting the oil flow data using a 3-level hierarchical model are shown in Figure 6.
Figure 6: Results of fitting the oil data. Colours denote different multi-phase flow configurations corresponding to homogeneous (red), annular (blue) and laminar (yellow).
In the case of the toy data discussed in Section 6, the optimal choice of clusters and sub-clusters
is relatively unambiguous and a single application of the algorithm is sufficient
to reveal all of the interesting structure within the data. For more complex data sets, it
is appropriate to adopt an exploratory perspective and investigate alternative hierarchies,
through the selection of differing numbers of clusters and their respective locations. The
example shown in Figure 6 has clearly been highly successful. Note how the apparently
single cluster, number 2, in the top level plot is revealed to be two quite distinct clusters
at the second level, and how data points from the 'homogeneous' configuration have been
isolated and can be seen to lie on a two-dimensional triangular structure in the third level.
8 Satellite Image Data
As a final example, we consider the visualization of a data set obtained from remote-sensing
satellite images. Each data point represents a 3x3 pixel region of a satellite land image,
and for each pixel there are four measurements of intensity taken at different wavelengths
(approximately red and green in the visible spectrum, and two in the near infra-red). This
gives a total of 36 variables for each data point. There is also a label indicating the type
of land represented by the central pixel. This data set has previously been the subject of
a classification study within the Statlog project [10].
We applied the hierarchical visualization algorithm to 600 data points, with 100 drawn
at random from each of the six classes in the 4435-point data set. The result of fitting a 3-level
hierarchy is shown in Figure 7. Note that the class labels are used only to colour the data
Figure 7: Results of fitting the satellite image data. The six classes are red soil, cotton crop, grey soil, damp grey soil, soil with vegetation stubble, and very damp grey soil.
points and play no role in the maximum likelihood determination of the model parameters.
Figure 7 illustrates that the data can be approximately separated into classes, and the continuum of grey soil classes through to 'very damp grey soil' is clearly evident in
component 3 at the second level. One particularly interesting additional feature is that
there appear to be two distinct and separated clusters of 'cotton crop' pixels, in mixtures
1 and 2 at the second level, which are not evident in the single top-level projection. Study
of the original image [10] indeed indicates that there are two separate areas of 'cotton
crop'.
9 Discussion
We have presented a novel approach to data visualization which is both statistically principled
and which, as illustrated by real examples, can be very effective at revealing structure
within data. The hierarchical summaries of Figures 5, 6 and 7 are relatively simple to in-
terpret, yet still convey considerable structural information.
It is important to emphasize that in data visualization there is no objective measure of
quality, and so it is difficult to quantify the merit of a particular data visualization tech-
nique. This is one reason, no doubt, why there is a multitude of visualization algorithms
and associated software available. While the effectiveness of many of these techniques is
often highly data-dependent, we would expect the hierarchical visualization model to be a
very useful tool for the visualization and exploratory analysis of data in many applications.
In relation to previous work, the concept of sub-setting, or isolating, data points for further
investigation can be traced back to Maltson and Dammann [11], and was further developed
by Friedman and Tukey [12] for exploratory data analysis in conjunction with projection
pursuit. Such sub-setting operations are also possible in current dynamic visualization
software, such as 'XGobi' [13]. However, in these approaches there are two limitations.
First, the partitioning of the data is performed in a hard fashion, while the mixture of
latent variable models approach discussed in this paper permits a soft partitioning in
which data points can effectively belong to more than one cluster at any given level.
Second, the mechanism for the partitioning of the data is prone to sub-optimality as the
clusters must be fixed by the user based on a single two-dimensional projection. In the
hierarchical approach advocated in this paper, the user selects only a 'first guess' for the
cluster centres in the mixture model. The EM algorithm is then utilized to determine the
parameters which maximize the likelihood of the model, thus allowing both the centres
and the widths of the clusters to adapt to the data in the full multi-dimensional data
space. There is also some similarity between our method and earlier hierarchical methods
in script recognition [14] and motion planning [15] which incorporate the Kohonen Self-Organizing
Feature Map [16] and so offer the potential for visualization. As well as again
performing a hard clustering, a key distinction in both of these approaches is that different
levels in the hierarchies operate on different subsets of input variables and their operation
is thus quite different from the hierarchical algorithm described in this paper.
Our model is based on a hierarchical combination of linear latent variable models. A
related latent variable technique called the generative topographic mapping (GTM) [17]
uses a non-linear transformation from latent space to data space and is again optimized
using an EM algorithm. It is straightforward to incorporate GTM in place of the linear
latent variable models in the current hierarchical framework.
As described, our model applies to continuous data variables. We can easily extend the
model to handle discrete data as well as combinations of discrete and continuous vari-
ables. In the case of a set of binary data variables y_k ∈ {0, 1} we can express the conditional distribution of a binary variable, given x, using a binomial distribution of the form
    p(y_k|x) = σ(w_k^T x + μ_k)^{y_k} [1 − σ(w_k^T x + μ_k)]^{1 − y_k},
where σ(a) = 1/(1 + exp(−a)) is the logistic sigmoid function, and w_k is the k-th column of W. For data having a 1-of-D coding scheme we can represent the distribution of data variables using a multi-nomial distribution of the form
    p(y|x) = Π_{k=1}^{D} m_k^{y_k},
in which the parameters m_k are defined by a softmax, or normalized exponential, transformation of the form
    m_k = exp(w_k^T x + μ_k) / Σ_{k'=1}^{D} exp(w_{k'}^T x + μ_{k'}).
If we have a data set consisting of a combination of continuous, binary and categorical
variables, we can formulate the appropriate model by writing the conditional distribution
p(t|x) as a product of Gaussian, binomial and multi-nomial distributions as appropriate.
The E-step of the EM algorithm now becomes more complex since the marginalization
over the latent variables, needed to normalize the posterior distribution in latent space,
will in general be analytically intractable. One approach is to approximate the integration
using a finite sample of points drawn from the prior [17]. Similarly, the M-step is more
complex, although it can be tackled efficiently using the iterative re-weighted least squares (IRLS) algorithm.
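As a concrete illustration of the discrete-data extension, the following sketch (our own, not the authors' implementation) evaluates the Bernoulli and softmax conditional log-probabilities; it assumes the linear mapping a_k = w_k^T x described above, with W holding one column w_k per data variable, and uses NumPy.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - np.max(a))          # shift for numerical stability
    return e / e.sum()

def log_p_binary(y, x, W):
    # log p(y | x) for binary y_k in {0,1}; a_k = w_k^T x as in the text.
    p = sigmoid(W.T @ x)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def log_p_categorical(y, x, W):
    # log p(y | x) for a 1-of-D coded target y; m_k = softmax of a_k = w_k^T x.
    m = softmax(W.T @ x)
    return float(np.sum(y * np.log(m)))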
One important consideration with the present model is that the parameters are determined
by maximum likelihood, and this criterion need not always lead to the most interesting
visualization plots. We are currently investigating alternative models which optimize other
criteria such as the separation of clusters. Other possible refinements include algorithms
which allow a self-consistent fitting of the whole tree, so that lower levels have the opportunity
to influence the parameters at higher levels. While the user-driven nature of the
current algorithm is highly appropriate for the visualization context, the development of
an automated procedure for generating the hierarchy would clearly also be of interest.
A software implementation of the probabilistic hierarchical visualization algorithm in
Matlab is available from:
http://www.ncrg.aston.ac.uk/PhiVis
Acknowledgements
This work was supported by EPSRC grant GR/K51808: Neural Networks for Visualization
of High-Dimensional Data. We are grateful to Michael Jordan for useful discussions, and
we would like to thank the Isaac Newton Institute in Cambridge for their hospitality.
--R
"Hierarchical mixtures of experts and the EM algo- rithm.,"
An Introduction to Latent Variable Models.
Multivariate Analysis Part 2: Classification
"Mixtures of principal component analysers,"
"Mixtures of probabilistic principal component analysers,"
"Maximum likelihood from incomplete data via the EM algorithm,"
"EM algorithms for ML factor analysis,"
Neural Networks for Pattern Recognition.
"Analysis of multiphase flows using dual-energy gamma densitometry and neural networks,"
Neural and Statistical Classification.
"A technique for determining and coding sub-classes in pattern recognition problems,"
"A projection pursuit algorithm for exploratory data analysis,"
"Interactive high-dimensional data visualiza- tion,"
"Script recognition with hierarchical feature maps,"
"Learning fine motion by using the hierarchical extended Kohonen map,"
"GTM: the generative topographic mapping,"
Chapman and Hall
--TR
--CTR
Peter Tino , Ian Nabney, Hierarchical GTM: Constructing Localized Nonlinear Projection Manifolds in a Principled Way, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.5, p.639-656, May 2002
Michalis K. Titsias , Aristidis Likas, Mixture of experts classification using a hierarchical mixture model, Neural Computation, v.14 n.9, p.2221-2244, September 2002
Tien-Lung Sun , Wen-Lin Kuo, Visual exploration of production data using small multiples design with non-uniform color mapping, Computers and Industrial Engineering, v.43 n.4, p.751-764, September 2002
Neil D. Lawrence , Andrew J. Moore, Hierarchical Gaussian process latent variable models, Proceedings of the 24th international conference on Machine learning, p.481-488, June 20-24, 2007, Corvalis, Oregon
Alexei Vinokourov , Mark Girolami, A Probabilistic Framework for the Hierarchic Organisation and Classification of Document Collections, Journal of Intelligent Information Systems, v.18 n.2-3, p.153-172, March-May 2002
Daniel Boley, Principal Direction Divisive Partitioning, Data Mining and Knowledge Discovery, v.2 n.4, p.325-344, December 1998
Hiroshi Mamitsuka, Essential Latent Knowledge for Protein-Protein Interactions: Analysis by an Unsupervised Learning Approach, IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), v.2 n.2, p.119-130, April 2005
Ting Su , Jennifer G. Dy, Automated hierarchical mixtures of probabilistic principal component analyzers, Proceedings of the twenty-first international conference on Machine learning, p.98, July 04-08, 2004, Banff, Alberta, Canada
Ian T. Nabney , Yi Sun , Peter Tino , Ata Kaban, Semisupervised Learning of Hierarchical Latent Trait Models for Data Visualization, IEEE Transactions on Knowledge and Data Engineering, v.17 n.3, p.384-400, March 2005
Michael E. Tipping , Christopher M. Bishop, Mixtures of probabilistic principal component analyzers, Neural Computation, v.11 n.2, p.443-482, Feb. 15, 1999
Unsolved Information Visualization Problems, IEEE Computer Graphics and Applications, v.25 n.4, p.12-16, July 2005
Kui-Yu Chang , J. Ghosh, A Unified Model for Probabilistic Principal Surfaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.1, p.22-41, January 2001
Wang , Yue Wang , Jianping Lu , Sun-Yuan Kung , Junying Zhang , Richard Lee , Jianhua Xuan , Javed Khan , Robert Clarke, Discriminatory mining of gene expression microarray data, Journal of VLSI Signal Processing Systems, v.35 n.3, p.255-272, November
Pradeep Kumar Shetty , R. Srikanth , T. S. Ramu, Modeling and on-line recognition of PD signal buried in excessive noise, Signal Processing, v.84 n.12, p.2389-2401, December 2004 | maximum likelihood;latent variables;principal component analysis;factor analysis;statistics;density estimation;hierarchical mixture model;EM algorithm;data visualization;clustering |
279092 | Interpolating Arithmetic Read-Once Formulas in Parallel. | A formula is read-once if each variable appears in it at most once. An arithmetic formula is one in which the operations are addition, subtraction, multiplication, and division (and constants are allowed). We present a randomized (Las Vegas) parallel algorithm for the exact interpolation of arithmetic read-once formulas over sufficiently large fields. More specifically, for $n$-variable read-once formulas and fields of size at least 3(n2+3n-2), our algorithm runs in $O(\log^2 n)$ parallel steps using O(n4) processors (where the field operations are charged unit cost). This complements some results from [N. H. Bshouty and R. Cleve, Proc. 33rd Annual Symposium on the Foundations of Computer Science, IEEE Computer Science Press, Los Alamitos, CA, 1992, pp. 24--27] which imply that other classes of read-once formulas cannot be interpolated---or even learned with membership and equivalence queries---in polylogarithmic time with polynomially many processors (even though they can be learned sequentially in polynomial time). These classes include boolean read-once formulas and arithmetic read-once formulas over fields of size $o(n / \log n)$ (for n variable read-once formulas). | Introduction
The problem of interpolating a formula (from some class C) is the problem of exactly
identifying the formula from queries to the assignment (membership) oracle.
The interpolation algorithm queries the oracle with an assignment a and the oracle
returns the value of the function at a.
There are a number of classes of arithmetic formulas that can be interpolated sequentially
in polynomial-time as well as in parallel in poly-logarithmic-time (with
polynomially many processors). These include sparse polynomials and sparse rational
functions ([BT88,BT90,GKS90,GrKS88,RB89,M91]).
Research supported in part by NSERC of Canada. Author's E-mail addresses:
bshouty@cpsc.ucalgary.ca and cleve@cpsc.ucalgary.ca.
A formula over a variable set V is read-once if each variable appears at most once
in it. An arithmetic read-once formula over a field K is a read-once formula over
the basic operations of the field K: addition, subtraction, multiplication, division,
and constants are also permitted in the formula. The size of an arithmetic formula
is the number of instances of variables (i.e. leaves) in it.
Bshouty, Hancock and Hellerstein [BHH92] present a randomized sequential
polynomial-time algorithm for interpolating arithmetic read-once formulas (AROFs)
over sufficiently large fields. Moreover, they show that, for arbitrarily-sized fields,
read-once formulas can be learned using equivalence queries in addition
to membership queries.
The question of whether arithmetic read-once formulas can be interpolated (or
learned) quickly in parallel depends on the size of the underlying field. It is shown
in [BC92] that for arithmetic read-once formulas over fields with o(n/ log n) elements
there is no poly-logarithmic-time algorithm that uses polynomially many
processors (for interpolating, as well as learning with membership and equivalence
queries). Also, a similar negative result holds for boolean read-once formulas.
We present a (Las Vegas) parallel algorithm for the exact interpolation of arithmetic
read-once formulas over sufficiently large fields. For fields of size at least
3(n² + 3n − 2), the algorithm runs in O(log² n) parallel steps using O(n⁴) processors
(where the field operations are charged unit cost).
If the "obvious" parallelizations are made to the interpolating algorithm in
(i.e., parallelizations of independent parts of the computation) one obtains
a parallel running time that is \Theta(d), where d is the depth of the target
formula. Since, in general, d can be as large as \Theta(n), this does not result in significant
speedup. Our parallel algorithm uses some techniques from the sequential
algorithm of [BHH92] as well as some new techniques that enable nonlocal features
of the AROF to be determined in poly-logarithmic-time.
The parallel algorithm can be implemented on an oracle parallel random access
machine (PRAM). More specifically, it is an exclusive-read exclusive-write (EREW)
PRAM-which means that processor's accesses to their communal registers are
constrained so that no two processors can read from or write to the same register
simultaneously. The EREW PRAM initially selects some random input values
(uniformly and independently distributed) and then performs O(n 3 ) membership
queries (via its oracle).
2 Membership Queries
The learning criterion we consider is exact identification. There is a formula f
called the target formula, which is a member of a class of formulas C defined over
the variable set V . The goal of the learning algorithm is to halt and output a
formula h from C that is equivalent to f .
In a membership query, the learning algorithm supplies values (x_1^(0), ..., x_n^(0)) for
the variables in V , as input to a membership oracle, and receives in return the
value of f(x_1^(0), ..., x_n^(0)). Hard-wiring a single variable x to a value x^(0) yields
the projection of f with respect to that substitution. An assignment of values to some subset of a
read-once formula's variables defines a projection, which is the formula obtained
by hard-wiring those assigned variables to their values in the formula and then
rewriting the formula to eliminate constants from the leaves. Note that if f 0 is
a projection of f , it is possible to simulate a membership oracle for f 0 using a
membership oracle for f .
We say that the class C is learnable in polynomial time if there is an algorithm
that uses the membership oracle and interpolates any f 2 C in polynomial time
in the number of variables n and the size of f . We say that C is efficiently
learnable in parallel if there is a parallel algorithm that uses the membership oracle
and interpolates any f 2 C in polylogarithmic time with polynomial number of
processors. In the parallel computation p processors can ask p membership queries
in one step.
3 Preliminaries
A formula is a rooted tree whose leaves are labeled with variables or constants from
some domain, and whose internal nodes, or gates, are labeled with elements from a
set of basis functions over that domain. A read-once formula is a formula for which
no variable appears on two different leaves. An arithmetic read-once formula over a
field K is a read-once formula over the basis of addition, subtraction, multiplication,
and division of field elements, whose leaves are labeled with variables or constants
from K.
In [BHH92] it is shown that a modified basis can be used to represent any
read-once formula. Let K be an arbitrary field. The modified basis
for arithmetic read-once formulas over K includes only two non-unary functions,
addition (+) and multiplication (\Theta). The unary functions in the basis are (ax
d) for every a; b; c; d 2 K such that ad \Gamma bc 6= 0. This requirement is to
prevent being identically 0 or differing by just a constant factor.
We can also assume that non-constant formulas over this modified basis do not
contain constants in their leaves. We represent such a unary function as f A , where
a b
c d
The restriction on a, b, c, and d is equivalent to saying the determinant of A
(denoted det(A)) is non-zero.
The value of a read-once formula on an assignment to its variables is determined
by evaluating the formula bottom up. This raises the issue of division by zero.
In [BHH92] this problem is handled by defining basis functions over the extended
domain K ∪ {∞, ERROR}, where ∞ represents 1/0 and ERROR represents 0/0.
For the special values the basis functions are extended accordingly (assume x ∈ K
and A is as above); in particular f_A(∞) = a/c.
It is shown in [BHH92] that these definitions are designed so that the output of
the read-once formula is the same as it would be if the formula were first expanded
and simplified to be in the form p(x_1, ..., x_n)/q(x_1, ..., x_n) for polynomials
p and q where gcd(p, q) = 1.
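As a concrete illustration of the modified basis, the following sketch (our own code, not from [BHH92]) represents a unary gate f_A by its 2×2 matrix A, models the field K by the rationals, and handles the special values in a simplified form of the 1/0 and 0/0 conventions above; the names INF, ERROR, f_A, det and compose are ours.

from fractions import Fraction as Frac

INF, ERROR = "INF", "ERROR"     # stand-ins for the special values 1/0 and 0/0

def det(A):
    (a, b), (c, d) = A
    return a * d - b * c

def f_A(A, x):
    # Evaluate f_A(x) = (a*x + b)/(c*x + d) over K extended with INF and ERROR.
    (a, b), (c, d) = A
    if x == ERROR:
        return ERROR
    if x == INF:                 # f_A(infinity) = a/c
        num, den = a, c
    else:
        num, den = a * x + b, c * x + d
    if den == 0:
        return ERROR if num == 0 else INF
    return Frac(num, den)

def compose(A, B):
    # Matrix product; composing the unary gates f_A and f_B gives f_{AB}.
    (a, b), (c, d) = A
    (p, q), (r, s) = B
    return ((a * p + b * r, a * q + b * s),
            (c * p + d * r, c * q + d * s))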
We say that a formula f is defined on the variable set V if all variables appearing
in f are members of V . Let V = {x_1, ..., x_n}. We say a formula f depends on
variable x_i if there are values x_1^(0), ..., x_n^(0) and x_i^(1) in K for which
f(x_1^(0), ..., x_i^(0), ..., x_n^(0)) ≠ f(x_1^(0), ..., x_i^(1), ..., x_n^(0)),
and for which both those values of f are not ERROR. We call such an input vector
(x_1^(0), ..., x_n^(0)) a justifying assignment for x_i .
Between any two gates or leaves α and β in an AROF, the relationships ancestor,
descendant, parent, and child refer to their relative position in the rooted tree. Let
α ⪯ β denote that α is a descendant of β (or, equivalently, that β is an ancestor of
α). Let α ≺ β denote that α is a proper descendant of β (i.e., α ⪯ β but α ≠ β).
For any pair of variables x_i and x_j that appear in a read-once formula, there is a
unique node farthest from the root that is an ancestor of both x_i and x_j , called
their lowest common ancestor, which we write as lca(x_i , x_j ). We shall refer to the
type of lca(x_i , x_j ) to mean the basis function computed at that gate. We say that
a set W of variables has a common lca if there is a single node that is the lca of
every pair of variables in W .
We define the skeleton of a formula f to be the tree obtained by deleting any
unary gates in f (i.e. the skeleton describes the parenthesization of an expression
with the binary operations, but not the actual unary operations or embedded
constants).
We now list a basic property of unary functions f A that is proved in [BHH92].
1. The function f_A is a bijection from K ∪ {∞} to K ∪ {∞} if and only if
det(A) ≠ 0. Otherwise, f_A is either a constant value from K ∪ {∞, ERROR}
or else is a constant value from K ∪ {∞}, except on one input value on which
it is ERROR.
2. The functions f_A and f_{λA} are equivalent for any λ ≠ 0.
3. Given any three distinct points
(a) If are on a line then there exists a unique function f A with
f A
(b) If are not on a line then there exists a unique function f A with
4. If functions f_A and f_B are equivalent and det(A), det(B) ≠ 0, then there is
a constant λ for which λA = B.
5. The functions (f A are equivalent.
6. If det(A) ≠ 0, the functions f_A^{−1} and f_{A^{−1}} are equivalent.
7. f_{A_1}(f_{A_2}(· · · f_{A_k}(x) · · ·)) and f_{A_1 A_2 ··· A_k}(x) are equivalent.
4 Collapsibility of Operations
Whenever two non-unary gates of the same type in an AROF are separated by only
a unary gate it may be possible to collapse them together to a single non-unary
gate of the same type with higher arity. For ⋆ ∈ {+, ×}, a unary operation f_A is
called ⋆-collapsible if
f_A(x ⋆ y) = f_B(x) ⋆ f_C(y)
for some unary operations f_B and f_C . Intuitively, the above property means that
if the f A gate occurs between two non-unary ? gates then the two ? gates can be
"collapsed" into a single ? gate of higher arity, provided that new unary gates can
be applied to the inputs.
In [BHH92] it is explained that a unary gate f_A is ×-collapsible if and only if A
is of the form
( a  0 )        ( 0  b )
( 0  d )   or   ( c  0 ),
and +-collapsible if and only if A is of the form
( a  b )
( 0  d ).
The following are equivalent definitions of ⋆-collapsible that will be used in this
paper.
Property 2 The following are equivalent:
1. f_A is +-collapsible.
2. f_A(x) = αx + β for some α ∈ K\{0} and β ∈ K.
3. f_A(∞) = ∞.
The following are equivalent:
1. f_A is ×-collapsible.
2. f_A(x) = αx^β for some α ∈ K\{0} and β ∈ {+1, −1}.
3. {f_A(∞), f_A(0)} = {0, ∞}.
Proof: We prove the property by showing that 1 ⇒ 2 ⇒ 3 ⇒ 1. If f_A is +-collapsible
then, by the characterization above, A is of the form
( a  b )
( 0  d ),
and therefore f_A(x) = (a/d)x + (b/d). Since A is nonsingular, a ≠ 0 and d ≠ 0, so f_A has
the form of statement 2 with α = a/d ≠ 0. Every function of that form satisfies f_A(∞) = ∞,
which is statement 3. Finally, if f_A(∞) = a/c = ∞ then c = 0, so A has the form above and
f_A(x + y) = ((a/d)x) + ((a/d)y + b/d), i.e. f_A is +-collapsible.
The result for ×-collapsibility is left for the reader. □
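Continuing the sketch above (and reusing its f_A, Frac and INF), the tests of Property 2 amount to evaluating f_A at ∞ and at 0; the following illustrative checks are ours, not code from the paper.

def is_plus_collapsible(A):
    # Property 2: f_A is +-collapsible iff f_A(infinity) = infinity.
    return f_A(A, INF) == INF

def is_times_collapsible(A):
    # Property 2: f_A is x-collapsible iff {f_A(infinity), f_A(0)} = {0, infinity}.
    return {f_A(A, INF), f_A(A, Frac(0))} == {Frac(0), INF}

# examples: f(x) = 2x + 3 is +-collapsible, f(x) = 1/x is x-collapsible
assert is_plus_collapsible(((2, 3), (0, 1)))
assert is_times_collapsible(((0, 1), (1, 0)))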
In [BHH92], a three-way justifying assignment is defined as an assignment of
constant values to all but three variables in an AROF such that the resulting
formula depends on all of the three remaining variables. For the present results,
we require assignments that meet additional requirements, which are defined below.
For any two gates, ff and fi, with ff ! fi, define the ff-fi path as the sequence of
gate operations along the path in the tree from ff to fi (including the operations
of ff to fi at the endpoints of the path). Define a non-collapsing three-way justifying
assignment as a three-way justifying assignment with the following additional
property. For the unassigned variables x, y, and z, if lca(x; y) ! lca(x; z) and all
non-unary operations in the lca(x; y)-lca(x; z) path are of the same type ? (for
some then the function that results from the justifying assignment is
of the form
for some unary operations f A , f B , f C , f D and f E , where f C is not ?-collapsible.
Intuitively, this means that, after the justifying assignment, the two gates, lca(x; y)
and lca(x; z), cannot be collapsed-and thus the relationship lca(x; y) ! lca(x; z)
can still be detected in the resulting function.
Now, define a total non-collapsing three-way justifying assignment as a single
assignment of constant values to all variables in an AROF such that, for any three
variables, if all but those three are assigned to their respective constants then the
resulting assignment is non-collapsing three-way justifying.
5 Parallel Learning Algorithm
In this section, we present a parallel algorithm for learning AROFs. The algorithm
has three principal components: finding a total non-collapsing three-way justifying
assignment; determining the skeleton of the AROF; and, determining the unary
gates of the AROF.
The basic idea is to first construct a graph (that will later be referred to as the
LCAH graph) that contains information about the relative positions of the lcas of
all pairs of variables. This cannot be obtained quickly in parallel from justifying
assignments, because of the possibility that some of the important structure of an
AROF "collapses" under any given justifying assignment. However, we shall see
that any total non-collapsing justifying assignment is sufficient to determine the
entire structure of the AROF at once (modulo some polylog processing).
Once the LCAH graph has been constructed, the skeleton of the AROF can be
constructed by discarding some of the structure of the LCAH graph (a "garbage
collection" step). This is accomplished using some simple graph algorithms, as
well as a parallel prefix sum computation (which is NC 1 computable [LF80]).
Finally, once that skeleton is determined, the unary gates can be determined by
a recursive tree contraction method (using results from [B74]).
5.1 Finding a Total Non-Collapsing Three-Way Justifying
Assignment
In [BHH92], it is proven that, for any triple of variables x, y and z, by drawing
random values (independently) from a sufficiently large field, and assigning them
to the other variables in an AROF, a three-way justifying assignment for those
variables is obtained with high probability. In the parallel algorithm, a three-way
justifying assignment that is total non-collapsing is required. We show that, if the
size of the field K is at least O(n 2 ) then the same randomized procedure also yields
a total non-collapsing three-way justifying assignment with probability at least 1Therefore in time O(1) this step can be implemented.
We shall begin with some preliminary lemmas and then the precise statement
that we require will appear in Corollary 4.
Lemma 1: Let g(y, z) = f_B(f_A(y) ⋆ z) for some ⋆ ∈ {+, ×}. If f_A is not ⋆-collapsible then there
exists at most one value z^(0) for z such that f_C(y) ≡ g(y, z^(0)) is ⋆-collapsible.
then by property 2 we have
where ff 2 Knf0g and fi 2 K. We substitute
f A is not +-collapsible, by property 2, we have f A Solving
the above system using property 1 we get
This shows that there is at most one value of z that makes f B (f A (y)
collapsible.
then by property 2 we have
where ff 2 Knf0g and fi 2 f+1; \Gamma1g. We substitute
f A is not \Theta-collapsible, by property 2, we have either f A (0) or f A (1) is not
in f0; 1g. Suppose f A (0) 62 f0; 1g and suppose f B (f A (0)z 0
are similar). Solving this gives
This shows that there is at most one value of z that makes f B (f A (y)z) \Theta-collapsible.2
Lemma 2: Let F be an AROF with
suppose that all non-unary operations in the lca(x 1 are of
the same type ? 2 f+; \Thetag. Let x (0)
n be independently uniformly randomly
chosen from S ' K, where m. Then the probability that x (0)
n is a
non-collapsing three-way justifying assignment is at least 1 \Gamma
Proof: Note that x (0)
n is not a non-collapsing three-way justifying assignment
if and only if it is not a justifying assignment or there exists a path between
the lcas of x 1 , x 2 and x 3 such that all non-unary operations are of the same type
and the path collapses under the assignment. From [BHH92], the probability of
the former condition is at most 2n+4
. We need to bound the probability of the
latter condition.
We have that F is of the form
E(fH k
may depend on variables from x
in addition to their marked arguments. Let -
E(y)
denote the above formulas (respectively) with x (0)
substituted for the variables
denote the degrees of C
as functions of x . By the assumption that F is in normal form, f H0 is not
?-collapsing. Therefore, by Lemma 1, there exists at most one value of C 1 for
which f H 1
?-collapsing. We can bound the probability of this value
occurring for C 1 . Since the degree of C 1 is d 1 , an application of Schwartz's result
in [Sch80] implies that probability that this value occurs for C 1 is at most d 1 =m.
Similarly, if f H 1
?-collapsing then Lemma 1 implies that
there exists at most one value of C 2 for which f H 2
collapsing, which occurs with probability at most d 2 =m, and so on. It follows that
the probability that
is ?-collapsing is at most (d
The result now follows by summing the two bounds. 2
Theorem 3: Let F be an AROF over K, and let x_1^(0), ..., x_n^(0) be chosen independently and
uniformly from a set S ⊆ K with |S| = m. Then the probability that x_1^(0), ..., x_n^(0)
is a total non-collapsing three-way justifying assignment is at least 1 − 6n²/m.
Proof: First, note that, from Lemma 2, we can immediately infer that if
are drawn independently uniformly randomly from S ' K, where
then the probability that x (0)
n is a non-collapsing three-way justifying
assignment is at most
To obtain a better bound, consider each subformula C i that is an input to some
non-unary gate in the AROF. By results in [BHH92], there are at most two possible
values of C i that will result in some triple of variables with respect to which the
assignment is not three-way justifying (the values are 0 and ∞). Thus, as in
the proof of Lemma 2, the probability of one of these values arising for C_i is at
most 2d/m, where d is the degree of C_i . Also, from Lemma 2, there is at most one
value of C_i that will result in a collapsing assignment, and the probability of this
arising is at most d/m. Thus, the probability of one of the two events above arising
is at most 3d/m, and, since d ≤ n, this is at most 3n/m.
Since there are at most 2n such subformulas C_i , the probability of any one of
them attaining one of the above values is at most 6n²/m. □
The constant in the proof of Theorem 3 can be improved to obtain a probability
of at least 1 − 3(n² + 3n − 2)/(2m)
by using the following observation. Notice that we upper bounded the degree of
each subtree by n. In fact we can upper bound the degree of the leaves (there are
n leaves) by degree 1 since they are variables. Then we have another n − 1
subformulas of degrees d_1, ..., d_{n−1}. It is easy to show that d_i ≤ i + 1
(simple induction on the number of nodes). Taking all this into account we obtain
the above bound.
By setting m ≥ 3(n² + 3n − 2) we obtain the following.
Corollary 4: Let F be an AROF over K, and let x_1^(0), ..., x_n^(0) be chosen independently and
uniformly from a set S ⊆ K with |S| ≥ 3(n² + 3n − 2). Then the probability that
x_1^(0), ..., x_n^(0) is a total non-collapsing three-way justifying assignment is at least 1/2.
This Corollary implies that the expected time complexity of finding a total non-
collapsing three-way justifying assignment is O(1).
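The random-assignment step can be viewed as a Las Vegas loop. In the sketch below (ours), random_assignment draws the values and is_total_noncollapsing is a hypothetical placeholder for whatever failure detection the full algorithm uses; by Corollary 4, each draw succeeds with probability at least 1/2 when |S| ≥ 3(n² + 3n − 2), so the expected number of iterations is O(1).

import random

def random_assignment(n, S):
    # Draw x_1, ..., x_n independently and uniformly from the sample set S.
    return [random.choice(S) for _ in range(n)]

def find_total_noncollapsing(n, S, is_total_noncollapsing):
    # Las Vegas loop: expected O(1) iterations under the assumptions above.
    while True:
        x = random_assignment(n, S)
        if is_total_noncollapsing(x):
            return x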
5.2 Determining the Skeleton of a Read-Once Formula in
Parallel
In this section, we assume that a total non-collapsing three-way justifying assignment
is given and show how to construct the skeleton with O(n 3 ) membership
queries in one parallel step followed by O(log n) steps of computation.
Firstly, suppose that, for a triple of variables x, y, and z, we wish to test whether
or not lca(x; y) ! lca(x; z). If op(x; y) 6= op(x; z) then this can be accomplished
by a direct application of the techniques in [BHH92], using the fact that we have
an assignment that is justifying with respect to variables x, y, and z. On the
other hand, if could be difficult to
detect with a mere justifying assignment because the justifying assignment might
collapse the relative structure between these three variables. If all the non-unary
operations in the lca(x; y)-lca(x; z) path are identical then, due to the fact that
we have a non-collapsing justifying assignment, we are guaranteed that the sub-structure
between the three variables does not collapse, and we can determine
that lca(x; y) ! lca(x; z) in O(1) time (again by directly applying techniques in
[BHH92]). This leaves the case where op(x; but the non-unary operations
in the lca(x; y)-lca(x; z) path are not all of the same type. In this case, the
techniques of [BHH92] might fail to determine that lca(x; y) ! lca(x; z) and report
them as equal. We shall overcome this problem at a later stage in our learning
algorithm, by making inferences based on hierarchical relationships with other vari-
ables. For the time being, we can, in time O(1) with one processor, compute the
following predicate DESCENDANT(x, y, z):
DESCENDANT(x, y, z) =
  YES            if lca(x, y) ≺ lca(x, z) and all non-unary operations
                 in the lca(x, y)-lca(x, z) path are of the same type;
  YES or MAYBE   if lca(x, y) ≺ lca(x, z) and op(x, y) = op(x, z) but not
                 all non-unary operations in the lca(x, y)-lca(x, z)
                 path are of the same type;
  MAYBE          otherwise.
Note that if DESCENDANT(x, y, z) = YES then it must be that lca(x, y) ≺ lca(x, z).
If DESCENDANT(x, y, z) = MAYBE then it is possible that
lca(x, y) ≺ lca(x, z) and the non-unary operations on the lca(x, y)-lca(x, z) path
are not of the same type, or that lca(x, y) ⊀ lca(x, z).
To construct the extended skeleton of an AROF, we first construct its least
common ancestor hierarchy (LCAH) graph, which is defined as follows.
Definition: The least common ancestor hierarchy (LCAH) graph of an AROF
with n variables consists of n(n − 1)/2 vertices, one corresponding to each (unordered) pair
of variables. For the distinct variables, x and y, denote the corresponding vertex
by xy or, equivalently, yx. Then, for distinct vertices xy and zw, the directed edge
xy → zw is present in the LCAH graph if and only if lca(x, y) ⪯ lca(z, w).
We shall prove that the following algorithm constructs the LCAH graph of an
AROF.
Algorithm CONSTRUCT-LCAH-GRAPH
1. in parallel for all distinct variables x, y, z do
if DESCENDANT(x, y, z) = YES then
insert edges xy → xz and xy → yz and xz → yz and yz → xz
2. in parallel for all distinct variables x, y, z, w do
if edges xy → xw → xz are present then
insert edge xy → xz
3. in parallel for all distinct variables x, y, z do
if no edges between any of xy, xz, yz are present then
insert edges in each direction between every pair of xy, xz, yz
4. in parallel for all distinct variables x, y, z, w do
if edges xy → xw → zw present or edges xy → yw → zw present then
insert edge xy → zw
Theorem 5: Algorithm CONSTRUCT-LCAH-GRAPH constructs the LCAH
graph of an AROF.
Proof: The proof follows from the following sequence of observations:
(i) For all distinct variables x, y and z for which lca(x, y) ≺ lca(x, z),
after executing steps 1 and 2 of the algorithm, the appropriate edges pertaining to
vertices xy, xz and yz (namely, xy → xz, xy → yz, xz → yz and yz → xz) are
present.
(ii) For all distinct variables x, y and z for which lca(x, y) = lca(x, z) = lca(y, z),
after executing step 3 of the algorithm, the appropriate edges pertaining to vertices
xy, xz and yz (namely, edges in both directions between every pair) are present.
(iii) For all distinct variables x, y, z and w, after executing step 4 of the algo-
rithm, the edge xy → zw is present if and only if lca(x, y) ⪯ lca(z, w). □
It is straightforward to verify that algorithm CONSTRUCT-LCAH-GRAPH
can be implemented to run in O(log n) time on an EREW PRAM with O(n 4 )
processors. Moreover, the O(n 3 ) membership queries can be made initially in one
parallel step.
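For reference, the following is a sequential simulation of CONSTRUCT-LCAH-GRAPH (our own sketch, not the authors' code). Each step is applied against a snapshot of the edge set, mirroring the parallel semantics, and descendant(x, y, z) is assumed to be a black box returning "YES" or "MAYBE" as specified above (in the algorithm it is derived from the initial membership queries).

from itertools import permutations

def construct_lcah_graph(variables, descendant):
    # A vertex of the LCAH graph is an unordered pair of variables.
    pair = lambda a, b: frozenset((a, b))
    E = set()
    # step 1: edges implied directly by YES answers
    for x, y, z in permutations(variables, 3):
        if descendant(x, y, z) == "YES":
            xy, xz, yz = pair(x, y), pair(x, z), pair(y, z)
            E |= {(xy, xz), (xy, yz), (xz, yz), (yz, xz)}
    # step 2: chain two edges through pairs sharing the variable x
    snap = set(E)
    for x, y, z, w in permutations(variables, 4):
        if (pair(x, y), pair(x, w)) in snap and (pair(x, w), pair(x, z)) in snap:
            E.add((pair(x, y), pair(x, z)))
    # step 3: a triple with no edges among its pairs has a common lca
    snap = set(E)
    for x, y, z in permutations(variables, 3):
        ps = [pair(x, y), pair(x, z), pair(y, z)]
        if not any((u, v) in snap for u in ps for v in ps if u != v):
            E |= {(u, v) for u in ps for v in ps if u != v}
    # step 4: chain through a pair sharing a variable with each endpoint
    snap = set(E)
    for x, y, z, w in permutations(variables, 4):
        if ((pair(x, y), pair(x, w)) in snap and (pair(x, w), pair(z, w)) in snap) \
           or ((pair(x, y), pair(y, w)) in snap and (pair(y, w), pair(z, w)) in snap):
            E.add((pair(x, y), pair(z, w)))
    return E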
In an AROF, each non-unary gate corresponds to a biconnected component
(which is a clique) of its LCAH graph. Thus, to transform the LCAH graph into
the extended skeleton of the AROF, we simply "compress" each of its biconnected
components into a single vertex and then extract the underlying tree structure
of this graph (where the underlying tree structure of a graph is the tree whose
transitive closure is the graph 1 ).
This is accomplished using standard graph algorithm techniques, including a
parallel prefix sum computation ([LF80]). The details follow.
We first designate a "leader" vertex for each biconnected component. We then
record the individual variables that are descendants of each non-unary gate, and
then discard the other nodes in each biconnected component.
The algorithm below selects a leader from each connected component in an
LCAH graph. We assume that there is a total ordering OE on the vertices of the
LCAH graph (for example, the lexicographic ordering on the pair of indices of the
two variables corresponding to each vertex).
Algorithm LEADER
in parallel for all vertices xy OE zw do
if edges xy ! zw and zw ! xy are present then
mark xy with X
It is easy to prove the following.
Lemma 6: After executing algorithm LEADER, there is precisely one unmarked
node (namely, the largest in the OE ordering) in each biconnected component of the
LCAH graph.
After selecting a leader from each biconnected component of the LCAH graph,
we add n new nodes to this graph that correspond to the n variables. The edge
inserted if and only if the variable x is a descendant of lca(y; z). This is
accomplished by the following algorithm.
Algorithm LEAVES
in parallel for all distinct variables x;
insert edge x ! xy
if edge xy ! zw is present then
insert edge x ! zw
Lemma 7: After executing algorithm LEAVES, the edge x ! yz is present if
and only if variable x is a descendant of lca(y; z).
Both algorithms LEADER and LEAVES can be implemented in O(log n) time
with O(n 4 ) processors.
After these steps, the marked nodes are discarded from the augmented LCAH
graph (that contains n + n(n − 1)/2 nodes), resulting in a graph with at most 2n − 1
vertices that is isomorphic to the extended skeleton of the AROF. This discarding
is accomplished by a standard technique involving the computation of prefix sums.
We first adopt the convention that the order OE extends to the augmented LCAH
graph as x 1 OE \Delta \Delta \Delta OE x n and x OE yz for any variables x, y and z. Then, for each
All edges are directed towards the root.
node v, set
b(v) = 1 if v is unmarked, and b(v) = 0 if v is marked,
and compute the prefix sums
σ(v) = Σ b(u), the sum taken over all nodes u that equal or precede v in the total ordering.
With algorithms for parallel prefix sum computation ([LF80]) this can be accomplished
in O(log n) time with O(n²) processors.
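A sequential stand-in for this renumbering might look as follows (illustrative only; nodes_in_order and marked are assumed to come from the total ordering and from algorithm LEADER respectively).

def renumber_unmarked(nodes_in_order, marked):
    # sigma(v) = number of unmarked nodes up to and including v in the order;
    # restricted to unmarked nodes it is the bijection used below.
    sigma, count = {}, 0
    for v in nodes_in_order:
        if v not in marked:
            count += 1
        sigma[v] = count
    return sigma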
The function σ is a bijection between the unmarked nodes of the augmented
LCAH graph and some S ⊆ {1, ..., 2n − 1}.
The following algorithm uses the values of this function to produce the extended
skeleton of the AROF.
Algorithm COMPRESS-AND-PRUNE
in parallel for all distinct vertices u, v do
if vertices u, v are both unmarked
and edge u → v is in augmented LCAH graph then
insert edge σ(u) → σ(v) in skeleton graph
in parallel for all distinct vertices i, j, k of the skeleton graph do
if edges i → j, j → k and i → k are in skeleton graph then
remove edge i → k from skeleton graph
The following is straightforward to prove.
Lemma 8: The "skeleton" graph that COMPRESS-AND-PRUNE produces
is isomorphic to the extended skeleton of the AROF, where the inputs x_1, ..., x_n
correspond to the vertices of the graph.
5.3 Determining a Read-Once Formula from its Skeleton
Once the skeleton of an AROF is determined, what remains is to determine the
constants in its unary gates (note that the non-unary operations are easy to determine
using the techniques in [BHH92]). We show how to do this in O(log 2 n)
steps with O(n log n) processors. The main idea is to find a node that partitions
the skeleton into three parts whose sizes are all bounded by half of the size of the
skeleton. Then the unary gates are determined on each of the parts (in a recursive
manner), and the unary gates required to "assemble" the parts are computed.
The following lemma is an immediate consequence from a result in [B74].
Lemma 9 [B74]: For any formula F on n variables there exists a non-unary gate of
type ⋆ that "evenly" partitions it in the following sense. With a possible relabelling
of the indices of the variables,
F(x_1, ..., x_n) = G(H(x_1, ..., x_l) ⋆ I(x_{l+1}, ..., x_r), x_{r+1}, ..., x_n),
and the number of variables in G(y, x_{r+1}, ..., x_n), H(x_1, ..., x_l) and I(x_{l+1}, ..., x_r)
are all bounded above by ⌈n/2⌉.
A minor technicality in the above lemma is that, since the skeleton is not necessarily
a binary tree, it may be necessary to "split" a non-binary gate into two
smaller gates.
It is straightforward to obtain the above decomposition of a skeleton in NC 1 .
Once this decomposition is obtained, the recursive algorithm for computing the
unary gates of the ROF follows from the following lemma.
Lemma 10: Let x (0)
n be a total non-collapsing justifying assignment for
the AROF F
(i) Given the skeleton of F and the subformulas G(y; x
possible to determine A, B and C, and,
thus, the entire structure of F steps with O(n log n) processors
(ii) Given the skeleton of F the problem of determining G(y; x
reducible to the problem of determining a ROF
given its skeleton.
Proof: For part (i), assume that the subformulas G(y; x
and I(x are given. Since x (0)
n is a justifying assignment, G(y; x (0)
l ) are all nonconstant unary functions, so
there exist nonsingular matrices A 0 , (which are easy to determine in O(log n)
parallel steps) such that
l
Also,
so the matrices A 0 can be determined in O(1) steps, [BHH92].
From this, the matrices A, B, C can be determined.
For part (ii), consider the problem of determining G(y; x
for some nonsingular A 00 . Therefore, if we fix x l to x (0)
l then we
have a reduction from the problem of determining G(fA 00 (y); x
Similarly, we have reductions from the problem of determining f B 00 (H(x
and f C 00 (I(x Since the matrices
can be absorbed into the processing of part (i) this is sufficient.2
By recursively applying Lemmas 9 and 10, we obtain a parallel algorithm to determine
an AROF given its skeleton and a total noncollapsing three-way justifying
assignment in O(log 2 n) steps. The processor count for this can be bounded by
O(n log n).
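The even split of Lemma 9 can be located by a walk over subtree sizes. The sketch below is ours: the skeleton is assumed to be given as a children map from each node to its list of children, variables are the leaves, and the returned gate is one around which the decomposition of Lemma 9 (possibly after splitting the gate) can be formed.

def leaf_counts(children, root):
    # Number of variable leaves in each subtree of the skeleton.
    counts = {}
    def dfs(v):
        kids = children.get(v, [])
        counts[v] = 1 if not kids else sum(dfs(u) for u in kids)
        return counts[v]
    dfs(root)
    return counts

def split_gate(children, root):
    # Walk down from the root, always moving into a child that holds more
    # than half of the variables; when no child does, the current gate's
    # child subtrees and its outside part each contain at most about n/2
    # variables.
    counts = leaf_counts(children, root)
    half = counts[root] / 2.0
    v = root
    while True:
        heavy = [u for u in children.get(v, []) if counts[u] > half]
        if not heavy:
            return v
        v = heavy[0]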
--R
Machine Learning
Learning Read-Once Formulas with Queries
When Won't Membership Queries Help?
The parallel evaluation of general arithmetic expressions.
Learning arithmetic read-once formulas
On the exact learning of formulas in par- allel
Learning boolean read-once formulas with arbitrary symmetric and constant fan-in gates
Asking Questions to Minimize Errors.
An Algorithm to Learn Read-Once Threshold Formulas
A deterministic algorithm for sparse multivariate polynomial interpolation.
On the Decidability of Sparse Univariate Polynomial Interpolation.
Exact Identification of Read-Once Formulas Using Fixed Points of Amplification Functions
Fast parallel algorithms for sparse multivariate polynomial interpolation over finite fields.
Interpolation of sparse rational functions without knowing bounds on the exponent.
Learning read-once formulas over fields and extended bases
Testing polynomials that are easy to com- pute
Parallel prefix computation.
Learning Quickly When Irrelevant Attributes Abound: A New Linear Threshold Algorithm
Randomized approximation and interpolation of sparse polynomials.
On the complexity of learning from counterexamples and membership queries.
Interpolation and approximation of sparse multivariate polynomials over GF(2).
Learning sparse multivariate polynomials over a field with queries and counterexamples.
Fast polynomial algorithms for verification of polynomial identities.
A theory of the learnable.
Learning in parallel
--TR
--CTR
Amir Shpilka, Interpolation of depth-3 arithmetic circuits with two multiplication gates, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA | read-once formula;learning theory;parallel algorithm |
279128 | Localizing a Robot with Minimum Travel. | We consider the problem of localizing a robot in a known environment modeled by a simple polygon P. We assume that the robot has a map of P but is placed at an unknown location inside P. From its initial location, the robot sees a set of points called the visibility polygon V of its location. In general, sensing at a single point will not suffice to uniquely localize the robot, since the set H of points in P with visibility polygon V may have more than one element. Hence, the robot must move around and use range sensing and a compass to determine its position (i.e., localize itself). We seek a strategy that minimizes the distance the robot travels to determine its exact location.We show that the problem of localizing a robot with minimum travel is NP-hard. We then give a polynomial time approximation scheme that causes the robot to travel a distance of at most (k - 1)d, where which is no greater than the number of reflex vertices of P, and d is the length of a minimum length tour that would allow the robot to verify its true initial location by sensing. We also show that this bound is the best possible. | Introduction
Numerous tasks for a mobile robot require it to have a map of its environment and knowledge of
where it is located in the map. Determining the position of the robot in the environment is known
as the robot localization problem. To date, mobile robot research that supposes the use of a map
generally assumes either that the position of the robot is always known, or that it can be estimated
using sensor data acquired by displacing the robot only small amounts [BR93, KMK93, TA92].
However, self-similarities between separate portions of the environment prevent a robot that has
been dropped into or activated at some unknown place from uniquely determining its exact location
without moving around. This motivates a search for strategies that direct the robot to travel around
its environment and to collect additional sensory data [BD90, KB87, DJMW93] to deduce its exact
position.
In this paper, we view the general robot localization problem as consisting of two phases:
hypothesis generation and hypothesis elimination. The first phase is to determine the set H of
hypothetical locations that are consistent with the sensing data obtained by the robot at its initial
location. The second phase is to determine, in the case that H contains two or more locations (see
Figure
1), which location is the true initial position of the robot; i.e. to eliminate the incorrect
hypotheses.
Ideally, the robot should travel the minimum distance necessary to determine its exact location.
This is because the time the robot takes to localize itself is proportional to the distance it must
travel (assuming sensing and computation time are negligible in comparison). Also, the most
common devices for measuring distance, and hence position, on actual mobile robots are relative
measurement tools such as odometers. Therefore, they yield imperfect estimates of orientation,
distance and velocity, and the errors in these estimates accumulate disastrously with successive
motions [Dav86]. Our strategy is well-suited to handling the accumulation of error problem via
simple recalibration, as we will point out later.
A solution to the hypothesis generation phase of robot localization has been given by Guibas,
Motwani and Raghavan in [GMR92]. We describe this further in the next section, after making
more precise the definitions of the two phases of robot localization. Our paper is concerned with
minimizing the distance traveled in the hypothesis elimination phase of robot localization. It begins
where [GMR92] left off. Together, the two papers give a solution to the general robot localization
problem.
In this paper, we define a natural algorithmic variant of the problem of localizing a robot with
minimum travel and show this variant is NP-hard. We then solve the hypothesis elimination phase
with what we call a greedy localization strategy. To measure the performance of our strategy, we
employ the framework of competitive analysis for on-line algorithms introduced by Sleator and
Tarjan [ST85]. That is, we examine the ratio of the distance traveled by a robot using our strategy
to the length d of a minimum length tour that allows the robot to verify its true initial position.
The worst-case value of this ratio over all maps and all starting points is called the competitive
ratio of the strategy. If this ratio is no more than k, then the strategy is called k-competitive. Since
our strategy causes the robot to travel a distance no more than (k − 1)d, where k = |H| is
no greater than the number of reflex vertices of P , our strategy is (k − 1)-competitive. We also
show that no on-line localization strategy has a competitive ratio better than k − 1, and thus our
strategy is optimal.
The rest of this paper is organized as follows. In Section 2 we give a formal definition of the robot
localization problem, we define some of the terms used in the paper, and we comment on previous
work. In Section 3 we prove that given a solution set H to the hypothesis generation phase of the
localization problem that contains more than one hypothetical location, the hypothesis elimination
phase, which localizes the robot by using minimum travel distance, is NP-hard. In Section 4 we
define the geometric structures that we use to set-up our greedy localization strategy. In Section 5
we give our greedy localization strategy and prove the previously mentioned performance guarantee
of k − 1 times optimum. We also give an example of a map polygon for which no on-line localization
strategy is better than (k − 1)-competitive. Section 6 summarizes and comments on open problems.
Localization Through Traveling and Probing
In this section, we describe our robot abstraction and give some key definitions.
The most common application domain for mobile robots is indoor "structured" environments.
In such environments it is often possible to construct a map of the environment, and it is acceptable
to use a polygonal approximation P of the free space [Lat91] as a map. A common sensing method
used by mobile robots is range sensing (for example, sonar sensing or laser range sensing).
2.1 Assumptions about the robot
We assume the following throughout this paper.
ffl The robot moves in a static 2-dimensional obstacle-free environment for which it has a map.
The robot has the ability to make error-free motions between arbitrary locations in the
environment 1 . We model the movement of the robot in the environment by a point p moving
inside and along the boundary of an n-vertex simple polygon P positioned somewhere in the
plane.
ffl The robot has a compass and a range sensing device. It is essential that the robot be able to
determine its orientation (with the compass); otherwise it can never uniquely determine its
exact location in an environment with non-trivial symmetry (such as a square).
ffl The robot's sensor can detect the distances to those points on walls for which the robot has
an unobstructed straight line of sight, and the robot's observations at a particular location
determine a polygon V of points that it can see (see the next subsection for a definition of
This is analogous to what can be extracted by various real sensors such as laser range
finders. The robot also knows its location in V .
In practice, position estimation errors accrue in the execution of such motions; however the strategy we present
here is exceptionally well suited to various methods for limiting these errors using sensor feedback (see Section 5.1).
2.2 Some definitions and an example
Two points in P are visible to each other or see each other if the straight line segment joining them
does not intersect the exterior of P . The visibility polygon V (p) for a point is the polygon
consisting of all points in P that are visible from p. The data received from a range sensing device
is modeled as a visibility polygon. The visibility polygon of the initial location of the robot is
denoted by V , and the number of its vertices is denoted by m. Since the robot has a compass, we
assume that P and V have a common reference direction.
We break the general problem of localizing a robot into two phases as follows.
The Robot Localization Problem
Hypothesis Generation: Given P and V , determine the set H of all points p i 2 P such that
the visibility polygon of p i is congruent under translation to V (denoted by V (p
Hypothesis Elimination: Devise a strategy by which the robot can correctly eliminate all but
one hypothesis from H , thereby determining its exact initial location. Ideally, the robot should
travel a distance as small as possible to achieve this.
As previously mentioned, the hypothesis generation phase has been solved by Guibas, Motwani
and Raghavan. We describe their results in the next subsection. This paper is concerned with the
hypothesis elimination phase.
Consider the example illustrated in Figure 1. The robot knows the map polygon P and the
visibility polygon V representing what it can "see" in the environment from its present location.
Suppose also that it knows that P and V should be oriented as shown. The black dot represents
the robot's position in the visibility polygon. By examining P and V , the robot can determine that
it is at either point p 1 or point p 2 in P , i.e. g. It cannot distinguish between these two
locations because V (p 1 However, by traveling out into the "hallway" and taking
another probe, the robot can determine its location precisely.
Figure
1: Given a map polygon P (left) and a visibility polygon V (center), the robot must
determine which of the 2 possible initial locations p 1 and p 2 (right) is its actual location in P .
An optimal strategy for the hypothesis elimination phase would direct the robot to follow an
optimal verification tour, defined as follows.
verification tour is a tour along which a robot that knows its initial position a priori
can travel to verify this information by probing and then return to its starting position. An optimal
verification tour is a verification tour of minimum length d.
Since we do not assume a priori knowledge of which hypothetical location in H is correct, an
optimal verification tour for the hypothesis elimination phase cannot be pre-computed. Even if we
did have this knowledge, computing an optimal verification tour would be NP-hard. This can be
proven using a construction similar to that in Section 3 and a reduction to hitting set [GJ79]. For
these reasons, we seek an interactive probing strategy to localize the robot. In each step of such a
strategy, the robot uses its range sensors to compute the visibility polygon of its present position,
and from this information decides where to move next to make another probe. To be precise, the
type of strategy we seek can be represented by a localizing decision tree, defined as follows.
localizing decision tree is a tree consisting of two kinds of nodes and two kinds of
weighted edges. The nodes are either sensing nodes (S-nodes) or reducing nodes (R-nodes), and
the node types alternate along any path from the root to a leaf. Thus tree edges directed down the
tree either join an S-node to an R-node (SR-edges), or join an R-node to an S-node (RS-edges).
ffl Each S-node is associated with a position defined relative to the initial position of the robot.
The robot may be instructed to probe the environment from this position.
ffl Each R-node is associated with a set H 0 ' H of hypothetical initial locations that have not
yet been ruled out. The root is an R-node associated with H , and each leaf is an R-node
associated with a singleton hypothesis set.
ffl Each SR-edge represents the computation that the robot does to rule out hypotheses in light
of the information gathered at the S-node end of the edge. An SR-edge does not represent
physical travel by the robot and hence has weight 0.
ffl Each RS-edge has an associated path defined relative to the initial location of the robot. This
is the path along which the robot is directed to travel to reach its next sensing point. The
weight of an RS-edge is the length of its associated path.
Since we want to minimize the distance traveled by the robot, we define the weighted height of
a localizing decision tree as follows.
Definition. The weight of a root-to-leaf path in a localizing decision tree is the sum of the
weights on the edges in the path. The weighted height of a localizing decision tree is the weight of a
maximum-weight root-to-leaf path. An optimal localizing decision tree is a localizing decision tree
of minimum weighted height.
In the next section, we show that the problem of finding an optimal localizing decision tree is
NP-hard.
We call a localization strategy that can be associated with a localizing decision tree a localizing
decision tree strategy. As an example of such a strategy, consider the map polygon P shown on the
left in Figure 2.
Imagine that from the visibility polygon sensed by the robot at its initial location it is determined
[Figure 2, right: a localizing decision tree whose R-nodes are joined by RS-edges labelled "Go west d_1", "Go south d_2" and "Go south d_3".]
Figure 2: A map polygon and 4 hypothetical locations {p_1, p_2, p_3, p_4} (left) with a localizing decision
tree for determining the true initial position of the robot (right).
that the set of hypothetical locations is g. Hence the root of the localizing decision
tree (shown on the right in Figure 2) is associated with H . In the figure, the SR-edges are labeled
with the visibility polygons seen by the robot at the S-node endpoints of these edges. Assuming
that north points straight up, the strategy given by the tree directs the robot first to travel west a
distance d 1 , which is the distance between p i and p 0
and then to take another probe
at its new location. Depending on the outcome of the probe, the robot knows it is located either at
one of fp 0
2 g or at one of fp 0
g. If it is located at p 0
1 or p 0
, then the strategy directs it to travel
south a distance d 2 , which is the distance between p 0
2, to a position just past
the dotted line segment shown in P . By taking a probe from below this line segment, it will be
able to see the vertex at the end of the segment if it is at location p 00
1 , and it will not see this vertex
if it is at location p 00
2 . Thus after this probe it will be able to determine its unique location in P .
Similarly, if the robot is located at p 0
3 or p 0
4 , then the strategy directs it to travel south a distance
d 3 and take another probe to determine its initial position. The farthest that the robot must travel
to determine its location is so the weighted height of this decision tree is d 1
2.3 Previous work
Previous work on robot localization by Guibas, Motwani, and Raghavan [GMR92] showed how to
preprocess a map polygon P so that given the visibility polygon V that a robot sees, the set of
points in P whose visibility polygon is congruent to V , and oriented the same way, can be returned
quickly. Their algorithm preprocesses P in O(n 5 log n) time and O(n 5 ) space, and it answers queries
in O(m is the number of vertices of P , m is the number of vertices of
V , and k is the size of the output (the number of places in P at which the visibility polygon is V ).
They also showed how to answer a single localization query in O(mn) time with no preprocessing.
Kleinberg [Kle94b] has independently given an interactive strategy for localizing a robot in a
known environment. As in our work, he seeks to minimize the ratio of the distance traveled by a
robot using his strategy to the length of an optimal verification path (i.e. the competitive ratio).
Kleinberg's model differs from ours in several ways. First of all, he models the robot's environment
as a geometric tree rather than a simple polygon. A geometric tree is a pair (V; E), where V is a
finite point set in R d and E is a set of line segments whose endpoints all lie in V . The edges
do not intersect except at points of V and do not form cycles. Kleinberg only considers geometric
trees with bounded degree 1. Also, his robot can make no use of vision other than to know
the orientation of all edges incident to its current location. Using this model, Kleinberg gives an
O(n 2=3 )-competitive algorithm for localizing a robot in a geometric tree with bounded degree 1,
where n is the number of branch vertices (vertices of degree greater than two) of the tree.
The competitive ratio of Kleinberg's algorithm appears to be better than the lower bound
illustrated by Figure 10 in Section 5.3. However, if this map polygon were modeled as a geometric
tree it would have degree n, where n is the number of branch vertices, rather than a constant degree,
and the distance traveled by a robot using Kleinberg's algorithm can be linear in the degree of the
tree. If Kleinberg's algorithm ran on this example, it would only execute Step 1, which performs
a spiral search, and it would cause the robot to travel a distance almost 4n times the length of an
optimal verification path. Our algorithm causes the robot to travel a distance less than 2n times
the length of an optimal verification path on this example. Our algorithm is similar to Step 3
of Kleinberg's algorithm, and he gives a lower bound example (Figure 3 of [Kle94b]) illustrating
that an algorithm using only Steps 1 and 3 of his algorithm is no better than O(n)-competitive.
Although this example does not directly apply to our model since the robot in our model has the
ability to see to the end of the hallway, by adding small jogs in the hallway a similar example can
be constructed where our strategy is no better than O(n)-competitive. In this example, the number
of branch vertices of the geometric tree represented by P would be n and the number of vertices of
P would be O(n). However, in this example jH does not contradict our results.
Other theoretical work on localizing a robot in a known environment has also been done. Betke
and Gurvits [BG94] gave an algorithm that uses the angles subtended by landmarks in the robot's
environment to localize a robot. Their algorithm runs in time linear in the number of landmarks,
and it assumes that a correspondence is given between each landmark seen by the robot and
a point in the map of the environment. Avis and Imai [AI90] also investigated the problem of
localizing a robot using angle measurements, but they did not assume any correspondence between
the landmarks seen by the robot and points in the environment. Instead they assumed that the
environment contains n identical markers, and the robot takes k angle measurements between an
unknown subset of these markers. They gave polynomial time algorithms to determine all valid
placements of the robot, both in the case where the robot has a compass and where it does not. In
addition they showed that with polynomial-time preprocessing location queries can be answered in
O(log n) time.
Theoretical work with a similar flavor to ours has also been done on navigating a robot through
an unknown environment. In this work a point robot must navigate from a point s to a target t,
which is either a point or an infinite wall, where the Euclidean distance from s to t is n. There are
obstacles in the scene, which are not known a priori, but which the robot learns about only as it
encounters them. The goal is to optimize (i.e. minimize) the ratio of the distance traveled by the
robot to the length of a shortest obstacle-free path from s to t. As with localization strategies, the
worst-case ratio over all environments where s and t are distance n apart is called the competitive
ratio of the strategy.
Papadimitriou and Yannakakis [PY91] gave a deterministic strategy for navigating between two
points, where all obstacles are unit squares, that achieves a competitive ratio of 1.5, which they
show is optimal. For squares of arbitrary size they gave a strategy achieving a ratio of √26/3. They also showed, along with Eades, Lin and Wormald [ELW93], that when t is an infinite wall and the obstacles are oriented rectangles, there is a lower bound of Ω(√n) on the ratio achievable by any deterministic strategy.
Blum, Raghavan and Schieber [BRS91] gave a deterministic strategy that matched the Ω(√n) lower bound for navigating between two points with oriented, rectangular obstacles. Their strategy
combines strategies for navigating from a point to an infinite wall and from a point on the wall of a room to the center of the room, with competitive ratios of O(√n) and O(2^(3√(log n))) respectively.
The competitive ratio for the problem of navigating from a corner to the center of a room was
improved to O(ln n) by a strategy of Bar-Eli, Berman, Fiat and Yan [BEBFY92], who also showed
that this ratio is a lower bound for any deterministic strategy. Berman et al. [BBF+] gave a randomized algorithm for the problem of navigating between two points with oriented, rectangular obstacles with a competitive ratio of O(n^(4/9) log n).
Several people have studied the problem of navigating from a vertex s to a vertex t inside an
unknown simple polygon. They assume that at every point on its path the robot can get the
visibility polygon of that point. Klein [Kle92] proved a lower bound of √2 on the competitive ratio and gave a strategy achieving a ratio of 5.72 for the class of street polygons. A street is a simple polygon such that the clockwise chain L and the counterclockwise chain R from s to t are mutually weakly visible. That is, every point on L is visible to some point on R and vice versa. Kleinberg [Kle94a] gave a strategy that improved Klein's ratio to 2√2, and Datta and Icking [DI94] gave a
strategy with a ratio of 9.06 for a more general class of polygons called generalized streets, where
every point on the boundary is visible from a point on a horizontal line segment joining L and R.
They also showed a lower bound of 9 for this class of polygons.
Previous work in the area of geometric probing has examined the complexity of constructing
minimum height decision trees to uniquely identify one of a library of polygons in the plane using
point probes. Such probes examine a single point in the plane to determine if an object is located at
that point. If each polygon in the library is given a fixed position, orientation and scale, then it has
been shown that both the problem of finding a minimum cardinality probe set (for a noninteractive
probing strategy) [BS93] and the problem of constructing a minimum height decision tree for
probing (for an interactive strategy) [AMM + 93] are NP-Complete. Arkin et al. [AMM + 93] gave a greedy strategy that builds a decision tree of height at most ⌈log k⌉ times that of an optimal decision tree, where k is the number of polygons in the library. The minimum height decision tree
used for probing in [AMM + 93] is different than our localizing decision tree. It is a binary decision
tree whose internal nodes represent point probes whose outcome is either positive or negative and
whose edges are unweighted. The height of such a decision tree is the number of levels of the tree,
and it represents the maximum number of probes necessary to identify any polygon in the library.
3 Hardness of Localization
In this section we show that the problem of constructing an optimal localizing decision tree, as
defined in the previous section, is NP-hard. To do this, we first formulate the problem as a decision
problem.
Robot-Localizing Decision Tree (RLDT)
INSTANCE: A simple polygon P and a star-shaped polygon V, both with a common reference direction, the set H of all locations in P whose visibility polygon is V, and a positive integer h.
QUESTION: Does there exist a localizing decision tree of weighted height less than or equal to h
that localizes a robot with initial visibility polygon V in the map polygon P?
We show that this problem is NP-hard by giving a reduction from the Abstract Decision
Tree problem, proven NP-complete by Hyafil and Rivest in [HR76]. The Abstract Decision
Tree problem is stated as follows:
Abstract Decision Tree (ADT)
INSTANCE: A set X of objects, a set T of subsets of X representing binary tests, where test T_j is positive on object x_i if x_i ∈ T_j and is negative otherwise, and a positive integer h'.
QUESTION: Does there exist an abstract decision tree of height less than or equal to h', where
the height of a tree is the maximum number of edges on a path from the root to a leaf, that can
be constructed to identify the objects in X?
An abstract decision tree has a binary test at all internal nodes and an object at every leaf. To
identify an unknown object, the test at the root is performed on the object, and if it is positive the
right branch is taken, otherwise the left branch is taken. This procedure is repeated until a leaf is
reached, which identifies the unknown object.
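For concreteness, the following small C sketch walks such an abstract decision tree; the encoding of tests as bit vectors and all names are our own illustration, not part of the ADT formulation.

    #include <stdio.h>

    /* Hypothetical encoding: test j is positive on object i iff (tests[j] >> i) & 1. */
    typedef struct Node {
        int test;                 /* index of the test at an internal node, -1 at a leaf */
        int object;               /* object identified at a leaf, -1 otherwise */
        struct Node *neg, *pos;   /* left = negative outcome, right = positive outcome */
    } Node;

    /* Identify 'unknown' by repeatedly applying the test stored at the current node. */
    static int identify(const Node *n, const unsigned tests[], int unknown)
    {
        while (n->test >= 0) {
            int positive = (tests[n->test] >> unknown) & 1;
            n = positive ? n->pos : n->neg;
        }
        return n->object;
    }

    int main(void)
    {
        /* Two tests over three objects 0,1,2: T0 = {1,2}, T1 = {2}. */
        unsigned tests[2] = { 0x6, 0x4 };
        Node leaf0 = { -1, 0, NULL, NULL };
        Node leaf1 = { -1, 1, NULL, NULL };
        Node leaf2 = { -1, 2, NULL, NULL };
        Node inner = { 1, -1, &leaf1, &leaf2 };   /* reached only if T0 was positive */
        Node root  = { 0, -1, &leaf0, &inner };
        printf("identified: %d\n", identify(&root, tests, 2));  /* prints 2 */
        return 0;
    }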
Theorem 1 RLDT is NP-hard.
Proof: Given an instance of ADT, we create an instance of RLDT as follows. We construct P to
be a staircase polygon, with a stairstep for each object x_i (see Figure 3). For each stairstep we construct n protrusions, one for each test in T (see Figure 4). If test T_j is a positive test for object x_i, then protrusion T_j on stairstep x_i has an extra hook on its end (such as T_3, T_4, and T_n in Figure 4). The length of a protrusion is denoted by l and the distance between adjacent protrusions is denoted by d, where d and l are chosen so that dh' < l. The vertical piece between adjacent stairsteps is longer than (2l + d)h', and the width w of each stairstep is much smaller than the other measurements. The polygon P has O(nk) vertices, where n is the number of tests and k is the number of objects.
Consider a robot that is initially located at the shaded circle shown in Figure 4 on one of the k
stairsteps. The visibility polygon V at this point has O(n) vertices and is the same at an analogous
point on any internal stairstep x_i. We output the polygons P and V, which can be constructed in polynomial time, the k locations (one such point on each stairstep), and the weighted height bound h = (2l + d)h' as an instance of RLDT.
In order for the robot to localize itself, it must either travel to one of the "ends" of P (either the
top or the bottom stairstep) to discover on which stairstep it was located initially, or it must examine a sufficient number of the n protrusions on the stairstep where it is located to distinguish that stairstep from all the others. Since the vertical piece of each stairstep is longer than h = (2l + d)h', only a strategy that directs the robot to remain on the same stairstep can lead to a decision tree of weighted height less than or equal to h.
Figure 3: Construction showing localization is NP-hard.
Any decision tree that localizes the robot by examining protrusions on the stairstep corresponds
to an equivalent abstract decision tree to identify the objects of X using tests in T, and vice versa.
Each time the robot travels to the end of protrusion T j to see if it has an extra hook on its end,
it corresponds to performing binary test T j on an unknown object to observe the outcome. The
robot must travel 2l to perform this test, and it travels at most d in between tests. Therefore, if the robot can always localize itself by examining no more than h' protrusions, then it has a decision tree of weighted height no more than (2l + d)h' = h, which corresponds to an abstract decision tree of height h' for the ADT problem. Since dh' < l, in a localizing decision tree of weighted height ≤ h the robot cannot examine more than h' protrusions on any root-to-leaf path. ∎
Figure 4: Close-up of a stairstep x_i in the NP-hard construction. Not to scale: l >> d >> w.
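The reduction leaves some freedom in the choice of lengths; the following C sketch (our illustration, not from the paper) picks one admissible set of values for a given ADT height bound h' and checks the inequalities used in the proof.

    #include <stdio.h>

    int main(void)
    {
        long hp = 5;                    /* the ADT height bound h' */
        long d  = 1;                    /* distance between adjacent protrusions */
        long l  = d * hp + 1;           /* protrusion length; guarantees d*h' < l */
        long h  = (2 * l + d) * hp;     /* weighted height bound handed to RLDT */
        long vertical = h + 1;          /* vertical piece between stairsteps, > h */

        printf("h' = %ld: d = %ld, l = %ld, h = %ld, vertical piece = %ld\n",
               hp, d, l, h, vertical);
        /* examining h'+1 protrusions would cost at least 2*l*(h'+1), which exceeds h */
        printf("2*l*(h'+1) = %ld > h = %ld\n", 2 * l * (hp + 1), h);
        return 0;
    }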
4 Using a Visibility Cell Decomposition for Localization
In this section we discuss the geometric issues involved in building a data structure for our greedy
localization strategy.
4.1 Visibility cells and the overlay arrangement
When we consider positions where the robot can move to localize itself, we reduce the infinite
number of locations in P to a finite number by first creating a visibility cell decomposition of P
[Bos91, BLM92, GMR92]. A visibility cell (or visibility region) C of P is a maximally connected
subset of P with the property that any two points in C see the same subset of vertices of P
([Bos91, BLM92]). A visibility cell decomposition of P is simply a subdivision of P into visibility
cells. This decomposition can be computed in O(n^3 log n) using techniques in [Bos91, BLM92]. It
is created by introducing O(nr) line segments, called visibility edges, into the interior of P , where
r is the number of reflex vertices 2 of P . Each line segment starts at a reflex vertex u, ends at the
boundary of P , and is collinear with a vertex v that is either visible from u or is adjacent to it. The
number of cells in this decomposition, as well as their total complexity, is O(n^2 r) (see [GMR92] for
a proof).
Although two points p and q in the same visibility cell C see the same subset of vertices of P ,
they may not have the same visibility polygon (i.e. it may be that V(p) ≠ V(q)). This is because
some edges of V (p) may not actually lie on the boundary of P (these edges are collinear with p
and are produced by visibility lines), so these edges may be different in V (q). Therefore, in order
to represent the portion of P visible to a point p in a visibility cell C in such a way that all points
in C are equivalent, we need a different structure than the visibility polygon. The structure that
we use is the visibility skeleton of p.
Definition. The visibility skeleton V*(p) of a location p is the skeleton of the visibility polygon V(p). That is, it is the polygon induced by the non-spurious vertices of V(p), where a spurious vertex of V(p) is one that lies on an edge of V(p) that is collinear with p, and the other endpoint of this edge is closer to p. The non-spurious vertices of V(p) are connected to form V*(p) in the same cyclical order that they appear in V(p). The edges of the skeleton are labeled to indicate which ones correspond to real edges from P and which ones are artificial edges induced by the spurious vertices. If p is outside P, then V*(p) is equal to the special symbol ∅.
For a complete discussion of visibility skeletons and a proof that V*(p) = V*(q) for any two points p and q in the same visibility cell, see [Bos91, BLM92, GMR92].
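A minimal C sketch of one possible representation of a visibility skeleton (ours, not the one used in [Bos91, BLM92, GMR92]): the vertices are stored in cyclic order with a real/artificial flag per edge, together with the kind of equality test needed later when skeletons are compared.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { double x, y; } Point;

    typedef struct {
        size_t n;          /* number of non-spurious vertices; n == 0 encodes the symbol "empty" */
        Point  *vertex;    /* vertices in the cyclic order inherited from V(p) */
        bool   *real_edge; /* real_edge[i]: edge from vertex i to vertex (i+1) mod n lies on P */
    } Skeleton;

    /* Two skeletons are identical if they list the same vertices in the same order with the
       same real/artificial labelling.  (A floating-point implementation would compare the
       coordinates up to a small tolerance.) */
    static bool skeletons_equal(const Skeleton *a, const Skeleton *b)
    {
        if (a->n != b->n) return false;
        for (size_t i = 0; i < a->n; i++) {
            if (a->vertex[i].x != b->vertex[i].x) return false;
            if (a->vertex[i].y != b->vertex[i].y) return false;
            if (a->real_edge[i] != b->real_edge[i]) return false;
        }
        return true;
    }

    int main(void)
    {
        Point v[3] = { {0,0}, {4,0}, {0,3} };
        bool  r[3] = { true, true, false };
        Skeleton a = { 3, v, r }, b = { 3, v, r };
        return skeletons_equal(&a, &b) ? 0 : 1;
    }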
(A reflex vertex of P is a vertex that subtends an angle greater than 180°.)
As stated in Section 2, the hypothesis generation phase of the robot localization problem generates a set H ⊆ P of hypothetical locations at which the robot might be located initially. The number k of such locations is bounded above by r (see [GMR92] for a proof). From
this set H , we can select the first location p 1 (or any arbitrary location) to serve as an origin
for a local coordinate system. For each location p_j ∈ H we define the translation vector t_j that translates location p_j to location p_1, and we define P_j to be the translate of P by vector t_j. We thus have a set {P_1, ..., P_k} of translates of P corresponding to the set H of hypothetical locations. The point in each P_j corresponding to the hypothetical location p_j is located at the origin.
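Setting up the translates is straightforward once the local coordinate system is fixed; the following C sketch uses a data layout of our own choosing for illustration.

    #include <stdio.h>
    #include <stddef.h>

    typedef struct { double x, y; } Point;

    /* Translate the n map-polygon vertices 'poly' by t_j = p1 - p_j and store the result in
       'out'; afterwards the hypothetical location p_j coincides with p1, which serves as the
       origin of the local coordinate system. */
    static void make_translate(const Point *poly, size_t n,
                               Point p1, Point pj, Point *out)
    {
        Point t = { p1.x - pj.x, p1.y - pj.y };   /* translation vector t_j */
        for (size_t i = 0; i < n; i++) {
            out[i].x = poly[i].x + t.x;
            out[i].y = poly[i].y + t.y;
        }
    }

    int main(void)
    {
        Point poly[4] = { {0,0}, {4,0}, {4,4}, {0,4} };   /* toy map polygon */
        Point p1 = {1,1}, p2 = {3,1};                     /* two hypothetical locations */
        Point P2[4];
        make_translate(poly, 4, p1, p2, P2);              /* translate for location p2 */
        printf("first vertex of P2: (%g, %g)\n", P2[0].x, P2[0].y);  /* (-2, 0) */
        return 0;
    }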
In order to determine the hypothetical location corresponding to the true initial location of the
robot, we construct an overlay arrangement A that combines the k translates P j that correspond
to the hypothetical locations, together with their visibility cell decompositions. More formally, we
define A as follows.
Definition. The overlay arrangement A for the map polygon P corresponding to the set of
hypothetical locations H is obtained by taking the union of the edges of each translate P j as well
as the visibility edges in the visibility cell decomposition of P j .
See Figure 5 for an example of an overlay arrangement. Since each visibility cell decomposition is created from O(nr) line segments introduced into the interior of P_j, a bound on the total number of cells in the overlay arrangement as well as their total complexity is O(k^2 n^2 r^2), which may be O(n^6).
Figure 5: A visibility polygon, a map polygon and the corresponding overlay arrangement.
4.2 Lower bound on the size of the overlay arrangement
Figure 6 shows a map polygon P whose corresponding overlay arrangement for the visibility polygon shown in Figure 7(a) has Ω(n^5) cells. This polygon has a long horizontal "hallway" with k identical, equally spaced "rooms" on the bottom side of it (see Figure 6). Each room has width 1 unit, and the distance between rooms is 2k - 1 units. If the robot is far enough inside one of these rooms so that it cannot see any of the rooms on the top of the hallway, then its visibility polygon is the same no matter which room it is in. The k - 1 rooms on the top side of the hallway are identical, have width 1 unit, and are spaced 2k + 1 units apart. Each top room is between two bottom rooms. The i-th top room from the left has its left edge a distance 2i - 1 to the right of the right edge of the bottom room to its left, and it has its right edge a distance 2(k - i) - 1 to the left of the left edge of the bottom room to its right (see Figure 6).
Figure 6: A map polygon whose overlay arrangement has Ω(n^5) cells.
Figure 7: (a) A visibility polygon (b) Visibility cells in a bottom room
Consider the visibility edges starting from the reflex vertices of the bottom rooms that are
generated by (i.e. collinear with) the reflex vertices of the top rooms. The i th bottom room from
the left will have 2(k - i) such visibility edges starting from its right reflex vertex and 2(i - 1) starting
from its left reflex vertex. Due to the spacing of the top rooms, the visibility edges starting from
the reflex vertices of one bottom room will be at different angles than those in any other bottom
room. See the picture in Figure 7(b) for an illustration of the visibility cells inside a bottom room.
When the overlay arrangement A for the visibility polygon shown in Figure 7(a) is constructed,
it will consist of k translates, one for each of the bottom rooms of P . Since these rooms are identical
and equally spaced, A will have 2k - 1 rooms on its bottom side. Since the visibility edges inside each bottom room are at different angles, these edges will not coincide when bottom rooms from two different translates overlap in A. This means that A will have Ω(k^2) visibility edges starting from the left reflex vertex of each of these bottom rooms and Ω(k^2) edges starting from the right reflex vertex, resulting in Ω(k^4) cells inside each of these bottom rooms of A. Therefore, A will have Ω(k^5) cells in total. Since the number of vertices of P is 8k, A has Ω(n^5) cells.
Closing the gap between the upper and lower bounds on the size of the arrangement is an open
problem.
4.3 The reference point set Q
Each cell in the overlay arrangement A represents a potential probe position, which can be used to
distinguish between different hypothetical locations of the robot. For each cell C of A and for each
translate P_j that contains C, there is an associated visibility skeleton V*_j(C). If two translates P_i and P_j have different skeletons for cell C, or if C is outside of exactly one of P_i and P_j, then C distinguishes p_i from p_j.
For our localization strategy we choose a set Q of reference points in A that will be used to
distinguish between different hypothetical locations. For each cell C in A that lies in at least one
translate of P , and for each translate P j that contains C, let q C;j denote the point on the boundary
of C that is closest to the origin. Here, the distance d j (q C;j ) from the origin to the closest point
in C is measured inside P_j. We choose Q = {q_C,j}. In the remainder of this paper we drop the subscripts from q_C,j when they are not necessary.
Computing the reference points in Q involves computing Euclidean shortest paths in P j from
the origin to each cell C. To compute these paths we can use existing algorithms in the literature
for shortest paths in simple polygons. We first compute for each hypothetical initial location p j
the shortest path tree from the origin to all of the vertices of P j in linear time using the algorithm
given in [GHL + 87]. This algorithm also gives a data-structure for storing the shortest path tree
so that the length of the shortest path from the origin to any point x 2 P j can be found in time
O(log n) and the path from the origin to x can be found in time O(log n+ l), where l is the number
of segments along this path. We can use this data-structure later to extract the shortest path to
any cell C in A within any translate P j .
We use π(p_1, x) to denote the shortest path from the origin to x in P_j. To find the shortest
path from the origin to a segment xy contained in P j we use the following theorem.
Theorem 2 If P is a simple polygon, then the Euclidean shortest path π(s, xy) from a point s in P to a line segment xy in P is either the shortest path π(s, x) from s to x, the shortest path π(s, y) from s to y, or a polygonal path with l edges such that the first l - 1 edges are the first l - 1 edges on either π(s, x) or π(s, y) and the last edge is perpendicular to xy.
Proof: The theorem follows from standard geometry results. We sketch the proof here. It is shown
in [LP84] that the shortest paths π(s, x) and π(s, y) are polygonal paths whose interior vertices are vertices of P, and if v is the last common point on these two paths, then π(v, x) and π(v, y) are both outward-convex (i.e. the convex hull of each of these subpaths lies outside the region bounded by π(v, x), π(v, y) and the segment xy). As in [LP84] we call the union π(v, x) ∪ π(v, y) the funnel associated with xy, and we call v the cusp of the funnel. See Figure 8 for an example of a simple polygon with edges of this funnel shown as dashed line segments.
The shortest path π(s, xy) has π(s, v) as its initial subpath. To complete the shortest path π(s, xy) we must find a shortest path π(v, xy). If v has a perpendicular line of sight to xy, then this visibility line will be π(v, xy). If v does not have a perpendicular line of sight to xy, then consider the edge e adjacent to v on the funnel that is the closest to perpendicular. Without loss of generality, assume e is the first edge on π(v, y). The path π(v, xy) will follow π(v, y) until it reaches y or it reaches a vertex that has a perpendicular line of sight to xy. ∎
Figure 8: A simple polygon with shortest paths from s to x, y and xy shown.
Using this theorem we can in O(n) time determine the length of the shortest path in P j from
the origin o to xy and the closest point on xy to o. We first use the data-structure in [GHL + 87] to
determine in O(log n) time the length d_x and the last edge e_x on the shortest path π(o, x) and the length d_y and the last edge e_y on the shortest path π(o, y). For each of these edges we check its angle with respect to xy. Note that both of these angles cannot be 90° or greater, or else it would be impossible to form a funnel with π(o, x) and π(o, y). If the angle between e_x (e_y) and xy is at least 90°, then we return d_x (d_y) as the shortest distance to xy and x (y) as the closest point on xy.
If both the angles formed by e_x and e_y with xy are less than 90°, then the last edge on the shortest path π(o, xy) will be a perpendicular drawn from one of the vertices on the funnel associated with xy. To find this edge we again use the data-structure in [GHL + 87] to examine the edges of the funnel in order, starting with e_x. For each edge we calculate the angle formed by its extension with xy. That is, for each edge (u, w) whose extension intersects xy at point z, we calculate the angle ∠uzy. As we move around the funnel these angles increase. When the angle becomes greater than 90°, we have found the vertex from which to drop a perpendicular to xy. It takes O(n) time to find this vertex, and an additional O(log n) time to calculate the distance to xy and the closest point (this is the time it takes to determine the length of the shortest path from o to this vertex).
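Both 90° tests and the final perpendicular drop reduce to dot products; the helpers below are a small C sketch of that arithmetic (the names are ours, not taken from the paper).

    #include <stdio.h>

    typedef struct { double x, y; } Point;

    static double dot(Point a, Point b) { return a.x * b.x + a.y * b.y; }
    static Point  sub(Point a, Point b) { Point d = { a.x - b.x, a.y - b.y }; return d; }

    /* The angle at x between the incoming edge u->x and the segment x->y is >= 90 degrees
       exactly when the two direction vectors do not point into the same half-plane. */
    static int angle_at_least_90(Point u, Point x, Point y)
    {
        return dot(sub(x, u), sub(y, x)) <= 0.0;
    }

    /* Foot of the perpendicular from v onto the line through x and y. */
    static Point perpendicular_foot(Point v, Point x, Point y)
    {
        Point d = sub(y, x);
        double t = dot(sub(v, x), d) / dot(d, d);
        Point f = { x.x + t * d.x, x.y + t * d.y };
        return f;
    }

    int main(void)
    {
        Point u = {-2,2}, x = {0,0}, y = {4,0}, v = {1,1};
        Point f = perpendicular_foot(v, x, y);
        printf("angle >= 90: %d, foot = (%g, %g)\n",
               angle_at_least_90(u, x, y), f.x, f.y);   /* prints: angle >= 90: 0, foot = (1, 0) */
        return 0;
    }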
To compute the reference point q C;j , we compute the shortest path distance in P j from the
origin to each edge of C. We then choose the smallest distance as the distance to the cell C. For
each cell C we will have up to k reference points {q_C,1, ..., q_C,k} and their corresponding distances {d_1(q_C,1), ..., d_k(q_C,k)}. We define d_j(q_C,j) = ∞ for points q_C,j not within P_j.
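In code this selection is a minimum over the cell's edges, with infinity recording translates that do not contain the cell; the sketch below assumes the per-edge shortest-path results have already been computed by the procedure just described (the data layout is our own).

    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y; } Point;
    typedef struct { double dist; Point closest; } EdgeResult;

    /* Pick the reference point q_C,j of cell C relative to translate P_j from the
       shortest-path results for each of the cell's m edges.  If C is not contained
       in P_j, report distance infinity. */
    static EdgeResult reference_point(const EdgeResult *per_edge, int m, int contained)
    {
        EdgeResult best = { INFINITY, { 0.0, 0.0 } };
        if (!contained) return best;
        for (int i = 0; i < m; i++)
            if (per_edge[i].dist < best.dist)
                best = per_edge[i];
        return best;
    }

    int main(void)
    {
        EdgeResult edges[3] = { { 5.0, {2,1} }, { 3.5, {1,2} }, { 4.2, {0,3} } };
        EdgeResult q = reference_point(edges, 3, 1);
        printf("d_j(q_C,j) = %g at (%g, %g)\n", q.dist, q.closest.x, q.closest.y);
        return 0;
    }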
4.4 Partition of H
For each cell C we compute a partition of H that represents which hypothetical locations can be
distinguished from one another by probing from inside C. If two translates P i and P j have the
same visibility skeleton for cell C, or if C is outside of both P i and P j , then p i and p j are in the
same subset of the partition of H corresponding to cell C.
Since the visibility polygon and the visibility skeleton for a point can be computed in O(n) time (see [GA81]) and we can compare two visibility skeletons with m vertices in O(m) time to see if they are identical, we can compute the partition of H for C in O(kn + k^2 m) time, where m is the maximum number of vertices on any of the k visibility skeletons.
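Once the k skeletons for a cell have been computed, building the partition is a simple grouping step; the C sketch below assumes each skeleton (or "outside") has already been mapped to an integer identifier, an abstraction introduced only for illustration.

    #include <stdio.h>

    /* skel_id[i] is an identifier for the skeleton of translate P_i at cell C, with a
       reserved value (here -1) meaning "C lies outside P_i".  Two hypothetical locations
       fall in the same subset of the partition exactly when their identifiers are equal.
       group[i] receives the subset index of location i; the number of subsets is returned. */
    static int partition_for_cell(const int *skel_id, int k, int *group)
    {
        int groups = 0;
        for (int i = 0; i < k; i++) {
            group[i] = -1;
            for (int j = 0; j < i; j++)
                if (skel_id[j] == skel_id[i]) { group[i] = group[j]; break; }
            if (group[i] < 0) group[i] = groups++;
        }
        return groups;
    }

    int main(void)
    {
        int ids[5] = { 7, -1, 7, 3, -1 };   /* skeleton ids for k = 5 hypotheses */
        int group[5];
        int g = partition_for_cell(ids, 5, group);
        printf("%d subsets:", g);
        for (int i = 0; i < 5; i++) printf(" %d", group[i]);
        printf("\n");                        /* prints: 3 subsets: 0 1 0 2 1 */
        return 0;
    }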
Although there may be O(n^6) cells in the overlay arrangement A, yielding up to O(kn^6) reference points, we show in Section 5.4 that only O(k^2) reference points are needed for our localization strategy, so we do not need to compute a partition of H for all O(n^6) cells.
5 A Greedy Strategy for Localization
In this section we present a localizing decision tree strategy, called Minimum Distance Localization
Strategy or Strategy MDL for short, for completing the solution of the hypothesis elimination phase
of the robot localization problem. Our strategy, which has a greedy flavor, uses the set Q of
reference points described in the previous section for choosing probing locations. Strategy MDL
has a competitive ratio of k - 1, where k = |H|.
In devising a localizing decision tree strategy, there are two main criteria to consider when
deciding where the robot should make the next probe: (1) the distance to the new probe position,
and (2) the information to be gained at the new probe position. It is easy to see that a strategy
that only considers the second criterion can do arbitrarily worse than an optimal localizing decision
tree strategy. Strategy MDL considers (2) only to the extent that it never directs the robot to
make a useless probe. Nevertheless, its performance is the best possible. Although it would seem
beneficial to weight each possible probe location with the amount of information that could be
gained in the worst case by probing at that location, this change will not improve the worst-case
behavior of Strategy MDL, as the lower bound example given in Section 5.3 illustrates.
Even a strategy that considers both the distance and the information criteria when choosing
the next probe position can do poorly. For example, if the robot employs an incremental strategy
that at each step tells it to travel to the closest probe location that yields some information, then
a map polygon can be constructed such that in the worst case the robot will travel distance 2^k d.
Using Strategy MDL for hypothesis elimination, a strategy for the complete robot localization
problem can be obtained as follows. Preprocess the map polygon P using a method similar to
that in [GMR92]. This preprocessing yields a data structure that stores for each equivalence class
of visibility polygons either the location in P yielding that visibility polygon, if there is only one
location, or a localizing decision tree that tells the robot how to travel to determine its true initial
location.
5.1 Strategy MDL
In this subsection we present the details of Strategy MDL. Using the results of Section 4, it is
possible to pre-compute Strategy MDL's entire decision tree. However, for ease of exposition we
will only describe how the strategy directs the robot to behave on a root-to-leaf path in the tree.
In practice, it may also sometimes be preferable not to pre-compute the entire tree, but rather to
compute the robot's next move on an interactive basis, as the robot carries out the strategy.
Strategy MDL uses the map polygon P , the set H generated in the hypothesis generation phase,
and the set Q of reference points defined in Section 4.3. Also, for each point q C;j 2 Q the strategy
uses the distance d j (q C;j ) of q C;j from the origin, a path path j (q C;j ) within P j of length d j (q C;j ),
and the partition of H associated with cell C, as defined in Section 4.3.
Next we describe how Strategy MDL directs the robot to behave. Initially, the set of hypothetical
locations used by Strategy MDL is the given set H . As the robot carries out the strategy,
hypothetical locations are eliminated from H . Thus in our description of Strategy MDL, we abuse
notation and use H to denote the shrinking set of active hypothetical locations; i.e. those that have
not yet been ruled out. Similarly, we use Q to denote the shrinking set of active reference points; i.e.
those that non-trivially partition the set of active hypothetical locations. We call a path path j (q)
active if p j 2 H and q 2 Q are both active. We let d 3 (q3) denote the minimum of f d j (q) j q 2 Q
and are active g and let path 3 (q3) denote an active path of length d 3 (q3).
From the initial H and Q, an initial path 3 (q3) can be selected. The strategy directs the robot
to travel along this path and to make a probe at its endpoint. The robot then uses the information
gained at the probe position to update H and Q. The strategy then directs the robot to retrace
its path back to the origin and repeat the process until the size of H shrinks to 1.
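At this level of abstraction the loop of Strategy MDL is quite short. The C sketch below simulates it, with integer "skeleton identifiers" standing in for probe outcomes; the data layout, the concrete numbers, and the way travel is merely accounted for are all our own illustration.

    #include <stdio.h>

    #define K 4   /* hypothetical locations */
    #define M 5   /* reference points */

    /* For reference point q: dist[q] is d_j(q) for its owner translate P_j (owner[q] = j),
       and sig[q][i] is the skeleton id hypothesis i would make the robot observe at q
       (-1 if q lies outside P_i).  truth is the robot's real initial location. */
    static const int    owner[M]  = { 0, 1, 2, 3, 0 };
    static const double dist[M]   = { 2.0, 3.0, 1.5, 4.0, 2.5 };
    static const int    sig[M][K] = {
        { 1, 1, 2, 2 }, { 3, 4, 4, 3 }, { 5, 5, 5, 6 }, { 7, 8, 7, 8 }, { 9, 9, 9, 9 }
    };

    /* Does q non-trivially partition the set of active hypotheses? */
    static int distinguishes(int q, const int *active)
    {
        int first_seen = -2;
        for (int i = 0; i < K; i++) {
            if (!active[i]) continue;
            if (first_seen == -2) first_seen = sig[q][i];
            else if (sig[q][i] != first_seen) return 1;
        }
        return 0;
    }

    int main(void)
    {
        int active[K] = { 1, 1, 1, 1 }, remaining = K, truth = 2;
        double travelled = 0.0;

        while (remaining > 1) {
            int best = -1;
            for (int q = 0; q < M; q++)           /* cheapest active reference point */
                if (active[owner[q]] && distinguishes(q, active))
                    if (best < 0 || dist[q] < dist[best]) best = q;
            if (best < 0) break;                   /* Q exhausted; should not happen (cf. Theorem 4) */

            travelled += 2.0 * dist[best];         /* go there, probe, come back */
            int seen = sig[best][truth];           /* simulated probe outcome */
            for (int i = 0; i < K; i++)
                if (active[i] && sig[best][i] != seen) { active[i] = 0; remaining--; }
        }
        for (int i = 0; i < K; i++)
            if (active[i]) printf("localized at hypothesis %d, travelled %g\n", i, travelled);
        return 0;
    }

On this input the robot probes the cheapest informative point first and needs two probes to localize itself.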
Note that Strategy MDL is well-suited to handling the problem of accumulation of errors caused
by successive motions in the estimates of orientation, distance and velocity made by the robot's
sensors. This is because the robot always returns to the origin after making a probe, so it can
recalibrate its sensors.
5.2 A performance guarantee for Strategy MDL
The following theorems show that Strategy MDL is correct and has a competitive ratio of k - 1.
First we show that Strategy MDL never directs the robot to pass through a wall. Then we show that
Strategy MDL eliminates all hypothetical locations except the valid one while directing the robot
along a path no longer than k - 1 times the length of an optimal verification tour. A corollary of Theorem 4 is that the localizing decision tree associated with Strategy MDL has a weighted height that is at most 2(k - 1) times the weighted height of an optimal localizing decision tree.
Theorem 3 Strategy MDL never directs the robot to pass through a wall.
Proof: The proof is by contradiction. Suppose that p j is the true initial location of the robot and
x j is the point on the boundary of P j where the robot would first hit a wall. Furthermore, suppose
that when the robot attempts to pass through the wall at x j , the path it has been directed to
follow is path i (q). Let C denote the cell of arrangement A that contains the portion of path i (q)
just before x j . Since cell C is contained in P j , it contributes a reference point q C;j to the set Q of
reference points.
In order to arrive at a contradiction, it suffices to show that q C;j is active at the time Strategy
MDL chooses path_i(q) for the robot to follow. This is because d_j(q_C,j) ≤ d_j(x_j) by definition of q_C,j; since the portion of path_i(q) from the origin to x_j is contained within P_j, and x_j is an intermediate point on path_i(q), we also have d_j(x_j) < d_i(q). Thus d_j(q_C,j) < d_i(q), and Strategy MDL would choose path_j(q_C,j) rather than path_i(q) if q_C,j is active.
Point q C;j is active when path i (q) is selected because cell C distinguishes between the two active
hypothetical locations p_i and p_j. This is because the skeleton V*_j(C) associated with C relative to P_j has a real edge through the point x_j, whereas the skeleton V*_i(C) associated with C relative to P_i does not have a real edge through x_j. ∎
Theorem 4 Strategy MDL localizes the robot by directing it along a path whose length is at most (k - 1)d, where k = |H| and d is the length of an optimal verification tour for the robot's initial position.
Proof: Let p t denote the true initial location of the robot. First we show by contradiction that
Strategy MDL eliminates all hypothetical initial locations in H except p t . Suppose Q becomes
empty before the size of H shrinks to one, and let p i be an active hypothetical location different
from p t at the time Q becomes empty. Translates P i and P t are not identical, so there is some
point x t on the boundary of P t that does not belong to the boundary of P i . Let C be the cell of
arrangement A contained in P t and containing x t . C distinguishes between p i and p t , so q C;t is still
in the active set Q - a contradiction.
Next we establish an upper bound on the length of the path determined by Strategy MDL.
Because the strategy always directs the robot to a probing site that eliminates one or more elements
from H, the robot makes at most k - 1 trips from its initial location to a sensing point and back.
To show that each round trip has length at most d, we consider how a robot traveling along an
optimal verification tour L would rule out an arbitrary incorrect hypothetical location p i . Then we
consider how Strategy MDL would rule out p i .
Consider a robot traveling along tour L that eliminates each invalid hypothetical location at
the first point x on L where the visibility skeleton of x relative to the invalid hypothetical location
differs from the visibility skeleton of x relative to P t .
Let x be the first point on L where the robot can eliminate p i . The point x must lie on the
boundary of some cell C in the arrangement A that distinguishes p i from p t . Cell C generates a
reference point q_C,t ∈ Q, and d_t(q_C,t) ≤ d_t(x). Since p_t is the true initial location of the robot, the
distance d t (x) is no more than the distance along L of x from the origin, as well as the distance
along L from x back to the origin. Thus d t (q C;t ) is no more than half the length of L.
At the moment Strategy MDL directs the robot to move from the origin to the probing site where
it eliminates p i , both p i and p t are active, so point q C;t is active since it distinguishes between them.
At this time Strategy MDL directs the robot to travel along path*(q*). By definition, the length d*(q*) of this path is the minimum over all d_j(q) for active q ∈ Q and p_j ∈ H. In particular, since point q_C,t is still active, d*(q*) ≤ d_t(q_C,t), which is no more than half the length of L. Therefore, Strategy MDL directs the robot to travel along a loop from the origin to some probing position where the robot eliminates p_i and back, and the length of this loop is at most d. ∎
Using the definition of competitive ratio given in Section 1, Theorem 4 can be stated as "Strategy MDL has a competitive ratio of k - 1, where k = |H|". Note that if a verifying path is not required to return to its starting point, the bound for Theorem 4 becomes 2(k - 1)d. Note also that even if the robot
were continuously sensing rather than just taking a probe at the end of each path path*(q*), a better bound could not be achieved. This is because the robot always goes to the closest point that yields useful information, so no point on path*(q*) before q* will allow it to eliminate any hypothetical locations.
Corollary 5 The weighted height of the localizing decision tree constructed by Strategy MDL is at most 2(k - 1) times the weighted height of an optimal localizing decision tree for the same problem.
Proof: Consider the decision tree of Strategy MDL. Let p h denote the initial location associated
with the leaf that defines the weighted height of the tree. The weighted height of the tree is thus
the distance Strategy MDL will direct the robot to travel to determine that p h is the correct initial
location, and by Theorem 4 this distance is at most k - 1 times the minimum verification tour length for p_h. But the minimum verification tour length for p_h is at most twice the weight of a path from the root to p_h in an optimal localizing decision tree, which is at most the weighted height of the tree. The result follows from these inequalities. ∎
If the robot is required to return to its initial position, the bound on the weighted height of the
localizing decision tree constructed by Strategy MDL drops to k - 1.
It should be clear from the discussions in Sections 4 and 5 that Strategy MDL can be computed
and executed in polynomial time. In this paper, we do not comment further on computation time
as there are many ways to implement Strategy MDL. Also, if travel times are large compared to
computation times, the importance of our results is that they obtain good path lengths.
5.3 Lower bounds
In Corollary 5 we proved that the weighted height of the localizing decision tree built by Strategy
MDL is no greater than 2(k - 1)d, where k = |H| and d is the weighted height of an optimal localizing decision tree. This bound is also a lower bound for Strategy MDL, as illustrated in Figure 9. Consider a map polygon that is a staircase polygon, such as the one in Figure 3, where each stairstep except the first and last one is similar to the one shown in Figure 9.
Each such stairstep has k protrusions placed in a circle, with the end of each protrusion a distance
d from the center of the circle. In each stairstep a different protrusion has its end extended,
which uniquely identifies the stairstep. Each stairstep also has a longer protrusion, with k smaller
protrusions sticking out of it. One of these smaller protrusions is extended to uniquely identify the
stairstep. The first small protrusion is a distance d + ε from the center of the circle, and the last one is a distance d + ε + δ from the center of the circle.
For this map polygon, if the robot is initially placed in the center of the circle on one stairstep,
Strategy MDL will direct it to travel up the k protrusions of length d until it finds one that has a
longer piece at the end, or until it has examined all but one of these protrusions. In the worst case
the robot will travel a distance 2(k - 1)d. An optimal strategy would direct the robot to travel down the longer protrusion and examine all the small protrusions coming out of it until it found one that was longer. In the worst case the robot would travel a distance d + ε + δ. Since ε and δ can be made arbitrarily small, in the worst case Strategy MDL travels Ω(k) times as far as the optimal strategy. Even if we used a strategy that weighted each potential probe location with
the amount of information that could be gained from that location in the worst case, we would still build the same decision tree because any probe location in the stairstep yields at most one piece of information in the worst case.
Figure 9: Part of the map polygon that gives lower bound. Not to scale: d >> ε, δ.
Although there are map polygons for which Strategy MDL builds a localizing decision tree
whose weighted height is Ω(k) times the weighted height of an optimal localizing decision tree,
there are other map polygons for which any localizing decision tree strategy builds a tree with
weighted height at least k - 1 times the length of an optimal verification tour. Consider a map polygon that is a staircase polygon, such as the one in Figure 3, where each stairstep except the first and last one is similar to the one shown in Figure 10. Each such stairstep has k protrusions placed in a circle, with the end of each protrusion a distance d from the center of the circle, and has one protrusion extended at the end to uniquely identify the stairstep. The vertical piece between adjacent stairsteps is longer than 2(k - 1)d.
As with the map polygon shown in Figure 9, Strategy MDL will direct the robot to explore the
k protrusions of length d, and in the worst case the robot will travel a distance 2(k - 1)d. Consider any other localizing decision tree strategy. If it directs the robot to travel to any stairstep besides the one where it starts, then the localizing decision tree that it builds will have weighted height greater than 2(k - 1)d. The only way to localize the robot while remaining on the initial stairstep is to direct it to examine the protrusions, and in the worst case the robot must travel a distance 2(k - 1)d before it has localized itself (assuming that it must return to the origin at the end).
Since no localizing decision tree strategy can build a tree with weighted height less than k - 1 times the length of an optimal verification tour for all map polygons, Strategy MDL is the best possible strategy.
Figure 10: Part of the map polygon that shows Strategy MDL is best possible.
5.4 Creating a reduced set of reference points
The set Q of reference points has size upper bounded by k times the number of cells in the arrangement
A, which may be very large as shown in Section 4.2. In this subsection, we show that when
Strategy MDL is run with only a small subset Q' of the original reference points, the (k - 1)d performance guarantee of Section 5.2 still holds. The size of Q' will be no more than k(k - 1).
Set Q' is defined as the union of subsets Q_1, ..., Q_k, where there is one Q_i for each p_i ∈ H. Ignoring implementation issues, we define Q_i as follows. Initially Q_i is empty, and the subset of Q consisting of reference points q_C,i generated for translate P_i is processed in order of increasing d_i(q_C,i). For each successive point q_C,i, the partition of H induced by Q_i ∪ {q_C,i} is compared to that induced by Q_i alone. If the subset of H containing location p_i is further subdivided by the additional reference point q_C,i, then q_C,i is added to Q_i. Conceptually, the reference point q_C,i is added if it distinguishes another hypothetical initial location from p_i. This process continues until p_i is contained in a singleton in the partition of H induced by Q_i. Since there are only k - 1 initial locations to be distinguished from p_i, Q_i will contain at most k - 1 points.
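A C sketch of this filtering for one fixed p_i, using the same integer-identifier abstraction as above and the conceptual criterion from the text (a point is kept exactly when it separates a not-yet-separated location from p_i); all concrete values are illustrative.

    #include <stdio.h>

    #define K 4    /* hypotheses */
    #define M 6    /* reference points generated for translate P_i, sorted by d_i */

    /* sig[q][j] is the skeleton id hypothesis j induces at point q (-1 = outside P_j). */
    static const int sig[M][K] = {
        { 1, 1, 1, 1 }, { 2, 2, 3, 2 }, { 4, 4, 5, 4 },
        { 6, 7, 6, 6 }, { 8, 8, 8, 9 }, { 0, 1, 2, 3 }
    };

    int main(void)
    {
        int i = 0;                       /* we build Q_i for hypothesis p_i with i = 0 */
        int separated[K] = { 0 };        /* separated[j]: p_j already distinguished from p_i */
        int keep[M] = { 0 }, kept = 0, left = K - 1;

        for (int q = 0; q < M && left > 0; q++) {      /* order of increasing d_i(q) */
            int useful = 0;
            for (int j = 0; j < K; j++)
                if (j != i && !separated[j] && sig[q][j] != sig[q][i]) {
                    separated[j] = 1;    /* q separates p_j from p_i for the first time */
                    useful = 1;
                    left--;
                }
            if (useful) { keep[q] = 1; kept++; }
        }
        printf("kept %d of %d points:", kept, M);
        for (int q = 0; q < M; q++) if (keep[q]) printf(" q%d", q);
        printf("\n");                    /* prints: kept 3 of 6 points: q1 q3 q4 */
        return 0;
    }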
We denote by Strategy MDLR, which stands for Minimum Distance Localization with Reduced
reference point set, the strategy obtained by replacing set Q with Q' in Strategy MDL.
Theorem 6 Strategy MDLR, which uses a set of at most k(k - 1) reference points, localizes the robot by directing it along a path whose length is at most (k - 1)d, where k = |H| and d is the length of an optimal verification tour for the robot's initial position.
Proof: Both the proof that Strategy MDLR directs the robot along a path that determines its
initial location and the proof of the (k - 1)d bound are essentially the same as the proofs of the corresponding results in Theorems 3 and 4 of Section 5.2. The only additional observation needed is that if a reference point q_C,i is used in one of the previous proofs to distinguish between two hypothetical initial locations, and if q_C,i does not belong to set Q', then Q' contains some reference point q_C',j that distinguishes the same pair of locations and that satisfies d_j(q_C',j) ≤ d_i(q_C,i). Hence, set Q' always contains an adequate substitute for any reference point of Q required by the proofs of Theorems 3 and 4. ∎
6 Conclusions and Future Research
We have shown that the problem of localizing a robot in a known environment by traveling a minimum
distance is NP-hard, and we have given an approximation strategy that achieves a competitive
ratio of k - 1, where k is the number of possible initial locations of the robot. We have also shown
that this bound is the best possible.
The work in this paper is one part of a strategy for localizing a robot. The complete strategy
will preprocess the map polygon and store the decision trees for ambiguous initial positions so that
the robot only needs to follow a predetermined path to localize itself.
There are many variations to this problem which can be considered. If the robot must localize
itself in an environment with obstacles, then the map of the environment can be represented as a
simple polygon with holes. If these obstacles are moving, then the problem becomes more difficult.
In this paper we assigned a cost of zero for the robot to take a probe and analyze it. In a
more general setting we would look for an optimal decision tree, where the edges of a decision tree
associated with the outcome of a probe would be weighted with the cost to analyze that probe.
A pragmatic variation of the problem would weight reference locations so that those that produce
more reliable percepts would be selected first.
--R
Locating a Robot with Angle Measurements.
Decision Trees for Geometric Models.
Randomized Robot Navigation Algorithms.
Map learning with indistinguishable locations.
Mobile Robot Localization Using Landmarks.
Efficient Visibility Queries in Simple Polygons.
Visibility in Simple Polygons.
Homing using combinations of model views.
Navigating in Unfamiliar Geometric Terrain.
Probing Polygons Minimally is Hard.
Representing and Acquiring Geographic Knowledge.
Competitive Searching in a Generalized Street.
Map validation and self-location in a graph-like world
Performance Guarantees for Motion Planning with Temporal Uncertainty.
El Gindy and
Computers and Intractability
The Robot Localization Problem in Two Dimensions.
Constructing Optimal Binary Decision Trees is NP-Complete
A qualitative approach to robot exploration and map-learning
Walking an Unknown Street with Bounded Detour.
The Localization Problem for Mobile Robots.
Robot Motion Planning.
Euclidean Shortest Paths in the Presence of Rectilinear Barriers.
Shortest Paths without a Map.
Amortized Efficiency of List Update and Paging Rules.
Position estimation for an autonomous mobile robot in an outdoor environment.
--TR
--CTR
Sven Koenig , Apurva Mudgal , Craig Tovey, A near-tight approximation lower bound and algorithm for the kidnapped robot problem, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.133-142, January 22-26, 2006, Miami, Florida
Rudolf Fleischer , Kathleen Romanik , Sven Schuierer , Gerhard Trippen, Optimal robot localization in trees, Information and Computation, v.171 n.2, p.224-247, December 15, 2001 | navigation;optimization;localization;visibility;positioning;NP-hard;competitive strategy;robot;sensing |
279140 | A Controlled Experiment to Assess the Benefits of Procedure Argument Type Checking. | Type checking is considered an important mechanism for detecting programming errors, especially interface errors. This report describes an experiment to assess the defect-detection capabilities of static, intermodule type checking. The experiment uses ANSI C and Kernighan&Ritchie (K&R) C. The relevant difference is that the ANSI C compiler checks module interfaces (i.e., the parameter lists calls to external functions), whereas K&R C does not. The experiment employs a counterbalanced design in which each of the 40 subjects, most of them CS PhD students, writes two nontrivial programs that interface with a complex library (Motif). Each subject writes one program in ANSI C and one in K&R C. The input to each compiler run is saved and manually analyzed for defects. Results indicate that delivered ANSI C programs contain significantly fewer interface defects than delivered K&R C programs. Furthermore, after subjects have gained some familiarity with the interface they are using, ANSI C programmers remove defects faster and are more productive (measured in both delivery time and functionality implemented). | Introduction
The notion of data type is an important concept in
programming languages. A data type is an interpretation
applied to a datum, which otherwise would just
be a sequence of bits. The early FORTRAN compilers
already used type information to generate efficient
code for expressions. For instance, the code produced
for the operator "+" depends on the types of
its operands. User-defined data types such as records
and classes in later programming languages emphasize
another aspect: Data types are a tool for modeling the
data space of a problem domain. Thus, types can simplify
programming and program understanding. A further
benefit is type checking: A compiler or interpreter
can determine whether a data item of a certain type is
permissible in a given context, such as an expression
or statement. If it is not, the compiler has detected a
defect in the program. It is the defect-detection capability
of type checking that is of interest in this paper.
There is some debate over whether dynamic type
checking is preferable to static type checking, how
strict the type checking should be, and whether explicitly
declared types are more helpful than implicit ones.
However, it seems that overall the benefits of type
checking are virtually undisputed. In fact, modern programming
languages have evolved elaborate type systems
and checking rules. In some languages, such as
C, the type-checking rules were even strengthened in
later versions. Furthermore, type theory is an active
area of research [3].
However, it seems that the benefits of type checking
are largely taken for granted or are based on personal
anecdotes. For instance, Wirth states [21] that the
type-checking facilities of Oberon had been most helpful
in evolving the Oberon system. Many programmers
can recall instances when type checking did or
could have helped them. However, we could find only
a single report on a controlled, repeatable experiment
testing the benefits of typing [9].
The cost-benefit ratio of type checking is far from clear,
because type checking is not free: It requires effort
on behalf of the programmer in providing type infor-
mation. Furthermore, there are good arguments why
relying on compiler type checking may be counter-productive
when doing inspections [12, pp. 263-268].
We conclude that the actual costs and benefits of type
checking are largely unknown. This situation seems to
be at odds with the importance assigned to the con-
cept: Languages with type checking are widely used
and the vast majority of practicing programmers are
affected by the technique in their day-to-day work. The
purpose of this paper is to provide initial, "hard" evidence
about the effects of type checking. We describe
a repeatable and controlled experiment that confirms
some positive effects: First, when applied to interfaces,
type checking reduced the number of defects remaining
in delivered programs. Second, when programmers use
a familiar interface, type checking helped them remove
defects more quickly and increased their productivity.
Knowledge about the effects of type checking can be
useful in at least three ways: First, we still lack a useful
scientific model of the programming process. Understanding
the types, frequencies, and circumstances of
programmer errors is an important ingredient of such
a model. Second, a better understanding of defect-
detection capabilities of type checking may allow us to
improve and fine-tune them. Finally, there are still
many environments where type checking is missing
or incomplete, and confirmed positive effects of type
checking may help close these gaps.
In this experiment we analyze the effects of type checking
when programming against an interface. Subjects
were given programming tasks that involve a complex
interface (the Motif library). One group of subjects
worked with the type checker, the other without. The
dependent variables were as follows:
Completion Time: The time taken from receiving the
task to delivering the program.
Functional Units: The number of complete and correct
functional units in a program. Each functional
unit interfaces to the library and corresponds to one
statement in the "gold program" (the model solution).
Interface Use Productivity: measured in Functional
Units per hour and by Completion Time.
Number of Interface Defects: The number of program
defects in applying the library interface. Such a defect
is either an argument missing, too many, of wrong
type, or at incorrect position; or it is the use of an inappropriate
function.
Interface Defect Lifetime: The total time a particular
interface defect is present in the solution during de-
velopment. Note that this time may be the sum of
one or more time intervals, since a defect may first be
eliminated and later reintroduced.
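As a small illustration of this last metric, the following C sketch (our own bookkeeping, not the tooling used in the experiment) sums the intervals during which one defect was present.

    #include <stdio.h>

    typedef struct { double start, end; } Interval;   /* times in minutes */

    /* Interface Defect Lifetime: total time the defect was present, summed over all
       intervals in which it appeared (it may be removed and later reintroduced). */
    static double defect_lifetime(const Interval *present, int n)
    {
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += present[i].end - present[i].start;
        return total;
    }

    int main(void)
    {
        /* defect introduced at minute 3, fixed at 17, reintroduced at 40, fixed at 48 */
        Interval spans[2] = { { 3.0, 17.0 }, { 40.0, 48.0 } };
        printf("lifetime = %g minutes\n", defect_lifetime(spans, 2));  /* 22 minutes */
        return 0;
    }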
We conjecture that type checking makes type defect
removal quicker and more reliable, thus also speeding
up overall program development. More concretely, we
attempt to find support for, or arguments against, the
following three hypotheses.
Hypothesis 1: Type checking increases Interface Use Productivity.
Hypothesis 2: Type checking reduces the number of Interface Defects in delivered programs.
Hypothesis 3: Type checking reduces Interface Defect Lifetimes.
Related work
We are aware of only two closely related studies. One
is the Snickering Type Checking Experiment 1 with the
Mesa language. In this work, compiler-generated error
messages involving types were diverted to a secret
file. A programmer working with this compiler on two
different programs was shown the error messages after
he had finished the programs and was asked to estimate
how much time he would have saved had he
seen the messages right away. Interestingly, the programmer
had independently removed all the defects
detected by the type checker. He claimed that on one
program, which was entirely his own work, type checking
would not have helped appreciably. On another
program which involved interfacing to a complicated
library, he estimated that type checking would have
saved half of total development time. It is obvious
that this type of study has many flaws. But to our
knowledge it was never repeated in a more controlled
setting.
A different approach was taken by the second experi-
ment, performed by Gannon [9]. This experiment compares
frequencies of errors in programs written in a
statically typed and a "type-less" language. Each subject
writes the same program twice, once in each lan-
guage, but a different order of languages is used for
each half of the experiment group. The experiment
finds that the typed group has fewer distinct errors,
fewer error re-occurrences, fewer compilation runs, and
fewer errors remaining in the program (0.21 vs. 0.64
on average). The problem with the experiment is that
it was significantly harder to program with the typeless
language. The task to be programmed involved
strings and the typed language provided this data type,
while the type-less language did not. Gannon reports
that most of the difficulties encountered by the subjects
were actually due to the bit-twiddling required by lack
of typing and that "relatively few errors resulted from
uses of data of the wrong type" ([9], p.591). Hence the
experiment does not tell us how useful type checking
is.
There is some research on error and defect classifica-
tion, which has some bearing on our experiment. Several
publications describe and analyze the typical defects
in programs written by novices, e.g. [6, 18]. The
results are not necessarily relevant for advanced pro-
grammers. Furthermore, type errors do not play an
important role in these studies.
Defect classification has also been performed in larger
scale software development settings, e.g. [1, 10]. Type
checking was not an explicit concern in these stud-
ies, but in some cases related information can be de-
rived. For instance, Basili and Perricone [1] report
that 39 percent of all defects in a 90.000 line FORTRAN
project were interface defects. We conjecture
that some fraction of these could have been found by
type checking.
The defect-detection capabilities of testing methods
[2, 8, 22] have received some attention; the corresponding
psychological problems were also investigated [20].
There is also a considerable literature about debugging,
e.g. [7, 13, 16, 17], and its psychology, e.g. [17, 19].
However, the defects found by testing or debugging are
those that already passed the type checks. So the results
from these studies would be applicable here only
if they focused on defects detectable by type checking
which they do not.
Several studies have compared the productivity effects
of different programming languages, but they either
used programmers with little experience and very small
programming tasks, e.g. [6], or somewhat larger tasks
and experienced programmers, but lacked proper experimental
control, e.g. [11]. In addition, all such
studies have the inherent problem that they confound
too many factors to draw conclusions regarding type
checking, even if some of the languages provide type
checking and others do not.
It appears that the cost and benefits of interface type
checking have not yet been studied systematically.
3 Design of the Experiment
The idea behind the experiment is the following: Let
experienced programmers solve short, modestly complex
programming problems involving a complex li-
brary. To control for the type-checking/no-type-
checking variable, let every subject solve one problem
with K&R C, and another with Ansi C. Save the inputs
to all compiler runs for later defect analysis.
A number of observations regarding the realism of the
setup are in order. A short, modestly complex task
means that most difficulties observed will stem from
using the library, not from solving the task itself. Thus,
most errors will occur when interfacing to the library,
where the effects of type checking are thought to be
most pronounced. Furthermore, using a complex library
is similar to the development of a module within
a larger project where many imported interfaces must
be handled. To ensure that the results would not be
confounded by problems with the language, we used
experienced programmers familiar with the programming
language. However, the programmers had no experience
with the library - another similarity with
realistic software development, in which new modules
are often written within a relatively foreign context.
In essence, we used two independent variables: There
were two separate problems to be solved (A and B, as described below) and two alternative treatments
(Ansi C and K&R C, i.e., type checking and no type
checking).
To balance for learning effects, sequence effects, and
inter-subject ability differences, we used a counterbalanced
design: Each subject had to solve both problems,
each with a different language. The groups were balanced
with respect to the order of both problem and
language, giving a total of four experimental groups
(see
Table
1). Subjects were assigned to the groups
randomly.
The design also allows to study a third independent
variable, namely experience with the library: In his or
her first task the subject has no previous experience
while in the second task some experience from the first
task is present.
The following subsections describe the tasks, the sub-
jects, the experiment setup, and the observed variables
and discuss internal and external validity of the experi-
ment. Detailed information can be found in a technical
report [15].
3.1 Tasks
Problem A (2 × 2 matrix inversion): Open a window with four text fields arranged in a 2 × 2 pattern
plus an "Invert" and a "Quit" button. See Figure 1.
"Quit" exits the program and closes the window. The
fields represent a matrix of real values. The values
can be entered and edited. When the "Invert" button
is pressed, replace the values by the coefficients of the
corresponding inverted matrix, or print an error message
if the matrix is not invertible. The formula for the inverse of a 2 × 2 matrix A = (a b; c d) is A^(-1) = 1/(ad - bc) · (d -b; -c a).
Problem B (File Browser): Open a window with
a menubar containing a single menu. The menu entry
"Select file" opens a file-selector box. The entry "Open
selected file" pops up a separate, scrollable window and displays the contents of the file previously selected in the file selector box. "Quit" exits the program and closes all its windows. See Figure 2.

Table 1: Tasks and compilers assigned to the four groups of subjects

                          first problem A,      first problem B,
                          then problem B        then problem A
    first Ansi C,         Group 1               Group 2
    then K&R C            (8 subjects)          (11 subjects)
    first K&R C,          Group 3               Group 4
    then Ansi C           (8 subjects)          (7 subjects)

Figure 1: Problem A (2 × 2 matrix inversion)
Figure 2: Problem B (File browser)
For solving the tasks, the subjects did not use native
Motif, but a special wrapper library. The wrapper provides
operations similar to those of Motif, but with improved
type checking. For instance, all functions have
fixed-length parameter lists, while Motif often provides
variable-length parameter lists which are not checked.
The wrapper also defines types for resource-name con-
stants; in Motif, all resources are handled typelessly.
Furthermore, the wrapper provides some simplification
through additional convenience functions. For in-
stance, there is a single function for creating a Row-
ColumnManager and setting its orientation and packing
mode; Motif requires several calls.
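The kind of checking at stake can be illustrated with a fragment like the one below; the function SetButtonLabel and the Widget typedef are invented for this sketch and are not part of Motif or of the wrapper library.

    #include <stdio.h>

    typedef void *Widget;                        /* stand-in type for the example */

    /* ANSI C: the prototype states the number and types of the arguments.
       The corresponding K&R-style declaration, "void SetButtonLabel();",
       would carry no parameter information at all. */
    void SetButtonLabel(Widget w, const char *text, int alignment)
    {
        printf("label %s (align %d) on widget %p\n", text, alignment, w);
    }

    int main(void)
    {
        Widget quit_button = 0;
        SetButtonLabel(quit_button, "Quit", 0);       /* correct call */
        /* SetButtonLabel(quit_button, 0, "Quit");       arguments swapped: with the
           prototype in scope the compiler reports a type mismatch; under K&R C the
           call would be accepted and the defect could survive into the delivered
           program. */
        return 0;
    }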
The tasks, although quite small, were not at all trivial.
The subjects had to understand several important concepts
of Motif programming (such as widget , resource,
and callback function). Furthermore, they had to learn
to use them from abstract documentation only, without
example programs; we used no examples as we felt
that these would have made the programming tasks
too simple. Typically, the subjects took between one
and two hours for their first task and about half that
time for their second.
3.2 Subjects
A total of 40 unpaid volunteers participated in the
study. Of those, 6 were removed from the sample: One
deleted his protocol files, one was obviously too inexperienced
(taking almost 10 times as long as the others),
and 4 worked on only one of the two problems. After
this mortality, the A/B groups had 8+8 subjects and
the B/A groups had 11+7 subjects. We consider this
to be still sufficiently balanced [4].
The 34 subjects had the following education. 2 were
postdocs in computer science (CS); 19 were PhD students
in CS and had completed an MS degree in CS;
another subject was also a CS PhD student but held an
MS in physics; 12 subjects were CS graduate students
with a BS in CS.
The subjects had between 4 and 19 years of programming
experience and all but 11 of them had
written at least 3000 lines in C (all but one at least
300 lines). Only 8 of the subjects had some programming
experience with X-Windows or Motif; only 3 of
them had written more than 300 lines in X-Windows
or Motif.
3.3 Setup
Each subject received two written documents and one
instruction sheet and was then left alone at a Sun-4
workstation to solve the two problems. The subjects
were told to use roughly one hour per problem, but no
time limit was enforced. Subjects could stop working
even if the programs were not operational.
The instruction sheet was a one-page description of the
global steps involved in the experiment: "Read sections
1 to 3 of the instruction document; fill in the
questionnaire in section 2; initialize your working environment
by typing make TC1; solve problem A by. "
and so on. The subjects obtained the following mate-
rials, most of them both on paper and in files:
1. a half-page introduction to the purpose of the ex-
periment
2. a questionnaire about the background of the sub-
ject
3. specifications of the two tasks plus the program
skeleton for them
4. a short introduction to Motif programming (one
page) and to some useful commands (for example
to search manuals online)
5. a manual that listed first the names of all types,
constants, and functions that might be required,
followed by descriptions of each of them including
the signature, semantic description, and several
kinds of cross-references. The document also included
introductions to the basic concepts of Motif
and X-Windows. This manual was hand tailored
to contain all information required to solve the
tasks and hardly anything else.
6. a questionnaire about the experiment (to be filled
in at the end)
Subjects could also execute a "gold" program for each
task. The gold program solved its task completely and
correctly and was to be used as a backup for the verbal
specifications. Subjects were told to write programs
that duplicated the behavior of the gold programs.
The subjects did not have to write the programs from
scratch. Instead, they were given a program skeleton
that contained all necessary #include commands, variable
and function declarations, and some initialization
statements. In addition, the skeleton contained pseudocode
describing step by step what statements had
to be inserted to complete the program. The subjects'
task was to find out which functions they had to use
and which arguments to supply. Almost all statements
were function calls.
The following is an example of a pseudostatement in
the skeleton.
/* Register callback-function 'button_pushed'
for the 'invert' button with the number 1 as
'client data' */
It can be implemented thus:
XtAddCallbackF(invert, XmCactivateCallback,
button_pushed, (XtPointer)1);
There were only a few variations possible in the implementation
of the pseudocode.
The programming environment captured all program
versions submitted for compilation along with a time
stamp and the messages produced by the compiler and
linker. A time stamp for the start and the end of the
work phase for each problem was also written to the
protocol file.
The environment was set up to call the standard
C compiler of SunOS 4.1.3 using the command cc
-c -g for the K&R tasks and version 2.7.0 of the
GNU C compiler using gcc -c -g -ansi -pedantic
-Wimplicit -Wreturn-type for the Ansi C tasks.
3.4 Dependent variables
For hypotheses 2 and 3 we observed when each individual
defect in a program was introduced and removed.
We also divided the defects in a few non-overlapping
classes. We used the following procedure.
After the experiment was finished, each program version
in the protocol files was annotated by hand. Each
different defect that occurred in the programs was identified
and given a unique number. For instance, for the
call to XtAddCallbackF shown above, there were 15
different defect numbers, including 4 for wrong argument
types, 4 for wrong argument objects with correct
type, and another 7 for more specialized defects.
Each program version was annotated with the defects
introduced, removed, or changed into another defect.
Additional annotations counted the number of type de-
fects, other semantic defects, and syntactic defects that
actually provoked one or more error messages from the
compiler or linker. The time stamps were corrected
for work pauses that lasted more than 10 minutes in
order to capture pure programming time only. Summary
statistics were computed, for which each defect
was classified into one of the following categories:
- slight: Defects resulting in slightly wrong functionality
of the program, but so minor that a programmer
may feel no need to correct them. Therefore,
this class will also be ignored in order to avoid
artifacts in the results.
- invis: Defects that are invisible, i.e., they do not
compromise functionality, but only because of unspecified
properties of the library implementation.
Changes in the library implementation may result
in a misbehaving program. Example: Supplying
the integer constant PACK_COLUMN instead of the
expected Boolean value True works correctly, because
(and as long as) the constant happens to
have a non-zero value. This rare class of defects
will be ignored: invis defects can hardly be detected
and thus are not relevant for our experiment.
- invisD: same as invis, except that the defects will
be detected by Ansi C parameter type checking
(but not by K&R C). The invis class excludes invisD.
- severe: Defects resulting in significant deviations
from the prescribed functionality.
- severeD: same as severe, except that the defects
will be detected by Ansi C parameter type checking
(but not by K&R C). The severe class excludes
severeD.
These categories are mutually exclusive. Defects that
had to be removed before the program would pass even
only the K&R C compiler and linker will be ignored.
Unless otherwise noted, the defect statistics discussed
below are computed based on the sum of severe, sev-
ereD, and invisD.
Other metrics observed were the number of compilation
cycles (versions) and time to delivery, i.e., the time
spent by the subjects before delivering the program
(whether complete and correct or not).
From these metrics and annotations, additional statistics
were computed. For instance the frequency of defect
insertion and removal, the number of attempts
made before a defect was finally removed, the Interface
Defect Lifetime, and the number and type of defects
remaining in the final program version. See also
the definitions in Section 1.
For measuring productivity and unimplemented func-
tionality, we define a functionality unit (FU) to be a
single statement in the gold program. For example,
the call to XtAddCallbackF shown in Section 3.3 is
one FU. Using the gold programs as a reference normalizes
the cases in which subjects produce more than
one statement instead. FUs are thus a better measure
of program volume than lines of code. Gold program
A contains 16 FUs, gold program B contains 11. We annotated the
programs with the number of gaps , i.e., the number of
missing FUs. An FU is counted as missing if a subject
made no attempt to implement it.
3.5 Internal and external validity
The following problems might threaten the internal validity
of the experiment, i.e., the correctness of the results
1. For defects where both the K&R and the Ansi C
compiler produce an error message, these messages
might differ and this might influence productivity.
Our subjective judgment here is that for the purposes
of this experiment the error messages of both
compilers, although sometimes quite different, are
overall comparable in quality. Furthermore, none
of our subjects were very experienced with one
particular compiler and would understand its messages
faster than others.
2. There may be annotation errors. To ensure consis-
tency, all annotations were made by the same per-
son. The annotations were cross-checked first with
a simple consistency checker (looking whether errors
were introduced before removed, times were
plausible, etc.), and then some of them were
checked manually. The number of annotation mistakes
found in the manual check was negligible
(about 4%).
3. The learning effect from first to second task might
be different for K&R subjects than for Ansi C
subjects. This problem, and related ones, is accounted
for by the counter-balanced experiment
design.
The following problems might limit external validity of
the experiment, i.e., the generalizability of our results:
1. The subjects were not professional software en-
gineers. However, they were quite experienced
programmers and held degrees (many of them ad-
vanced) in computer science.
2. The results may be domain dependent. This objection
cannot be ruled out. This experiment
should therefore be repeated in domains other
than graphical user interfaces.
3. The results may or may not apply to situations
in which the subjects are very familiar with the
interfaces used. This question might also be worth
a separate experiment.
Despite these problems, we believe that the scenario
chosen in the experiment is nevertheless similar to
many real situations with respect to type-checking errors.
Another issue is worth discussing here: The learning
effect (performance change from first task to second
task) is larger than the treatment effect (performance
change from K&R C to Ansi C). This would be a problem
if the learning reduced the treatment effect [16,
pages 106 and 113]. However, as we will see below, in
our case the treatment effect is actually increased by
learning, making our experiment results conservative
ones. We are explicitly considering programmers who
are not highly familiar with the interface used. Therefore
learning is a natural and necessary part of our
setting, not an artifact of improper subject selection.
4 Results and Discussion
Many of the statistics of interest in this study have
clearly non-normal distributions and sometimes severe
outliers. Therefore, we present medians (to be precise:
an interpolated 50% quantile) rather than arithmetic
means. Where most of the median values are zero,
higher quantiles are given.
The results are shown in Tables 2 through 4. There
are altogether ten different statistics, each appearing in
three main columns. The first column shows the statistics
for both tasks, independent of order. The second
and third columns reflect the observations for those
tasks that were tackled first and second, respectively.
These columns can be used to assess the learning ef-
fect. Each main column reports the medians (or higher
quantiles where indicated) for the tasks programmed
with Ansi C and K&R C plus the p-value. The p-value
is the result of the Wilcoxon Rank Sum Test (Mann-
Whitney U Test) and, very roughly speaking, gives the
probability of observing a difference at least as large as the
one found if both samples came from the same distribution. If p ≤ 0.05, the
test result is considered statistically significant and we
call the distributions significantly different. Significant
results are marked in boldface in the tables. When the
result is not significant, nothing can be said; there may
or may not be a difference.
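As a side note, an "interpolated quantile" can be computed as in the
following minimal C sketch; this uses one common interpolation convention
and is our own illustration, not the analysis code used for the study, and
the sample values are made up.

#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Interpolated quantile: sort a copy of the sample and linearly
   interpolate between the two neighbouring order statistics. */
static double quantile(const double *sample, int n, double q)
{
    double *s = malloc(n * sizeof *s);
    double pos, frac, result;
    int lo;

    for (int i = 0; i < n; i++) s[i] = sample[i];
    qsort(s, n, sizeof *s, cmp_double);

    pos  = q * (n - 1);          /* fractional rank, 0 .. n-1 */
    lo   = (int)pos;
    frac = pos - lo;
    result = (lo + 1 < n) ? s[lo] + frac * (s[lo + 1] - s[lo]) : s[lo];

    free(s);
    return result;
}

int main(void)
{
    double fu_per_hour[] = { 7.2, 8.5, 8.6, 9.7, 10.7, 12.8 };  /* made-up sample */
    printf("median = %.2f\n", quantile(fu_per_hour, 6, 0.50));
    printf("75%% quantile = %.2f\n", quantile(fu_per_hour, 6, 0.75));
    return 0;
}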
4.1 Productivity
Table 2 shows three measures that describe the over-all
time taken and the productivity exhibited by the
subjects.
Statistic 1, time to delivery, shows no significant difference
between Ansi C and K&R C for the first task or
for both tasks taken together. Ignoring the programming
language, the time spent for the second task is
shorter than for the first (p = 0.0012, not shown in
the table), indicating a learning effect. In the second
task, Ansi C programs are delivered significantly faster
than K&R C programs. A plausible explanation is that
when they started, programmers did not have a good
understanding of the library and were struggling more
with the concepts than with the interface itself. This
explanation was confirmed by studying the compiler
inputs. Type checking is unlikely to help gain a better
understanding. Type checks became useful only after
programmers had overcome the initial learning hurdle.
Statistic 2, the number of program versions compiled,
does not show a significant difference; Ansi C programmers
compile about as often as K&R C programmers.
Statistic 3 describes the productivity measured in functional
units per hour (FU/h). In contrast to time to de-
livery, this value accounts for functionality not implemented
by a few of the subjects. Again we find no significant
difference for the first task, but a (weakly) significant
difference for the second task. There, Ansi C
median productivity is about 20% higher than K&R C
productivity, suggesting that Ansi C is helpful for
programmers after the initial interface learning phase.
This observation supports hypothesis 1. The combined
(both languages) productivity rises very significantly
from the first task to the second task (p = 0.0001, not
shown in the table); this was also reported by the subjects
and confirms that there is a strong learning effect
induced by the sequence of tasks. The actual distributions
of productivity measured in FU/h are shown in Figures 3 to 5.

Figure 3: Boxplots of productivity (in FU/hour) over both
tasks for Ansi C (left boxplot) and K&R C (right boxplot).
The upper and lower whiskers mark the 95% and 5% quantiles,
the upper and lower edges of the box mark the 75% and 25%
quantiles, and the dot marks the 50% quantile (median). All
other boxplots following below have the same structure.

Figure 4: Boxplots of productivity (in FU/hour) for the first task.

Figure 5: Boxplots of productivity (in FU/hour) for the second task.

We see that Ansi C makes for a more
pronounced increase in productivity from the first task
to the second (about 78% for the median) than does
K&R C (about 26% for the median).
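These percentages follow directly from the medians in Table 2 (row 3):

\[
  \frac{12.8 - 7.2}{7.2} \approx 0.78,
  \qquad
  \frac{10.7 - 8.5}{8.5} \approx 0.26 .
\]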
Table 2: Overall productivity statistics. Medians of statistics for Ansi C vs. K&R C versions of programs and p-values
for statistical significance of Wilcoxon Rank Sum Tests of the two. Values under 0.05 indicate significant differences of the
medians. Column pairs are for 1st+2nd, 1st, and 2nd problem tackled chronologically by each subject, respectively. All entries
include data points for both problem A and problem B.

                           both tasks       1st task        2nd task
  Statistic                Ansi   K&R      Ansi   K&R      Ansi   K&R
  1  hours to delivery      1.3   1.35      1.6   1.6       0.9   1.3
  2  #versions               15
  3  FU/h                    8.6   9.7      7.2   8.5      12.8  10.7
Table 3: Statistics on internals of the programming process. See Table 2 for explanations.

                                                both tasks       1st task        2nd task
  Statistic                                     Ansi   K&R      Ansi   K&R      Ansi   K&R
  4  accumul. interf. dfct. lifetime (median)    0.3   1.2       0.5   2.1       0.2   1.1
  5  #right, then wrong again (75% quant.)       1.0   1.0       1.0   1.0       0.0   1.0
4.2 Defect lifetimes
Table 3 gives some insight into the programming process.
Statistic 4 is the time from the introduction of an interface
defect to its removal (or the end of the experiment)
accumulated over all interface defects introduced by a
subject. The distributions of this variable over both
tasks are also shown as boxplots in Figure 6.

Figure 6: Boxplots of accumulated interface defect lifetime
(in hours) over both tasks.

As
we see, the K&R total defect lifetimes are higher and
spread over a much wider range; the difference is signifi-
cant. Note that the frequency of defect insertion (num-
ber of interface defects inserted per hour, not shown in
the table) does not show significant differences between
the languages, indicating that Ansi C is of little help
in defect prevention (as opposed to defect removal).
Taken together, these two facts support hypothesis 3:
Ansi C helps to remove interface defects quickly.
Statistic 5 indicates the number of defects, interface
or other, introduced in previously correct or repaired
statements of a program. While there is hardly any
difference in the first task, the value is significantly
higher for K&R C in the second task. We speculate
that this happens because the type error messages of
Ansi C allow some of the subjects to avoid the trial-
and-error defect removal techniques they would have
used in K&R C; the effect occurs only in the second
task, after the subjects have gained a basic understanding
of Motif concepts.
4.3 Defects in delivered programs
Table 4 describes the quality of the products delivered
by the subjects.
Statistic 6 says that there are no more unimplemented
functionality units ("gaps") in the K&R C programs than in the Ansi C programs.
Statistic 7 confirms that there are more defects in the
delivered K&R C programs than in the Ansi C pro-
grams; see also the distribution as shown in Figure 7.
The difference is much more pronounced in the second
task, though. Again the reason is probably that
the advantages of Ansi C become fully relevant only
after most other initial problems have been mastered.
Statistics 8 to 10 confirm that the reason for the difference
lies indeed in the type checking capabilities of
Ansi C: both the rare invisD defects (statistic 8) and
the severeD defects (statistic 10, see also Figure 8) are
much less frequent in delivered Ansi C programs than
in K&R C programs. These defects can be detected
by Ansi C type checking. On the other hand, severe
defects (statistic 9, see also Figure 9) are about as
frequent in delivered Ansi C programs as in K&R C
programs. These defects cannot be detected by type
checking.
Table 4: Statistics on the delivered program. See Table 2 for explanations. Lines 6 and 8 do not list medians but other
quantiles instead, as indicated.

                                                both tasks       1st task        2nd task
  Statistic                                     Ansi   K&R      Ansi   K&R      Ansi   K&R
  6  #gaps (75% quantile)                       0.25
  7  #remaining errs in delivered program        1.0   2.0       1.0   2.0       1.0   2.0
  9  - for severe only                           1.0   1.0       1.0   0.0       1.0   1.0

Figure 7: Boxplots of total number of remaining defects in
delivered programs over both tasks.

Figure 8: Boxplots of number of remaining severeD defects
in delivered programs over both tasks.

Figure 9: Boxplots of number of remaining severe defects in
delivered programs over both tasks.

As we see in the boxplots, the distributions for severe
defects differ only in the upper tail, whereas the distributions
for the severeD defects differ dramatically
in favor of Ansi C, resulting in a significant overall
advantage for Ansi C. These observations support hypothesis 2.
Detailed analysis of the defects remaining in the delivered
programs indicates a slight, but not statistically
significant tendency that besides type defects other
classes of frequent defects also were reduced in the
Ansi C programs: using the wrong variable as a parameter
or an assignment target, and using
a wrong constant value as a parameter (p = 0.35). It
is unknown whether this is a systematic side-effect of
type checking and how it should be explained if it is.
There were no significant differences between the two
tasks; all of the above results hardly change if one considers
the tasks A and B separately.
4.4 Questionnaire results
Finally, the subjective impressions of the subjects as
reported in the questionnaires are as follows: 26 of the
subjects (79%) noted a learning effect from the first
program to the second. 9 subjects (27%) reported that
they found the Ansi C type checking very helpful, 11
(33%) found it considerably helpful, 4 (12%) found it
almost not helpful, 5 (15%) found it not at all helpful.
4 subjects could not decide and 1 questionnaire was
lost.
5 Conclusions and further work
The experiment results allow for the following statements
regarding our hypotheses:
ffl Hypothesis 1, Interface Use Productivity:
When programming an interface, type checking
increases productivity, provided the programmer
has gained a basic understanding of the interface.
ffl Hypothesis 2, Interface Defects in delivered
program:
Type checking reduces the number of Interface Defects
in delivered programs.
ffl Hypothesis 3, Interface Defect Lifetime:
Type checking reduces the time defects stay in the
program during development.
One must be careful when generalizing the results of this
study to other situations. For instance, the experiment
is unsuitable for determining the proportion of interface
defects in an overall mix of defects, because it was
designed to prevent errors other than interface errors.
Hence it is unclear how large the differences will be if
defect classes such as declaration defects, initialization
defects, algorithmic defects, or control-flow defects are
included.
Nevertheless, the experiment suggests that for many
realistic programming tasks, type checking of interfaces
improves both productivity and program quality. Fur-
thermore, some of the resources otherwise expended
on inspecting interfaces might be allocated to other
tasks. As a corollary, library design should strive to
maximize the type-checkability of the interfaces by introducing
new types instead of using standard types
where appropriate. For instance Motif, on which our
experiment library was based, is a negative example in
this respect.
Further work should repeat similar error and defect
analyses in different settings (e.g. tasks with complex
data flow or object-oriented languages). In particu-
lar, it would be interesting to compare productivity
and error rates under compile-time type checking, run-time
type checking, and type inference. Other important
questions concern the influence of a disciplined
programming process such as the Personal Software
Process [12]. Finally, an analysis of the errors occurring
in practice might help devise more effective defect-
detection mechanisms.
Acknowledgments
We thank Paul Lukowicz for patiently guinea-pigging
the experimental setup, Dennis Goldenson for his detailed
comments on an early draft, and Larry Votta
for pointing out an important reference and providing
many suggestions on the report. Last, but not least,
we thank our subjects.
A Solution for Problem A
This is the program (ANSI C version) that represents the canonical solution for Problem A. Most of it, including
all of the comments, was given to the subjects from the start; they only had to insert the statements marked
here with /*FU 1*/ etc. at those places previously held by pseudocode comments as described in Section 3.3
above. The numbers in the FU comments count the functional units as defined in Section 1.
#include <stdio.h>
#include <stdlib.h>
#include "stdmotif.h"
void button_pushed (Widget widget, XtPointer client_data, XtPointer call_data);
Widget mw[4]; /* fields for matrix coefficients: 0,1,2,3 for a,b,c,d */
int main (argc, argv)
int argc;
char *argv[];
manager, /* manager for square and buttons */
square, /* manager for 4 TextFields */
buttons, /* manager for 2 PushButtons */
quit; /* PushButton */
XtAppContext app;
XmString invertlabel, quitlabel;
/*- 1. initialize X and Motif -*/
/* (already complete, should not be changed) */
globalInitialize ("A");
&argc, argv, fallbacks, NULL);
/*- 2. create and configure widgets -*/
2, False); /*FU 1*/
2, True); /*FU 2*/
buttons
XmStringCreateLocalized ("Invert matrix"));
XmStringCreateLocalized ("Quit"));
/*- 3. register callback functions -*/
(invert, XmCactivateCallback, button-pushed,
/*- 4. realize widgets and turn control to X event loop -*/
/* (already complete, should not be changed) */
XtRealizeWidget (toplevel);
return (0);
Functions */
void button_pushed (Widget widget, XtPointer client_data, XtPointer call_data)
/* this is the callback function to be called when clicking on
the PushButtons occurs */
double mat[4], new[4], /* old and new matrix coefficients */
det; /* determinant */
String s;
if ((int)client_data == 99)
exit (0); /*FU 12*/
else if ((int)client_data == 1) {
int
for
XtGetStringValue (mw[i], XmCvalue, &s); /*FU 13*/
if (det !=
for
XtSetStringValue (mw[i], XmCvalue, ftoa(new[i],8,2)); /*FU 15*/
else
matrixErrorMessage("Matrix cannot be inverted",mat,8,2); /*FU 16*/
B Solution for Problem B
See the description in Appendix A above.
#include <stdio.h>
#include <stdlib.h>
#include "stdmotif.h"
void handle_menu (Widget widget, XtPointer client_data, XtPointer call_data);
int main (int argc, char *argv[])
menubar, /* the one-entry menu bar */
menu, /* the pulldown menu */
label; /* the label displayed in the work window */
XtAppContext app;
/*- 1. initialize X and Motif -*/
/* (already complete, should not be changed) */
globalInitialize ("B");
&argc, argv, fallbacks, NULL);
/*- 2. create and configure widgets -*/
XmStringCreate ("File Browser", "LARGE"), 'F'); /*FU 2*/
XmStringCreate ("by Lutz Prechelt", "SMALL")); /*FU 3*/
XtSetWidgetValue (main-w, XmCworkWindow, label); /*FU 4*/
XmStringCreateLocalized ("Select file"), 'f',
XmStringCreateLocalized ("Open selected file"), 'O',
XmStringCreateLocalized ("Quit"), 'Q',
/*- 3. register callback functions -*/
/* (handle-menu was already registered above, nothing to be done) */
/*- 4. realize widgets and turn control to X event loop -*/
/* (already complete, should not be changed) */
XtRealizeWidget (toplevel);
return (0);
Functions */
void handle_menu (Widget widget, XtPointer client_data,
XtPointer call_data)
if ((int)client_data == 0) { /* first menu entry selected */
XtManageChild (fs); /*FU 8*/
else if ((int)client_data == 1) { /* second menu entry selected */
toplevel, 25, 80); /*FU 9*/
XtSetStringValue (scrolltext, XmCvalue,
readWholeFile (selectedFile())); /*FU 10*/
else if ((int)client_data == 2) { /* third menu entry selected */
exit (0); /*FU 11*/
--R
Software errors and complexity: An empirical investigation.
Software Testing Techniques.
Typing in object-oriented languages: Achieving expressibility and safety.
Experimental Methodology.
Spohrer, editors. Empirical Studies of Programmers: Fifth Workshop.
Novice programmer errors: Language constructs and plan composition.
Tales of debugging from the front lines.
An experimental comparison of the effectiveness of branch testing and data flow testing.
An experimental evaluation of data type conventions.
Practical results from measuring software quality.
Haskell vs. Ada vs. C
A Discipline for Software Engineering.
An analysis of the on-line debugging process.
Empirical Studies of Programmers: Second Workshop.
A controlled experiment measuring the impact of procedure argument type checking on programmer productivity.
The psychological study of programming.
Empirical Studies of Programmers.
Analyzing the high frequency bugs in novice programs.
Cognitive bias in software engineering.
Positive test bias in software testing by professionals: what's right and what's wrong.
Gedanken zur Software-Explosion.
Certification of software components.
--TR
--CTR
Maurizio Morisio , Daniele Romano , Ioannis Stamelos, Quality, Productivity, and Learning in Framework-Based Development: An Exploratory Case Study, IEEE Transactions on Software Engineering, v.28 n.9, p.876-888, September 2002
Adrian Birka , Michael D. Ernst, A practical type system and language for reference immutability, ACM SIGPLAN Notices, v.39 n.10, October 2004
Matthew S. Tschantz , Michael D. Ernst, Javari: adding reference immutability to Java, ACM SIGPLAN Notices, v.40 n.10, October 2005
Robin Abraham , Martin Erwig, Type inference for spreadsheets, Proceedings of the 8th ACM SIGPLAN symposium on Principles and practice of declarative programming, July 10-12, 2006, Venice, Italy
Martin Erwig , Deling Ren, An update calculus for expressing type-safe program updates, Science of Computer Programming, v.67 n.2-3, p.199-222, July, 2007
Andreas Zendler, A Preliminary Software Engineering Theory as Investigated by Published Experiments, Empirical Software Engineering, v.6 n.2, p.161-180, June 2001 | controlled experiment;defects;productivity;quality;type checking |
279143 | Optimal Elections in Faulty Loop Networks and Applications. | Loop networks (or Hamiltonian circulant graphs) are a popular class of fault-tolerant network topologies which include rings and complete graphs. For this class, the fundamental problem of Leader Election has been extensively studied, assuming either a fault-free system or an upper-bound on the number of link failures. We consider loop networks where an arbitrary number of links have failed and a processor can only detect the status of its incident links. We show that a Leader Election protocol in a faulty loop network requires only O(n log n) messages in the worst-case, where n is the number of processors. Moreover, we show that this is optimal. The proposed algorithm also detects network partitions. We also show that it provides an optimal solution for arbitrary nonfaulty networks with sense of direction. | Introduction
1.1 Loop Networks
A common technique to improve reliability of ring networks is to introduce link redun-
dancy; that is, to have each node connected to two or more additional nodes in the
network. With alternate paths between nodes, the network can sustain several node and
link failures. Several ring networks, suggested in [3, 8, 27, 34, 40], are based on this principle.
The overall topological structure of these redundant rings is always highly regular;
in particular, the set of ring edges (regular) and additional edges (bypass) form a Loop
Network (since they have at least one hamiltonian cycle).
Figure 1: ⟨2, 4⟩ Loop Network (a) with Faulty Links (b)
Loop Networks are particular cases of Circulant Graph. Because of an uncoordinated
literature, numerous terms have been used to name this topology depending on the model;
Circulant Graph, Chordal Ring, or Distributed Loop Computer Networks are the more
common. A detailed survey of these topologies is presented in [5]. For sake of simplicity,
we will use the term loop network in the remaining of this paper.
A loop network C n #d 1 , d 2 , ., d k # of size n and k-chord structure #d 1 , d 2 , ., d k # is a
ring R n of n processors {p 0 , each processor is also directly connected
to the processors at distance d i and n - d i by additional incident chords. The link
connecting two nodes is labeled by the distance which separates these two nodes on the
ring, i.e., following the order of the nodes on the ring: the node p i is connected to the
node p i+d j mod n
through its link labeled d j (as shown in Figure 1(a)). In particular, if a
link, between two processors p and q, is labeled by distance d at processor p, this link is
labeled by n - d at the other incident processor q, where n is the number of processors.
Note that both rings and complete graphs are circulant graphs, denoted as C n # and
respectively. It is worth pointing out that some designs for redundant
meshes and redundant hypercubes are also circulant graphs, [7].
The distinction between regular and bypass links is purely a functional one. Typically,
the bypass links are used strictly for reconfiguration purposes when faults are detected;
in the absence of faults, only regular links are used. Special classes of loop networks have
been widely investigated to analyze their fault-tolerant properties [3, 7, 8, 9, 27, 33] and
solutions have been proposed for reconfiguration after links and/or node failures [30, 39].
In some applications (e.g., distributed systems), all the links (or chords) of a circulant
graph are always used to improve the performance of a computation.
1.2 Election
In distributed systems, one of the fundamental control problems is the Leader Election
[29]. Informally, election is the problem of moving the system from an initial situation
where the nodes are in the same computational state, to a final situation where exactly
one node is in a distinguished computational state (called leader) and all others are in the
same state (called defeated). The election process may be independently started by any
subset of the processors. The election problem occurs, for instance, in token-passing when
the token is lost or the owner has failed; in such a case, the remaining processors elect a
leader to issue a new token. Several other problems encountered in distributed systems
can be solved by election; for example: crash recovery (a new server should be found
to continue the service when the previous server has crashed), mutual exclusion (where
values for election can be defined as the last time the process entered the critical section),
group server (where the choice of a server for an incoming request is made through an
election among all the available servers managing a replicated resource), etc.
Following failures, the network might be partitioned into several disconnected components
(as shown in Figure 1(b)). With respect to the election process, a component will
be called active if at least one processor in that component independently starts the election
process. A leader election protocol must determine a unique element in each active
component; such distinguished elements can then determine any additional information
(e.g., size of component, etc.) which is needed for the particular application. The nature
of such applications is irrelevant to the election process.
It is assumed that every processor p i has a distinct id i chosen from some infinite totally
ordered set ID; each processor is only aware of its own identity (in particular, it does not
know the identities of its neighbours). The processors all perform the same distributed
algorithm. A distributed algorithm (or protocol) is a program that contains three types of
executable statements: local computations, message send and message receive statements.
We assume that the messages on each arc arrive with no error, in an unbounded but finite
delay and in a FIFO order. The complexity measure is the maximum number of messages
sent during any possible execution.
1.3 Election in a Faulty Loop Network
The Leader Election problem in loop networks has been extensively studied assuming that
there are no failures in the systems [4, 18, 23, 24, 32]. The problem becomes rather more
di#cult if there are failures in the system. In asynchronous systems, in particular, the
election problem is unsolvable (i.e., no deterministic solution protocol exists) if failures
are undetectable and can occur at any time; this impossibility result holds even if just
one processor may fail (in a fail-stop mode) and follows from the result of [10].
The research has thus focused on studying the problem in more restricted environments:
- (r1) failures are detectable,
- (r2) failure occurs prior to the execution of the election protocol,
- (r3) the number of failures is bounded by some constant,
- (r4) failures are fail-stop,
- (r5) every processor is directly connected to every processor.

Table 1: Impossibility versus Possibility Results (k and t are constants bounding the
number of Fail-Stop Faults).

  Graph                    # Faults (r3)            (r1)            (r2)
                           Links / Nodes            Detectability   Occurrence     Termination
  Arbitrary Loop Network
  Complete
  Complete
  Complete (r5)            < N/2 per node / 0       No              intermittent   Possible [2, 38]
  Ring
  Arbitrary Loop Network   unbounded / unbounded    Yes             Prior          Possible (this paper)
All the existing results for Election in faulty loop networks have been developed under
assumptions (r2), (r3), (r4) and further assuming that the network is either a complete
graph (r5) [1, 16, 28, 31, 37] or a ring [14, 41, 42] (see table 1). So far, without detectability,
algorithms breaking free of the bounded number of failures assumption (r3) generate an
expensive communication complexity (O(n^2) messages of O(n) bits, [19]).
In this paper, we consider the Election Problem in asynchronous arbitrary loop networks
where an arbitrary number of links has failed and a processor can only detect the
status of its incident links. That is, we make assumptions (r2) and (r4), and a relaxed
version of assumption (r1). Thus, unlike all previous investigations, we do not restrict to
complete graphs; we do not make any a priori restriction on the number of failures; we
do however assume that a processor can detect the failure of its incident links. Note that
this assumption, the detectability assumption (r1), is required to cope with an unbounded
number of faulty components (see table 1). We prove that, under these assumptions, a
Leader Election protocol in a faulty loop network requires only O(n log n) messages in
the worst-case, where n is the number of processors. Moreover, we show that this is
optimal. In case the failures have partitioned the network, the algorithm will detect it
and a distinctive element will be determined in each active component; depending on the
application, these distinctive elements can thus take the appropriate actions.
Both processors and links may fail. In the following, we will assume that if a processor
fails all its incident links fail. Thus, without any loss of generality, we can just consider
link failures. We emphasize the fact that both regular and bypass links can fail (as shown
in Figure 1(b)). A processor can only detect the failure of its incident links. Knowledge
that a link is faulty can be either off-line or on-line. In the off-line case, the hardware
subsystem directly provides such knowledge to the processors; thus, this information is
a priori with respect to the execution of the protocol. In the on-line case, this knowledge can
only be acquired upon an attempt to transmit on a link; if the link is operational, the
message will be transmitted, otherwise an error signal will be issued by the system (see
Figure 2).
From a computational point of view, the on-line case is more difficult than the off-line
one. In particular, transforming it into the a priori knowledge case (e.g., by a pre-processing
phase where each active processor tests its incident links) would cost an additional m
messages, where m is the number of non-faulty links. Thus, our O(n log n) solution,
for the case where faults are only detected upon a transmission attempt, is all the
more important since fault-detection is performed only on those links which are used by
the computation. Furthermore, this solution can obviously be applied with the same
complexity to the case where there is a priori knowledge of the faulty links. Thus, in the
following, we will only concentrate on the more difficult case.
The algorithm presented here combines known techniques for election in non-faulty
networks ([13, 17, 21]) and original routing paradigms based on structural information [12]
in order to avoid the faulty components. The algorithm uses asynchronous promotion
steps to merge rooted spanning trees.
2 Election Algorithm
We present an Election algorithm in a loop network, where an arbitrary number of links
have failed and where failure of a link is detectable only if an incident node attempts to
transmit on it. The full algorithm is given in the Appendix (see also [26]). Any node can
independently and spontaneously start the election process (we will model this by having
such a node receive a WAKEUP message). If the network is not partitioned, the algorithm
will detect it and will elect a leader. In case the failures have partitioned the network, a
distinctive element will be determined in each active component and will detect that a
partition has occurred; depending on the application, these distinctive elements can thus
take the appropriate actions. We will now describe the algorithm as executed in each
active component.
2.1 Description
In each active component, the algorithm builds a Rooted Spanning Tree or Kingdom by
repeatedly combining smaller spanning trees; the final root of the spanning tree is the
distinctive element of that component. In the following, we describe the algorithm as
executed in one component.
The algorithm proceeds in phases and rounds. Initially, each node is a king, and does
not know which of its links have crashed. At the end, all nodes are citizens except one
which is still a king. During each intermediate phase of the algorithm, each king tries to
expand its kingdom (a rooted directed tree) by attacking another kingdom. The attack
is carried out by a particular node: the warrior.
Each kingdom is a tree with two distinguished nodes: the king and the warrior. Each
king is assigned a level, initialized at zero. Each node p stores the identity king p and the
level level p of its king, as well as the label of the outgoing chord to its king and to its war-
rior. If a node is attacked, it stores the label of the incoming chord from which the attack
came. In the algorithm, each warrior p maintains a local view List_p of all the other processors
with the indication of which of them belong to the kingdom. An attack message is
a request message defined by a request status ReqStatus = (reqking, reqlevel, reqList)
which contains such a local view reqList.
Informally, the attack is carried out only by a warrior; the warrior will select randomly
an outgoing link which leads to another kingdom (one connected to a processor which does
not belong to its kingdom). It then attempts to transmit a REQUEST message on that
link. If the link is faulty, a failure detection signal will notify the warrior of such a situation
and the appropriate action (see below) will be taken; otherwise, the REQUEST message
will carry the attack to the other kingdom, as shown in Figure 2.
Figure 2: Local Failure Detection.
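A minimal sketch of this on-line detection, with our own names and a stubbed
transmission layer standing in for the system side of Figure 2 (the paper's
actual attempt procedure is given in its appendix):

#include <stdio.h>

enum chord_state { UNUSED, BRANCHED, FAILED };

/* Stub for the transmission subsystem: here the chords labelled 3 and 8 of
   this node are pretended to be faulty; in the real system the error signal
   comes from the communication layer. */
static int send_on_chord(int label, const char *msg)
{
    (void)msg;
    return (label == 3 || label == 8) ? -1 : 0;
}

/* The REQUEST message doubles as the first attempt on the chord. */
static int attempt(enum chord_state state[], int label, const char *request)
{
    if (send_on_chord(label, request) < 0) {
        state[label] = FAILED;   /* a fault is detected only upon use */
        return 0;                /* the warrior picks another unused chord */
    }
    return 1;                    /* the chord becomes branched later, when the attack is resolved */
}

int main(void)
{
    enum chord_state state[16] = { UNUSED };
    int labels[] = { 1, 3, 8 };
    for (int i = 0; i < 3; i++)
        printf("chord %d: %s\n", labels[i],
               attempt(state, labels[i], "REQUEST") ? "request sent" : "faulty, closed");
    return 0;
}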
The attacks by a kingdom follow a Depth First Search strategy. A state S r for
each chord is defined to specify if the chord is unused (initially), branched (is part of the
spanning tree) or failed (determined after an attempt of transmission). For each branched
chord a substate SubS r is introduced to specify if the chord is closed (is faulty or does
not lead to another kingdom), or still opened (the incident node has not been completely
explored and thus can lead to nodes which have not been reached yet). Initially, all non-faulty
chords are opened. The substate is used to control the backtracking by closing a subtree whose
visit has been completed. If a warrior j cannot reach any node outside the kingdom
(this is locally determined by the state of its incident links and the local view List_j),
then the warrior state, together with List_j, is backtracked to its parent and the chord
between them becomes closed. This strategy has the main advantage of limiting the amount
of backtracking after a combination compared to a Breadth First Search strategy. A
state transition diagram of a chord is shown in Figure 3(a). Each node saves the label
{W out , W in , K out } of the incident chord leading to warrior p , the warrior attacking p, and
king p respectively.
Define the status_p of node p as (level_p, king_p, List_p). Following a lexicographic total
order, we say that status_p > status_j iff:
- either (a) level_p > level_j,
- or (b) level_p = level_j and king_p > king_j.
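In code, this is an ordinary lexicographic comparison. The following sketch
uses our own field names and assumes integer identities, whereas the paper
only requires a totally ordered set ID (the List component does not enter
the order):

struct status { int level; int king; };

/* Returns >0 iff a > b in the order used by the promotion rule,
   0 iff they are equal, <0 otherwise. */
static int status_cmp(struct status a, struct status b)
{
    if (a.level != b.level) return (a.level > b.level) ? 1 : -1;
    if (a.king  != b.king)  return (a.king  > b.king)  ? 1 : -1;
    return 0;
}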
Our algorithm obeys two main rules:
Promotion Rule. A warrior p can only successfully attack a kingdom with status
less than its own. Let the attack by warrior p be successful. In case (a), each node in
the kingdom which lost is informed of the identity of the new king king p and updates its
level to level p (note that the value of level p is unchanged in the attacking kingdom). In
case (b), each node in the attacked kingdom receives the identity king p of the new king
and all nodes in both kingdoms increase their level by one (the level of a kingdom never
decreases). After a successful attack by a warrior p on a warrior j, the warrior of the new
kingdom is warrior j. We say that a processor enters a new round when its level changes
(i.e., when its kingdom has been defeated or when its kingdom successfully attacked a
kingdom of an identical level).
Asynchronous Rule (controls the number of messages during each phase): three
different cases are theoretically possible when an attack from a warrior p reaches a node
in another kingdom:
1. status p < status j : the warrior is not strong enough to attack this kingdom and,
thus, its attack fails: the message is killed and the attacking kingdom is just waiting
to get attacked.
2. status p > status j : the attack from p must be forwarded to warrior j . Any subsequent
attack by other kingdoms, if not killed, is delayed until this attack is resolved
at j (i.e., until j receives a new status).
When forwarding an attack, if node i on the path to warrior j has a greater status
(i.e., status i > status p ), the request is killed. This situation occurs when the
previously visited nodes have not yet been informed that they have become part of
a greater kingdom (i.e., the level has increased).
When the attack reaches warrior j, if it still has a lower status, then a surrender
message is sent back to warrior p and each node on the path waits for the new status.
3. status_p = status_j: as proved later, this case (i.e., an attack within the same kingdom)
cannot occur during the execution of the algorithm.
If warrior p receives a message of surrender, it broadcasts the new status to the absorbed
kingdom or to both kingdoms, depending on the promotion rule. The new local
view List is obtained by merging the two Lists. The initial local view is a list of n bits;
the list is initialized to 10^(n-1) (i.e., all bits are set to 0 except List[0], which is set to 1).
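A sketch of the local view and of the merge step, with our own names; note
that a view received from another node must first be re-expressed in the
receiver's coordinates (see the transpose sketch further below):

#include <string.h>

#define N 16                    /* size of the example network */

/* list[q] == 1 iff the node at ring distance q from this node is known
   to belong to the kingdom. */
static void init_view(unsigned char list[N])
{
    memset(list, 0, N);
    list[0] = 1;                /* 10^(n-1): only the node itself */
}

/* Union of the two kingdoms after a successful attack. */
static void merge_views(unsigned char mine[N], const unsigned char theirs[N])
{
    for (int q = 0; q < N; q++)
        mine[q] |= theirs[q];
}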
Concurrency. The number of concurrent incoming attacks in a kingdom must be
limited in order to guarantee a message complexity of O(n) for each round. A substate
Substate p for each node p is introduced to specify if the node is WaitingForSurrender (has
forwarded an attack message), is WaitingForStatus (has forwarded a surrender message
and is waiting for its new level), or is Regular (is ready to receive an attack). The state
transition diagram of a processor is shown in Figure 3(b).
Some substates are introduced to deal with two specific situations which may occur
due to the inherent concurrency of the model.
First of all, if a citizen j has forwarded an attack to warrior j a subsequent attack
with a greater status will be delayed (wait at j), but not killed (asynchronous rule 2).
Secondly, an incoming attack can be received before knowing that the kingdom has
already absorbed (or been absorbed by) another kingdom: the level may have increased.
In both cases, the citizen knows afterwards (when it receives the new status) if the
forwarded attack was successful. At this time, if the status of the forwarded attack is
smaller than the new received status, the attack will be killed; thus, the citizen can go
back to regular substate. Otherwise, the current attack status is still legal; thus, the
inhibition waiting substate must be kept.

Figure 3: State Transition Diagrams for (a) chords and (b) nodes.
Progress. The problem occurs if a warrior q receives a surrender message from a warrior
p when it is already engaged in a wait-for-status process from a warrior w (q has been
attacked by w while attacking p). Consistently with the asynchronous rule, the warrior
q has to wait for the new status of warrior w before it can send the new status to the
warrior p. The extreme case occurs if w is itself waiting for p (p has attacked w): a
deadlock situation (more complicated scenarios involving more nodes can be deduced). As proved
later in Theorem 2.1, the total lexicographic order on the status forbids the creation of
such waiting cycles.
Structural Information. The knowledge of the size of the network, the topology, a
globally consistent assignment of labels (or, labelings) to interconnection nodes and communication
links is used to reduce the communication cost. Since the loop network is a
node-symmetric graph (all its nodes are similar to one another), each node can represent
the other nodes by their relative distance along the cycle. This is actually available with
the edge labeling and can be used to pass the knowledge of the processors (represented by
their distances) that have been already reached: when node p_1 receives a message from
node p_2 through the incident chord labeled d_1, it can unambiguously "decode" the information
about other nodes contained in the message. Namely, if the message contains information
about the node linked to p_2 by a chord d_2, then this information refers to the node at
distance (d_1 + d_2) mod n from p_1 in the ring ordering. This fact will be used to determine
whether an unused chord (i.e., one on which no messages have been sent) is outgoing or not
(that is, connected to a different kingdom or not). This function combined with the local
view of a processor provides the message with a consistent representation of the kingdom
which can be passed from processor to processor. This decoding function corresponds to
a circular bit shift by the length of the chord, denoted as transpose (the exact code of
the function is given at the end of the algorithm).
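The paper's appendix contains the exact code; the following is only our own
sketch of the idea:

#include <stdio.h>

#define N 16

/* transpose: re-express a view received over the chord labelled d (at the
   receiver) in the receiver's coordinates.  The entry for the node at
   distance q from the sender describes the node at distance (q + d) mod N
   from the receiver, i.e. a circular shift of the list by d positions. */
static void transpose(const unsigned char in[N], unsigned char out[N], int d)
{
    for (int q = 0; q < N; q++)
        out[(q + d) % N] = in[q];
}

int main(void)
{
    unsigned char sender[N] = { 0 }, receiver[N];
    sender[0] = sender[3] = 1;        /* sender's kingdom: itself and a node at distance 3 */
    transpose(sender, receiver, 5);   /* the view arrives here over a chord labelled 5 */
    for (int q = 0; q < N; q++)
        if (receiver[q]) printf("node at distance %d belongs to the kingdom\n", q);
    return 0;
}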
Termination and Partitioning. The algorithm terminates when the kingdom includes
all nodes in its connected non-faulty subgraph. The determination of this event may
differ depending on whether the network is disconnected or not. Consider first the case
of a partitioned network. Once all reachable nodes have become part of the kingdom, the
king will become warrior (because of the backtracking inherent to the depth first search
strategy) and all its incident chords will be closed (there is no outgoing link towards a
node which does not belong to the kingdom). At this point, it will detect termination;
from its local view, it will also determine the size of its kingdom and that a disconnection
has occurred.
If the network is not disconnected, the termination detection can occur earlier: as
soon as a warrior determines, by its local view, that the kingdom includes all the nodes
in the network (the list is full, i.e., set to 1^n).
In both cases, the warrior (which is possibly the king) broadcasts along the tree the termination
message. Since this message contains the view of the warrior upon termination,
every node in the component can determine whether or not the graph is disconnected as
well as which other nodes are in this component. In the case of a disconnection, depending
on the application, the king can take the appropriate action.
An example of an attack is shown in Figure 5, where the kingdom K has a greater status
than the kingdom K′ (the corresponding loop network C_16⟨3, 8⟩ is shown in Figure 4).
The result of the successful attack is shown in Figure 6.
Messages Used:
- (REQUEST,Status): it is an attack by a warrior, and is forwarded to its adversary.
This message is also considered as the first ATTEMPT on the chord, and provides
the failure detection if the chord is faulty,
- (SURRENDER,Status): it is sent by a defeated warrior to inform the winner of its
success,
- (NEWSTATUS,Status): it is broadcast by the winner on the appropriate tree (depending
on promotion rule),
- (BRANCH): it is sent by a successful warrior on the chord connecting the two trees,
- (BACKTRACK,Status): it is sent by the warrior to its parent when all its chords
have been closed, that is when all the nodes reachable through this chord are part
of the kingdom or are faulty,
- (MOVEWARRIOR,Status): it is sent by the warrior to one of its opened chords
after a backtracking,
- (TERMINATION): it is broadcast by the sole remaining warrior of the connected
component to terminate the execution of the algorithm.
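The records implied by this list can be sketched as follows (field and type
names are ours, not taken from the paper's appendix):

#define N 16

enum msg_type { REQUEST, SURRENDER, NEWSTATUS, BRANCH,
                BACKTRACK, MOVEWARRIOR, TERMINATION, WAKEUP };

struct status {
    int           level;        /* reqlevel */
    int           king;         /* reqking  */
    unsigned char list[N];      /* reqList: local view, one entry per node */
};

struct message {
    enum msg_type type;
    struct status st;           /* carried by the message types that list a Status above */
};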
Any number of processors can spontaneously start the execution of the algorithm;
this is modeled by the reception of a WAKEUP message. The active components are
those where at least one processor spontaneously starts the algorithm (i.e., it receives a
WAKEUP message).
Figure 4: Kingdoms in C_16⟨3, 8⟩ (legend: king, request, branched).
2.2 Correctness
The protocol is fully asynchronous: the messages received by each processor and the
order in which each processor receives its messages depend on the initial input but are
non-deterministic. However, the algorithm is event-driven with messages processed in first-
in-first-out order; the order in which each processor processes its communication relies on
tree structures and on the asynchronous and progress rules.
The correctness follows after establishing the safety (a warrior never attacks a node of
its kingdom), the progress (eventually a tree spans all the nodes of a connected compo-
nent), and the appropriate termination (there is exactly one elected node in a connected
component of the network). In the following, numbers between parentheses refer to corresponding
sections of the algorithm in the Appendix.
Lemma 2.1 A request message is initiated by a warrior through an unused opened chord.
The request message only traverses citizen nodes and branched chords leading to the warrior
of the kingdom traversed.
Proof The warrior sends the request (if the attempt is successful) through an unused
opened arc (4, 5, 7, and procedure attempt at the end of the description of the algorithm).
A citizen (or king) can send a request only upon receipt of a request (1) to forward it to
its warrior through links labeled W_out, that is, a used chord of a citizen. □
Figure 5: Two Kingdoms K and K′ in C_16⟨3, 8⟩ (legend: request, branched, warrior, king).
Corollary 2.1 The status of a chord becomes used if a warrior has previously sent a
request through it, or if the chord has been detected as faulty.
Lemma 2.2 The local view List p at a warrior p represents exactly the list of processors
which belong to the kingdom of the warrior p.
Proof By induction. Clearly, this is true at the initialization when the local view is
set to 10^(n-1). Assuming the local view List_w at a warrior w is correct and complete before
an attack, the warrior modifies its view either after a successful attack (while receiving a
surrender message (8): the warrior becomes a citizen, combines the two views, and passes
the warrior privilege of the new combined kingdom to the defeated warrior) or after being
defeated (while receiving a newstatus message (7): it receives the view of the winning
kingdom, and by combination obtains the complete view of the merged kingdom). In
both cases, the new local view contains the exact list of processors of the new kingdom,
which proves the induction. □
Lemma 2.3 (Safety.) A warrior never attacks a node of its kingdom.
Proof As shown in Lemma 2.2, an attack can only be done upon receipt of a new
status which creates the new list of all the nodes which belong to the kingdom (7). All
the chords linked to these nodes are closed; any remaining unused chord, even randomly
chosen, leads to a processor of a different kingdom. Therefore, no cycle can be created in
the kingdom. □
Several facts and properties can be observed to clarify the correctness.
Fact 2.1 From (1) and the asynchronous rule, a waiting citizen, or king, does not process
request messages.
Figure 6: Result of the Attack in C_16⟨3, 8⟩ (legend: request, branched, warrior, king).
Fact 2.2 Eventually each node in a kingdom receives the status of its kingdom. Indeed,
at the end of any phase or after being defeated (8), the designated warrior broadcasts the
new status along the traversed chords.
Lemma 2.4 No waiting cycle of requests may be created.
Proof Immediate since sending a request does not change the regular state of the warrior
(7). Therefore, all the requests which wait on a non-regular node do not block the warrior
which has initiated them. □
Theorem 2.1 (Progress.) A deadlock may not be introduced by the waiting which arises
when some nodes must wait until some condition holds.
Proof The message sending is non-blocking. The only case for which a node is blocked
waiting for an event is when a warrior waits for a new status message after sending
a surrender (1). Similarly, such a surrender message can be deferred at the successful
warrior node if it has surrendered to another warrior attack (8). Repeating this setting, a
chain of waiting (on surrender) processors can occur. However, this chain cannot become
a circular wait: a surrender message is initialized only on a successful attack, that is when
the status of an attacking warrior j is strictly lexicographically larger than the status
of a defending warrior p. The total ordering on the status defined by the promotion
rule forbids such a waiting cycle of processors: status_j < ... < status_p < ... < status_j
contradicts the definition. □
Corollary 2.2 Eventually, no node is in a waiting substate.
Theorem 2.2 A kingdom is a rooted directed tree.
Proof By induction. Initially, each kingdom is a one node tree (0). The kingdom is
defined by the subgraph composed of the chords marked K out and their incident nodes, and is rooted at the king. It can also be defined by the subgraph composed of the chords marked W out and their incident nodes: in this case the tree is rooted at the warrior.
Following a successful attack, the chord connecting the two trees (the absorbing and
the absorbed ones) becomes part of the kingdom upon receipt of a NEWSTATUS message
initiated by the winner warrior and broadcast through the absorbed kingdom.
The outgoing chord to the king is stored in the K out label. The king has a nil value for
K out (0). A node (citizen and/or king (3), warrior (7)) changes its label K out only after
receiving a new status message announcing the absorption by another kingdom; in this
case K out is set to the incoming arc from which such a message is received. This change
of orientation guarantees that the tree is rooted at the new king. Note that a similar
observation can be repeated for the tree rooted at the warrior. 2
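The effect of this change of orientation can be illustrated with a small sketch (Python; the node names and the helper below are illustrative only, not part of the protocol): broadcasting the new status from the winning side and letting every node point its K out label back along the incoming arc yields a tree rooted at the broadcast origin.

from collections import deque

def reroot_by_broadcast(tree_edges, root):
    # tree_edges: undirected chords of the merged kingdom; root: the new king
    # (or the warrior, for the W out orientation).  Returns parent pointers.
    adj = {}
    for u, v in tree_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    parent = {root: None}            # nil K out at the king, as in rule (0)
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:      # first NEWSTATUS received wins
                parent[v] = u        # K out points back along the incoming arc
                queue.append(v)
    return parent

# Two one-edge kingdoms {a-b} and {c-d} merged through the chord b-c,
# with 'a' as the new king: every parent chain ends at 'a'.
print(reroot_by_broadcast([("a", "b"), ("c", "d"), ("b", "c")], "a"))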
Lemma 2.5 (Appropriate Termination.) The algorithm terminates with a forest of, at most, one rooted spanning tree for each connected component.
Proof By the safety Lemma 2.3, the progress Theorem 2.1, and Theorem 2.2. In each
connected component where at least one processor initiated the Election protocol, the
algorithm builds a rooted spanning tree. 2
The main Theorem is deduced:
Theorem 2.3 The algorithm correctly elects a leader.
Proof By Theorem 2.1, Theorem 2.2, and Lemma 2.5, the theorem holds. The Election
protocol is independently started by any subset of processors electing a particular node
in each active connected component (the king (10)). Each group of processors in a (partitioned or not) active component forms a consistent view (containing the exact list of reachable processors) with a single elected node: the king. Depending on the application, these distinctive elements can thus take the appropriate actions: e.g., promote themselves leader on a majority basis, wait for the recovery of the faulty components, simulate the non-faulty topology by embedding it into the active connected group, form a restricted (connected) working group, etc. 2
2.3 Analysis
The measure of efficiency analyzed here is the communication complexity (the number and size of messages sent).
Lemma 2.6 The number of rounds is at most log k for each kingdom, if k independent
nodes start the algorithm.
Proof By the promotion rule, based on a tournament, at most n/2^i nodes enter phase i (in fact, k/2^i if k independent nodes start the algorithm). The maximum number of rounds is the maximum value of the level of the winning kingdom, i.e., log k. 2
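The tournament argument can be checked numerically. The following sketch (Python, illustrative only) applies the worst-case pairing of the promotion rule and verifies that the number of rounds never exceeds log k.

import math

def rounds_needed(k):
    # k independent kingdoms start the algorithm; in each round every
    # surviving kingdom is, at worst, paired with one other kingdom of the
    # same level, so at most half of them (rounded up) survive.
    kingdoms, rounds = k, 0
    while kingdoms > 1:
        kingdoms = math.ceil(kingdoms / 2)
        rounds += 1
    return rounds

for k in (2, 3, 8, 100, 1000):
    assert rounds_needed(k) <= math.ceil(math.log2(k))
    print(k, rounds_needed(k))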
Corollary 2.3 The number of surrender messages sent by a warrior during a particular
execution is at most log k, if k independent nodes start the algorithm.
Lemma 2.7 For a given round and a given non-faulty chord l in a kingdom, at most two
requests will be transmitted through the chord l.
Proof For a given round and a given non-faulty chord l in a kingdom, a request passing
through this chord will face several possible outcomes:
1. The request is successful with an identical level: it will cause the round to increase in both kingdoms. Any forthcoming request with this previous level will be discarded at the incident node.
2. The request is successful with a different (i.e., larger) level: the level value is updated only in the absorbed kingdom. By Lemma 2.3, only requests sent by a different kingdom may occur. Another request with the same level will behave as described in case 1, limiting the number of such occurrences to two.
3. The request is unsuccessful: that is, the message has been killed further on the path to the warrior. This implies that the level has been increased by another attack, but the nodes incident on this chord do not know it yet. By the concurrency rule enforcing delay, only one other request can wait at the incident node and will be discarded when the newstatus arrives.
A similar argument can be used for a branched chord between two kingdoms. 2
Corollary 2.4 For a given round and a given non-faulty chord l in a kingdom, at most
two surrender (resp. new status) messages will be transmitted through the chord l.
More precisely,
Theorem 2.4 The total number of messages used by the algorithm does not exceed O(n log n).
Proof The number of messages of each kind is the following:
REQUEST: sent, at a given round, through at most n - 1 non-faulty chords (see Lemma 2.7). Hence, the total number of such request messages sent during the whole execution is bounded by 2 n log k.
SURRENDER: sent through a path in a kingdom only before a modification of its level. Hence, the total number of such messages sent during the whole execution is also bounded by 2 n log k.
NEWSTATUS: broadcast in the kingdom only to increase its level. Hence, the total number of such messages sent during the whole execution is also bounded by 2 n log k.
Sent on each branched chord of the kingdom, i.e., at most n - 1 messages.
Sent on a branched chord of the kingdom if the subtree cannot reach further nodes. Hence, the total number of such messages is bounded by the size of the spanning tree, i.e., at most n - 1.
Sent on each opened-branched chord of the kingdom if the node cannot reach further nodes. Hence, the total number of such messages is also bounded by the size of the spanning tree, i.e., at most n - 1.
TERMINATION: at most n - 1 messages.
Only seven different types of messages exist. The status is composed of: the identity of the king, whose value is at most m; the level, which takes at most log n values; and the List, which is an n-bit array. Therefore, the size of each message is at most n + log m + log log n bits.
Theorem 2.5 The algorithm has an optimal worst-case message complexity.
Proof Given a loop network C, let F(C) denote the set of possible combinations of link failures in C; clearly the cardinality of F(C) is 2^|E|, where E is the set of chords of C. Given f in F(C), denote by M(C, f) the number of messages required to solve the election problem in C when the failures described by f have occurred. Then, the worst
case complexity WC(C) to solve the election problem in C after an arbitrary number of
link failures is WC(C) = max_{f in F(C)} M(C, f) >= WC(R_n) = Omega(n log n), where n is the number of processors and R_n is the ring without bypass; the last equality follows from the lower bound of [6] on rings. 2
2.4 Sensitivity to Absence of Failures
The algorithm we have presented uses O(n log n) messages in the worst case, regardless
of the amount of faults in the system.
Consider now the case where no faults have occurred in the system and an Election is
required. If all the nodes had a priori knowledge of this absence of failures, then they could
execute an optimal Election protocol for non-faulty networks. In this case, depending on
the chord structure, a lower complexity (in some cases, O(n)) can be achieved [4, 18, 23,
24, 32]. However, to achieve this complexity, it is required that the absence of failures is
a priori known (more specifically, it is common knowledge [15]) to all processors.
Now we show how to achieve the same result without requiring this common-knowledge.
First observe that the existing optimal algorithms for election in non-faulty loop networks
use only a specific subset of the chords to transmit messages. The basic idea is quite sim-
ple. A processor "assumes" that its specific incident arcs are non-faulty. Based on this
assumption, it starts the corresponding topology-dependent optimal election algorithm
A. If a processor x detects a failure when attempting to transmit a message of protocol
A, x will start the execution of the algorithm proposed in Section 2. Thus, if there are no failures, algorithm A terminates using M_A messages; if there are failures, the overall cost of this strategy is M_A + O(n log n), which is O(n log n) since M_A is O(n log n).
The approach actually leads to a stronger result. To obtain the topology-dependent optimal bound M_A for the non-faulty case, it is sufficient that the chords used by A are fault-free.
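A sketch of this two-phase strategy follows (Python; the two callables and the exception are assumptions made for illustration, not part of either protocol): run the topology-dependent algorithm A and fall back to the fault-tolerant election of Section 2 as soon as a send on a supposedly non-faulty chord fails.

class ChordFailure(Exception):
    """Assumed to be raised by the optimal algorithm A when a transmission
    on one of its chords fails."""

def elect(run_optimal_A, run_fault_tolerant_election):
    # run_optimal_A: the topology-dependent optimal protocol, costing M_A
    # messages when no failures occur.
    # run_fault_tolerant_election: the O(n log n) election of Section 2.
    try:
        return run_optimal_A()
    except ChordFailure:
        # A failure was detected while running A; switch algorithms.
        # Overall cost: M_A + O(n log n), which is still O(n log n).
        return run_fault_tolerant_election()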
3 Extensions and Applications
We will consider in this section the election problem in a di#erent setting. In fact, we
study arbitrary networks with sense of direction in absence of faults. We show how the
previous results presented in this paper can be immediately used to prove the positive
impact that the availability of "sense of direction" has on the message complexity of
distributed problems in arbitrary fault-free networks.
3.1 Sense of Direction
The sense of direction refers to the capability of a processor to distinguish between
adjacent communication lines, according to some globally consistent scheme [12, 36]. For
example, in a ring network this property is also usually referred to as orientation, which
expresses the processor's ability to distinguish between "left" and "right", where "left" means the same to all processors. In oriented tori (i.e., with sense of direction), labelings "up" and "down" are added. The existence of an intuitive labeling based on the dimension provides a sense of direction for hypercubes [11]: each edge between two nodes is labeled on each node by the dimension of the bit of the identity in which they differ. Similarly, the
natural labeling for loop networks discussed in the previous section is a sense of direction.
For these networks, the availability of sense of direction has been shown to have some
impact on the message complexity of the Election problem.
In an arbitrary network, we define a globally consistent labeling on the links by extending
in a natural way the existing definitions for particular topologies. Fix a cyclic
ordering of the processors. The network has a distance sense of direction if at each processor
each incident link is labeled according to the distance in the above cycle to the
other node reached by this link. In particular, if the link between processors p and q is
labeled by distance d at processor p, this link is labeled by n - d at processor q, where n
is the number of processors. An example of sense of direction for an arbitrary network is
shown in Figure 7. Note that such a definition intrinsically requires the knowledge of the
size n of the network, and it includes as special cases the definition of sense of direction
for the topologies referred above: the oriented ring ("left" and "right" correspond to 1
and n - 1, respectively), the oriented complete networks (n set to the number of links
plus one), and the oriented loop network or circulant graph. Furthermore, in hypercubes,
this sense of direction is derivable in O(N) messages from the traditional one [11].
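The distance labeling can be computed directly from the cyclic ordering; the sketch below (Python, with made-up node names) also checks the complementary property that a link labeled d at p is labeled n - d at q.

def distance_labels(cycle, edges):
    # cycle: the processors in the fixed cyclic ordering.
    # edges: undirected links (p, q).  Returns labels[(p, q)] = d, the
    # distance from p to q along the cycle.
    n = len(cycle)
    pos = {p: i for i, p in enumerate(cycle)}
    labels = {}
    for p, q in edges:
        d = (pos[q] - pos[p]) % n
        labels[(p, q)] = d
        labels[(q, p)] = n - d
    return labels

cycle = ["p0", "p1", "p2", "p3", "p4"]
edges = [("p0", "p1"), ("p0", "p2"), ("p1", "p4")]
lab = distance_labels(cycle, edges)
assert all(lab[(p, q)] + lab[(q, p)] == len(cycle) for p, q in edges)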
3.2 Election in Fault-Free Arbitrary Networks
We now consider the impact of sense of direction on the message complexity of the Election
problem.
Figure 7: Arbitrary Network (a) with Sense of Direction (b)
It is obvious that every graph is a subset of the complete graph; that is, any arbitrary
network is an "incomplete" complete graph. Less obvious is the fact that:
Every arbitrary network with sense of direction is an "incomplete" loop network.
That is, every arbitrary network is a loop network where some edges have been removed.
This simple observation has immediate important consequences. It implies that an arbitrary graph with sense of direction is just a faulty loop network (compare Figure 1 and Figure 7): the missing links correspond to the faulty ones. Moreover, in this setting, every processor already knows which links are faulty (i.e., missing).
As a consequence, the algorithm described in Section 2 is also a solution to the election
problem in fault-free arbitrary graphs with sense of direction [25].
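Under this correspondence, a processor can derive from its own incident labels exactly which chords of the complete loop network it must treat as faulty; a minimal sketch (Python, assumed data layout) is given below.

def faulty_chords(n, incident_labels):
    # incident_labels[p] = set of distance labels of the links actually
    # present at processor p (positions 0..n-1 on the cycle).  The chords
    # of the complete loop network whose labels are missing are treated as
    # faulty links, so the algorithm of Section 2 can be run unchanged.
    return {p: set(range(1, n)) - incident_labels[p] for p in range(n)}

# A 4-node path 0-1-2-3 embedded in the cyclic ordering 0, 1, 2, 3:
labels = {0: {1}, 1: {1, 3}, 2: {1, 3}, 3: {3}}
print(faulty_chords(4, labels))   # node 0 treats chords 2 and 3 as faulty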
By Theorem 2.4, it follows that if there is sense of direction, a solution with O(n log n) messages exists for the Election problem. Since Omega(n log n) is a lower bound on the message complexity for the election problem in bidirectional rings with sense of direction [6], it follows that Omega(n log n) is also a lower bound on the general case. Thus, the bound is tight. In contrast, in arbitrary networks of n processors where the links have no globally consistent labeling (no sense of direction), Omega(m + n log n) messages, where m is the number of links, are required to elect a leader [35], and such a bound is achievable [13].
The importance of the result is that it shows the positive impact of sense of direction on the communication complexity of the Election problem in arbitrary networks, confirming the existing results for specific topologies. An interesting consequence of our result follows
when comparing it to those obtained assuming that each processor knows all the identities
of its neighbours [20, 22]. Namely, it shows that it is possible to obtain the same reduction
in message complexity requiring much less information (port labels instead of neighbour's
name).
4 Concluding Remarks
In this paper, we have presented a Theta(n log n) solution for the Election problem in loop networks where an arbitrary number of links have failed and a processor can only detect
the status of its incident links. If the network is not partitioned, the algorithm will detect
it and will elect a leader. In case the failures have partitioned the network, a distinctive
element will be determined in each active component and will detect that a partition
has occurred; depending on the application, these distinctive elements can thus take the
appropriate actions. Moreover, the algorithm is worst-case optimal.
All previous results have been established only for complete graphs and have assumed an a priori bound on the number of failures. No efficient solution has yet been developed for arbitrary circulant graphs when failures are bounded but undetectable.
Our result is quite general. In fact, our algorithm can be easily modified to solve the
Election problem with the same complexity for fault-free arbitrary networks with sense
of direction.
--R
Election in asynchronous complete networks with intermittent link failures.
Analysis of chordal ring.
Distributed loop computer networks: a survey.
New lower bound techniques for distributed leader finding and other problems on rings of processors.
Doubly link ring networks.
Designing fault-tolerant systems using automorphisms
Impossibility of distributed consensus with one faulty process.
Optimal elections in labeled hypercubes.
Sense of direction: formal definition and properties.
A distributed algorithm for minimum spanning tree.
Electing a leader in a ring with link failures.
Knowledge and common knowledge in a distributed environ- ment
Optimal distributed t-resilient election in complete networks
A distributed spanning tree algorithm.
Towards optimal distributed election on chordal rings.
A distributed election protocol for unreliable networks.
A modular technique for the design of e
Tight lower and upper bounds for a class of distributed algorithms for a complete network of processors.
A fully distributed (minimal) spanning tree algorithm.
Election in complete networks with a sense of direction.
Optimal distributed algorithms in unlabeled tori and chordal rings.
On the impact of sense of direction in arbitrary networks.
Optimal fault-tolerant leader election in chordal rings
Tolerance of double-loop computer networks to multinode failures
Optimal fault-tolerant distributed algorithms for election in complete networks with a global sense of direction
Distributed Systems.
On reliability analysis of chordal rings.
Comments on tolerance of double-loop computer networks to multinode failures
Reliable loop topologies for large local computer networks.
On the message complexity of distributed problems.
Sense of direction
Optimal asynchronous agreement and leader election algorithm for complete networks with byzantine faulty links.
Leader election in the presence of link failures.
A multiple fault-tolerant processor network architecture for pipeline computing
Design of a distributed fault-tolerant loop network
Faults and Fault-Tolerance in Distributed Systems: the Election problem
Election on faulty rings with incomplete size information.
--TR
--CTR
Paola Flocchini , Bernard Mans , Nicola Santoro, Sense of direction in distributed computing, Theoretical Computer Science, v.291 n.1, p.29-53, 4 January | fault tolerance;loop networks;interconnection networks;leader election;sense of direction;distributed algorithms |
279154 | Logic Testing of Bridging Faults in CMOS Integrated Circuits. | We describe a system for simulating and generating accurate tests for bridging faults in CMOS ICs. After introducing the Primitive Bridge Function, a characteristic function describing the behavior of a bridging fault, we present the Test Guarantee Theorem, which allows for accurate test generation for feedback bridging faults via topological analysis of the feedback-influenced region of the faulty circuit. We present a bridging fault simulation strategy superior to previously published strategies, describe the new test pattern generation system in detail, and report on the system's performance, which is comparable to that of a single stuck-at ATPG system. The paper reports fault coverage as well as defect coverage for the MCNC layouts of the ISCAS-85 benchmark circuits. | Introduction
In the search for increased quality of integrated circuits, manufacturers must ensure that shipped
parts are actually good. To do this, manufacturers must test for the defects that are likely to occur.
Shen, Maly, and Ferguson have performed defect simulation experiments showing that the majority
of spot defects in MOS technologies cause shorts and opens [13, 23], and Feltham and Maly have
shown that the majority of spot defects in current MOS technologies cause changes in the circuit
description that result in shorts [11].
The single stuck-at fault model was adopted because it is powerful and simple, but it was never
meant to represent the manner in which circuits behave in the presence of defects. A test set
that detects 100% of single stuck-at faults may not detect a high percentage of the manufacturing
defects. Ferguson and Shen reported that complete single stuck-at test sets failed to detect up to
10% of the probable shorts in the circuits they examined [13].
The need for tests that detect the electrical behavior exhibited by shorts requires a bridging fault
model. The first step to generating bridging fault tests is to decide which of the approximately n^2/2 potential bridging faults to target (where n is the number of nodes in the circuit). Also necessary
is a theoretical foundation for bridging fault simulation and test generation that is simple, general,
and easily incorporated into current automatic test pattern generation (ATPG) systems, as well as
implementation techniques that will take the theoretical vision to a complete and accurate system
comparable in efficiency to single stuck-at ATPG systems. The rest of this paper will describe a
theoretical foundation and a practical system that meets these needs.
The next section will define crucial terms and briefly review the differences between the single
stuck-at fault model and the bridging fault model, and it will describe the work of previous
researchers. Section 2 will present the theoretical foundation that makes the implementation possi-
ble. After describing the implementation in detail in Section 3 and reporting on its performance in
Section 4, Section 5 will finish by summarizing the new work and describing interesting problems
that remain open.
1.1 Definitions and Terms
A faulty circuit is an isomorphic copy of an associated fault-free circuit except for the introduction
of a change known as a fault. Some input combinations, when applied both to the fault-free circuit
and to the faulty circuit, will produce identical outputs: in this case the input combination does not
produce a logic error on a circuit output, and it is not a logic test for the introduced fault. If there
is no input combination that produces an error, the fault, considered in isolation, can never change
the logic function of the circuit: in this case the fault is logically undetectable (it is a redundant
fault).
In the popular stuck-at fault model, it is assumed that a circuit becomes faulty because a wire
has lost its ability to switch values; the wire is stuck high (stuck-at 1) or stuck low (stuck-at 0). If
this wire has a permanent value of 0 in the faulty circuit, and an input set causes the corresponding
wire in the fault-free circuit to take on the value 0, the input set will create no fault effect. Any
wire that has a different value in the faulty circuit and the fault-free circuit carries an error, or has
been activated, but this may not cause an error at a circuit output. If the input set produces an
error on a circuit output, the activated fault has been propagated to a circuit output. A successful
test must activate and propagate a fault.
Defects are fabrication anomalies. This paper is concerned with local defects-defects affecting
only a small portion of the IC. Local or spot defects are often the result of specks of contaminants on the IC or photolithography during manufacturing. The way a defect affects the circuit's behavior
is a fault. It is common for local defects to cause a circuit to behave as if the outputs of two
gates, which are not connected in the fault-free circuit, are connected. This model of faulty circuit
behavior is the bridging fault model [19].
Changes in behavior can be detected as changes in logical function, excess propagation delay,
or excess quiescent power supply current (or any combination). This paper is primarily concerned
with faults that cause changes in the logical function of the circuit, but bridging fault detection by
monitoring excess quiescent power supply current (I DDQ testing) is an important adjunct to logic
testing [2]. I DDQ tests for bridging faults are easy to generate but expensive to apply. The results
in Section 4 suggest that it would be appropriate to produce I DDQ test patterns for the bridging
faults that are either proved untestable or are aborted. This would provide a small number of I DDQ
tests that would significantly increase the percentage of tested defects without the cost of the time
on the tester that would be necessary to provide I DDQ tests for all bridging faults.
A combinational test for a bridging fault shares the same basic characteristics as a test for
a stuck-at fault. To introduce an error, the bridge value must be different from one of the gate
outputs in the fault-free circuit; to propagate the error, at least one path of fault effects from the
bridge to a circuit output must exist. However, the process of activating and propagating the fault
is complicated by the possibility of feedback.
If a bridging fault creates a feedback loop, a formerly stable combinational circuit may oscillate
or take on sequential characteristics that mask the detection of the fault. It is possible to detect
some feedback bridging faults that create sequential behavior with sequences of test vectors [19],
but in this case extensive analysis may be required to ensure not only that the feedback element
can hold state, but that it is guaranteed to hold state. It can be dangerous to assume that the state
element introduced by the fault will achieve a stable digital value. As reported by Abramovici and
Menon [1], the vast majority of feedback bridging faults can be detected with single combinational
tests.
When discussing feedback bridging faults, it is useful to refer to the two bridged wires by their
locations in the circuit. Given any path that goes from a circuit input to a circuit output and
contains the two bridged wires, the back wire is the wire closest to the circuit inputs on this path,
and the front wire is the other bridged wire.
1.2 Previous work
When the idea of test generation for bridging faults was new, the assumption that the bridging faults
caused wired-AND or wired-OR behavior was good. In the dominant technologies of the time (such
as TTL), bridging faults did create wired logic. Abramovici and Menon detailed complete theories
and techniques to perform ATPG on bridging faults (including bridging faults that introduce feedback) in combinational circuits exhibiting wired-logic behavior [1]. However, wired-logic does
not accurately reflect the behavior of bridges in static CMOS circuits [4, 12, 20].
The wired-logic model (wired-AND or wired-OR) is the easiest model to implement for simulation
and test pattern generation; with the exception of feedback, the wired-logic model is almost
as easy for an ATPG system to deal with as the single stuck-at model. A more exact model would
assume that the circuit value at the fault site is described in general by a Boolean function of
the inputs to the gates driving the bridged wires. This function could be derived in a number of
ways-two notable methods are analog simulation [12, 22] and the voting model [3, 4].
Deriving the Boolean function by simulating the two components with the bridged outputs
works well at modeling the upstream components from the fault site, but fails to take into account
the possible sensitive behavior of downstream components. An optimistic model assumes that the
bridge value is always digitally resolvable (in which case the model might not always be correct). A
pessimistic model describes the fault behavior with an incomplete Boolean function, where some of
the bridge's behavior falls within a gray region within which the model fails to give an answer [12].
Both of these approaches have been implemented in bridging fault simulators and test pattern
generators [12, 20].
A more general model assumes that the analog behavior induced by the fault extends for a
certain distance beyond the fault site, after which the circuit behavior is digitally resolvable. The
idea that a bridge voltage can be interpreted differently by different downstream gates is known as
the Byzantine Generals Problem for bridging faults [5]. The EPROOFS simulator [14] implements
this via mixed-mode simulation, where a SPICE-like analog simulation of the region around the
fault site is incorporated into a digital simulation of the rest of the circuit. This method provides
correct answers when previous models might have failed, in particular for many cases involving
feedback bridging faults. EPROOFS results are promising, but EPROOFS is slow compared to
stuck-at fault simulation, and the use of a mixed-mode simulator precludes adaptation of the
technique for test pattern generation. Although EPROOFS is much more accurate than previous
simulators, it still may make errors when accurately predicting the behavior of the faulty circuit
requires a timing analysis of the digital logic. There are faster simulators that do EPROOFS-like
simulation, although they sacrifice some accuracy for speed [18, 22]. There is currently no test
pattern generator that implements such sophisticated models.
A feedback bridging fault may create an asynchronous sequential circuit in a formerly combinational
network. The state of the circuit may prevent stimulation of the fault, or a stimulated fault
may cause oscillation, which may prevent a tester from detecting an error at the circuit outputs.
Feedback faults cannot be ignored as they can comprise a sizable percentage of realistic bridging
faults. Between 10% and 50% of the realistic bridging faults for the MCNC layouts of the ISCAS-85
circuits are feedback bridging faults.
Most approaches to generating tests for feedback bridging faults check for tests invalidated by
oscillation or sequential behavior by analyzing the inversion parity between the two bridged wires
[1, 20]. Because of reconvergent fanout, the inversion parity may change from one input vector to
the next. This means that the inversion parity must be recalculated for every input vector, which
is inefficient. Previous successful bridging fault test pattern systems-notably that of Millman and
Garvey [20]-generate a test as if there is no feedback and then check to make sure that feedback
will not invalidate the test. This can be wasteful: a fault that is undetectable because of feedback
could have numerous legitimate tests unless feedback is taken into consideration. It is much more
efficient to consider feedback as part of the test generation process.
The next section describes the theoretical foundation for the Nemesis ATPG system. Nemesis
incorporates arbitrary logical behavior of bridged components via the primitive bridge function
and prevents feedback complications during test simulation and generation via the Test Guarantee
Theorem.
2 Theoretical Foundations
Realistic faults have historically been unpopular candidates for test pattern generation. Modeling
the behavior of realistic faults frequently requires the circuit to be treated as an electrical entity
rather than a logical one; this is not amenable to standard test generation techniques. This section
will describe the theoretical foundation for a practical realistic bridging fault ATPG system.
2.1 The Primitive Bridge Function
A bridging fault transforms a portion of the circuit around the bridged wires into a single fault
block in the faulty circuit. The extent of the circuit replaced by the fault block is a question of
the sophistication of the bridging fault model to be used for test pattern generation. The fault
block can range from being a replacement for only the two gates with bridged outputs to being
a replacement for the two gates with bridged outputs as well as many downstream gates (and
perhaps even gates lying along any possible feedback paths). Figure 1 shows how a bridging fault
between the outputs of two NAND gates can create a simple two-component fault block in the
faulty circuit, and Figure 2 shows a more inclusive fault block for the same fault that will do a
better job of modeling varying logic thresholds of downstream gates.
Figure 1: A bridging fault between X and Y creates a simple fault block.
Figure 2: The same bridging fault between X and Y creates a more general fault block.
The function of the fault block depends on its size and on the behavior of the bridged components
in the chosen technology. The characteristic function of the fault block is the Primitive Bridge
Function or PBF. The PBF can be specified as a truth table or other Boolean representation.
Table 1 shows three possible PBFs for the introduced fault block from Figure 1. The column
labeled ZWAND shows the fault block output if the technology in question follows the wired-AND
model, the column labeled ZWOR shows the fault block output if the technology in question follows
the wired-OR model, and the one labeled Z SPICE shows the fault block output derived from circuit
analysis of the CMOS standard cell components from the MCNC library.
This analysis of the two cells driving the bridge to create the PBF is known as two component
simulation. Depending on the accuracy required, the fault block may actually have to replace more
than two components; it may need to include downstream gates in order to make sure that the
outputs of the fault block are digitally resolved [5]. Two-component simulation can also model
arbitrary bridge resistance values by treating discrete bridge resistances as separate faults.
For bridging faults that do not introduce any feedback, the output of the PBF is computed
with wire values from the fault-free circuit. As presented in the next Section, the PBF for bridging
faults that do introduce feedback is computed twice: once with fault-free circuit values, and once
with feedback-influenced values.
Table 1: PBFs from wired-AND, wired-OR, and SPICE simulation of MCNC cells
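The PBF of this fault block is simply a four-input Boolean function. The sketch below (Python) tabulates the wired-AND and wired-OR columns for the bridge between the two two-input NAND gates of Figure 1; a SPICE-derived column such as Z SPICE in Table 1 would be filled in from circuit-level simulation of the bridged cells rather than computed, so it appears only as a placeholder lookup table here.

from itertools import product

def nand(x, y):
    return 1 - (x & y)

def pbf_wired_and(a, b, c, d):
    # Bridge value under the wired-AND model: AND of the two driven outputs.
    return nand(a, b) & nand(c, d)

def pbf_wired_or(a, b, c, d):
    # Bridge value under the wired-OR model: OR of the two driven outputs.
    return nand(a, b) | nand(c, d)

# A SPICE-derived PBF would be stored the same way, but its entries come
# from analog simulation of the bridged cells (values here are placeholders).
pbf_spice = {inputs: None for inputs in product((0, 1), repeat=4)}

for a, b, c, d in product((0, 1), repeat=4):
    print(a, b, c, d, pbf_wired_and(a, b, c, d), pbf_wired_or(a, b, c, d))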
2.2 The Test Guarantee Theorem
Figure
3 shows a feedback bridging fault with the potential for oscillation when using the SPICE-
derived PBF from Table 1. In fact, we know that this circuit, implemented with the MCNC cell
library, will not oscillate for any set of inputs because the feedback path is too short. Instead, the
bridge will settle to an intermediate voltage favoring the back wire's fault-free value. This result
is not predicted by the PBF for the bridge and is dependent on the length of the feedback path.
The actual behavior of the bridge in this situation is immaterial: because the PBF does not model
the behavior, we cannot reliably use it for detection of the fault. When the circuit
has the potential for oscillation. Figure 4 shows a feedback bridging fault with the potential for
a test being invalidated because of a previous state. For example, when and the
SPICE-derived PBF from Table 1 is used, the outputs of the faulty circuit would be different if the
feedback loop had a previous value of 0 than if it had a previous value of 1. In this case, if input X
is set to 0, the feedback path is broken, and no previous state could invalidate a test.
Figure 3: A feedback bridging fault that might oscillate
Figure 4: A feedback bridging fault that may hold state
Figure
3 illustrates a situation in which, for certain input values, the back wire will not affect
the value on the front wire in the fault-free circuit but will affect the output of the fault block in the
faulty circuit. Using the SPICE-derived PBF from Table 1, the potential test should
be rejected because it may cause the circuit to oscillate, but if the PBF was for the wired-AND
model, the circuit could never oscillate.
The method for preventing oscillation is the same as the method for preventing sequential
behavior-if an error can be propagated from the back wire without altering the inputs to the PBF
such that the PBF changes the value on the bridge, then neither oscillation or sequential behavior
will prevent a test regardless of which wire carries the fault. This observation leads to:
The Test Guarantee Theorem for feedback bridging faults. If a test creates a situation
in which the result of propagating either Boolean value from the back wire causes the PBF to assign
the same value to the bridge, the test will not be invalidated because of feedback.
Given that the PBF correctly models the behavior of a bridge in the absence of feedback,
the PBF can be guaranteed to correctly model the behavior of a bridge in the
presence of feedback only when the feedback does not influence the result of the PBF
computation. Since the fault-free circuit is acyclic, the sole source of feedback in the
faulty circuit is the back wire of the bridge. If the value on the back wire does not
affect the result of the computation of the PBF, then no source of feedback can affect
the result of the computation of the PBF, and the PBF correctly models the behavior
of the bridge. 2
Like the wired-logic theorems of Abramovici and Menon [1], the Test Guarantee Theorem
requires that the feedback loop created by the bridge be broken. But unlike the theorems of
Abramovici and Menon, this requirement may not be satisfied simply by stipulating that the back
wire not sensitize the front wire in the fault-free circuit; the back wire must not be allowed to
sensitize the output of the fault block. If the PBF in use is wired-AND, the new theorems will
agree with the Abramovici and Menon theorems; if the PBF in use is more complicated, the
theorem provides additional accuracy. Enforcing the additional constraints imposed by the Test
Guarantee Theorem involves an analysis of the feedback-influenced region of the circuit. A wire is
feedback-influenced if it is on any path between the two bridged wires.
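For a combinational netlist viewed as a directed acyclic graph, the feedback-influenced wires can be found with two reachability passes; the sketch below (Python, with an assumed fanout-dictionary representation of the circuit) intersects the fan-out cone of the back wire with the fan-in cone of the front wire.

def reachable(graph, start):
    # graph: dict mapping a wire to the list of wires it drives (its fanout).
    seen, stack = set(), [start]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(graph.get(w, []))
    return seen

def feedback_influenced(graph, back, front):
    # Wires lying on some path from the back wire to the front wire.
    downstream_of_back = reachable(graph, back)
    reverse = {}
    for u, vs in graph.items():
        for v in vs:
            reverse.setdefault(v, []).append(u)
    upstream_of_front = reachable(reverse, front)
    return downstream_of_back & upstream_of_front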
If an error is to be propagated from the back wire, the feedback influenced region is a subsection
of the faulty region, shown in Figure 5. Analysis of the region consists of applying the PBF to
faulty circuit values as well as fault-free circuit values and making sure that the results of the
two PBF computations agree. If an error is to be propagated from the front wire, the feedback
influenced region is disjoint from the faulty region, as shown in Figure 6. Analysis of the feedback
region involves propagating the complement of the fault-free value of the back wire, and applying
the PBF to the resulting values.
Oscillation and sequential behavior do not need to be prevented by performing a check after
test generation. Instead, independence of previous state and the absence of oscillation can be
established as a requirement for test generation. Because the method of preventing oscillation and
sequential invalidation are the same, there is no need for an analysis of the inversion parity between
the bridged wires.
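Expressed as a check, the theorem only requires that the PBF output be insensitive to the value propagated from the back wire. The fragment below (Python) assumes a helper eval_pbf_inputs(back_value) that recomputes the fault block's input tuple with the back wire forced to the given value and the feedback-influenced wires updated accordingly; both the helper and the table representation of the PBF are illustrative assumptions, not part of Nemesis.

def satisfies_tgt(pbf, eval_pbf_inputs):
    # pbf: mapping from the fault block's input tuple to the bridge value.
    # The test is not invalidated by feedback when propagating either
    # Boolean value from the back wire leaves the PBF output unchanged.
    return pbf[eval_pbf_inputs(0)] == pbf[eval_pbf_inputs(1)]

A test generator can impose this equality as an additional constraint, which is how the check becomes part of test generation rather than a post-processing step.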
Figure 5: Error on the back wire: the feedback region is a subset of the faulty region.
3 Implementation
Carafe, an Inductive Fault Analysis tool, produces a list of realistic bridging faults-bridging faults
that could be caused by a single defect connecting two gate outputs. Carafe considers the layout of
the circuit and lists the nodes that are adjacent on the same conducting layer of the circuit or that
cross each other on layers separated by a single layer of insulating material[15, 16]. This paper is
only concerned with Carafe-extracted faults in the interconnect: shorts involving internal cell lines
can also be extracted by Carafe, and they present interesting problems [7], but they are beyond
the scope of this paper.
Previously, bridging fault ATPG was thought to be unwieldy because of the number of feasible
bridging faults and the complexity of the bridging fault model. While the number of possible bridging
faults is O(n 2 ) where n is the number of nodes in the circuit, the number of realistic bridging
faults is a much more manageable O(n) [2]. Also, if the PBFs are derived from two component
simulation, the number of different PBFs needed for fault block analysis is not prohibitive because
only one PBF is needed for each type of fault block (and the number of different types will be
small for synthesized layouts). Section 4 will compare numbers of stuck-at faults, realistic bridging
faults, and two-component PBFs for the MCNC layouts of the ISCAS-85 circuits [6].
Carafe reports the likelihood of occurrence for each fault it extracts. This likelihood indicates
how likely the fault is to occur relative to all of the other faults in the list. This means that the
ATPG system can report not only what percentage of the realistic bridging faults are tested, but
what percentage of the probable bridging defects are tested. The defect coverage should be much
more indicative than the fault coverage when it comes to relating test quality to defects per million
parts shipped (DPM) [25].
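Defect coverage is then just the likelihood-weighted counterpart of fault coverage; a minimal sketch (Python, assuming one (likelihood, detected) pair per extracted fault):

def coverages(faults):
    # faults: list of (likelihood, detected) pairs, one per extracted fault.
    total_weight = sum(w for w, _ in faults)
    detected = [(w, d) for w, d in faults if d]
    fault_cov = 100.0 * len(detected) / len(faults)
    defect_cov = 100.0 * sum(w for w, _ in detected) / total_weight
    return fault_cov, defect_cov

print(coverages([(0.5, True), (0.3, True), (0.2, False)]))  # roughly (66.7, 80.0)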
After Carafe determines the realistic bridging faults, SPICE simulation is used to determine
the PBF for each fault, and then the Nemesis ATPG system[17] generates tests. Figure 7 shows
the organization of the total system.
Figure 6: Error on the front wire: the feedback region is disjoint from the faulty region.
Figure 7: System organization
3.1 Simulator
Unlike the bridging fault simulator of Abramovici and Menon [1], for which pseudocode is given in Figure 8, the Nemesis method of bridging fault simulation, for which pseudocode is given in Figure 9, does not associate bridging faults with wires; instead, wires are tagged with Boolean
values representing whether or not an error can be propagated to a primary output [8]. After
attempting to propagate an error from a wire, a field in the wire's data structure is set to reflect
the success or failure of the propagation. If a bridging fault further down the fault list introduces
an error onto the same wire, it can immediately be determined whether or not the fault can be
propagated.
Nemesis bridging fault simulation is modeled after the Parallel Pattern Single Fault Propagation
(PPSFP) simulator of Waicukauski et al. [24]. Note that, given the PBF for a bridge, the bridge
value for each of the parallel patterns is evaluated in the same fashion as that of any other gate (each
of which can perform an arbitrary combinational function). In parallel bridging fault simulation,
faults can be propagated from both of the wires involved in the bridge at the same time. The fault
block does not introduce an error on both of the wires for any input pattern: the error is always on
one wire or the other. This means that each bit-slice in the pair of faulty and fault-free wire values
may represent an error on one wire or the other, but not both. A wire is placed on the simulation
Simulate the fault-free circuit with test vector T
foreach wire (W) involved in a bridging fault
    if T detects a stuck-at fault on W
        foreach bridging fault (BF) associated with W
            if the PBF for BF places an error on W and meets the TGT
                Accept test T: BF is detected
            else
                test T does not detect BF from wire W
Figure 8: Pseudo-code for Abramovici and Menon bridging fault simulation
Simulate the fault-free circuit with test vector T
foreach bridging fault (BF)
    if the PBF for BF places an error on a wire (W) and meets the TGT
        if a previous simulation of a fault on W can be used
            use previous simulation data
        else
            simulate fault on W introduced by fault block
        if the fault introduced by the fault block is detectable
            accept test T for this fault: BF is detected
        record results of simulation of fault on W for future use
Figure 9: Pseudo-code for Wire Memory bridging fault simulation
event queue if its faulty and fault-free values differ in any bit-position-regardless of whether the
difference represents a value propagated from the fault on the first wire or the second wire. If the
two bridged wires share a significant number of downstream components, the number of individual
component simulations can be as little as half the number required by simulators that associate
bridging faults with wires.
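Two ingredients of this scheme, packing a block of patterns into machine words and remembering per wire whether an error on that wire reaches an output, can be sketched separately from the full simulator. In the fragment below (Python, using integers as bit-vectors; the cone-simulation helper and the data layout are assumptions), a second bridging fault that places an error on an already-analyzed wire reuses the cached propagation mask.

# A block of test patterns is packed into one integer and simulated at once;
# wire_memory caches, per wire and per pattern block, the bit-mask of
# patterns for which an error injected on that wire reaches a primary output.
wire_memory = {}

def propagate_mask(simulate_cone, wire):
    # simulate_cone(wire) is an assumed helper that complements `wire`,
    # re-simulates only its fan-out cone against the stored fault-free
    # values, and returns the bit-mask of patterns whose outputs change.
    if wire not in wire_memory:
        wire_memory[wire] = simulate_cone(wire)
    return wire_memory[wire]

def detected_patterns(activation_mask, error_wire, simulate_cone):
    # activation_mask: patterns for which the PBF places an error on
    # `error_wire` and the Test Guarantee Theorem holds.  A pattern detects
    # the bridging fault when the error is both created and propagated.
    return activation_mask & propagate_mask(simulate_cone, error_wire)

def new_pattern_block():
    wire_memory.clear()   # the cache is only valid within one pattern block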
For purposes of comparison, we implemented not only the new method of bridging fault simu-
lation, but also the method of Abramovici and Menon, and we used each of them in the Nemesis
ATPG system. Comparing the two simulation methods, there are a number of reasons for the
greater success of the Wire Memory method. The ability to abort simulations when no errors are
moving forward, which can only be done in the Wire Memory method, saves a great deal of time.
Also, the data structures needed for the Wire Memory method were easily integrated into a system
(such as Nemesis) that treats many different types of faults (such as bridge, I DDQ , stuck-at, and
delay) in a similar fashion. Data structure manipulation in the older method is more complex
because each fault appears twice (once for each wire that may carry an error).
3.2 Test pattern generator
There are two types of feedback faults: given a fault, if every path from the back wire to a
primary output goes through the front wire, the fault is a feedback fault with no fanout; if some but not all of the paths from the back wire to a primary output go through the front wire, the fault is a feedback fault with fanout. It is a consequence of the Test Guarantee Theorem that a feedback fault with no fanout can only be detected with the error placed on the front wire. Pseudo-code for the feedback bridging fault test generator is shown in Figure 10.
foreach feedback bridging fault (FBF)
    front wire fault-free = 0
    if test generation is successful (sequential behavior must be prevented via the TGT)
        FBF is covered, move to the next fault
    else front wire fault-free = 1
        if test generation is successful (sequential behavior must be prevented via the TGT)
            FBF is covered, move to the next fault
        else if FBF is a feedback fault with no fanout
            FBF is undetectable, move to the next fault
        else back wire fault-free = 0
            if test generation is successful (oscillation must be prevented via the TGT)
                FBF is covered, move to the next fault
            else back wire fault-free = 1
                if test generation is successful (oscillation must be prevented via the TGT)
                    FBF is covered, move to the next fault
                else FBF is undetectable
Figure 10: Pseudo-code for ATPG for bridging faults that may induce feedback
Each attempt to generate a test for a bridging fault enforces constraints on fault-free values for
all wires, on faulty values for wires in between the bridge and a circuit output, and, for feedback
bridging faults, on feedback-influenced values [9].
Figures 11 through 14 show a sample bridging fault and demonstrate how Nemesis will show that
there is no test that will detect the fault. Notice that the inversion parity between the back and front
wire can change depending on circuit input values. This makes it crucial to identify both potential
oscillation and sequential behavior for the same fault. The Nemesis ATPG system uses Boolean
satisfiability, so constraints are not enforced in a particular order [17]. However, for illustration of
each of the four test generation attempts, first initial constraints are shown, then derived activation,
justification, and propagation values, and finally-if they are required-constraints having to do
with the feedback-influenced values.
First, Figure 11 shows the attempt to generate a test such that the fault-free value of the front
wire is 0 and the fault block output is 1. The first drawing shows the constraints imposed by the
values in the attempted test, and the second drawing shows the direct implications of these values,
including the value that the application of the PBF to the fault-free circuit values would place on
the bridge (the value shown on the dashed line in the illustration). It is not possible to generate
a test because the fault-free circuit values cause the PBF to assign a 0 to the bridge, and the first
attempt requires a test with a 1 on the bridge. Similarly, Figure 12 shows the attempt to generate
a test such that the fault-free value of the front wire is 1 and the fault block output is 0. Once again
the first drawing shows the constraints imposed by the values in the attempted test, and the second
drawing shows the direct implications of these values, including the value that the application of
the PBF to the fault-free circuit values would place on the bridge. It is not possible to generate a test here because the fault effect cannot be propagated through the final NAND gate.
Figure 11: Front wire stuck-at 1
Figure 12: Front wire stuck-at 0
The test cannot be generated with an error on the front wire. The test generation process must
continue because the fault is a feedback with fanout fault, and the fault effect can be propagated
to either circuit output using paths that do not include the front wire.
Figure 13 shows the attempt to generate a test such that the fault-free value of the back wire
is 0 and the fault block output is 1. Once again the first two drawings show the initial constraints
and the direct implications of the constrained values, and the added third drawing shows the results
of applying the PBF the second time to the feedback-influenced values, as required by the Test
Guarantee Theorem. This second PBF application causes the fault block output to change, which
causes the test to be rejected. Figure 14 shows the attempt to generate a test such that the fault-free
value of the back wire is 1 and the fault block output is 0. The three drawings are analogous to
those in Figure 13, and again a test cannot be found because, just as in Figure 13, the circuit has
the potential for oscillation. Because each of the four categories of potential tests for this bridging fault is unworkable, the fault is untestable for the given PBF.
Figure 13: Back wire stuck-at 1
The Test Guarantee Theorem fits into an ATPG framework elegantly because it allows the
check for feedback or sequential invalidation to occur as a requirement of test generation and not
as a postprocessing consistency check.
4 Experimental Results
This section presents the results for the UCSC system for testing bridging faults. The two-component
PBFs were obtained by SPICE simulation. Bridge voltages were converted to digital
values by using the logic threshold of the smallest inverter in the MCNC cell library.
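The conversion itself is a single comparison; in the sketch below (Python) the inverter's logic threshold is taken as a given parameter, since its value comes from the cell characterization.

def to_logic(bridge_voltage, inverter_threshold):
    # Digital interpretation of a SPICE bridge voltage: the downstream
    # inverter sees a 1 when the voltage is above its logic threshold.
    return 1 if bridge_voltage > inverter_threshold else 0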
We compare the performance of the Nemesis bridging fault ATPG system to that of the Nemesis
single stuck-at fault ATPG system, and we compare the performance of our simulator to that of
our implementation of the Abramovici and Menon simulator. All times given are CPU times in
seconds on a Digital Equipment Corporation Decstation 5000/240.
Table 2 shows the number of PBFs, the number of stuck-at faults, the number of total realistic
bridging faults, the number of bridging faults with no feedback, and the number of feedback bridging
faults for the layouts of the ISCAS-85 benchmark circuits using the MCNC cell library. There are
three to nine times as many bridging faults as there are stuck-at faults for the given circuits, so
an efficient bridging fault ATPG system might take up to 10 times as long to produce tests for all
realistic bridging faults as to produce tests for all single stuck-at faults. The number of PBFs is small compared to the number of faults, and in fact, only 309 different PBFs were used in all ten of the MCNC layouts of the ISCAS-85 benchmarks.
Figure 14: Back wire stuck-at 0
The number of feedback bridging faults is a significant percentage of the number of bridging
faults: Useful fault coverage could not be achieved without accurate tests for the feedback bridging
faults.
Table 3 compares the new Wire Memory simulation algorithm with the Abramovici and Menon
simulation algorithm for PPSFP random test simulation. The comparison is fair, because any of
the optimizations that can possibly be applied to advantage for the Abramovici and Menon method
is included in our implementation of their method. The Wire Memory method is almost always
faster, and the improvement becomes more striking as the size of the circuits increase. Neither
method uses much memory: either implementation can run all of the benchmarks on a machine
with megabytes of RAM.
Tables 4 and 5 show the number of bridging faults covered, proved untestable, or aborted by
the bridging fault and single stuck-at fault ATPG systems, as well as the time in seconds necessary
to achieve the reported coverage 1 . For all ten circuits, the bridging fault ATPG system takes
an average of 4=3 the time per fault as the single stuck-at system takes, but for most circuits
1 The number of single stuck-at faults for each circuit differs from that reported in the literature because the
MCNC versions of the ISCAS-85 circuits are technology-mapped implementations using standard cells. Note that
not only will the number of single stuck-at faults change, but the number of untestable faults will also be different.
Circuit  Stuck-At  PBFs  Bridging faults (Total, No Feedback, Feedback)
Table 2: Number of faults and PBFs for each circuit
Circuit  Faults covered  A & M time (sec)  Wire Mem. time (sec)
C1908    4,684           15.3              13.7
C7552    53,271          435.2             287.2
Table 3: Nemesis random parallel simulation
(including four of the five largest circuits), the time per processed bridging fault is less than the
time per processed single stuck-at fault. This shows that realistic bridging fault ATPG is an efficient
and valuable complement to single stuck-at ATPG.
Table 4 also shows the fault coverage and bridging defect coverage for the benchmark circuits.
For the ten circuits, Nemesis covers an average of 99.39% of the faults and 99.33% of the defects.
Nemesis fails to generate tests for or prove untestable very few of the defects. For example, for the
C0432, it generated tests for 98.32% of the realistic defects, it proved 1.62% of the realistic defects
combinationally untestable, and it failed to process 0.06% of the defects. But many of these faults
can still be tested. For example, 1.68% of realistic defects for the C0432 were untestable or were
not processed. The addition of only five I DDQ test patterns will leave only 0.19% of the realistic
defects for the C0432 untested (the remaining faults are both logically untestable and untestable
via detecting excess I DDQ ). This is fewer than one half of the I DDQ test patterns that would be
required to test all of the realistic bridging faults.
The fault coverage and the defect coverage generally track each other, but they can differ by
significant amounts. Using the C7552 as an example, the difference between 99.46% covered faults
Circuit  Faults Covered  Faults Untestable  Faults Aborted  Time (Secs.)  % Fault Coverage  % Defect Coverage
Table 4: Bridging fault test pattern generation coverage
Circuit  Faults Covered  Faults Untestable  Faults Aborted  Time (Secs.)
Table 5: Single stuck-at test pattern generation coverage
and 98.94% could make a significant difference in DPM estimation [21].
5 Summary and conclusions
The integrated circuit industry changes at a rapid pace, but one element that does not change is
the need for quality. The bridging fault model offers additional rigor to the manufacturing test
process by modeling the behavior of faults that are likely to occur. In this paper, we have presented
the Primitive Bridge Function-a characteristic function describing the behavior of bridged
components; we have provided a theoretical foundation for test pattern generation that correctly
handles all bridging faults; we have described and reported on the performance of a test pattern
simulator that is faster than previously reported simulators and that accurately simulates all realistic
bridging faults; and finally, we have described and reported on the performance of our complete
ATPG system, which generates tests that cover at least 98.32% of the realistic bridging defects
and an average of 99.33% of the realistic bridging defects in our layouts of the MCNC ISCAS-85
benchmark circuits. The time it takes to generate these tests is comparable to the time necessary to
generate single stuck-at test sets for the same circuits. We have shown ATPG for realistic bridging
faults to be viable and significant.
Future experimentation will involve different and more accurate methods for calculating PBFs-
methods that address indeterminate logic values and differing downstream gate input thresholds [5].
We are also investigating shorts on the inside of the cell [7] and bridging fault diagnosis [10].
--R
A practical approach to fault simulation and test generation for bridging faults.
Testing for bridging faults (shorts) in CMOS circuits.
Deriving Accurate Fault Models.
Accurate modeling and simulation of bridging faults.
Fault model evolution for diagnosis: Accuracy vs precision.
A neutral netlist of 10 combinatorial benchmark circuits and a target translator in FORTRAN.
Testing CMOS logic gates for realistic shorts.
Bridge fault simulation strategies for CMOS integrated circuits.
Generating test patterns for bridge faults in CMOS ICs.
Diagnosis of realistic bridging faults with single stuck-at information
Physically realistic fault models for analog CMOS neural networks.
Test pattern generation for realistic bridge faults in CMOS ICs.
A CMOS fault extractor for inductive fault analysis.
EPROOFS: a CMOS bridging fault simulator.
Carafe: An inductive fault analysis tool for CMOS VLSI circuits.
Carafe: An inductive fault analysis tool for CMOS VLSI circuits.
Test pattern generation using Boolean satisfiability.
Biased voting: a method for simulating CMOS bridging faults in the presence of variable gate logic thresholds.
IEEE Transactions on Computers
An accurate bridging fault test pattern generator.
Limitations in predicting defect level based on stuck-at fault coverage
Fast and accurate CMOS bridging fault simulation.
Inductive fault analysis of MOS integrated circuits.
Fault simulation for structured VLSI.
Defect level as a function of fault coverage.
--TR
--CTR
Ilia Polian , Piet Engelke , Michel Renovell , Bernd Becker, Modeling Feedback Bridging Faults with Non-Zero Resistance, Journal of Electronic Testing: Theory and Applications, v.21 n.1, p.57-69, January 2005
Baradaran Tahoori, Using satisfiability in application-dependent testing of FPGA interconnects, Proceedings of the 40th conference on Design automation, June 02-06, 2003, Anaheim, CA, USA
Application-dependent testing of FPGAs, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.14 n.9, p.1024-1033, September 2006
Baradaran Tahoori, Application-Specific Bridging Fault Testing of FPGAs, Journal of Electronic Testing: Theory and Applications, v.20 n.3, p.279-289, June 2004
M. Favalli , C. Metra, Bridging Faults in Pipelined Circuits, Journal of Electronic Testing: Theory and Applications, v.16 n.6, p.617-629, Dec. 2000
Donald Shaw , Dhamin Al-Khalili , Come Rozon, Automatic generation of defect injectable VHDL fault models for ASIC standard cell libraries, Integration, the VLSI Journal, v.39 n.4, p.382-406, July 2006 | realistic faults;test pattern generation;fault simulation;fault models;bridging faults |
279239 | An Unbiased Detector of Curvilinear Structures. | Abstract: The extraction of curvilinear structures is an important low-level operation in computer vision that has many applications. Most existing operators use a simple model for the line that is to be extracted, i.e., they do not take into account the surroundings of a line. This leads to the undesired consequence that the line will be extracted in the wrong position whenever a line with different lateral contrast is extracted. In contrast, the algorithm proposed in this paper uses an explicit model for lines and their surroundings. By analyzing the scale-space behavior of a model line profile, it is shown how the bias that is induced by asymmetrical lines can be removed. Furthermore, the algorithm not only returns the precise subpixel line position, but also the width of the line for each line point, also with subpixel accuracy. | Introduction
Extracting curvilinear structures, often simply called lines, in digital images is an important low-level
operation in computer vision that has many applications. In photogrammetric and remote
sensing tasks it can be used to extract linear features, including roads, railroads, or rivers, from
satellite or low resolution aerial imagery, which can be used for the capture or update of data
for geographic information systems [1, 2]. In addition it is useful in medical imaging for the
extraction of anatomical features, e.g., blood vessels from an X-ray angiogram [3] or the bones
in the skull from a CT or MR image [4].
The published schemes for line detection can be classified into three categories. The first
approach detects lines by considering the gray values of the image only [5, 6, 7] and uses purely
local criteria, e.g., local gray value differences. Since this will generate many false hypotheses
for line points, elaborate and computationally expensive perceptual grouping schemes have to
be used to select salient lines in the image [8, 9, 10, 7]. Furthermore, lines cannot be extracted
with sub-pixel accuracy.
The second approach is to regard lines as objects having parallel edges [11, 12, 13]. In a
first step, the local direction of a line is determined for each pixel. Then two edge detection
filters are applied in the direction perpendicular to the line, where each filter is tuned to detect
either the left or right edge of the line. The responses of each filter are combined in a non-linear
way to yield the final response of the operator [11]. The advantage of this approach is that
since the edge detection filters are based on the derivatives of Gaussian kernels, the procedure
can be iterated over the scale-space parameter oe to detect lines of arbitrary widths. However,
because special directional edge detection filters that are not separable have to be constructed,
the approach is computationally expensive.
The final approach is to regard the image as a function z(x; y) and extract lines from it by
using various differential geometric properties of this function. The basic idea behind these
algorithms is to locate the positions of ridges and ravines in the image function. These methods
can be further divided according to which property they use.
The first sub-category defines ridges as the point on a contour line of the image, often also
called isohypse or isophote, where the curvature of the contour line has a maximum [4, 14, 15].
One way to do this is to extract the contour lines explicitly, to find the points of maximum
curvature on them, and to then link the extracted points into ridges [14]. However, this scheme
suffers from two main drawbacks. Firstly, since no contour lines will be found for perfectly flat
ridges, such ridges will be labeled as an extended peak. Furthermore, for ridges that have a very
low gradient the contour lines will become widely separated, and thus hard to link. Another
way to extract the maxima of curvature on the contour lines is to give an explicit formula for
that curvature and its direction, and to search for maxima in a curvature image [4, 15]. However,
this procedure will also fail for perfectly flat ridges. While Sard's theorem [16] tells us that for
generic functions such points will be isolated, they occur quite often in real images and lead
to fragmented lines without a semantic reason. Furthermore, the ridge positions found by this
operator will often be in wrong positions due to the nature of the differential geometric property
used, even for images without noise [4, 17].
In the second sub-category, ridges are found at points where one of the principal curvatures
of the image assumes a local maximum [18, 15], which is analogous to the approach taken
to define ridges in advanced differential geometry [19]. For lines with a flat profile it has the
problem that two separate points of maximum curvature symmetric to the true line position will
be found [15]. This is clearly undesirable.
In the third sub-category, ridges and ravines are detected by locally approximating the image
function by its second or third order Taylor polynomial. The coefficients of this polynomial are
usually determined by using the facet model, i.e., by a least squares fit of the polynomial to the
image data over a window of a certain size [20, 21, 22, 23, 24, 25]. The direction of the line is
determined from the Hessian matrix of the Taylor polynomial. Line points are then found by
selecting pixels that have a high second directional derivative perpendicular to the line direction.
The advantage of this approach is that lines can be detected with sub-pixel accuracy without
having to construct specialized directional filters. However, because the convolution masks that
are used to determine the coefficients of the Taylor polynomial are rather poor estimators for
the first and second partial derivatives, this approach usually leads to multiple responses to a
single line, especially when masks larger than 5 \Theta 5 are used to suppress noise. Therefore, the
approach does not scale well and cannot be used to detect lines that are wider than the mask
size. For these reasons, a number of line detectors have been proposed that use Gaussian masks
to detect the ridge points [26, 27, 15]. These have the advantage that they can be tuned for a
certain line width by selecting an appropriate oe. It is also possible to select the appropriate oe
for each image point by iterating through scale space [26]. However, since the surroundings
of the line are not modeled the extracted line position becomes progressively inaccurate as oe
increases.
Evidently, very few approaches to line detection consider the task of extracting the line
width along with the line position. Most of them do this by an iteration through scale-space
while selecting the scale, i.e., the oe, that yields the maximum value to a certain scale-normalized
response as the line width [11, 26]. However, this is computationally very expensive, especially
if one is only interested in lines in a certain range of widths. Furthermore, these approaches
will only yield a relatively rough estimate of the line width since, by necessity, the scale-space
is quantized in rather rough intervals. A different approach is given in [28], where lines and
edges are extracted in one simultaneous operation. For each line point two corresponding edge
points are matched from the resulting description. This approach has the advantage that lines
and their corresponding edges can in principle be extracted with sub-pixel accuracy. However,
since a third order facet model is used, the same problems that are mentioned above apply.
Furthermore, since the approach does not use an explicit model for a line, the location of the
corresponding edge of a line is often not meaningful because the interaction between a line and
its corresponding edges is neglected.
In this paper, an approach to line detection is presented that uses an explicit model for lines,
and various types of line profile models of increasing sophistication are discussed. A scale-space
analysis is carried out for each of the models. This analysis is used to derive an algorithm
in which lines and their widths can be extracted with sub-pixel accuracy. The algorithm uses
a modification of the differential geometric approach described above to detect lines and their
corresponding edges. Because Gaussian masks are used to estimate the derivatives of the im-
age, the algorithm scales to lines of arbitrary widths while always yielding a single response.
Furthermore, since the interaction between lines and their corresponding edges is explicitly
modeled, the bias in the extracted line and edge position can be predicted analytically, and can
thus be removed. Therefore, line position and width will always correspond to a semantically
meaningful location in the image.
The outline of the paper is as follows. In Section 2 models for lines in 1D and 2D images
are presented and algorithms to extract individual line points are discussed. Section 3 presents
an algorithm to link the individual line points into lines and junctions. The extraction of the
width of the line is discussed in Section 4. Section 5 describes an algorithm to correct the line
position and width to their true values. Finally, Section 6 concludes the paper.
2 Detection of Line Points
2.1 Models for Line Profiles in 1D
Many approaches to line detection consider lines in 1D to be bar-shaped, i.e., the ideal line of
width 2w and height h is assumed to have a profile given by
f_b(x) = h for |x| ≤ w, and f_b(x) = 0 for |x| > w.   (1)
However, due to sampling effects of the sensor lines often do not have this profile. Figure 1
shows a typical profile of a line in an aerial image, where no flat bar profile is apparent. There-
fore, let us first consider lines with a parabolically shaped profile because it will make the
derivation of the algorithm clearer and provide us with criteria that should be fulfilled for arbitrary
line profiles. The ideal line of width 2w and height h is given by
f_p(x) = h (1 − (x/w)²) for |x| ≤ w, and f_p(x) = 0 for |x| > w.   (2)
The line detection algorithm will be developed for this type of profile, but the implications of
applying it to bar-shaped lines will be considered later on.
Figure
1: Profile of a line in an aerial image and approximating parabolic line profile.
2.2 Detection of Lines in 1D
In order to detect lines with a profile given by (2) in an image z(x) without noise, it is sufficient
to determine the points where z 0 (x) vanishes. However, it is usually convenient to select only
salient lines. A useful criterion for salient lines is the magnitude of the second derivative z 00 (x)
in the point where z'(x) = 0. Bright lines on a dark background will have z''(x) ≪ 0, while dark lines on a bright background will have z''(x) ≫ 0. Please note that for the ideal line profile f_p the second derivative at the line position is z''(0) = −2h/w².
Real images will contain a significant amount of noise, and thus the scheme described above
is not sufficient. In this case, the first and second derivatives of z(x) should be estimated by
convolving the image with the derivatives of the Gaussian smoothing kernel since, under certain,
very general, assumptions, it is the only kernel that makes the inherently ill-posed problem of
estimating the derivatives of a noisy function well-posed [29, 30]. The Gaussian kernels are
given by:
g_σ(x) = 1/(√(2π) σ) · exp(−x²/(2σ²)),   (3)
g'_σ(x) = −x/(√(2π) σ³) · exp(−x²/(2σ²)),   (4)
g''_σ(x) = (x² − σ²)/(√(2π) σ⁵) · exp(−x²/(2σ²)).   (5)
The responses, i.e., the estimated derivatives, will then be:
r_p(x, σ, w, h) = g_σ(x) ⋆ f_p(x),   (6)
r'_p(x, σ, w, h) = g'_σ(x) ⋆ f_p(x),   (7)
r''_p(x, σ, w, h) = g''_σ(x) ⋆ f_p(x).   (8)
Figure
2: Scale-space behaviour of the parabolic line f p when convolved with the derivatives of
Gaussian kernels for x 2 [\Gamma3; 3] and oe 2 [0:2; 2].
Evaluating these convolutions in closed form yields explicit expressions for r_p, r'_p, and r''_p in terms of g_σ, its derivatives, and the Gaussian integral
φ_σ(x) = ∫_{−∞}^{x} g_σ(t) dt.   (9)
Equations (6)-(8) give a complete scale-space description of how the parabolic line profile
will look like when it is convolved with the derivatives of Gaussian kernels. Figure 2 shows
the responses for an ideal bright line on a dark background, for x ∈ [−3, 3] and σ ∈ [0.2, 2]. As can be seen from this figure, r'_p(x, σ, w, h) = 0 at x = 0 for all σ. Furthermore, r''_p takes on its maximum negative value at x = 0 for all σ.
Hence it is possible to determine the precise location of the line for all oe. In addition it can be
seen that the ideal line will be flattened out as oe increases as a result of smoothing. This means
that if large values for oe are used, the threshold to select salient lines will have to be set to an
accordingly smaller value.
Let us now consider the more common case of a bar-shaped profile. For this type of profile
without noise no simple criterion that depends only on z 0 (x) and z 00 (x) can be given since z 0 (x)
and z 00 (x) vanish in the interval [\Gammaw; w]. However, if the bar profile is convolved with the
derivatives of the Gaussian kernel, a smooth function is obtained in each case. The responses
will be:
r_b(x, σ, w, h) = h (φ_σ(x + w) − φ_σ(x − w)),   (10)
r'_b(x, σ, w, h) = h (g_σ(x + w) − g_σ(x − w)),   (11)
r''_b(x, σ, w, h) = h (g'_σ(x + w) − g'_σ(x − w)).   (12)
Figure
3: Scale-space behaviour of the bar-shaped line f b when convolved with the derivatives
of Gaussian kernels for x 2 [\Gamma3; 3] and oe 2 [0:2; 2].
Figure
3 shows the scale-space behaviour of a bar profile with w = 1 and h = 1 when it is convolved with the derivatives of a Gaussian. It can be seen that the bar profile gradually
becomes "round" at its corners. The first derivative will vanish only at
because of the infinite support of g oe (x). However, the second derivative r 00
b (x; oe; w; h) will not
take on its maximum negative value for small oe. In fact, for oe - 0:2w it will be very close
to zero. Furthermore, there will be two distinct minima in the interval [\Gammaw; w]. It is, however,
desirable for r 00
to exhibit a clearly defined minimum at salient lines are
detected by this value. After some lengthy calculations it can be shown that
σ ≥ w / √3   (13)
has to hold for this. Furthermore, it can be shown that r''_b(x, σ, w, h) will have its maximum negative response in scale-space for σ = w / √3. This means that the same scheme as described
above can be used to detect bar-shaped lines as well. However, the restriction on oe must be
observed.
In addition to this, (11) and (12) can be used to derive how the edges of a line will behave
in scale-space. Since this analysis involves equations which cannot be solved analytically, the
calculations must be done using a root finding algorithm [31]. Figure 4 shows the location of
the line and its corresponding edges for w 2 [0; 4] and oe = 1. Note that the ideal edge positions
are given by x = ±w. From (12) it is apparent that the edges of a line can never move closer
than oe to the real line, and thus the width of the line will be estimated significantly too large for
narrow lines. However, since it is possible to invert the map that describes the edge position,
the edges can be localized very precisely once they are extracted from an image.
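To make the scale-space behaviour of the edges concrete, the following short Python sketch locates the positive zero of r''_b, i.e., the edge position of the smoothed symmetric bar line, by bisection. It only illustrates the analysis above and is not the implementation used in the paper; the function names and the bracket w + 5σ are choices made here.

import math

def gauss_d1(x, sigma):
    """First derivative of the Gaussian kernel g_sigma."""
    return -x * math.exp(-x * x / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma ** 3)

def bar_edge_position(w, sigma):
    """Positive zero of r''_b(x) = g'_sigma(x + w) - g'_sigma(x - w), i.e., the
    edge location of the smoothed symmetric bar line, found by bisection
    (the contrast h cancels and is omitted)."""
    f = lambda x: gauss_d1(x + w, sigma) - gauss_d1(x - w, sigma)
    lo, hi = 0.0, w + 5.0 * sigma            # f(lo) < 0 and f(hi) > 0
    for _ in range(60):                      # plain bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For sigma = 1, even a very narrow bar (w = 0.1) yields an edge near x = 1.0,
# i.e., the extracted edge never lies closer than about sigma to the line.
print(bar_edge_position(0.1, 1.0), bar_edge_position(1.0, 1.0))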
The discussion so far has assumed that lines have the same contrast on both sides, which is
rarely the case for real images. For simplicity, only asymmetrical bar-shaped lines
f_a(x) = 0 for x < −w,  1 for |x| ≤ w,  a for x > w   (14)
Figure
4: Location of a line with width w 2 [0; 4] and its edges for oe = 1.
are considered (a 2 [0; 1]). General lines of height h can be obtained by considering a scaled
asymmetrical profile, i.e., hf a (x). However, this changes nothing in the discussion that follows
since h cancels out in every calculation. The corresponding responses are given by:
r_a(x, σ, w, a) = φ_σ(x + w) − (1 − a) φ_σ(x − w),   (15)
r'_a(x, σ, w, a) = g_σ(x + w) − (1 − a) g_σ(x − w),   (16)
r''_a(x, σ, w, a) = g'_σ(x + w) − (1 − a) g'_σ(x − w).   (17)
The location where r'_a(x, σ, w, a) = 0, i.e., the position of the line, is given by
l = −(σ² / (2w)) ln(1 − a).   (18)
This means that the line will be estimated in a wrong position whenever the contrast is significantly
different on both sides of the line. The estimated position of the line will be within the
actual boundaries of the line as long as
a ≤ 1 − exp(−2w² / σ²).   (19)
The location of the corresponding edges can again only be computed numerically. Figure 5
gives an example of the resulting line and edge positions. It can be
seen that the position of the line and the edges is greatly influenced by line asymmetry. As a
gets larger the line and edge positions are pushed to the weak side, i.e., the side that possesses
the smaller edge gradient.
Note that (18) gives an explicit formula for the bias of the line extractor. Suppose that we
knew w and a for each line point. Then it would be possible to remove the bias from the line
detection algorithm by shifting the line back into its proper position. Section 5 will describe the
solution to this problem.
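As a small numerical illustration of the predicted bias, the following Python sketch evaluates the reconstructed equations (18) and (19) for given σ, w, and a; the function names are ad hoc and the snippet is only an illustration of the formulas above.

import math

def predicted_line_position(sigma, w, a):
    """Predicted position l of an asymmetrical bar line (half-width w, asymmetry
    a in [0, 1)) according to the reconstructed equation (18)."""
    return -sigma ** 2 * math.log(1.0 - a) / (2.0 * w)

def position_inside_line(sigma, w, a):
    """Condition (19): the predicted position stays inside [-w, w] exactly when
    a <= 1 - exp(-2 * w**2 / sigma**2)."""
    return a <= 1.0 - math.exp(-2.0 * w ** 2 / sigma ** 2)

# A strongly asymmetrical line (a = 0.8) with w = 1 analyzed at sigma = 1 is
# predicted to be shifted by about 0.80 pixels towards its weak side.
print(predicted_line_position(1.0, 1.0, 0.8), position_inside_line(1.0, 1.0, 0.8))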
Because the asymmetrical line case is by far the most likely case in any given image it is
adopted as the basic model for a line in an image. It is apparent from the analysis above that
failure to model the surroundings of a line, i.e., the asymmetry of its edges, can result in large
errors of the estimated line position and width. Algorithms that fail to take this into account
will fail to return very meaningful results.
Figure
5: Location of an asymmetrical line and its corresponding edges for a ∈ [0, 1].
2.3 Lines in 1D, Discrete Case
The analysis so far has been carried out for analytical functions z(x). For discrete signals only
two modifications have to be made. The first is the choice of how to implement the convolution
in discrete space. Integrated Gaussian kernels were chosen as convolution masks, mainly
because the scale-space analysis of Section 2.2 directly carries over to the discrete case. An additional
advantage is that they give automatic normalization of the masks and a direct criterion
on how many coefficients are needed for a given approximation error. The integrated Gaussian
is obtained if one regards the discrete image z_n as a piecewise constant function z(x) = z_n for x ∈ [n − 1/2, n + 1/2). In this case, the convolution masks will be given by:
g_{n,σ} = φ_σ(n + 1/2) − φ_σ(n − 1/2),
g'_{n,σ} = g_σ(n + 1/2) − g_σ(n − 1/2),
g''_{n,σ} = g'_σ(n + 1/2) − g'_σ(n − 1/2).
For the implementation the approximation error is set to 10^−4 in each case because for images
that contain gray values in the range [0; 255] this precision is sufficient. Of course, other
schemes, like the discrete analog of the Gaussian [32] or a recursive computation [33], are suitable
for the implementation as well. However, for small oe the scale-space analysis will have to
be slightly modified because these filters have different coefficients compared to the integrated
Gaussian.
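The following Python sketch builds the integrated Gaussian masks described above. The helper names phi, g, dg, and integrated_gaussian_masks are ad hoc, and the radius rule used here is only one way of realizing the criterion that the neglected tail should stay below the chosen approximation error.

import math

def phi(x, sigma):
    """Gaussian integral phi_sigma(x)."""
    return 0.5 * (1.0 + math.erf(x / (math.sqrt(2.0) * sigma)))

def g(x, sigma):
    """Gaussian kernel g_sigma(x)."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)

def dg(x, sigma):
    """First derivative of the Gaussian kernel."""
    return -x * g(x, sigma) / (sigma * sigma)

def integrated_gaussian_masks(sigma, eps=1e-4):
    """Smoothing, first- and second-derivative masks obtained by integrating the
    Gaussian kernels over each pixel.  The mask radius is chosen so that the
    neglected tail of the Gaussian is below eps."""
    radius = 1
    while 1.0 - phi(radius - 0.5, sigma) > eps:
        radius += 1
    idx = range(-radius, radius + 1)
    mask0 = [phi(i + 0.5, sigma) - phi(i - 0.5, sigma) for i in idx]
    mask1 = [g(i + 0.5, sigma) - g(i - 0.5, sigma) for i in idx]
    mask2 = [dg(i + 0.5, sigma) - dg(i - 0.5, sigma) for i in idx]
    return mask0, mask1, mask2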
The second problem that must be solved is the determination of line location in the discrete
case. In principle, one could use a zero crossing detector for this task. However, this would
yield the position of the line only with pixel accuracy. In order to overcome this, the second
order Taylor polynomial of z_n is examined. Let r, r', and r'' be the locally estimated derivatives at point n of the image that are obtained by convolving the image with g_{n,σ}, g'_{n,σ}, and g''_{n,σ}. Then the Taylor polynomial is given by
p(x) = r + r' x + (1/2) r'' x².
The position of the line, i.e., the point where p'(x) = 0, is given by x = −r' / r''. The point n is declared a line point if this position falls within the pixel's boundaries, i.e., if x ∈ [−1/2, 1/2], and the second derivative r'' is larger in absolute value than a user-specified threshold. Please note
that in order to extract lines, the response r is unnecessary and therefore does not need to be
computed. The discussion of how to extract the edges corresponding to a line point will be
deferred to Section 4.
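The 1D test just described can be sketched in a few lines of Python. The function below assumes derivative masks such as those of the previous sketch; the name detect_line_points_1d and the use of numpy's convolve are choices made here, not part of the paper.

import numpy as np

def detect_line_points_1d(z, g1, g2, threshold):
    """Sub-pixel positions of bright line points in a 1D signal z.  g1 and g2 are
    first- and second-derivative convolution masks (e.g., the integrated-Gaussian
    masks of the previous sketch); threshold is the minimum magnitude of the
    second derivative for a line to count as salient."""
    z = np.asarray(z, dtype=float)
    r1 = np.convolve(z, np.asarray(g1), mode='same')   # estimate of z'
    r2 = np.convolve(z, np.asarray(g2), mode='same')   # estimate of z''
    points = []
    for n in range(len(z)):
        if r2[n] < -threshold:                          # bright line: z'' strongly negative
            x = -r1[n] / r2[n]                          # zero of p'(x) of the local Taylor polynomial
            if -0.5 <= x <= 0.5:                        # zero crossing lies inside pixel n
                points.append(n + x)
    return points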
2.4 Detection of Lines in 2D
Curvilinear structures in 2D can be modeled as curves s(t) that exhibit a characteristic 1D line
profile, i.e., f a
, in the direction perpendicular to the line, i.e., perpendicular to s 0 (t). Let this
direction be n(t). This means that the first directional derivative in the direction n(t) should
vanish and the second directional derivative should be of large absolute value. No assumption
can be made about the derivatives in the direction of s 0 (t). For example, let z(x; y) be an image
that results from sweeping the profile f a
along a circle s(t) of radius r. When this image is convolved
with the derivatives of a Gaussian kernel, the second directional derivative perpendicular
to s 0 (t) will have a large negative value, as desired. However, the second directional derivative
along s 0 (t) will also be non-zero.
The only remaining problem is to compute the direction of the line locally for each image
point. In order to do this, the partial derivatives r x , r y , r xx , r xy , and r yy of the image will have
to be estimated, and this can be done by convolving the image with the following kernels
g_{x,σ}(x, y) = g'_σ(x) g_σ(y),   (25)
g_{y,σ}(x, y) = g_σ(x) g'_σ(y),   (26)
g_{xx,σ}(x, y) = g''_σ(x) g_σ(y),   (27)
g_{xy,σ}(x, y) = g'_σ(x) g'_σ(y),   (28)
g_{yy,σ}(x, y) = g_σ(x) g''_σ(y).   (29)
The direction in which the second directional derivative of z(x; y) takes on its maximum absolute
value will be used as the direction n(t). This direction can be determined by calculating
the eigenvalues and eigenvectors of the Hessian matrix
H(x, y) = [ r_xx  r_xy
            r_xy  r_yy ].
The calculation can be done in a numerically stable and efficient way by using one Jacobi
rotation to annihilate the r xy term [31]. Let the eigenvector corresponding to the eigenvalue of
maximum absolute value, i.e., the direction perpendicular to the line, be given by (n_x, n_y) with ||(n_x, n_y)||_2 = 1. As in the 1D case, a quadratic polynomial will be used to determine whether
the first directional derivative along (n x ; n y ) vanishes within the current pixel. This point will
be given by (p_x, p_y) = (t n_x, t n_y),
(a) Input image (b) Line points and their response (c) Line points and their direction
Figure
6: Line points detected in an aerial image (a) of ground resolution 2 m. In (b) the line
points and directions of (c) are superimposed onto the magnitude of the response.
where
t = −(r_x n_x + r_y n_y) / (r_xx n_x² + 2 r_xy n_x n_y + r_yy n_y²).
Again, (p_x, p_y) ∈ [−1/2, 1/2] × [−1/2, 1/2] is required in order for a point to be declared a line point. As
in the 1D case, the second directional derivative along (n x ; n y ), i.e., the maximum eigenvalue,
can be used to select salient lines.
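The per-pixel 2D test can be sketched as follows. The function assumes the five Gaussian partial derivatives at one pixel are already available; the function name, the use of numpy's eigh, and the bright-line convention are choices made here.

import numpy as np

def line_point_2d(rx, ry, rxx, rxy, ryy, threshold):
    """Decide whether a pixel with the given Gaussian partial derivatives contains
    a (bright) line point; if so, return its sub-pixel offset (px, py) and the
    normal direction (nx, ny), otherwise None."""
    H = np.array([[rxx, rxy], [rxy, ryy]])
    eigval, eigvec = np.linalg.eigh(H)           # eigenvalues in ascending order
    k = np.argmax(np.abs(eigval))                # eigenvalue of maximum magnitude
    nx, ny = eigvec[:, k]                        # direction perpendicular to the line
    if eigval[k] >= -threshold:                  # bright line: strongly negative curvature
        return None
    denom = rxx * nx * nx + 2.0 * rxy * nx * ny + ryy * ny * ny
    if denom == 0.0:
        return None
    t = -(rx * nx + ry * ny) / denom
    px, py = t * nx, t * ny
    if abs(px) <= 0.5 and abs(py) <= 0.5:        # first directional derivative vanishes inside pixel
        return (px, py), (nx, ny)
    return None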
2.5 Example
Figures
6(b) and (c) give an example of the results obtainable with the presented approach.
Here, bright line points were extracted from the input image given in Fig. 6(a).
This image is part of an aerial image with a ground resolution of 2 m. The sub-pixel location
of the line points and the direction (n x ; n y ) perpendicular to the line are symbolized
by vectors. The strength of the line, i.e., the absolute value of the second directional derivative
along (n x ; n y ) is symbolized by gray values. Line points with high saliency have dark gray
values.
From figure 6 it might appear, if an 8-neighborhood is used, that the proposed approach
returns multiple responses to each line. However, when the sub-pixel location of each line point
is taken into account it can be seen that there is always a single response to a given line since
all line point locations line up perfectly. Therefore, linking will be considerably easier than in
approaches that yield multiple responses, e.g., [27, 21, 22], and no thinning operation is needed
[34].
3 Linking Line Points into Lines
After individual line pixels have been extracted, they need to be linked into lines. It is necessary
to do this right after the extraction of the line points because the later stages of determining line
width and removing the bias will require a data structure that uses the notion of a left and right
side of an entire line. Therefore, the normals to the line have to be oriented in the same manner
as the line is traversed. As is evident from Fig. 6, the procedure so far cannot do this since line
points are regarded in isolation, and thus preference between two valid directions n(t) is not
made.
3.1 Linking Algorithm
In order to facilitate later mid-level vision processes, e.g., perceptual grouping, the data structure
that results from the linking process should contain explicit information about the lines as
well as the junctions between them. This data structure should be topologically sound in the
sense that junctions are represented by points and not by extended areas as in [21] or [23]. Fur-
thermore, since the presented approach yields only single responses to each line, no thinning
operation needs to be performed prior to linking. This assures that the maximum information
about the line points will be present in the data structure.
Since there is no suitable criterion to classify the line points into junctions and normal line
points in advance without having to resort to extended junction areas another approach has been
adopted. From the algorithm in Section 2 the following data are obtained for each pixel: the
orientation of the line (n_x, n_y), or equivalently the angle α, a measure of strength of the line (the second directional derivative in the direction of α), and the sub-pixel location of the line (p_x, p_y).
Starting from the pixel with maximum second derivative, lines will be constructed by adding
the appropriate neighbor to the current line. Since it can be assumed that the line point detection
algorithm will yield a fairly accurate estimate for the local direction of the line, only three
neighboring pixels that are compatible with this direction are examined. For example, if the
current pixel is (c_x, c_y) and the current orientation of the line is in the interval [−22.5°, 22.5°], only the points (c_x + 1, c_y − 1), (c_x + 1, c_y), and (c_x + 1, c_y + 1) are examined. The choice regarding
the appropriate neighbor to add to the line is based on the distance between the respective sub-pixel
line locations and the angle difference of the two points. Let d = ||p_2 − p_1||_2 be the distance between the two points and β = |α_2 − α_1|, such that β ∈ [0, π/2], be the angle difference between those points. The neighbor that is added to the line is the one that minimizes d + c β. In the current implementation, c = 1 is used. This algorithm will select each line point
in the correct order. At junction points, it will select one branch to follow without detecting the
junction, which will be detected later on. The algorithm of adding line points is continued until
no more line points are found in the current neighborhood or until the best matching candidate
is a point that has already been added to another line. If this happens, the point is marked as a
junction, and the line that contains the point is split into two lines at the junction point.
New lines will be created as long as the starting point has a second directional derivative that
lies above a certain, user-selectable upper threshold. Points are added to the current line as long
as their second directional derivative is greater than another user-selectable lower threshold.
This is similar to a hysteresis threshold operation [35].
The problem of orienting the normals n(t) of the line is solved by the following procedure.
Firstly, at the starting point of the line the normal is oriented such that it is turned −90° to the direction the line is traversed, i.e., it will point to the right of the starting point. Then at each line point there are two possible normals whose angles differ by 180°. The angle that minimizes the
difference between the angle of the normal of the previous point and the current point is chosen
(a) Linked lines and junctions (b) Lines and oriented normals
Figure
7: Linked lines detected using the new approach (a) and oriented normals (b). Lines are
drawn in white while junctions are displayed as black crosses and normals as black lines.
as the correct orientation. This procedure ensures that the normal always points to the right of
the line as it is traversed from start to end.
With a slight modification the algorithm is able to deal with multiple responses if it is assumed
that no more than three parallel responses are generated. For the facet model, for exam-
ple, no such case has been encountered for mask sizes of up to 13 \Theta 13. Under this assumption,
the algorithm can proceed as above. Additionally, if there are multiple responses to the line in
the direction perpendicular to the line, e.g., the pixels
in the example
above, they are marked as processed if they have roughly the same orientation as
termination criterion for lines has to be modified to stop at processed line points instead of line
points that are contained in another line.
3.2 Example
Figure
7(a) shows the result of linking the line points in Fig. 6 into lines. The results are overlaid
onto the original image. In this case, the upper threshold was set to zero, i.e., all lines, no matter
how faint, were selected. It is apparent that the lines obtained with the proposed approach are
very smooth and the sub-pixel location of the line is quite precise. Figure 7(b) displays the way
the normals to the line were oriented for this example.
3.3 Parameter Selection
The selection of thresholds is very important to make an operator generally useable. Ideally,
semantically meaningful parameters should be used to select salient objects. For the proposed
line detector, these are the line width w and its contrast h. However, as was described above,
salient lines are defined by their second directional derivative along n(t). To convert thresholds
on w and h into thresholds the operator can use, first a oe should be chosen according to (13).
Then, oe, w, and h can be plugged into (12) to yield an upper threshold for the operator.
Figure
8 exemplifies this procedure and shows that the presented line detector can be scaled
(a) Aerial image (b) Detected lines
Figure
8: Lines detected (b) in an aerial image (a) of ground resolution 1m.
arbitrarily. In Fig. 8(a) a larger part of the aerial image in Fig. 7 is displayed, but this time with a
ground resolution of 1 m, i.e., twice the resolution. If 7 pixel wide lines are to be detected, i.e., w = 3.5, according to (13) a σ ≥ 2.0207 should be selected. In fact, σ = 2.2 was used for this image. If lines with a contrast of h ≥ 70 are to be selected, (12) shows that these lines will have a second derivative of −5.17893 or less at the line position. Therefore, the upper threshold for the absolute value
of the second derivative was set to 5, while the lower threshold was 0:8. Figure 8(b) displays the
lines that were detected with these parameters. As can be seen, all of the roads were detected.
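The conversion from the semantically meaningful parameters (w, h) to the operator parameters can be written down directly from (13) and (12). The sketch below uses ad hoc function names and reproduces the numbers quoted above as a check.

import math

def sigma_for_line_width(w):
    """Smallest sigma satisfying restriction (13), sigma >= w / sqrt(3)."""
    return w / math.sqrt(3.0)

def upper_threshold(w, h, sigma):
    """Magnitude of r''_b at the line center (equation (12) at x = 0) for a bar
    line of half-width w and contrast h; used as the upper threshold on the
    second directional derivative."""
    d = lambda x: -x * math.exp(-x * x / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma ** 3)
    return abs(h * (d(w) - d(-w)))

# w = 3.5 gives sigma >= 2.0207, and h = 70 at sigma = 2.2 gives a
# second-derivative magnitude of about 5.18, matching the example above.
print(sigma_for_line_width(3.5), upper_threshold(3.5, 70.0, 2.2))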
4 Determination of the Line Width
The width of a line is an important feature in its own right. Many applications, especially in
remote sensing tasks, are interested in obtaining the width of an object, e.g., a road or a river, as
precisely as possible. Furthermore, the width can, for instance, be used in perceptual grouping
processes to avoid the grouping of lines that have incompatible widths. However, the main
reason that width is important in the proposed approach is that it is needed to obtain an estimate
of the true line width such that the bias that is introduced by asymmetrical lines can be removed.
4.1 Extraction of Edge Points
From the discussion in Section 2.2 it follows that a line is bounded by an edge on each side.
Hence, to detect the width of the line, for each line point the closest points in the image, to
the left and to the right of the line point, where the absolute value of the gradient takes on
its maximum value need to be determined. Of course, these points should be searched for
exclusively along a line in the direction n(t) of the current line point. Only a trivial modification
Figure
9: Lines and their corresponding edges in an image of the absolute value of the gradient.
of the Bresenham line drawing algorithm [36] is necessary to yield all pixels that this line will
intersect. The analysis in Section 2.2 shows that it is only reasonable to search for edges in a
restricted neighborhood of the line. Ideally, the line to search would have a length of √3 σ ≈ 1.73 σ. In order to ensure that almost all of the edge points are detected, the current implementation uses a slightly larger search line length of 2.5 σ.
In an image of the absolute value of the gradient of the image, the desired edges will appear
as bright lines. Figure 9 exemplifies this for the aerial image of Fig. 8(a). In order to extract the
lines from the gradient image
e(x, y) = √( r_x(x, y)² + r_y(x, y)² ),
where r_x and r_y denote the first partial derivatives of the image, the following coefficients of a local Taylor polynomial need to be computed:
e_x = (r_x r_xx + r_y r_xy) / e,
e_y = (r_x r_xy + r_y r_yy) / e,
along with analogous second-order coefficients e_xx, e_xy, and e_yy.
This has three main disadvantages. First of all, the computational load increases by almost a
factor of two since four additional partial derivatives with slightly larger mask sizes have to be
Figure
10: Comparison between the locations of edge points extracted using the exact formula
(black crosses) and the 3 \Theta 3 facet model (white crosses).
computed. Furthermore, the third partial derivatives of the image would need to be used. This
is clearly undesirable since they are very susceptible to noise. Finally, the expressions above
are undefined whenever e(x, y) = 0. However, since the only interesting characteristic of the
Taylor polynomial is the zero crossing of its first derivative in one of the principal directions,
the coefficients can be multiplied by e(x; y) to avoid this problem.
It might appear that an approach to solve these problems would be to use the algorithm to
detect line points described in Section 2 on the gradient image in order to detect the edges of
the line with sub-pixel accuracy. However, this would mean that some additional smoothing
would be applied to the gradient image. This is undesirable since it would destroy the correlation
between the location of the line points and the location of the corresponding edge points.
Therefore, the edge points in the gradient image are extracted with a facet model line detector
which uses the same principles as described in Section 2, but uses different convolution masks
to determine the partial derivatives of the image [21, 20, 34]. The smallest possible mask size
(3 \Theta 3) is used since this will result in the most accurate localization of the edge points while
yielding as little of the problems mentioned in Section 1 as possible. It has the additional benefit
that the computational costs are very low. Experiments on a large number of images have
shown that if the coefficients of the Taylor polynomial are computed in this manner, they can,
in some cases, be significantly different than the correct values. However, the positions of the
edge points, especially those of the edges corresponding to salient lines, will only be affected
very slightly. Figure 10 illustrates this on the image of Fig. 6(a). Edge points extracted with
the correct formulas are displayed as black crosses, while those extracted with the 3 \Theta 3 facet
model are displayed as white crosses. It is apparent that because third derivatives are used in
the correct formulas there are many more spurious responses. Furthermore, five edge points
along the salient line in the upper middle part of the image are missed because of this. Finally,
it can be seen that the edge positions corresponding to salient lines differ only minimally, and
therefore the approach presented here seems to be justified.
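A much simplified version of the edge search can be sketched as follows. It samples the gradient image with nearest-neighbour lookups along the normal up to 2.5 σ instead of the Bresenham traversal and facet-model sub-pixel localization described in the text; the argument names grad_mag, p, and n as well as the function name are assumptions of this sketch.

import numpy as np

def edge_distances(grad_mag, p, n, sigma):
    """Approximate distance from the line point p = (row, col) to the strongest
    gradient response on either side, searched along the unit normal n up to
    about 2.5 * sigma."""
    rows, cols = grad_mag.shape
    dists = []
    for sign in (+1.0, -1.0):                     # the side along +n, then along -n
        best_d, best_val = None, -np.inf
        for d in np.arange(1.0, 2.5 * sigma + 1.0):
            r = int(round(p[0] + sign * d * n[0]))
            c = int(round(p[1] + sign * d * n[1]))
            if 0 <= r < rows and 0 <= c < cols and grad_mag[r, c] > best_val:
                best_val, best_d = grad_mag[r, c], d
        dists.append(best_d)                      # None if no valid sample was found
    return dists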
4.2 Handling of Missing Edge Points
One final important issue is what the algorithm should do when it is unable to locate an edge
point for a given line point. This might happen, for example, if there is a very weak and wide
gradient next to the line, which does not exhibit a well defined maximum. Another case where
this typically happens are the junction areas of lines, where the line width usually grows beyond
the range of 2:5oe. Since the algorithm does not have any other means of locating the edge
points, the only viable solution to this problem is to interpolate or extrapolate the line width
from neighboring line points. It is at this point that the notion of a right and a left side of the
line, i.e., the orientation of the normals of the line, becomes crucial.
The algorithm can be described as follows. First of all, the width of the line is extracted for
each line point. After this, if there is a gap in the extracted widths on one side of the line, i.e.,
if the width of the line is undefined at some line point, but there are some points in front and
behind the current line point that have a defined width, the width for the current line point is
obtained by linear interpolation. This can be formalized as follows. Let i be the index of the
last point and j be the index of the next point with a defined line width, respectively. Let a be
the length of the line from i to the current point k and b be the total line length from i to j. Then
the width of the current point k is given by
w_k = (1 − a/b) w_i + (a/b) w_j.
This scheme can easily be extended to the case where either i or j are undefined, i.e., the line
width is undefined at either end of the line. The algorithm sets w_i = w_j or w_j = w_i in this case, which
means that if the line width is undefined at the end of a line, it will be extrapolated to the last
defined line width.
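The gap-filling rule can be sketched directly. The representation chosen here (undefined widths stored as None, lengths holding the accumulated arc length of the line) and the function name are assumptions of this sketch.

def fill_width_gaps(widths, lengths):
    """Fill undefined (None) widths on one side of a line by linear interpolation
    between the nearest defined neighbours, following the rule
    w_k = (1 - a/b) * w_i + (a/b) * w_j; undefined widths at the ends of the line
    are copied from the nearest defined value."""
    defined = [k for k, w in enumerate(widths) if w is not None]
    if not defined:
        return list(widths)
    out = list(widths)
    for k in range(len(widths)):
        if out[k] is not None:
            continue
        prev = max((i for i in defined if i < k), default=None)
        nxt = min((j for j in defined if j > k), default=None)
        if prev is None:
            out[k] = widths[nxt]                # extrapolate at the start of the line
        elif nxt is None:
            out[k] = widths[prev]               # extrapolate at the end of the line
        else:
            a = lengths[k] - lengths[prev]      # arc length from i to k
            b = lengths[nxt] - lengths[prev]    # arc length from i to j
            out[k] = (1.0 - a / b) * widths[prev] + (a / b) * widths[nxt]
    return out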
4.3 Examples
Figure
11(b) displays the results of the line width extraction algorithm for the example image
of Fig. 8. This image is fairly good-natured in the sense that the lines it contains are rather
symmetrical. From Fig. 11(a) it can be seen that the algorithm is able to locate the edges of the
wider line with very high precision. The only place where the edges do not correspond to the
semantic edges of the road object are in the bottom part of the image, where nearby vegetation
causes a strong gradient and causes the algorithm to estimate the line width too large. Please
note that the width of the narrower line is extracted slightly too large, which is not surprising
when the discussion in Section 2.2 is taken into account. Revisiting Fig. 4 again, it is clear that
an effect like this is to be expected. How to remove this effect is the topic of Section 5. A final
thing to note is that the algorithm extrapolates the line width in the junction area in the middle
of the image, as discussed in Section 4.2. This explains the seemingly unjustified edge points
in this area.
Figure
12(b) exhibits the results of the proposed approach on another aerial image of the
same ground resolution, given in Fig. 12(a). Please note that the line in the upper part of the
image contains a very asymmetrical part in the center part of the line due to shadows of nearby
objects. Therefore, as is predictable from the discussion in Section 2.2, especially Fig. 5, the
line position is shifted towards the edge of the line that possesses the weaker gradient, i.e., the
(a) Aerial image (b) Detected lines and their width
Figure
11: Lines and their width detected (b) in an aerial image (a). Lines are displayed in white
while the corresponding edges are displayed in black.
(a) Aerial image (b) Detected lines and their width
Figure
12: Lines and their width detected (b) in an aerial image (a).
upper edge in this case. Please note also that the line and edge positions are very accurate in the
rest of the image.
5 Removing the Bias from Asymmetric Lines
5.1 Detailed Analysis of Asymmetrical Line Profiles
Recall from the discussion at the end of Section 2.2 that if the algorithm knew the true values of
w and a it could remove the bias in the estimation of the line position and width. Equations (15)-(17) give an explicit scale-space description of the asymmetrical line profile f_a. The position l
of the line can be determined analytically by the zero-crossings of r 0
a (x; oe; w; a) and is given
in (18). The total width of the line, as measured from the left to right edge, is given by the
zero-crossings of r 00
a (x; oe; w; a). Unfortunately, these positions can only be computed by a root
finding algorithm since the equations cannot be solved analytically. Let us call these positions
e_l and e_r. Then the width to the left and right of the line is given by v_l = |e_l − l| and v_r = |e_r − l|. The total width of the line is v = v_l + v_r = e_r − e_l. The quantities l, e_l, and e_r have the following useful property:
Proposition 1: The values of l, e_l, and e_r form a scale-invariant system. This means that if both σ and w are scaled by the same constant factor c, the line and edge locations will be given by cl, ce_l, and ce_r.
Proof: Let l_1 be the line location for σ_1 and w_1 and an arbitrary, but fixed, a. Let σ_2 = c σ_1 and w_2 = c w_1. Then, by (18), l_2 = −(σ_2² / (2 w_2)) ln(1 − a) = −(c² σ_1² / (2 c w_1)) ln(1 − a) = c l_1. Hence we have l_2 = c l_1. Now let e_1 be one of the two solutions of r''_a(x, σ_1, w_1, a) = 0, i.e., e_l or e_r, and likewise for e_2, with σ_{1,2} and w_{1,2} as above. By (17), this equation can be transformed to
(a − 1)(e_2 − w_2) exp(−(e_2 − w_2)² / (2 σ_2²)) = −(e_2 + w_2) exp(−(e_2 + w_2)² / (2 σ_2²)).
If we plug in σ_2 = c σ_1 and w_2 = c w_1, we see that this equation can only be fulfilled for e_2 = c e_1, since only then will the factors c cancel everywhere. □
Of course, this property will also hold for the derived quantities v_l, v_r, and v.
The meaning of Proposition 1 is that w and oe are not independent of each other. In fact,
we only need to consider all w for one particular oe, e.g., oe = 1. Therefore, for the following
analysis we only need to discuss values that are normalized with regard to the scale σ, i.e., w_σ = w/σ, v_σ = v/σ, and so on. A useful consequence is that the behaviour of f_a can be
analyzed for oe = 1. All other values can be obtained by a simple multiplication by the actual
scale oe.
With all this being established, the predicted total line width v oe
can be calculated for all w oe
and a 2 [0; 1].
Figure
13 displays the predicted v_σ for w_σ ∈ [0, 3]. It can be seen that v_σ can grow without bounds for w_σ → 0 or a → 1. Furthermore, it can be proved that v_σ ≥ 2 always holds. Therefore,
in Fig. 13 the contour lines for v oe 2 [2; 6] are also displayed.
Section 4 gave a procedure to extract the quantity v oe
from the image. This is half of the
information required to get to the true values of w and a. However, an additional quantity is
needed to estimate a. Since the true height h of the line profile hf a
is unknown this quantity
needs to be independent of h. One such quantity is the ratio of the gradient magnitude at e r
and
e_l, i.e., the weak and strong side. This quantity is given by
r = |r'_a(e_r, σ, w, a)| / |r'_a(e_l, σ, w, a)| = |g_σ(e_r + w) − (1 − a) g_σ(e_r − w)| / |g_σ(e_l + w) − (1 − a) g_σ(e_l − w)|.
It is obvious that the influence of h cancels out. Furthermore, it is easy to convince oneself that r
also remains constant under simultaneous scalings of oe and w. The quantity r has the advantage
that it is easy to extract from the image. Figure 13 displays the predicted r for w_σ ∈ [0, 3] and a ∈ [0, 1]. It is
(a) Predicted line width (b) Predicted gradient ratio
Figure
13: Predicted behaviour of the asymmetrical line f a
for w oe 2 [0; 3] and a 2 [0; 1]. (a)
Predicted line width v oe . (b) Predicted gradient ratio r.
obvious that r 2 [0; 1]. Therefore, the contour lines for r in this range are displayed in Figure 13
as well. It can be seen that for large w_σ, r is very close to 1 − a. For small w_σ it will drop to near-zero for all a.
5.2 Inversion of the Bias Function
The discussion above can be summarized as follows: The true values of w oe
and a are mapped to
the quantities v oe
and r, which are observable from the image. More formally, there is a function f : (w_σ, a) → (v_σ, r). From the discussion in Section 4 it
follows that it is only useful to consider v_σ ≤ 5. However, for very small σ it is possible that an edge point will be found within a pixel in which the center of the pixel is less than 2.5σ from the line point, but the edge point is farther away than this. Therefore, v_σ ∈ [0, 6] is a good restriction for v_σ
. Since the algorithm needs to determine the true values (w oe ; a) from the
observed (v oe ; r), the inverse f \Gamma1 of the map f has to be determined. Figure 14 illustrates that
f is invertible. It displays the contour lines of v oe 2 [2; 6] and r 2 [0; 1]. The contour lines of v oe
are U-shaped, with the tightest U corresponding to v_σ = 2.1. The contour line corresponding to v_σ = 2 is actually only the point (0, 0). The contour lines for r run across, with the lowermost visible contour line corresponding to r = 0.95. The contour line for r = 1 lies completely on the w_σ
-axis. It can be seen that, for any pair of contour lines from v oe
and r, there will only be
one intersection point. Hence, f is invertible.
To calculate f \Gamma1 , a multi-dimensional root finding algorithm has to be used [31]. To obtain
maximum precision for w oe and a, this root finding algorithm would have to be called at each line
point. This is undesirable for two reasons. Firstly, it is a computationally expensive operation.
More importantly, however, due to the nature of the function f , very good starting values are
required for the algorithm to converge, especially for small v oe
. Therefore, the inverse f \Gamma1 is
computed for selected values of v oe
and r and the true values are obtained by interpolation. The
step size of v oe
was chosen as 0:1, while r was sampled at 0:05 intervals. Hence, the intersection
points of the contour lines in Fig. 14 are the entries in the table of f \Gamma1 . Figure 15 shows the
true values of w oe and a for any given v oe and r. It can be seen that despite the fact that f is very
Figure
14: Contour lines of v oe 2 [2; 6] and r 2 [0; 1].
(a) True w_σ (b) True a
Figure
15: True values of the line width w oe (a) and the asymmetry a (b).
ill-behaved for small w_σ, f^{-1} is quite well-behaved. This behaviour leads to the conclusion that
linear interpolation can be used to obtain good values for w oe
and a.
One final important detail is how the algorithm should handle line points where v_σ < 2, i.e., where f^{-1} is undefined. This can happen, for example, because the facet model sometimes gives
a multiple response for an edge point, or because there are two lines very close to each other. In
this case the edge points cannot move as far outward as the model predicts. If this happens, the
line point will have an undefined width. These cases can be handled by the procedure given in
Section 4.2 that fills such gaps.
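The bias removal can be sketched as a forward model plus a brute-force inversion. The scan-based zero finding, the grid ranges, and the nearest-neighbour lookup below are choices made for this illustration; the text uses a proper root finder and linear interpolation in a precomputed table instead.

import numpy as np

SQRT2PI = np.sqrt(2.0 * np.pi)
def g(x):  return np.exp(-0.5 * x * x) / SQRT2PI      # Gaussian with sigma = 1
def dg(x): return -x * g(x)                           # its first derivative

def model_quantities(w, a):
    """Map the true (w_sigma, a) to the observable (v_sigma, r) for sigma = 1
    (Proposition 1 allows fixing sigma).  The edge positions are the zeros of
    r''_a on either side of the line position l from (18)."""
    l = -np.log(1.0 - a) / (2.0 * w)                  # line position, equation (18)
    d1 = lambda x: g(x + w) - (1.0 - a) * g(x - w)    # r'_a(x, 1, w, a)
    d2 = lambda x: dg(x + w) - (1.0 - a) * dg(x - w)  # r''_a(x, 1, w, a)

    def edge(direction, step=0.01, max_dist=60.0):
        x = l                                          # scan outward for a sign change of r''_a
        while abs(x - l) < max_dist:
            if d2(x) * d2(x + direction * step) <= 0.0:
                return x + 0.5 * direction * step
            x += direction * step
        return None

    e_r, e_l = edge(+1.0), edge(-1.0)
    if e_r is None or e_l is None:
        return None
    v = e_r - e_l                                     # total predicted line width
    r = abs(d1(e_r)) / abs(d1(e_l))                   # gradient ratio of weak and strong side
    return v, r

def invert_bias(v_obs, r_obs):
    """Return the grid point (w_sigma, a) whose predicted (v_sigma, r) is closest
    to the observed values."""
    best, best_err = None, np.inf
    for w in np.arange(0.1, 3.01, 0.1):
        for a in np.arange(0.0, 0.91, 0.05):
            pred = model_quantities(w, a)
            if pred is None:
                continue
            err = (pred[0] - v_obs) ** 2 + (pred[1] - r_obs) ** 2
            if err < best_err:
                best, best_err = (w, a), err
    return best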
5.3 Examples
Figure
16 shows how the bias removal algorithm is able to successfully adjust the line widths in
the aerial image of Fig. 11. Please note from Fig. 16(a) that because the lines in this image are
fairly symmetrical, the line positions have been adjusted only minimally. Furthermore, it can
be seen that the line widths correspond much better to the true line widths. Figure 16(b) shows
a) Lines detected with bias removal
(b) Detail of (a) (c) Detail of (a) without bias removal
Figure
16: Lines and their width detected (a) in an aerial image of resolution 1 m with the bias removed. A four times enlarged detail (b) superimposed onto the original image of resolution 0.25 m. (c) Comparison to the line extraction without bias removal.
a four times enlarged part of the results superimposed onto the image in its original ground
resolution of 0.25 m, i.e., four times the resolution in which the line extraction was carried out.
For most of the line the edges are well within one pixel of the edge in the larger resolution.
Figure
16(c) shows the same detail without the removal of the bias. In this case, the extracted
edges are about 2-4 pixels from their true locations. The bottom part of Fig. 16(a) shows that
sometimes the bias removal can make the location of one edge worse in favor of improving the
location of the other edge. However, the position of the line is affected only slightly.
a) Lines detected with bias removal
(b) Detail of (a) (c) Detail of (a) without bias removal
Figure
17: Lines and their width detected (a) in an aerial image of resolution 1m with the bias
removed. A four times enlarged detail (b) superimposed onto the original image of resolution 0.25 m. (c) Comparison to the line extraction without bias removal.
Figure
17 shows the results of removing the bias from the test image of Fig. 12. Please note
that in the areas of the image where the line is highly asymmetrical the line and edge locations
are much improved. In fact, for a very large part of the road the line position is within one
pixel of the road markings in the center of the road in the high resolution image. Again, a four
times enlarged detail is shown in Fig. 17(b). If this is compared to the detail in Fig. 17(c) the
significant improvement in the line and edge locations becomes apparent.
The final example in the domain of aerial images is a much more difficult image since it
contains much structure. Figure 18(a) shows an aerial image, again of ground resolution 1 m.
This image is very tough to process correctly because it contains a large area where the model
of the line does not hold. There is a very narrow line on the left side of the image that has a very
strong asymmetry in its lower part in addition to another edge being very close. Furthermore,
in its upper part the house roof acts as a nearby line. In such cases, the edge of a line can only
move outward much less than predicted by the model. Unfortunately, due to space limitations
this property cannot be elaborated here. Figure 18(b) shows the result of the line extraction
algorithm with bias removal. Since in the upper part the line edges cannot move as far outward
as the model predicts, the width of the line is estimated as almost zero. The same holds for the
lower part of the line. The reason that the bias removal corrects the line width to near zero is
that small errors in the width extraction lead to a large correction for very narrow lines, i.e., if v oe
is close to 2, as can be seen from Fig. 13(a). Please note, however, that the algorithm is still able
to move the line position to within the true line in its asymmetrical part. This is displayed in
Figures
18(c) and (d). The extraction results are enlarged by a factor of two and superimposed
onto the original image of ground resolution 0.25 m. Please note also that despite the fact that
the width is estimated incorrectly the line positions are not affected by this, i.e., they correspond
very closely to the true line positions in the whole image.
The next example is taken from the domain of medical imaging. Figure 19(a) shows a
magnetic resonance (MR) image of a human head. The results of extracting bright lines with
bias removal are displayed in Fig. 19(b), while a three times enlarged detail from the left center
of the image is given in Fig. 19(c). The extracted line positions and widths are very good
throughout the image. Whether or not they correspond to "interesting" anatomical features
is application dependent. Note, however, that the skull bone and several other features are
extracted with high precision. Compare this to Fig. 19(d), where the line extraction was done
without bias removal. Note that the line positions are much worse for the gyri of the brain since
they are highly asymmetrical lines in this image.
The final example is again from the domain of medical imaging, but this time the input
is an X-ray image. Figure 20 shows the results of applying the proposed approach to a coronary
angiogram. Since the image in Fig. 20(a) has very low contrast, Fig. 20(b) shows the
same image with higher contrast. Figure 20(c) displays the results of extracting dark lines from
Fig. 20(a), the low contrast image, superimposed onto the high contrast image. A three times
enlarged detail is displayed in Fig. 20(d). In particular, it can be seen that the algorithm is very
successful in delineating the vascular stenosis in the central part of the image. Note also that
the algorithm was able to extract a large part of the coronary artery tree. The reason that some
arteries were not found is that very restrictive thresholds were set for this example. Therefore, it
seems that the presented approach could be used in a system like the one described in [3] to extract
complete coronary trees. However, since the presented algorithm does not generate many
false hypotheses, and since the extracted lines are already connected into lines and junctions,
no complicated perceptual grouping would be necessary, and the rule base would only need to
eliminate false arteries, and could therefore be much smaller.
a) Input image (b) Lines detected with bias removal
(c) Detail of (b) (d) Detail of (b) without bias removal
Figure
18: Lines and their width detected (b) in an aerial image of resolution 1m (a) with bias
removal. A two times enlarged detail (c) superimposed onto the original image of resolution 0.25 m. (d) Comparison to the line extraction without bias removal.
6 Conclusions
This paper has presented an approach to extract lines and their widths with very high precision.
A model for the most common type of lines, the asymmetrical bar-shaped line, was developed
from simpler types of lines, namely the parabolic and symmetrical bar-shaped line. A scale-space
analysis was carried out for each of these model profiles. This analysis shows that there
is a strong interaction between a line and its two corresponding edges which cannot be ignored.
The true line width influences the line width occurring in an image, while asymmetry influ-
a) Input image (b) Lines detected with bias removal
(c) Detail of (b) (d) Detail of (b) without bias removal
Figure
19: Lines and their width detected (b) in an MR image (a) with the bias removed. A three
times enlarged detail (c) superimposed onto the original image. (d) Comparison to the line
extraction without bias removal.
ences both the line width and its position. From this analysis an algorithm to extract the line
position and its width was derived. This algorithm exhibits the bias that is predicted by the
model for the asymmetrical line. Therefore, a method to remove this bias was proposed. The
resulting algorithm works very well for a range of images containing lines of different widths
and asymmetries, as was demonstrated by a number of test images. High resolution versions
of the test images were used to check the validity of the obtained results. They show that the
proposed approach is able to extract lines with very high precision from low resolution images.
The extracted line positions and edges correspond to semantically meaningful entities in the im-
(a) Input image (b) Higher contrast version of (a)
(c) Lines and their widths detected in (a) (d) Detail of (c)
Figure
20: Lines detected in the coronary angiogram (a). Since this image has very low con-
trast, the results (c) extracted from (a) are superimposed onto a version of the image with better
contrast (b). A three times enlarged detail of (c) is displayed in (d).
age, e.g., road center lines and roadsides or blood vessels. Although the test images used were
mainly aerial and medical images, the algorithm can be applied in many other domains as well,
e.g., optical character recognition [23]. The approach only uses the first and second directional
derivatives of an image for the extraction of the line points. No specialized directional filters are
needed. The edge point extraction is done by a localized search around the line points already
found using five very small masks. This makes the approach computationally very efficient. For
example, the time to process the MR image of Fig. 19 of size 256 \Theta 256 is about 1.7 seconds
on a HP 735 workstation.
The presented approach shows two fundamental limitations. First of all, it can only be used
to detect lines with a certain range of widths, i.e., between 0 and 2:5oe. This is a problem if the
width of the important lines varies greatly in the image. However, since the bias is removed by
the algorithm, one can in principle select oe large enough to cover all desired line widths and the
algorithm will still yield valid results. This will work if the narrow lines are relatively salient.
Otherwise they will be smoothed away in scale-space. Of course, once oe is selected so large
that neighboring lines will start to influence each other the line model will fail and the results
will deteriorate. Hence, in reality there is a limited range in which oe can be chosen to yield good
results. In most applications this is not a very significant restriction since one is usually only
interested in lines in a certain range of widths. Furthermore, the algorithm could be iterated
through scale-space to extract lines of very different widths. The second problem is that the
definition of salient lines is done via the second directional derivatives. However, one can plug
semantically meaningful values, i.e., the width and height of the line, as well as oe, into (12) to
obtain the desired thresholds. Again, this is not a severe restriction of the algorithm, but only a
matter of convenience.
Finally, it should be stressed that the lines extracted are not ridges in the topographic sense,
i.e., they do not define the way water runs downhill or accumulates [17, 37]. In fact, they are
much more than a ridge in the sense that a ridge can be regarded in isolation, while a line needs
to model its surroundings. If a ridge detection algorithm is used to extract lines, the asymmetry
of the lines will invariably cause it to return biased results.
--R
Update of roads in GIS from aerial imagery: Verification and multi-resolution extraction
An artificial vision system for X-ray images of human coronary trees
Detection of roads and linear structures in low-resolution aerial imagery using a multisource knowledge integration technique
Tracking roads in satellite images by playing twenty questions.
An active testing model for tracking roads in satellite images.
The perception of linear structure: A generic linker.
Linear delineation.
Shape recognition and twenty questions.
Multiscale detection of curvilinear structures in 2-d and 3-d image data
From step edge to line edge: Combining geometric and photometric information.
In So Kweon and Takeo Kanade.
Ridges for image analysis
Curves and singularities: A geometrical introduction to singularity theory.
Thin nets and crest lines: Application to satellite data and medical images.
Geometric differentiation for the intelligence of curves and surfaces.
Curve finding by ridge detection and grouping.
Fast recognition of lines in digital images without user-supplied parameters
Direct gray-scale extraction of features for character recognition
Detection of curved and straight segments from gray scale topography.
Computer and Robot Vision
The topographic primal sketch.
Edge detection and ridge detection with automatic scale selection.
Logical/linear operators for image curves.
A common framework for the extraction of lines and edges.
ter Haar Romney
ter Haar Romney
Numerical Recipes in C: The Art of Scientific Computing.
Discrete derivative approximations with scale-space properties: A basis for low-level feature extraction
Recursively implementing the Gaussian and its derivatives.
Extracting curvilinear structures: A differential geometric approach.
A computational approach to edge detection.
Procedural Elements for Computer Graphics.
Tracing crease curves by solving a system of differential equations.
Automatic Extraction of Man-Made Objects from Aerial and Space Images
--TR
--CTR
Jian Chen , Yoshinobu Sato , Shinichi Tamura, Orientation Space Filtering for Multiple Orientation Line Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.5, p.417-429, May 2000
Markus Mller , Wolfgang Krger , Gnter Saur, Robust image registration for fusion, Information Fusion, v.8 n.4, p.347-353, October, 2007
Nassir Navab , Yakup Genc , Mirko Appel, Lines in One Orthographic and Two Perspective Views, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.7, p.912-917, July
Jan-Mark Geusebroek , Arnold W. M. Smeulders , Hugo Geerts, A Minimum Cost Approach for Segmenting Networks of Lines, International Journal of Computer Vision, v.43 n.2, p.99-111, July 1, 2001
Thierry Graud , Jean-Baptiste Mouret, Fast road network extraction in satellite images using mathematical morphology and Markov random fields, EURASIP Journal on Applied Signal Processing, v.2004 n.1, p.2503-2514, 1 January 2004
Jong Kwan Lee , Timothy S. Newman , G. Allen Gary, Oriented connectivity-based method for segmenting solar loops, Pattern Recognition, v.39 n.2, p.246-259, February, 2006
Andrew K. C. Wong , Peiyi Niu , Xiang He, Fast acquisition of dense depth data by a new structured light scheme, Computer Vision and Image Understanding, v.98 n.3, p.398-422, June 2005
G. J. Streekstra , R. Van Den Boomgaard , A. W. M. Smeulders, Scale Dependency of Image Derivatives for Feature Measurement in Curvilinear Structures, International Journal of Computer Vision, v.42 n.3, p.177-189, May-June 2001
Antonio M. Lpez , Felipe Lumbreras , Joan Serrat , Juan J. Villanueva, Evaluation of Methods for Ridge and Valley Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.4, p.327-335, April 1999
E. Cernadas , M. L. Durn , T. Antequera, Recognizing marbling in dry-cured Iberian ham by multiscale analysis, Pattern Recognition Letters, v.23 n.11, p.1311-1321, September 2002
Derek C. Stanford , Adrian E. Raftery, Finding Curvilinear Features in Spatial Point Patterns: Principal Curve Clustering with Noise, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.6, p.601-609, June 2000 | lines;medical images;contour linking;feature extraction;aerial images;low-level processing;scale-space;curvilinear structures |
279243 | A Generic Grouping Algorithm and Its Quantitative Analysis. | Abstract: This paper presents a generic method for perceptual grouping and an analysis of the resulting grouping quality. The grouping method is fairly general: It may be used for the grouping of various types of data features, and to incorporate different grouping cues operating over feature sets of different sizes. The proposed method is divided into two parts: constructing a graph representation of the available perceptual grouping evidence, and then finding the "best" partition of the graph into groups. The first stage includes a cue enhancement procedure, which integrates the information available from multifeature cues into very reliable bifeature cues. Both stages are implemented using known statistical tools such as Wald's SPRT algorithm and the Maximum Likelihood criterion. The accompanying theoretical analysis of this grouping criterion quantifies intuitive expectations and predicts that the expected grouping quality increases with cue reliability. It also shows that investing more computational effort in the grouping algorithm leads to better grouping results. This analysis, which quantifies the grouping power of the Maximum Likelihood criterion, is independent of the grouping domain. To our best knowledge, such an analysis of a grouping process is given here for the first time. Three grouping algorithms, in three different domains, are synthesized as instances of the generic method. They demonstrate the applicability and generality of this grouping method. | Introduction
This work proposes a generic algorithm for perceptual grouping. The paper presents the new
approach, and focuses on analyzing the relation between the information available to the
grouping process and the corresponding grouping quality. The proposed generic algorithm
may serve to generate domain specific grouping algorithms in different domains, and we
implement and test three of them. However, the analysis is domain independent, and thus
applies for the specific cases.
Visual processes deal with analyzing images and extracting information from them. One
reason that makes these processes hard is that only a few subsets of the data items contain the
useful information while all others are not relevant. Grouping processes, which rearrange
the given data by eliminating the irrelevant data items and sorting the rest into groups
each corresponding to a certain object, are indispensable in computer vision [WT83, Gri90].
The Gestalt psychologists already noticed that humans use some basic properties, which
may be called grouping cues ([Low85]), to recognize the existence of certain structures in a
scene and to extract the image elements associated with such structures, even before it is
recognized as a meaningful object [Wer50, WT83, Low85, Gor89]. In the field of computer
vision, Witkin and Tenenbaum [WT83], suggested that grouping processes should be part
of all processing levels. Indeed, grouping was used at many levels, starting from low-level
processes such as smoothness based figure-ground discrimination [SU88, HH93], through
motion based [AW94, JC94, Sha94] grouping, which may be considered to be mid-level
processes, to high-level vision processes such as object recognition [HMS94, ZMFL95].
The proposed method separates between two components of the grouping method: the
grouping cues that are used and the grouping mechanism that combines them into a partition
of the data set. Like most grouping methods, the grouping mechanism used here is formally
defined as the maximization of some consistency function between the group assignments and
the given data. This maximization is usually done by various methods, including dynamic
programming [SU90], relaxation labeling [PZ89, TJA92], simulated annealing [HH93], and
graph clustering [WL93], and may also be done hierarchically [HMS94, MN89, DR92]. The
proposed algorithm also maximizes a function: the likelihood of the data available relative
to the grouping decision. The crucial difference which exists however between the proposed
algorithm and all previous work is in the analysis we provide, which predicts the grouping
performance based on the reliability of the data.
Our specific grouping process is based on representing the unknown partition into groups
in the form of a special graph, in which the vertices are the observed data elements (edges, pixels, etc.) and the arcs contain the grouping information and are estimated by
cues. (Others, e.g. [HH93, SU88, WL93], have used graphs for grouping algorithms, but we
use it differently here.) The grouping task is divided into two parts: constructing the graph
by applying the geometric knowledge on the data, and then finding the "best" partition of
the graph into groups. Both stages are implemented using known statistical tools such as
Wald's SPRT algorithm and the Maximum Likelihood criterion.
Grouping cues are the building blocks of any grouping process, and shall be treated as
the only source of information available for this task. They are used, in the first stage of
our algorithm, for constructing the graph. In general, they are domain specific and rely
on the assumed properties of the sought-for groups. Their choice is essentially made by
taste and intuition although more rigorous statistical properties are sometimes taken into
account [Low85, Jac88, Cle91, CRH93]. The well-studied task of grouping edge points lying
on a smooth boundary, is a good example for the variety of perceptual grouping cues. The
typically used cues are collinearity, co-circularity [Sau92], curvature and length [SU90, PZ89],
proximity, and some combinations of those [DR92, HH93, GM92]. In other domains, or
under different assumptions, other cues are used (e.g. motion based cues [JC94, AW94],
symmetry [HMS94], or 3D symmetry-based invariance [ZMFL95]). Although good cues are
essential for successful grouping, finding them is not our aim here. Instead we consider
the cues as given, and focus on quantifying their reliability, and its relation to the expected
grouping quality. We model the cues as random variables, and quantify their reliability using
the properties of the corresponding distribution. Moreover, we suggest a general method,
denoted cue enhancement, for improving the reliability of a cue, and show a tradeoff between
computational efforts and the achieved reliability of the enhanced cue.
Although many grouping methods were already suggested and tested, it seems that no
solid theoretical background was established. So far, the performance of the grouping algorithms
has been assessed by implementing the algorithm, testing it on a small number
of simulated or real examples, and then visually evaluating the results. This methodology
shows that some of the grouping methods perform well on the examples tested and
indeed succeed in partitioning the image elements into seemingly correct subsets. It does
not allow us, however, to predict the performance of these algorithms on other images or
to compare the algorithms whose assessment was carried out using varied examples. The
proposed quantitative analysis of the expected grouping performance, provides some relations
between the quality of the available data, the computational effort invested, and the
grouping performance, quantified by several measures.
The grouping algorithm we propose is generic and domain specific grouping algorithms
may be synthesized as instances of it by inserting the appropriate cue and topology specification
(in the form of a graph). The analysis, which applies to the generic abstract algorithm,
may be useful for predicting the results in the specific domains.
The main new contributions of this paper are:
a. A fairly general approach to grouping, which is applicable to several domains, and tested
in three of them. Most, if not all, previous algorithms were domain specific.
b. A quantification of the expected quality of the grouping results. To our best knowledge,
such an analysis was not done before.
c. A cue enhancement procedure, capable of significantly improving the reliability of many
existing grouping cues.
The rest of this paper is organized as follows: It starts with a formulation of the grouping
task as a graph clustering problem. The graph-based grouping algorithm follows, and its
theoretical analysis is described in Section 4. Section 5 is concerned with the cue enhancement
procedure, including a short review of Wald's SPRT algorithm. We tested our approach
also experimentally, providing three instances of the generic algorithm in three domains, and
some comparisons to the theoretical predictions. Some open questions and further research
directions are considered in the discussion.
2 The Grouping Task and its Graph Representation
2.1 The Grouping Task
Let S = {v_1, v_2, ..., v_N} be the set of data elements. This data set may consist of the
coordinates and grey levels of all pixels in the image, the boundary points in an image, etc.
S is naturally divided into several groups (disjoint subsets) so that all data elements in the
same group belong to the same object, lie on the same smooth curve, or are associated with each other in some other manner; denote this (unknown) partition by S = S_0 ∪ S_1 ∪ ... ∪ S_L. In the context of the
grouping task the data set is given but its partition is unknown and should be inferred from
indirect information given in the form of grouping cues.
Often, only the elements in the last L groups satisfy this description while the elements in
the first group, S 0 , are considered as a non-important background. We should also mention,
that according to another grouping concept the hypothesized groups are not necessarily
disjoint. We do not consider this different task here but we believe that at least some of the
tools developed here are useful for analysing it too.
2.2 Grouping Cues
Grouping cues are the building blocks of the grouping process and shall be treated as the
only source of information available for this task. The grouping cues are domain-dependent
and may be regarded as scalar functions C(A) defined over subsets A ⊆ S of the data feature set. Such cue functions should be discriminative: for example, the cue value should be high if the data
features in the subset A belong to the same object and low if they do not. Preferably,
they should also be invariant to change of the viewing transformation and robust to noise
[Low85]. Most of the grouping cues considered in the literature were functions defined over
data subsets including only two data elements. (Some exceptions are a convexity cue [Jac88]
and an U-shape cue [MN89].) Later, in section 5, we consider Multi-feature cues, defined over
data subsets including three data features or more, and show how to integrate the evidence
available from them into very reliable bi-feature cues. At this stage, however, we consider
only bi-feature cues which may be either the cues used by common grouping processes or
the result of the cue enhancement process described later.
Following the main goal of this work, to provide a general, domain independent, frame-work
for grouping processes, we would like to predict the grouping performance, not relying
on a detailed domain-dependent knowledge about a cue, but using only some measure of
its reliability. Such reliability measure may be defined by considering the cue function to
be a random variable, the distribution of which depends on the features set being in the
same group or not. For binary cues, this dependency is simply quantified by two error prob-
abilities: p miss is the probability that the cue C(A) indicates that the data features in A
do not belong to the same group while in fact they do. p fa is the probability
that the cue function indicates that the features of A belong to the same group, while in
fact they do not (false alarm). If both is an ideal cue.
(For the more general and not necessarily binary cue, this reliability is quantified by the
average log likelihood ratio of the cue.) This characterization can sometimes be calculated
using analytical models (e.g. [Low85]), and can always be approximated using Monte-Carlo
experimentations ([Jac88]).
2.3 Representing Groups and Cues Using Graphs
Our approach to the grouping process is based on representing both the unknown partition
into groups and the data available from the cues using graphs. The nodes of all the graphs
are the observed data elements, but the arcs may take different meanings.
The unknown partition, which is to be determined, is represented by the target graph, G_t = (V, E_t), composed of several disconnected complete subgraphs (cliques). Every such
clique represents a different object (or group) and there is no connection (arcs) between
nodes which belong to different cliques. A graph with this characterization is called a clique
graph and the class of such graphs is denoted G c . The nodes of this graph are available to
the grouping algorithm, but its arcs, which contain the grouping information, are hidden
and are not directly observable. Knowing that G t belongs to the class of clique graphs, G c ,
the grouping algorithm should provide a hypothesis graph, G_h ∈ G_c, which should be as close as possible to G_t.
The cue information is described by two graphs. The underlying graph, G_u = (V, E_u),
specifies, by its arcs, the feature pairs which are evaluated (for being in the same group) by
the cue function and are available to the grouping algorithm. The second graph, denoted
measured graph G_m, specifies the information provided by these cues. That is, an arc belongs
to Gm iff it belongs to G u and the result of the cue function indicates that the feature
pair belongs to the same group. While the underlying graph is specified by the designer,
depending on the domain and the computational effort limitations, the measured graph is a
result of the cue evaluation process and a part of the grouping process.
3 The Generic Grouping Algorithm
The generic grouping algorithm described in this section consists of two main stages: cue
evaluation for (many) feature pairs and maximum likelihood graph partitioning. The two
stages are general and do not depend on the particular domain in which the grouping is done,
except from the obvious choice of a domain-dependent cue and some associated decisions
made before the process.
3.1 Some Decisions To Be Made By The Designer
The first thing is to choose a grouping cue which naturally depends on the domain and on
the assumed characterization of the sought-for groups. The performance analysis, described
later, provides some quantitative means to choose between alternative cues, which may
differ, for example, by the tradeoff between false alarm and miss errors. In principle, all
feature pairs, corresponding to a complete underlying graph, (V; E c (V )), should be evaluated.
Figure
1: The proposed grouping process: The image is a set of data features (edgels in this example), every one of which is represented by a node of a graph. The first step is to decide about a cue and about the set of feature-pairs to be evaluated using this cue. This set of feature-pairs is specified by the arcs of the underlying graph G_u = (V, E_u). The second step is to use grouping cues to decide, for every feature pair in G_u, if both data features belong to the same group. These decisions are represented by a measured graph G_m = (V, E_m), where every arc corresponds to a positive decision (hence E_m ⊆ E_u). The known reliability of these
decisions is used in the last step to find a maximum likelihood partitioning of the graph,
which is represented by the hypothesized (clique) graph G h . A main issue considered in
this paper is the relation between this hypothesis G h and the ground truth target graph, G t ,
which is unknown.
Some cues are meaningful, however, only for near or adjacent data elements and are not
adequate for evaluating every feature pair. Therefore, the cue evaluation is restricted to only a
subset of the feature pairs, specified by the spatial extent of the available cue. For example,
in order to detect long and smooth curves using co-circularity and proximity cues we may test
only close data feature pairs. On the other hand, when testing global cues like affine motion,
all feature pairs may be tested and contribute useful information. Another consideration
which affects the choice of the underlying graph is the reliability of the grouping process
and the computational effort invested in it. As we shall see, the reliability increases with
the density of the graph, but so does the computational effort, so some compromise should
be made. In this paper we don't investigate the optimal decisions at this stage but just
assume that both the cue and the associated adequate "topology" are either given or chosen
intuitively.
3.2 First Stage: Evaluate Grouping Cues
In the first stage of the grouping process, all feature pairs corresponding to arcs in G_u
are considered, one arc at a time, and the cue function is used to decide whether
the two data features belong to the same group (and the arc corresponding to them is in
the unobservable graph G t ). A simple decision may be obtained by any binary cue. A more
sophisticated and reliable process is to rely on multiple evidence based on other features, as
done in our cue enhancement procedure (section 5). The positive decisions are represented
by the measured graph G_m = (V, E_m). Once all decisions are made, E_m is an estimate of
the projection of the target graph G t on the underlying graph G u . This measured
graph carries the information accumulated in the first stage to the second one.
Note that it is also possible to postpone the decisions, and mark every arc of the underlying
graph with the likelihood of the corresponding pair to be in the same group. Then,
the maximum likelihood partition stage proceed similarly. While this approach may yield
better results, due to the larger amount of information carried to the second stage, it requires
that the actual non-binary cue distributions are given, which is rarely the case, and is not
considered further.
3.3 Second Stage: Maximum Likelihood Partition of The Graph
Recall that every decision made in the first stage is modeled as a binary random variable,
the statistics of which depends on whether the two data features belong to the same group,
or whether they not. Therefore, the likelihood that this decision is indeed correct depends
on the true and unknown grouping.
Therefore, the decisions made in the first stage (and represented by the measured graph
Gm ), specify some likelihood for every partition of the graph into subgraphs. Choosing the
partition (or a clique graph) which maximizes this likelihood yields an approximation to the
required unknown target graph G t , which is one of the clique graphs. In the context of this
paper, the cue decisions are assumed to be independent and are subject to two types of errors, specified uniformly by two error probabilities:

ε_miss = Pr{e ∉ E_m | e ∈ E_t ∩ E_u},   ε_fa = Pr{e ∈ E_m | e ∈ E_u \ E_t}.   (1)
The error probability pair (ε_miss, ε_fa) is identical to the cue probability pair (p_miss, p_fa) for the common direct use of bi-feature cues, and is equal to the error probability of the cue enhancement
process (see section 5), which is usually much better. (Making these probabilities
nonuniform, and thus associating every arc of G u with an individual pair of error probabil-
ities, may be a more accurate model but requires much more accurate knowledge about
the error mechanism.) The likelihood of the measurement graph, Gm , for every candidate
hypothesis G = (V, E) ∈ G_c, is then given by

L{G_m | G} = ∏_{e ∈ E_u} L{e | E}   (2)

where the likelihood of each edge is

L{e | E} = 1 − ε_miss if e ∈ E ∩ E_m;   ε_miss if e ∈ E \ E_m;   ε_fa if e ∈ E_m \ E;   1 − ε_fa otherwise.   (3)

We propose now to use the maximum likelihood principle, and to hypothesize the most likely (but not necessarily unique) graph

G_h = arg max_{G ∈ G_c} L{G_m | G}.   (4)
The maximum likelihood criterion defined by eq. (4) specifies the grouping result, G_h, but
is not a constructive algorithm. Moreover, this class of optimization problems is known to
have high computational complexity (exponential), in the worst case. We therefore address
the theoretical aspect and the practical side separately.
From the theoretical point of view, we shall now assume that the hypothesis which
maximizes the likelihood may be found, and address our main question: "what is the
relation between the result G h , and the unknown target graph G t ?" This question
is interesting because it is concerned with predicting the grouping performance. If we can
show that these two graphs are close in some sense, then it means that algorithms which
use the maximum likelihood principle have predictable expected behavior, and that even though we cannot know G_t, the grouping hypothesis G_h they produce is close enough to the true partitioning. This question is considered in the next section.
From the practical point of view, one should ask if this optimization problem can be solved
in a reasonable time. Some people use simulated annealing, or other annealing methods, to
solve similar problems [HH93]. Others use heuristic algorithms [Vos92]. We developed a
heuristic algorithm which is based on finding seeds of the groups, which form (almost) a
clique in Gm . (Random graphs theory [Pal85] implies that cliques of a certain size are most
likely to be found inside an object, and are very unlikely to be found elsewhere in the graph).
Seeds are found as the highest entries in the square of the adjacency matrix of Gm . Then,
these seeds are iteratively modified by making small changes (such as moving one element
from one group to another, merging two groups, etc.), using a greedy policy, until a (local)
maximum of the likelihood function is obtained. In our experiments (described in section
6), this algorithm performs nicely. More details can be found in [AL95].
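As an illustration of the two ingredients just described (the edge likelihoods of eqs. (2)-(3) and a greedy local search), the following sketch scores a candidate partition against the measured graph and improves it by single-node moves. It is not the authors' implementation; the function names (partition_log_likelihood, greedy_refine) and the adjacency-matrix representation are our own assumptions.

    import numpy as np
    from itertools import combinations

    def partition_log_likelihood(M, U, labels, eps_miss, eps_fa):
        # Log-likelihood of a labelling (partition into groups), given the
        # measured graph M and the underlying graph U (0/1 adjacency matrices).
        ll = 0.0
        for i, j in combinations(range(len(labels)), 2):
            if not U[i, j]:
                continue                        # this pair was never evaluated
            if labels[i] == labels[j]:          # arc hypothesized to be in E
                ll += np.log(1 - eps_miss) if M[i, j] else np.log(eps_miss)
            else:                               # arc hypothesized to be absent
                ll += np.log(eps_fa) if M[i, j] else np.log(1 - eps_fa)
        return ll

    def greedy_refine(M, U, labels, eps_miss, eps_fa, sweeps=10):
        # Greedy local search: move single nodes between existing groups while
        # the likelihood increases (a stand-in for the seed-and-refine heuristic).
        labels = list(labels)
        best = partition_log_likelihood(M, U, labels, eps_miss, eps_fa)
        for _ in range(sweeps):
            improved = False
            for v in range(len(labels)):
                for g in set(labels):
                    if g == labels[v]:
                        continue
                    old = labels[v]
                    labels[v] = g
                    ll = partition_log_likelihood(M, U, labels, eps_miss, eps_fa)
                    if ll > best:
                        best, improved = ll, True
                    else:
                        labels[v] = old
            if not improved:
                break
        return labels, best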
4 Analysis of The Grouping Quality
This section quantifies some aspects of the similarity between the unknown scene grouping
(represented by G t ), and the hypothesized grouping suggested by our algorithm (represented
by G h ). As we shall see, the dissimilarity depends on the error probabilities of the individual
arcs, ffl miss ; ffl fa , and on the connectivity , or the density, of G u .
The first result demonstrates that good solutions are not rejected.

Claim 1: L{G_m | G_t} = max_{G ∈ G_c} L{G_m | G}, provided that ε_miss = ε_fa = 0.

Proof: For every clique graph G = (V, E) ∈ G_c,

L{G_m | G} / L{G_m | G_t} = ( ∏_{e ∈ E_u ∩ (E_t \ E)} ε_fa / (1 − ε_miss) ) · ( ∏_{e ∈ E_u ∩ (E \ E_t)} ε_miss / (1 − ε_fa) ) ≤ 1

(arcs of E_u which exist in both (or none) of the two sets, E and E_t, do not affect that ratio, and therefore are not counted). □
Borrowing the terminology of parameter estimation, this claim shows that maximum
likelihood partition is a consistent estimator. That is, arbitrarily reliable labeling of the
underlying graph, associated with very good cues, leads to a correct decision. From now on
we assume that ffl miss consistency is not ensured. In the more realistic
case, where some hypotheses regarding arcs of the underlying graphs may be wrong, we
shall show that grouping performance degrades gracefully with the quality (reliability) of
the cues and that this performance may be predicted. In general, grouping performance is
good for groups which are densely connected within the underlying graph, and is expected
to be worse for loosely connected groups. If, for example, a node (data feature) is connected
to its group by only one edge in the underlying graph, it may be separated from this group
in the hypothesized partition with probability ε_miss, which may be quite high.
We now turn to proving a fundamental claim on which most of the other results rely. It is a necessary condition satisfied by any partition selected according to the maximum likelihood principle. Consider two node-disjoint subsets of the graph, V_1, V_2 ⊂ V, and denote their cut by J(V_1, V_2) = {(u, v) | u ∈ V_1, v ∈ V_2}. Let l_u(V_1, V_2) = |J(V_1, V_2) ∩ E_u| denote the cut width relative to the underlying graph. Similarly, let l_m(V_1, V_2) = |J(V_1, V_2) ∩ E_m| denote the cut width relative to the measurement graph G_m.

Claim 2 (a necessary condition): Let G_h = (V, E_h) be the maximum likelihood hypothesis (satisfying eq. (4)), and let

α = log((1 − ε_fa)/ε_miss) / [ log((1 − ε_fa)/ε_miss) + log((1 − ε_miss)/ε_fa) ].

Then,
1. For any disjoint partition of any group V_i of G_h into two subsets V'_i and V''_i:  l_m(V'_i, V''_i) ≥ α · l_u(V'_i, V''_i).
2. For any two groups V_i, V_j of G_h:  l_m(V_i, V_j) ≤ α · l_u(V_i, V_j).
Proof: The proof technique is similar to that of claim 1. For proving the first part, consider the likelihood ratio between two hypotheses: one is G_h and the other, denoted ~G_h, is constructed from G_h by separating V_i into two different groups, V'_i and V''_i. Writing l_u, l_m for l_u(V'_i, V''_i) and l_m(V'_i, V''_i),

L{G_m | G_h} / L{G_m | ~G_h} = ∏_{e ∈ J(V'_i, V''_i) ∩ E_u} ( L{e | E_h} / L{e | ~E_h} ) = ( (1 − ε_miss)/ε_fa )^{l_m} · ( ε_miss/(1 − ε_fa) )^{l_u − l_m}.

This likelihood ratio is a non-decreasing function of l_m and is not smaller than 1 for l_m ≥ α l_u. Therefore, if the claim is not satisfied, then ~G_h is more likely than G_h, which contradicts the assumption that (4) holds. The second part of the claim is proved in a similar manner. □

Figure 2: The cut involved in splitting a group into two (proof of claim 2).

Qualitatively, the claim shows that a maximum likelihood
grouping must satisfy local conditions between
many pairs of feature subsets. It further implies that
a grouping error, either in the form of adding an alien
data feature to a group or deleting its member, requires
more than a single false alarm or a single miss, provided
that the "connectivity" of the underlying graph is high
enough. An addition error, for example, merging a group
with an alien node v , requires that a substantial fraction
of the edges in J(V i ; fv g), which none of them is
in E t , will be included in Em . That is, it requires many false alarms. The parameter ff,
specifying the fraction of cut edges required to merge two subsets reflects the expected error
if the false alarm probability is equal to the miss probability, then
the false alarm probability is higher, so is ff.
This condition is now used to show that choosing a sufficiently dense underlying graph
can significantly improve the grouping performance. We shall consider two cases: a complete
underlying graph, and a locally connected underlying graph.
4.1 Complete Underlying Graphs
A complete underlying graph connects every data feature with all others and provides the
maximal information to the graph clustering stage. Therefore, it may lead to excellent
grouping accuracy. On the other hand, as mentioned before, it is useful only for global
grouping cues, such as being on the same straight line, being consistent with an affine
motion model, etc. There are many types of grouping inaccuracies, and the following claims
consider some of them.
Claim 3: Let S_i be a true data feature group. Then, the probability that a maximum likelihood process will hypothesize a group V* containing k nodes of S_i and a particular additional node, v ∉ S_i, is at most

p_add ≤ Σ_{i=k_min}^{k} (k choose i) ε_fa^i (1 − ε_fa)^{k−i},   with k_min = ⌈α k⌉.

Proof: Use claim 2 with V_1 = V* ∩ S_i and V_2 = {v}, and note that l_u = k. Merging these subsets requires that at least α l_u = α k of the edges connecting them are included in E_m. This event happens with a binomial distribution. □
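The bound of claim 3 is easy to evaluate numerically. The sketch below does so using the expressions reconstructed above; the exact forms of α and of the bound are reconstructions and should be treated as assumptions rather than as the paper's verbatim formulas.

    import math

    def alpha(eps_miss, eps_fa):
        # Merge-fraction threshold reconstructed from claim 2 (an assumption).
        num = math.log((1 - eps_fa) / eps_miss)
        return num / (num + math.log((1 - eps_miss) / eps_fa))

    def binom_tail(n, p, k_min):
        # P(Bin(n, p) >= k_min)
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k_min, n + 1))

    def p_add_one_alien(k, eps_miss, eps_fa):
        # Claim-3-style bound: probability that a particular alien node is merged
        # with k nodes of a true group (complete underlying graph).
        k_min = math.ceil(alpha(eps_miss, eps_fa) * k)
        return binom_tail(k, eps_fa, k_min)

    # Example: p_add_one_alien(15, 0.1, 0.1) is already well below 1e-3.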
Claim 4: Let S_i be a true group and V* a maximum likelihood hypothesized group containing at least k nodes of S_i. Then, the probability that V* contains k' nodes or more which are alien to S_i, is at most

p_{k'-aliens} ≤ Σ_{j ≥ k'} (N − |S_i| choose j) · Σ_{i=⌈αkj⌉}^{kj} (kj choose i) ε_fa^i (1 − ε_fa)^{kj−i}.

Proof: Use claim 2 with V_1 = V* ∩ S_i and V_2 = V', to find the probability that a particular alien data subset V' of size j merges with the k nodes. Then, take a worst case approach, and sum these probabilities over all subsets of a certain size j, and over all sizes higher than k'. □
Figure 3: Two predictions of the analysis. Left: A k-connected curve-like group (e.g., a smooth curve) is likely to break into a number of sub-groups. The graph shows an upper bound on the expected number of sub-groups versus the minimal cut size in the group, k (eq. 13). Here the group size (length) is 400 elements, ε_miss = 0.14 and ε_fa (typical values for images like Figure 8). It shows how increasing connectivity quickly reduces the false division of this type of groups. Right: Upper bound on the probability for adding any k' alien data features to a group of size k, using a complete underlying graph (claim 4). The error probability is negligible.
Claim 5: Let S_i be a true group and V* the maximum likelihood hypothesized group containing the maximal number of data features from S_i. Then, the probability that V* contains |S_i| − k' or fewer data features from S_i, is at most

p_{k'-deletions} ≤ (|S_i| choose k') · Pr{ Bin(l_u, ε_miss) ≥ (1 − α) l_u },   with l_u = k'(|S_i| − k').

Proof: For any particular deleted subset S' ⊂ S_i of size k', use claim 2 with V_1 = S' and V_2 = S_i \ S', and note that l_u = k'(|S_i| − k'). Such a split of S_i requires that l_m ≤ α l_u, i.e., at least (1 − α) l_u miss errors. This event happens with a binomial distribution. To find the probability that some subset of size k' is deleted, we sum over all subsets, ignoring the dependency between the events, which can only decrease this probability. □
Claims 3,4,5 simply state that if the original group S i is big enough and the miss and false
alarm probabilities are small enough, it is very likely that the maximum likelihood partition
will include one group for each object, containing most of S i , and very few aliens. The crude
bound, plotted in Figure 3(right) shows, as an example, that even for substantial cue errors
the probability for hypothesizing highly mixed subsets is small, provided that the group is
large enough (k ≥ 15).
An even more practical performance measure, which we calculated using some approximations
is the expected number of addition and deletion errors:

E{k_delete} ≈ k · Σ_{i=k_min}^{k} (k choose i) ε_miss^i (1 − ε_miss)^{k−i},
E{k_add} ≈ (N − k) · Σ_{i=k_min}^{k} (k choose i) ε_fa^i (1 − ε_fa)^{k−i},

where k is the group size and k_min = ⌈α k⌉. Experimental results for
these two grouping error types are given in Figure 9(c) and Figure 9(d).
The major difficulty we see with the use of a complete underlying graph is that it does
not apply to all the cues. This especially concerns cues that are meaningful only locally,
such as co-circularity for smooth curve detection. Therefore, another option, the locally-
dense underlying graph is also proposed.
4.2 Locally Dense Underlying Graphs
An intuitive choice of an underlying graph which is less dense than the complete graph is
to connect every data feature only to those data features in its neighborhood, either to the
closest k data features, or to all data features in a certain radius. When specifying such
a graph, it is important to keep a substantial connectivity between the data features of
objects so that accidental deletion will be less likely. This connectivity demand is quantified
by requiring the projection of every group on the underlying graph, to be
k-connected. That is, if any k \Gamma 1 nodes are eliminated, then this projected subgraph remains
connected. A nice property of k-connected graphs is that every cut in them contains at least
edges. Therefore, a deletion of a node requires at least ffk miss errors. Alien data features
are either densely connected to a group, implying that their incorrect addition to a group is
prevented with high confidence, or are not connected enough and are not considered at all
as candidates for addition.
A significant change from the case of a complete graph is that αk miss errors can cause the
deletion of a subgroup containing more than one data feature, a fact that demonstrates the
relative weakness of the locally connected underlying graph. Being aware of this weakness, we
choose to characterize the grouping performance by another measure: the expected number
of "large" subgroups to which the group decomposes. Consider a particular cut of size k in
the projection of some object on the underlying graph. The probability that the object is
divided in this cut into two parts is exactly

p_divide-in-k-cut = Σ_{i=k_min}^{k} (k choose i) ε_miss^i (1 − ε_miss)^{k−i},   k_min = ⌈α k⌉.
Suppose now that we can estimate the number of "potential cuts", and denote this number
by N cut . Then, the expected number of group separation will be simply N cut p divide in k\Gammacut .
Fortunately, such an estimate may be done for the interesting case of curve like groups. Let
S i be a k-connected curve-like group in which the data features are ordered along some
curve. A separation of the curve into significant "large" parts, is associated with cuts which
separate a group of consecutive curve points from another group of consecutive curve points.
The number of such cuts is N_cut. Therefore, if we can guarantee that the number
of arcs in every one of these cuts is not less than k, then the expected number of parts into
which the curve decomposes is not higher than

1 + N_cut · p_divide-in-k-cut = 1 + N_cut · Σ_{i=k_min}^{k} (k choose i) ε_miss^i (1 − ε_miss)^{k−i}.   (13)

This number, plotted in Figure 3 (left), generally decreases with increasing cut size k, but due to the non-constant and non-monotonic nature of the ratio k_min/(αk), it is not strictly monotonic.
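Assuming the reconstructed bound (13) above, and taking N_cut = group_size − 1 for a curve-like group (our own assumption, not a value given in the text), the expected number of parts can be sketched as follows.

    import math

    def expected_curve_splits(group_size, k, eps_miss, eps_fa):
        # Upper bound on the number of parts a k-connected curve-like group
        # breaks into, following the reconstructed eq. (13).
        # N_cut = group_size - 1 is an assumption about the number of cuts
        # between consecutive runs of curve points.
        a = math.log((1 - eps_fa) / eps_miss)
        alpha_val = a / (a + math.log((1 - eps_miss) / eps_fa))
        k_min = math.ceil(alpha_val * k)
        p_divide = sum(math.comb(k, i) * eps_miss**i * (1 - eps_miss)**(k - i)
                       for i in range(k_min, k + 1))
        return 1 + (group_size - 1) * p_divide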
Locally connected underlying graphs are used in the 2nd demonstrated instance of the
algorithm, which considers grouping of curve like groups based on proximity and smoothness.
5 Cue Enhancement
The performance of the grouping algorithm depends very much on the reliability of the
cues available to it. In many situations this reliability is predetermined and the grouping
algorithm designer can only prefer the more reliable cues from the available variety. This
section, however, shows how the reliability of a grouping cue can be significantly improved
by using statistical evidence accumulation techniques. This method is not restricted only
to our grouping algorithm, and can be used also in other grouping algorithms. Two of the
three domain specific grouping algorithms that we implement as examples (the co-linearity
and the smoothness) use this procedure.
5.1 The Cue Enhancement Procedure - Overview
The cue enhancement procedure considers one pair of data features at a time, and tries to
use the other data features in order to estimate the consistency of this pair. We shall say that
a subset of data features A is consistent if it is a subset of some true group. The idea behind
the following process of evidence accumulation is that a random data subset A that contains
the data pair may be consistent only (but not necessarily) if e itself is consistent.
Therefore, a multi-feature cues operating on a feature subset A (e 2 carries statistical
information on the consistency of e. Although bi-feature cues are easier to calculate and
are more straightforward to use, cues which test larger data subsets have several significant
advantages: Several useful cues are simply not defined when only one pair of elements is
considered (e.g. convexity). Bi-feature cues usually have corresponding multi-feature cues
associated with improved reliability. (Observe, for example, that accidental collinearity is
less likely if more points are considered while the miss probability should decrease only
slightly in this case. More generally, the reliability of the shape-based multi-feature cue of
"consistent with some instance of a particular object" clearly increases with the number of
data features [GH91, Lin94].)
The algorithm is conceptually simple: for every data pair e = (u, v) in the underlying graph, the algorithm draws several random data subsets, A_1, A_2, ..., all of which contain the pair e. Then, the corresponding multi-feature cues, C(A_1), C(A_2), ..., are extracted. The cue values are deterministic functions of the subsets A_1, A_2, ..., but may be
also considered as instances of a random variable, the statistics of which depend on the data
pair e, and in particular, on its consistency. The number of random data subsets and their
associated cues, required for a conclusive reliable decision on the consistency of e, is determined
adaptively and efficiently by a well-known method for statistical evidence integration:
Wald's SPRT test.
5.1.1 Wald's SPRT Algorithm and its Application for Cue Enhancement
Consider a random variable, x, the distribution of which depends on an unknown binary
parameter, which takes the value of ω_0 or ω_1. Every instance of the random variable carries
statistical information on this parameter and integrating this information corresponding
to a sequence of the random variable instances will eventually lead to a reliable inference
about it. An efficient and accurate procedure for integration the statistical evidence is
the Sequential Probability Ratio Test (SPRT) suggested by Wald [Wal52]. This procedure
quantifies the evidence obtained from each trial by the log likelihood ratio function of its outcome,

h(x) = log( P_{ω_1}(x) / P_{ω_0}(x) ),

where P_{ω_0}(x) and P_{ω_1}(x) are the probability functions of the two different populations and x is the value assigned to the random variable in this trial.
The log likelihood ratio is high when the value of the random variable x is likely for
one hypothesis (ω_1) and is not likely for the other (ω_0). It is negative and low when the situation is reversed. If the probabilities of seeing x under both hypotheses are close, then x carries only little information and h(x) ≈ 0. When several trials are taken, the log likelihood function of the composite event, x_1, x_2, ..., x_n, should be considered. If, however, the trials are independent then this composite log likelihood function is equal to the sum of the individual log likelihood functions, σ_n = Σ_{i=1}^{n} h(x_i). The sum σ_n serves as the statistic by
which the decision is made. Wald's procedure specifies two limits, upper and lower. If the
cumulative log likelihood function crosses one of these limits, a decision is made. Otherwise,
more trials are carried out. More formally, denote the decision made by the procedure by
D, and let the allowed probabilities of a decision error be ε_miss and ε_fa. The algorithm is given simply by this iterative rule:

D = ω_1   if σ_n ≥ a,
D = ω_0   if σ_n ≤ b,
else test for another subset x_{n+1}.
The upper and lower limits, a and b, depend only on the allowed probability of error (defined in eq. 1), and do not depend on the distribution of the random variable x. We calculate a, b using a practical approximation, proposed by Wald [Wal52], which is very accurate when ε_miss, ε_fa are small:

a = log((1 − ε_miss)/ε_fa),   b = log(ε_miss/(1 − ε_fa)).   (16)
The basic SPRT algorithm terminates with probability one and is optimal in the sense
that it provides the minimum expected number of tests necessary to obtain the required
decision error [Wal52]. This expected number of tests is given by:

E{n | ω_1} ≈ ((1 − ε_miss) a + ε_miss b) / E{h(x) | ω_1},   E{n | ω_0} ≈ (ε_fa a + (1 − ε_fa) b) / E{h(x) | ω_0},   (17)

where E{h(x) | ω_0} and E{h(x) | ω_1} are the conditional expected amounts of evidence from a single trial. Despite its average case optimality, the worst case number of trials required by the SPRT algorithm is not bounded. To deal with this disadvantage, the modified
Truncated SPRT [Wal52], which uses a predefined upper bound n 0 on the number of tests,
is used. We set n_0 to be a few times larger than E{n}.
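For concreteness, Wald's approximate thresholds and the expected number of trials (as reconstructed in eqs. (16) and (17) above) can be computed as in the following sketch; the function names are ours and the exact formulas should be treated as reconstructions.

    import math

    def wald_thresholds(eps_miss, eps_fa):
        # Wald's approximate SPRT limits (reconstructed eq. 16).
        a = math.log((1 - eps_miss) / eps_fa)
        b = math.log(eps_miss / (1 - eps_fa))
        return a, b

    def expected_num_tests(eps_miss, eps_fa, mean_h1, mean_h0):
        # Approximate expected number of trials under each hypothesis
        # (reconstructed eq. 17); mean_h1 > 0 > mean_h0 are the mean
        # log-likelihood ratios of a single trial under omega_1 / omega_0.
        a, b = wald_thresholds(eps_miss, eps_fa)
        n1 = ((1 - eps_miss) * a + eps_miss * b) / mean_h1
        n0 = (eps_fa * a + (1 - eps_fa) * b) / mean_h0
        return n1, n0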
In the context of the Cue enhancement procedure, the cue value is regarded as a random
variable. Apart from specifying the desired reliability (ε_miss, ε_fa) and using equation 16 to
calculate the two thresholds a and b, one must supply the two distributions (for consistent
and inconsistent feature pairs), from which the log-likelihood ratio can be determined. These
distributions should be evaluated carefully: The distributions of the cues taken over the consistent and inconsistent populations, denoted respectively by P_con(C(A)) and P_incon(C(A)), are usually quite different. It is important however to observe that even if a feature pair (u, v) is consistent, a random set including it may not be. Therefore, these distributions should be modified as follows: A random set containing a feature pair {u, v} ⊆ S_i and k − 2 additional randomly selected data features, v_1, ..., v_{k−2}, is consistent with probability

q = (|S_i| − 2 choose k − 2) / (N − 2 choose k − 2),

where N = |S| is the total number of data features.
Therefore, the modified cue distributions, conditioned on the consistency of the first two features, are

P_0(C(A)) = q · P_con(C(A)) + (1 − q) · P_incon(C(A)),   P_1(C(A)) = P_incon(C(A)).   (18)
Unfortunately, these distributions are more similar and difficult to distinguish (see Figure
9(a) for such a pair of distributions considered in our experiments). Restricting ourselves to
binary cues, the distribution of which is specified by the probabilities p_0 and p_1 of a positive cue outcome under P_0(C(A)) and P_1(C(A)) respectively, the conditional distributions become two Bernoulli distributions (with parameters p_0 and p_1). The log likelihood ratio of the i-th randomly-selected subset, A_i, becomes:

h(C(A_i)) = log( p_0 / p_1 )                 if C(A_i) indicates consistency,
h(C(A_i)) = log( (1 − p_0) / (1 − p_1) )     otherwise.
The SPRT based cue enhancement procedure is summarized in Figure 4.
For every feature pair (u, v) in the underlying graph G_u:
1. Set the evidence accumulator, σ, and the trials counter, n, to 0.
2. Randomly choose data features x_1, ..., x_{k−2} from S \ {u, v}.
3. Calculate the multi-feature cue C({u, v, x_1, ..., x_{k−2}}) and its log likelihood ratio h.
4. Update the evidence accumulator: σ ← σ + h, and set n ← n + 1.
5. If σ ≥ a, or if n ≥ n_0 and σ > 0, output: (u, v) is consistent.
   If σ ≤ b, or if n ≥ n_0 and σ < 0, output: (u, v) is inconsistent.
   Else, repeat (2)-(5).
Figure
4: The cue enhancement algorithm
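A minimal sketch of the loop in Figure 4 is given below. It assumes a user-supplied binary multi-feature cue cue_is_positive and the modified positive-outcome probabilities p_cons (subsets containing a consistent pair) and p_incons (inconsistent pair); these names, the thresholds from the reconstructed eq. (16), and the tie-breaking at truncation are our own choices, not part of the original algorithm.

    import math, random

    def enhance_pair(u, v, features, cue_is_positive, p_cons, p_incons,
                     eps_miss=0.05, eps_fa=0.05, k=3, n_max=200):
        # SPRT-style cue enhancement for one feature pair (sketch of Figure 4).
        a = math.log((1 - eps_miss) / eps_fa)      # upper limit (reconstructed eq. 16)
        b = math.log(eps_miss / (1 - eps_fa))      # lower limit
        others = [f for f in features if f is not u and f is not v]
        sigma, n = 0.0, 0
        while True:
            extra = random.sample(others, k - 2)   # complete the pair to a k-subset
            positive = cue_is_positive([u, v] + extra)
            sigma += (math.log(p_cons / p_incons) if positive
                      else math.log((1 - p_cons) / (1 - p_incons)))
            n += 1
            if sigma >= a or (n >= n_max and sigma > 0):
                return True                        # decide: (u, v) is consistent
            if sigma <= b or (n >= n_max and sigma <= 0):
                return False                       # decide: (u, v) is inconsistent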
The success of the cue enhancement procedure relies on the validity of the statistical
model, and in particular, on the following two assumptions:
Assumption a: The statistics of the cue values evaluated over all data subsets containing a consistent (inconsistent) arc are approximately the same.
Assumption b: The cues extracted from two random subsets including the same feature pair are independent identically distributed random variables.
If the assumptions are satisfied, then:

Claim 6: The cue enhancement procedure described above can identify the consistency of the
feature pair within any specified error tolerance irrespective of the reliability of the basic cue
and provided that assumptions a and b hold.
This surprising conclusion seems to contradict intuition according to which arbitrarily low
identification errors are impossible as the amount of data in the image is finite. Indeed,
arbitrarily high performance is not possible as it requires a large number of trials leading to
a contradiction of the independence assumption. Therefore, the reliability of the basic cue is
important because it leads to a lower number of trials, which is both computationally advantageous
and important to the validity of the statistical independence assumption. Indeed,
our experiments show that the SPRT significantly improves the cue reliability but that the
achievable error rate is not arbitrarily small (see experimental results in the next section).
For a constant specified reliability (ε_miss, ε_fa), the expected running time of the cue
enhancement procedure is constant. The total running time for evaluating all the arcs of the
underlying graph, G u , is, therefore, linear in the number of arcs. We emphasize here that this
enhancement method is completely general and may use any cue that satisfies some benign
assumptions as stated in this section. It relies on the distributions of the cues, which should
be calculated before and involve certain technicalities described in the full version [AL94].
6 Simulation and experimentation
This section presents three different grouping applications, implemented in different domains,
as instances of the generic grouping algorithm described above. To our best knowledge, it
is the first time that a generic grouping algorithm is used in multiple domains. For each
implementation, the domain, the data features, and the grouping cue are different, but the
same grouping mechanism (and computer program) is used (see Table 1). The aim of these
examples is to show that useful grouping algorithms may be obtained as instances of the
generic approach and to examine the performance predictions against experimental results.
We do not expect that our general algorithm will perform as well as domain specific algorithms which were tailored for their domains. Still, in all tested domains, we got grouping
results comparable to those obtained from existing, domain specific methods. This is
remarkable, because except from the choice of the cues (and the associated underlying graph
determined by their extent), the process did not depend on the domain. Moreover, although
some of the analysis may help in selecting between different available cues, we did not focus
on choosing the best cues, but more on testing our approach using reasonable cues. There-
fore, we expect that even better performance will be possible by optimizing cues and their
corresponding underlying graphs. (See more results and examples in [AL94, AL95].)
6.1 Example 1: Grouping points by co-linearity cues
Given a set of points in R 2 , the algorithm should partition the data into co-linear groups
(and one background set). To remove any doubt, we do not intend to propose our grouping
approach as an efficient (or even reasonable) method for detecting co-linear clusters. Several
common solutions (e.g., Hough transform, RANSAC) exist for this particular task. We have
Table 1: The three instances of the generic grouping algorithm

                      The 1st example       The 2nd example               The 3rd example
  data elements       points in R^2         edgels                        patches of optical flow
  grouping cues       co-linearity          co-circularity and proximity  consistency with Affine motion
  cue's extent        global                local                         global
  enhanced cue        subsets of 3 points   subsets of 3 edgels           none
  underlying graph    complete graph        locally connected graph       a complete graph
  grouping mechanism  maximum likelihood graph clustering (same program)
chosen this example because it is a characteristic example of grouping tasks associated with
globally valid cues (and complete underlying graphs). Moreover, it provides a convenient
way for measuring grouping performance, the quantification and prediction of which is our
main interest here.
The grouping cue is defined over data subsets containing k > 2 data features (here k = 3) and is just the second eigenvalue of the associated covariance matrix. Clearly, if this
eigenvalue is small, the data subset is closer to linear (see, e.g. [GM92]). The cue is global,
hence the underlying graph is the complete graph. To binarize this cue we simply check if
its value is lower than a threshold T .
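A possible implementation of this co-linearity cue (the smallest eigenvalue of the covariance of the point subset, thresholded by T) is sketched below; it is one reasonable reading of the description, not the authors' code.

    import numpy as np

    def collinearity_cue(points):
        # Smallest eigenvalue of the 2x2 covariance of a point subset;
        # it is close to zero when the points are nearly collinear.
        cov = np.cov(np.asarray(points, dtype=float), rowvar=False)
        return float(np.linalg.eigvalsh(cov)[0])

    def binary_collinearity_cue(points, T):
        # Binarized cue: 1 if the subset looks collinear (eigenvalue below T).
        return int(collinearity_cue(points) < T)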
We consider synthetic random images containing randomly drawn points (e.g Figure
5(a)). The points are drawn according to a distribution specified by a collection of arbitrary
straight lines which are the "objects" associated with the given data, and some additional,
uniformly distributed, aliens. With this data source, it is easy to automatically create many
data sets with known noise distributions and grouping ground truth (with the exception of
a few alien points, located very close to the "objects").
A typical grouping result is shown and explained in Figure 5. We used the co-linearity
example to comprehensively test the performance of the grouping algorithm against its predictions. The first results show the performance of the cue function and the cue enhancement procedure. To examine the cue function, we estimate the two cue-value distributions, P_0(C(A)) and P_1(C(A)), which differ by the consistency of the included pair {u, v} ⊂ A. This is done by a Monte-Carlo process over randomly-selected feature-triplets. These two distributions (defined in eq. 18) tend to be quite similar, as shown in Figure 9(a). In order to make it
a binary cue, we proceed with selecting the threshold, T , for the binary cue decision. Any
specified threshold determines different binary-cue errors, (p_miss, p_fa). While one is a non-decreasing function of T, the other is a non-increasing function of T, so some compromise is made. The values of (p_miss, p_fa) affect the efficiency of the SPRT algorithm, which is measured by the average number of subsets, E{n}, needed to reach a specified error rate (ε_miss, ε_fa). Using eq. 17 with P_0(C(A)) and P_1(C(A)) of Figure 9(a), one can draw E{n} in terms of the selected threshold, as shown in Figure 9(b). The optimal threshold is found as the cue-value of the global minimum. Note that the selection of T does NOT affect the resulting grouping quality, but only the computational time needed for the SPRT to reach the desired error of the enhanced cue, (ε_miss, ε_fa). This threshold is also optimal in the sense
that it provides the maximum information from each evaluated feature-triplet. The measured
average number of subsets needed for the SPRT, E{n}, is given as labels in Figure 9(e), for 100 different pre-specified (ε_miss, ε_fa) values, and remarkably agrees with the predicted average (eq. 17), shown by the curves in this graph. It is also shown that the enhanced cue reliability can exceed 95% (i.e., ε_miss < 5% and ε_fa < 5%), even with the simple cue we used, which has a very low discrimination power by itself.
The next results show the overall grouping quality. Regardless of the choice of (ε_miss, ε_fa), the 5 lines were always detected as the 5 largest groups in our experiments. The selection of (ε_miss, ε_fa) does affect, however, the overall grouping quality. This is measured by counting the addition errors and the deletion errors, as shown in Figure 9(c) and 9(d), respectively. Note that while the deletion error is very low, as expected, the addition error is higher than expected. The reason for this discrepancy is some alien data features which are very close to one of the lines and are erroneously added to it. The tradeoff between grouping quality and the computational time of the cue enhancement procedure is illustrated by these three figures: As E{n} increases (in Figure 9(e)), the errors decrease (in Figures 9(c) and 9(d)).
6.2 Example 2: Grouping of edgels by smoothness
Starting from an image of edgels (data feature = edge location + gradient direction), the algorithm
should group edgels which lie on the same smooth curve. This is a very useful grouping
task, considered by many researchers (see, e.g [GM92, ZMFL95, HH93, SU90, CRH93]).
A crude co-circularity cue function, operating on edgel triples, is used. It is calculated as
the maximal angular difference between the gradient direction and the corresponding normal
direction to the circular arc passing through the three points. The underlying graph is
locally connected and is constructed by connecting every edgel to its K ∈ [10, 50] nearest
edgels (K is a constant).
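One way to realize such a crude co-circularity cue is sketched below: it fits the circle through the three edgel positions and measures how far each gradient direction deviates from the radial direction. The handling of nearly collinear triples and the folding of gradient directions modulo π are our own assumptions, not details given in the text.

    import numpy as np

    def cocircularity_cue(points, grad_dirs):
        # points: (3, 2) array of edgel positions; grad_dirs: 3 gradient angles (rad).
        # Returns the largest angular deviation between each gradient direction and
        # the radial direction of the circle through the three points.
        p = np.asarray(points, dtype=float)
        (x1, y1), (x2, y2), (x3, y3) = p
        d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
        if abs(d) < 1e-12:
            # Nearly collinear points: use the normal of the line as "radial" direction.
            radial = np.full(3, np.arctan2(y3 - y1, x3 - x1) + np.pi / 2)
        else:
            ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
                  + (x3**2 + y3**2) * (y1 - y2)) / d
            uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
                  + (x3**2 + y3**2) * (x2 - x1)) / d
            radial = np.arctan2(p[:, 1] - uy, p[:, 0] - ux)
        diff = np.abs(np.angle(np.exp(1j * (np.asarray(grad_dirs) - radial))))
        return float(np.minimum(diff, np.pi - diff).max())   # small value = co-circular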
We test this procedure both on synthetic and real images, and the results are very good
in both cases (see Figure 6 and Figure 7). Synthetic images are created by detecting the
edges of piecewise constant images which contain grey level smooth blobs (e.g. Figure 6(a)).
In the synthetic example, we found that the perimeter of each of the two big blobs splits into
3-4 groups (see Figure 6(e)). It happens in places where the connectivity in G u is low, the
minimal connectivity assumption fails, and the split probability increases (see Figure 3).
6.3 Example 3: Segmentation from Optical Flow using Affine Motion
The third grouping algorithm is based on common motion. The data features are pixel
blocks, which should be grouped together if their motion obeys the same rule, that is if
the given optical flow over them is consistent with one Affine motion model [JC94, AW94].
Technically, every pixel block is represented by its location and six parameters of the local
Affine motion model (calculated using Least Squares). The grouping cue is defined over pairs
of blocks, and its value is the sum of the optical flow errors of each block when calculating it
using the Affine model of the other block. The cue is global and hence a complete underlying
graph is used. No cue enhancement is used here, and the cue is not very reliable: typical
error probabilities are ε_miss = 0.35 and ε_fa = 0.2. Still, the results are comparable to those
obtained by a domain specific algorithm [AW94]. The final clustering result, shown in Figure
8(f), was obtained after a post-processing stage: the obtained grouping is used to calculate
an Affine motion model for every group, which is used to classify all the individual pixels in
the image into groups. (The same method used in [AW94].)
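A sketch of such a pairwise affine-motion cue is given below, assuming each block is represented by sampled positions and their optical flow vectors; the least-squares fit and the symmetric cross-prediction error follow the description above, but the exact error measure used in the paper may differ.

    import numpy as np

    def fit_affine(positions, flow):
        # Least-squares affine motion model for one block: flow ~ [x y 1] @ P, P is 3x2.
        X = np.hstack([positions, np.ones((len(positions), 1))])
        params, *_ = np.linalg.lstsq(X, flow, rcond=None)
        return params

    def affine_error(params, positions, flow):
        X = np.hstack([positions, np.ones((len(positions), 1))])
        return float(np.sum((X @ params - flow) ** 2))

    def motion_cue(block_a, block_b):
        # Pairwise cue: cross-prediction error of each block's optical flow under the
        # other block's affine model; a small value suggests a common affine motion.
        (pa, fa), (pb, fb) = block_a, block_b
        ma, mb = fit_affine(pa, fa), fit_affine(pb, fb)
        return affine_error(mb, pa, fa) + affine_error(ma, pb, fb)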
(a) Original image: A set of points. (b) Associated data features: same as
original image.
(c) Underlying graph Gu : A complete
graph. The pixel gray level indicates the
number of arcs passing thru.
(d) Measured graph Gm . The pixel gray
level indicates the number of arcs passing
thru.
(e) One of the detected groups. Only very few points, if any, fall in a wrong group.
(f) All the detected groups
Figure
5: Example 1: grouping of co-linear points. An example of the images used in the experiments. This image is associated with five lines, contains points in the vicinity of each of them, and 150 uniformly distributed additional data features. The grouping result is near-optimal, which is not surprising given the predictions. It demonstrates the power of a complete underlying graph. Quantitative results of this experiment are shown in Figure 9.
(a) Original image: (b) Associated data features: edgels.
(c) Underlying graph G locally connected
(40 nearest nbrs). The pixel
gray level indicates the number of arcs
passing thru. The brighter areas correspond
to denser regions in G u .
(d) Measured graph Gm . The pixel
gray level indicates the number of arcs
passing thru. Note that the bright
groups in the measured graph are no
longer correspond to the local density
of Gu , but to smoothness. This
byproduct can also serve as a saliency
map.
One of the 14 detected groups. (f) All the 14 detected groups.
Figure
Example 2-1: Grouping of smooth curves in a synthetic image. Edge detection
and gradient were calculated on image (a). 50% of the edge pixels were randomly removed,
and 10% of the background pixels were added, as aliens, with uniformly distributed gradient
directions. Total number of edgels is about 5,000, and about 110,000 arcs in G_u.
Figure 7: Example 2-2: Grouping of smooth curves in a brain image. (a) Original image: a brain image. (b) Edge detection of (a); the associated data features are edgels. (c) Underlying graph Gu: locally connected (40 nearest neighbors); the pixel gray level indicates the number of arcs passing through. (d) Measured graph Gm; the pixel gray level indicates the number of arcs passing through. (e) The five largest detected groups. (f) All the detected groups, superimposed on the original image. The underlying graph Gu is made of 10,400 edgels and 230,000 arcs; the processing time is about 10 minutes.
Figure 8: Example 3: Image segmentation into regions consistent with the same Affine motion parameters. (a) Original image: Flowers sequence. (b) Associated data features: optical flow (blocks). (c) Underlying graph Gu: a complete graph; the pixel gray level indicates the number of arcs passing through. (d) Measured graph Gm; the pixel gray level indicates the number of arcs passing through, and the low number of edges in the clouds area indicates that the optical flow there does not match the Affine motion model. (e) The three resulting groups, each in a different gray level; black regions were either eliminated for high error with their Affine model (e.g., on the tree border) or not grouped into any of the groups. (f) A post-processing stage: the obtained grouping is used to calculate an Affine motion model for every group, which is then used to classify all the individual pixels in the image into groups (black pixels were not classified); this shows that even though the groups in (e) are not visually nice, they can still capture the correct motion clustering of the image. The underlying graph is a complete graph of about 600 nodes (180,000 arcs), and the runtime is about 5 minutes.
Figure 9: Quantitative results: comparison between the analysis predictions and the experimental results of example 1. (a) The distribution of the co-linearity cue values for subsets including a consistent feature pair (solid) and for subsets including an inconsistent feature pair, P1(C(A)) (dashed); although the two distributions are very similar, their populations can be distinguished with less than 5% error, as shown in (e). (b) The expected number of trials needed for the cue enhancement procedure as a function of the selected cue threshold; the optimal cue threshold corresponds to the minimum of this curve. (c), (d) Every point represents a complete grouping process and is labeled by the resulting number of deleted points (deletion error) from all five lines (c) and of added points (addition error) to all five lines (d). (e) The tradeoff between the enhanced cue reliability and the computational effort invested: the error probabilities ε_miss and ε_fa associated with the enhanced cue are plotted against the experimental average number of trials E(n) (the points' labels); the solid lines show the predicted error probabilities. The grouping results tend to reach a near-perfect grouping when the cue enhancement procedure is used.
The goal of this work is to provide a theoretical framework and a generic algorithm that
would apply to various domains and that would have predictable performance. The proposed
generic grouping algorithm relies on established statistical techniques such as sequential testing
and maximum likelihood. The maximum likelihood principle is close to some previous
grouping approaches like the use of densities for evaluating the evidence of certain cues in
[Jac88] or the cumulative pairwise interaction score used for Figure-from-Ground discrimination
in [HH93]. This paper differs from previous approaches because it provides, for the first time, an analysis of the use of these principles, which relates the expected grouping quality to the cue reliability, the connectivity used, and in some cases the computational effort invested. We did not limit ourselves to the theoretical study. Three grouping applications, each based on a different cue, are implemented as instances of the generic grouping algorithm and demonstrate its usefulness. Although we made an argument against judging the merits of vision algorithms only by visually comparing their action on a few examples, we would like to indicate here that our results are similar to those obtained by domain specific methods (e.g., [SU90, HH93] for smoothness based grouping and [AW94] for motion based grouping). Note that Gm may also be used to create a saliency map, where the saliency of every data element is its degree in Gm. This saliency map (e.g., Figure 7(d), 8(d)) is visually comparable with those proposed by other works (e.g., Shashua and Ullman [SU88], Guy and Medioni [GM92]). Its suitability for figure-ground discrimination is now being studied.
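A minimal sketch of this degree-based saliency map is given below; it assumes the measured graph Gm is available as a list of surviving index pairs, which is our representation choice rather than the paper's.

    import numpy as np

    def saliency_from_measured_graph(num_features, edges):
        # Saliency of a data element = its degree in the measured graph Gm.
        degree = np.zeros(num_features, dtype=int)
        for i, j in edges:
            degree[i] += 1
            degree[j] += 1
        return degree

    # Toy usage: element 1 touches three surviving arcs and is the most salient.
    print(saliency_from_measured_graph(4, [(0, 1), (1, 2), (1, 3)]))   # [1 3 1 1]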
Some interesting conclusions arise from our analysis and experimentation with grouping algorithms. It is apparent that higher connectivity, provided either by a complete underlying graph or by a high degree locally-connected graph, can enhance the grouping quality. Therefore, the selection of cues for a grouping algorithm should not be based only on maximizing their reliability but also on their extent. The cue extent determines the connectivity of the valid underlying graph, or in other words, the amount of information which may be extracted by this cue. Another consideration is the cue enhancement possibility: if the cue satisfies the independent random variable assumption, then a more reliable cue may be obtained with a relatively low computational effort.
Our analysis of the computational complexity is not complete. Although the requirements
of the cue enhancement stage were clearly stated, even as a function of the quality required,
we do not have complexity results for the second stage, of finding the maximum likelihood
partition. This task is known to be difficult (simulated annealing is used to solve similar problems [HH93]). We used some heuristics, based on results from random graph theory and on a greedy search, which turned out to work surprisingly well.
In the design of a grouping algorithm, one may either invest the computational effort
in enhancing the quality of a relatively small number of cues or use a larger number of
unreliable cues and merge them through a higher-connectivity underlying graph. The framework proposed in this paper makes this choice explicit by providing a cue enhancement procedure, independent of the maximum likelihood graph clustering. Making the optimal choice is an interesting open question which we are considering. Another research direction is to use our methodology in the context of a different grouping notion, other than partitioning, in which the hypothesized groups are not necessarily disjoint.
Acknowledgements
We would like to thank John Wang for providing us with the Flowers garden optical flow data.
--R
The construction and analysis of a generic grouping algorithm.
Representing moving images with layers.
A Bayesian multiple-hypothesis approach to edge grouping and contour segmentation
Computing curvilinear structure by token-based grouping
Fast spreading metric based approximate graph partitioning algorithms.
An algorithm for finding best matches in logarithmic expected time.
Grimson and Daniel P.
Perceptual grouping using global saliency-enhancing operators
Theories of Visual Perception.
Extraction of groups for recognition.
The Use of Grouping in Visual Object Recognition.
Finding structurally consistent motion correspondences.
On the amount of data required for reliable recognition.
Perceptual Organization and Visual Recognition.
Using perceptual organization to extract 3-d structures
Graphical Evolution.
Trace interface
Applications of Spatial Data Structures.
Labeling of curvilinear structure across scales by token grouping.
Affine Analysis of Image Sequences.
Structural saliency: The detection of globally salient structures using locally connected network.
Grouping contours by iterated pairing network.
Supervised classification of early perceptual structure in dot patterns.
Relational Matching.
Sequential Analysis.
Laws of organization in perceptual forms.
An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation
On the role of structure in vision.
--TR
--CTR
Shyjan Mahamud , Lance R. Williams , Karvel K. Thornber , Kanglin Xu, Segmentation of Multiple Salient Closed Contours from Real Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.433-444, April
P. Kammerer , R. Glantz, Segmentation of brush strokes by saliency preserving dual graph contraction, Pattern Recognition Letters, v.24 n.8, p.1043-1050, May
Jens Keuchel , Christoph Schnörr , Christian Schellewald , Daniel Cremers, Binary Partitioning, Perceptual Grouping, and Restoration with Semidefinite Programming, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.11, p.1364-1379, November
Alexander Berengolts , Michael Lindenbaum, On the Performance of Connected Components Grouping, International Journal of Computer Vision, v.41 n.3, p.195-216, February/March 2001
Jacob Feldman, Perceptual Grouping by Selection of a Logically Minimal Model, International Journal of Computer Vision, v.55 n.1, p.5-25, October
Anthony Hoogs , Roderic Collins , Robert Kaucic , Joseph Mundy, A Common Set of Perceptual Observables for Grouping, Figure-Ground Discrimination, and Texture Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.458-474, April
Song Wang , Joachim S. Stahl , Adam Bailey , Michael Dropps, Global Detection of Salient Convex Boundaries, International Journal of Computer Vision, v.71 n.3, p.337-359, March 2007
A. Engbers , Arnold W. M. Smeulders, Design Considerations for Generic Grouping in Vision, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.445-457, April
Song Wang , Toshiro Kubota , Jeffrey Mark Siskind , Jun Wang, Salient Closed Boundary Extraction with Ratio Contour, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.4, p.546-561, April 2005
Bernd Fischer , Joachim M. Buhmann, Path-Based Clustering for Grouping of Smooth Curves and Texture Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.513-518, April
Sudeep Sarkar , Padmanabhan Soundararajan, Supervised Learning of Large Perceptual Organization: Graph Spectral Partitioning and Learning Automata, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.5, p.504-525, May 2000
Stella X. Yu , Jianbo Shi, Segmentation Given Partial Grouping Constraints, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.2, p.173-183, January 2004
Sudeep Sarkar , Daniel Majchrzak , Kishore Korimilli, Perceptual organization based computational model for robust segmentation of moving objects, Computer Vision and Image Understanding, v.86 n.3, p.141-170, June 2002 | maximum likelihood;generic grouping algorithm;perceptual grouping;grouping analysis;performance prediction;Wald's SPRT;graph clustering |
279249 | Decomposition of Arbitrarily Shaped Binary Morphological Structuring Elements Using Genetic Algorithms. | Abstract: A number of different algorithms have been described in the literature for the decomposition of both convex binary morphological structuring elements and a specific subset of nonconvex ones. Nevertheless, up to now no deterministic solutions have been found to the problem of decomposing arbitrarily shaped structuring elements. This work presents a new stochastic approach based on Genetic Algorithms in which no constraints are imposed on the shape of the initial structuring element, nor are assumptions made on the elementary factors, which are selected within a given set. | INTRODUCTION
MATHEMATICAL morphology [1], [2], [3] concerns the study of shape
using the tools of set theory. Mathematical morphology has been
extensively used in low-level image processing and analysis applications, since it makes it possible to filter and/or enhance only some characteristics of objects, depending on their morphological shape. Many tutorials [3], [2], [4], [1], [5], [6], [7] can be found in the literature.
Within the mathematical morphology framework, a binary image
A is defined as a subset of the two-dimensional Euclidean
space
In [3], monadic transforms acting on a generic image A
(complement, reflection, and translation) and dyadic operators between
sets (dilation, erosion, opening, and closing) are defined. In the
following only the definitions of operators used throughout this
paper are recalled, such as translation
(A)_t = {c ∈ E^2 : c = a + t, for some a ∈ A},  (2)
and dilation
A ⊕ B = {c ∈ E^2 : c = a + b, for some a ∈ A and b ∈ B},  (3)
where A represents the image to be processed, and B is called
Structuring Element (SE), namely, another subset of E 2 whose shape
parameterizes each operation.
An SE B is said to be convex with respect to a given set of morphological
operations (e.g., dilation) with a given set of SEs (factors) {F_1, ..., F_n}, if it can be expressed as a chain of dilations of the F_i elements:
B = F_k1 ⊕ F_k2 ⊕ ... ⊕ F_km, with F_kj ∈ {F_1, ..., F_n} for j = 1, ..., m.  (4)
Otherwise B is said to be nonconvex with respect to the same set of SEs,
and, thus, it can only be expressed as a chain of Boolean operations
(e.g., unions and/or intersections) between convex elements
(called partitions):
B = C_1 ◊ C_2 ◊ ... ◊ C_z,  (5)
where ◊ represents any Boolean operation (such as union ∪, intersection ∩, etc.) and C_i are convex elements that can be expressed
as chains of dilations, as shown in (4).
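To make the set-theoretic definitions concrete, the following sketch implements dilation over sets of integer pixel coordinates and builds a convex SE as a chain of dilations in the sense of (4); the coordinate-set representation is an illustrative assumption.

    def dilate(A, B):
        # Dilation of (3): every element of A is summed with every element of B.
        return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

    def dilation_chain(factors):
        # Chain of dilations as in (4), starting from the origin {(0, 0)}.
        result = {(0, 0)}
        for F in factors:
            result = dilate(result, F)
        return result

    # A 3 x 3 square SE obtained as the chain of a horizontal and a vertical segment.
    H = {(0, 0), (1, 0), (2, 0)}
    V = {(0, 0), (0, 1), (0, 2)}
    print(sorted(dilation_chain([H, V])))   # the nine pixels of the 3 x 3 square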
As discussed in the following section, the decomposition of a
binary SE into a chain of operations involving only elementary
factors is a key problem. So far, only deterministic solutions have
been analyzed and proposed in the literature [8], [9], [10], each relying
on different assumptions (such as convex SEs, specific sets of
elementary operators, etc.); on the other hand the optimal decomposition
(with respect to a given set of optimality criteria) of nonconvex
generic SEs with a deterministic approach is still an open problem.
This paper addresses this problem utilizing a stochastic approach, based on Genetic Algorithms: starting from a population
of potential solutions (individuals) determined through an exhaustive
algorithm, an iterative process modifies the existing individuals
and/or creates new ones in accordance to a set of genetic
operators applied randomly. The individuals that minimize a
given cost function tend to replace the others, and, after a sufficient
number of iterations, the algorithm tends to converge toward
the optimal solution. In particular, the main purpose of this work
is to develop a tool able to give a preliminary answer to the problem
of optimal decomposition of nonconvex SEs into concatenations
of generic elementary operations.
This work is organized as follows: Section 2 motivates the need
for SE decomposition and discusses some optimality criteria that
can drive the decomposition; Section 3 briefly summarizes the
Genetic Approach, its terminology and its notations, and describes
the implementation of the decomposition algorithm and the data
structures involved; Section 4 presents some results while Section 5
concludes the paper with some remarks and a discussion on future
developments.
2.1 Motivation
The following two subsections motivate the need for SE decomposition
on traditional serial systems, in which the use of a large SE is
not efficient, and on SIMD cellular systems that allow the execution
of only basic operations based on a neighborhood smaller than the
size of the SE; the different characteristics of general-purpose
(serial) and SIMD cellular (parallel) systems require different techniques
in order to exploit the specific hardware characteristics of
each system.
Hereinafter, a dilation between a generic image A and a complex
SE B is considered; due to the different properties of unions
and intersections discussed in [3], namely
A ⊕ (B ∪ C) = (A ⊕ B) ∪ (A ⊕ C),  (6)
in the following nonconvex SEs are decomposed using chains of unions of convex SEs (using the equality expressed by relation (6)),
instead of using chains of intersections or other Boolean operations
(where no equality relations hold).
2.1.1 Serial Systems
General-purpose serial systems have no upper bound to the size of
possible SEs: In fact, using a bitmapped image representation, the
value of any image pixel can be accessed within a constant time.
. The authors are with the Dipartimento di Ingegneria dell'Informazione,
Università di Parma, I-43100 Parma, Italy.
E-mail: broggi@ce.unipr.it.
On the other hand, the computational complexity of a serial implementation
of morphological operations depends on the number
of elements which form the operands. As an example, the computation
of A ⊕ B requires one vector sum and one logical union for each pair of elements a ∈ A and b ∈ B, and, thus,
Ψ(A ⊕ B) = #(A) · #(B),  (8)
where Ψ(·) indicates the computational complexity (the number of vector operations) of a given operation, and #(·) represents the number of elements in a set.
Using the well-known visual representation of morphological sets [3], the number of vector sums and logical unions required by the dilation A ⊕ B shown in (9), according to (8), is given by #(A) · #(B) = 180. The structuring element B can be expressed as the dilation between two subsets, B = B_1 ⊕ B_2, and, using the chain rule property [3], A ⊕ B = (A ⊕ B_1) ⊕ B_2. The number of sums required to perform the first step of the processing (R_1 = A ⊕ B_1) is given by #(A) · #(B_1), while the number of sums required to complete the processing (R_2 = R_1 ⊕ B_2) is #(R_1) · #(B_2). Thus, the decomposition shown in (10), while incrementing the total number of dilations from one to two, decreases the number of operations performed from 180 to 145.
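The operation count of (8) and the effect of a two-step chain can be checked with the small sketch below; the sets A, B1 and B2 are arbitrary examples chosen here, not the ones depicted in (9) and (10).

    def dilate(A, B):
        return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

    def direct_cost(A, B):
        # Psi(A (+) B) = #(A) * #(B): one vector sum per pair of elements.
        return len(A) * len(B)

    def chained_cost(A, B1, B2):
        # Cost of (A (+) B1) (+) B2: the second step works on the intermediate result.
        R1 = dilate(A, B1)
        return len(A) * len(B1) + len(R1) * len(B2)

    A = {(x, y) for x in range(5) for y in range(4)}    # 20-pixel image
    B1 = {(0, 0), (1, 0), (2, 0)}
    B2 = {(0, 0), (0, 1), (0, 2)}
    B = dilate(B1, B2)                                   # the 3 x 3 SE (9 pixels)
    print(direct_cost(A, B), chained_cost(A, B1, B2))    # 180 versus 144 here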
2.1.2 SIMD Cellular Systems
When a bitmapped data representation is used, mathematical
morphology operations involve repeated computations over large
data structures, thus the use of parallel systems improves the
overall performance. Both parallel architectures with spatial parallelism
(cellular systems), based on a high number of Processing Elements
(PEs) devoted to the simultaneous processing of different
image areas, and parallel architectures with operational parallelism
(pipeline systems), where the different PEs work in pipeline of the
same image area, share common constraints. The planar surface of
the silicon chip limits the hardware interconnections, thus reducing
the complexity of the elementary operations (the size of the
possible SEs) that can be performed by each single PE.
This fact is more evident in cellular systems, where the set of all
possible operations performed by each single PE (known as Instruction
Set, IS) is generally based on the use of 3 × 3 SEs. Thus,
since operations based on large SEs cannot be performed, their
decomposition into chains of simpler operations belonging to the
IS becomes mandatory. The above dilation shows the
main difference between serial and cellular systems. On serial
systems the dilation can be performed either directly (with a single dilation) or, after the decomposition of B, as a chain of
more than one dilation (as shown by (11)), thus leading to a different
computational complexity. On the other hand, that dilation
cannot be performed directly on a cellular system since it is based
on a SE not belonging to the IS. Thus, while in the first case (serial
systems), the decomposition may be recommended for a number
of reasons (such as the speed-up of the processing), it becomes
mandatory in the second case (cellular systems).
Assuming a system capable of performing horizontal and vertical
dilations and translations in the eight main directions, B (as
defined in (9)) is nonconvex with respect to the IS of the system; it
may be expressed as a union of convex sets, for example
where the component sets are convex with respect to the IS of the system and can thus be expressed as chains of elementary dilations.
Thus, according to this decomposition, the initial dilation
can be performed with six elementary dilations and one logical
union. This is a one-level solution, involving only a single level of
unions of dilations, also called sum of products (see Fig. 1a).
It is obvious that a multilevel solution may lead to a better result. For example, using the chain rule property, the same SE can be expressed in a two-level solution:
This solution, depicted graphically in Fig. 1b, requires only five
dilations and one logical union.
2.2 Optimality Criteria
The decomposition of a SE can be aimed to many different goals,
such as:
. the minimization of the number of decomposing sets (to reduce
the number of dilations);
. the minimization of the total number of computations (for
speed-up reasons);
. the minimization of the total number of elements in the decomposing
sets (to reduce the size of the data structures and
thus also the memory requirements in serial systems);
. the possibility to implement complex morphological operations
on cellular systems whose IS is based on simple, elementary
operations (to overcome the problem caused by the
simple interconnection topology that limits the size of possible SEs);
. or even the determination of factors with a given shape (to
aid the recognition of 2D objects).
The optimality criterion addressed in this work can be changed
acting on the parameters of a cost function.
Genetic Algorithms (GAs), widely used in various fields [11], are
optimization algorithms based on a stochastic search [12], operating
by means of genetic operators on a population of potential solutions
of the considered problem (individuals). The main data
structure is the Genome or Chromosome, that is composed of a set of
Genes and of a Fitness value. In the population of possible solutions
the set of new individuals generated by means of genetic operators
is called Offspring.
The genetic search is driven by the fitness function: Each individual
is evaluated to give some quantitative measure of its fitness, that
is the "goodness" of the solution it represents. At each iteration
(generation) the fitness evaluation is performed on all individuals.
Then, at the following iteration, a new population is generated,
starting from the individuals with the highest fitness, and replac-
ing, completely or partially, the previous generation. The genetic
operators used to generate new individuals are subdivided into
two main categories: unary operators, creating new individuals
and replacing the existing ones with a modified version of them
(e.g., mutation, introduction of random changes of genes), and
binary operators, creating new individuals through the combination
of data coming from two individuals (e.g., crossover, exchange
of genetic material between two individuals). Each iteration step is
called generation.
The study of GAs led to the more general Evolution Programs
(EPs) [13], or Generalized GAs. In "standard" GAs an individual is
represented by a fixed-length binary string, encoding the parameters
set, which corresponds to the solution it represents; the genetic
operators act on these binary codes. In EPs, individuals are
represented by generalized data structures without the fixed-length
constraint [14], [15]. In addition, ad-hoc operators are defined
to act on these data structures.
EPs perfectly match the requirements of the SE decomposition
problem, since the varying number of elementary items forming a
solution does not allow to know a priori the size of a generic solu-
tion, that is the length of the coding of a generic individual. In fact,
for an efficient implementation, the data structure representing a
decomposition must explicitly encode both the number and the
shape of each single elementary operation composing the solution.
Moreover, this coding must also allow fast and easy processing
and evaluation phases. For these reasons it has been necessary to
develop an ad hoc EP with specific genetic rules, exploiting a
method similar to the one presented in [16] for the solution of the
bin-packing problem. Up to now, the number of iterations is chosen
by the user, but different termination criteria are under
evaluation (such as the percentage of improvement or the number
of different individuals) [11], [13].
3.1 Data Structure
As stated above, the data structure representing an individual has
to describe in a flexible and compact way the convex elements of (5),
showing its shape and decomposition into factors, but it has also
to make the evaluation phase fast and simple. This representation
has to be variable in length, since the number z of possible partitions
involved in the decomposition of a generic individual γ has
no maximum bound; on the other hand, recalling (4), the number
m and the shape of factors F kj , which form element C k , depend
directly on C k .
For these reasons an individual is represented by an arbitrarily
long chain of genes, each gene representing a partition of the input SE
(see Fig. 2). The logical union of all genes produces the individual.
The simplest implementation consists in representing each
individual with a data structure whose fields contain all the
above information. Conversely, a more complex, hierarchical
data structure has been developed in order both to use a lower
amount of memory for each individual and to ease and speed-up
the determination of new better solutions. Although the handling
of this data structure is definitely complex, it allows to
detect possible overlappings among the individuals of a population. Each level of the hierarchy encodes only the information strictly necessary to that level. There are three levels in the hierarchy, as shown in Fig. 2:
. Factor level: The basic components are the elementary morphological
operations (i.e., the instruction set elements): an
integer indicates which element of the IS is used, while a
pointer allows to follow the chain of elements.
. Gene level: A gene is composed of one or more factors and
it corresponds to a dilation chain of factors; an integer gives
the origin of the partition described by the gene, thus specifying the translation required to fit the gene onto the initial SE (the origin), and a pointer identifies the next gene.
Fig. 1. (a) Union of dilations, also known as sum of products. (b) A two-level solution.
Fig. 2. The data structure representing two individuals.
. Individual level: One or more genes form the individual
that corresponds to a union of dilation chains, corresponding
to a decomposition or, more often, to a part of a decomposi-
tion. An integer gives the total number of genes forming the
individual, a pointer gives the position of the first gene of
the chain, while a double precision number contains the fitness
value of the individual.
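A minimal sketch of this three-level hierarchy is shown below, using Python lists in place of the integer/pointer fields; the field names are illustrative and the fitness is left to the evaluation stage.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Gene:
        # Gene level: a dilation chain of IS factors plus the translation (origin)
        # that fits the resulting partition onto the input SE.
        factor_ids: List[int]
        origin: Tuple[int, int]

    @dataclass
    class Individual:
        # Individual level: a union of dilation chains (genes) and a fitness value.
        genes: List[Gene] = field(default_factory=list)
        fitness: float = 0.0

    # A two-gene individual: each gene is a chain of elementary (3 x 3 or smaller) factors.
    ind = Individual(genes=[Gene(factor_ids=[0, 2], origin=(0, 0)),
                            Gene(factor_ids=[1], origin=(3, 1))])
    print(len(ind.genes), ind.fitness)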
3.2 Initialization of the Population
In order to understand this fundamental step, some definitions are
introduced.
DEFINITION 1. The IS is a set of M factors: IS = {F_1, F_2, ..., F_M}.
DEFINITION 2. Notation H_(x,y) stands for H ⊕ {(x, y)}, namely, it represents a translation, as in (2).
DEFINITION 3. For a generic image H, H^n = H ⊕ H ⊕ ... ⊕ H (n terms), and H^0 = {(0, 0)}.
DEFINITION 4. For a generic image H, O H represents its origin.
In the following, B is the input SE, B i is a generic subset of B
with the same origin, and H represents any generic set, convex
with respect to the IS:
H = F_k1 ⊕ F_k2 ⊕ ... ⊕ F_km, with F_kj ∈ IS.  (18)
If O_Fi ∈ F_i for every F_i belonging to the IS, 1 then O_H ∈ H. The process starts with the identification of every element of the set &(B), which is defined as
&(B) = {H_(h,k) : H_(h,k) ⊆ B, for some (h, k)},  (19)
since every element of &(B) may represent a possible gene; this search has to be deterministic and exhaustive. Since O_F ∈ F, the set of possible pairs (h, k) is given by the set of all elements of B: for a generic image H, if H_(h,k) ⊆ B, then (h, k) ∈ B. Therefore, the algorithm scans all the
pixels of the SE and determines which factors can form a legal chain of
dilations starting from that pixel. The gene obtained so far needs an
additional shift in order to overlap its origin with the origin of B.
Since the length of the best solution is not known a priori and
since randomly linking together some genes seldom yields a legal
solution, we have decided to use each element of &(B), which
comprises all the workable genes, as constituting an individual
with a chromosome composed of a single gene only. It is also pos-
sible, as an option, to include in the population multiple copies of
each individual so formed. In this way the set of individuals
forming the initial population is obtained.
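The exhaustive search for workable genes can be sketched as below: for every pixel of B, chains of IS factors are grown as long as the translated chain stays inside B. The depth bound, the unfiltered duplicate chains and the set-based representation are simplifications of ours.

    def dilate(A, B):
        return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

    def translate(H, t):
        return {(x + t[0], y + t[1]) for (x, y) in H}

    def initial_genes(B, IS, max_len=3):
        # Collect chains of IS factors that, translated to a pixel of B, stay inside B.
        genes = []
        for origin in B:                                  # candidate pairs (h, k)
            frontier = [([], {(0, 0)})]                   # (chain of factor ids, shape)
            while frontier:
                chain, shape = frontier.pop()
                if chain:
                    genes.append((tuple(chain), origin))  # already checked to fit in B
                if len(chain) < max_len:
                    for fid, F in enumerate(IS):
                        grown = dilate(shape, F)
                        if translate(grown, origin) <= B:
                            frontier.append((chain + [fid], grown))
        return genes

    B = {(x, y) for x in range(3) for y in range(3)}      # a 3 x 3 input SE
    IS = [{(0, 0), (1, 0)}, {(0, 0), (0, 1)}]             # horizontal / vertical pairs
    print(len(initial_genes(B, IS)))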
3.3 The Fitness Function
The fitness function f(γ) is used to evaluate each individual γ in order to drive the algorithm during the search. A cost function f_C(γ) has been introduced, which is identical to f(γ) if and only if the union of the partitions C_k forming solution γ of length N covers B exactly. This function must have several properties:
. for legal solutions it is equal to f(γ);
. it is defined also for nonlegal solutions (i.e., solutions not covering perfectly the original SE), thus widening the search space;
. it is easily implemented as a penalty function.
1. In the current implementation, each factor is limited in size to 3 × 3 and O_F ∈ F.
Penalty functions are used in highly constrained problems when
the need of evaluating nonlegal solutions is met by penalizing
them with respect to the legal ones. The cost function thus includes
a penalty term:
f_C(γ) = a · f(γ) + b · f_P(γ),  (20)
where a is equal to 1 if and only if the partitions of γ cover B exactly, as stated above;
otherwise a is expressed by a term proportional to the ratio between
the number of elements present in the solution and the total
number of elements in B. The term b is related, via user-defined
parameters, to the current population size, and f P is the penalty
function, that is still related to the percentage of elements of B covered
by the decomposition contained in the considered individual.
Assuming that the goal of the process is to obtain the decomposition
that minimizes the number of operations required to compute
the dilation of a generic image with SE B, the fitness function
f(γ) is mainly constituted by the sum of the costs of every partition C_k; in addition, it takes into account also the number of logical union
operations required, weighted by an appropriate coefficient,
and the additional saving allowed by a multilevel solution. The
program, in fact, lets the user choose from among three different
optimization levels, thus leading to different final results: Level 0
does not perform any optimization; level 1 performs a first pack-
ing, based on the methods explained at the end of Section 2.1.2;
level 2 tries to pack again the solutions obtained at level 1. The use
of optimization levels 1 and 2 becomes of basic importance when
the target architecture has the capability to store temporary results, since it allows solutions with considerably lower costs to be achieved.
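A toy version of such a penalty-based cost is sketched below; the unit factor cost, the union weight and the penalty weight are arbitrary stand-ins for the user-defined parameters mentioned above.

    def cost(individual, B, union_weight=0.5, penalty_weight=10.0):
        # `individual` is a list of genes, each a pair
        # (number of elementary dilations, set of pixels covered inside B).
        covered = set().union(*(pixels for _, pixels in individual)) if individual else set()
        dilations = sum(n for n, _ in individual)
        unions = max(len(individual) - 1, 0)
        uncovered = 1.0 - len(covered & B) / len(B)       # zero for legal solutions
        return dilations + union_weight * unions + penalty_weight * uncovered

    B = {(x, y) for x in range(3) for y in range(3)}
    full = [(2, B)]                                       # legal: covers B with 2 dilations
    half = [(1, {(x, 0) for x in range(3)})]              # covers only the first row
    print(cost(full, B), cost(half, B))                   # the legal solution costs less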
3.4 The Genetic Search
The structure of the algorithm is depicted in Fig. 3; a single processing
cycle is composed of four stages:
. Selection Operators: Choose two individuals among population
for reproduction purpose;
. Binary Genetic Operators: Combine, in various ways, the
parents' chromosomes in order to get offspring (i.e., one
child or two children);
. Comparison Operators: Set up a competition between parents
and offspring for inclusion in the next generation;
. Unary Genetic Operators: Mutate the chromosomes of the
individuals winning at the previous stage.
When the list containing the individuals of the current population
is exhausted, i.e., every individual has been chosen for reproduc-
tion, the population just formed undergoes the same processing:
this generation-formation process stops when the maximum number
of allowed generations is reached. 2 In the following, a brief
review of the operators is presented.
3.4.1 Selection Operators
The operator is based on the tournament selection scheme as exposed
in [17] with a tournament size of two. The scheme implemented
makes use of a slightly modified version of the genic selective
crowding technique [17] which forces individuals to compete with
those having at least a given minimum number of pixels in common with them. In this way, a pressure is maintained for similar individuals to compete with similar ones, thus increasing the significance of the tournament.
2. As mentioned, other termination criteria are currently under
evaluation.
3.4.2 Binary Genetic Operators
In order to obtain individuals that cover the SE in the initial phase
we need an operator which varies appreciably the length of indi-
viduals' chromosomes. Conversely, after this phase, when the
average length of individuals is roughly close to the optimal one,
we are concerned about the quality of the chromosome; for this
reason two different operators have been conceived.
. The first one is based upon the cut and splice operator [18].
Let l be the number of genes constituting a chromosome:
This operator cuts the chromosome randomly in correspondence
to one of the l possible points. If one of the l - 1
points connecting two consecutive genes is chosen, the
chromosome is broken into two parts; otherwise, with probability 1/l, it is left unchanged. The two, three, or four chromosome
segments of two different individuals are pushed
in a stack; the splice stage either merges the first two top
elements of the stack, creating in this way a single child, or
promotes each element to a full individual. This shows how
the number of individuals in the following generation can
be altered. An example is shown in Fig. 4a.
. The second operator is the dual of the previous one: it attempts
to improve the fitness of the parents by mixing their
chromosomes, searching for a slight edge of improvement
by trial and error. A gene composing the first parent is
"injected" into the chromosome of the other parent, replacing
one of its genes, thus not changing the length of the
chromosome but altering only its content. The procedure is
run twice, swapping the two parents' roles. The number of
offspring generated is always two, although in some cases
they can coincide with a parent. An example is shown in
Fig. 4b.
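The two recombination operators can be sketched as follows on chromosomes represented as plain lists of genes; the way random choices are drawn here is a simplification of the scheme described above, not the authors' code.

    import random

    def cut_and_splice(parent1, parent2, rng=random):
        # Cut each chromosome at one of its l points (one choice leaves it whole),
        # push the segments on a stack and splice the two top segments into a child.
        def cut(chrom):
            point = rng.randrange(len(chrom))
            return [list(chrom)] if point == 0 else [chrom[:point], chrom[point:]]
        stack = cut(parent1) + cut(parent2)
        rng.shuffle(stack)
        child = stack.pop() + stack.pop()
        return child, stack       # leftover segments may be promoted to full individuals

    def inject_gene(donor, receiver, rng=random):
        # Replace one gene of the receiver with one gene of the donor:
        # the chromosome length is unchanged, only its content varies.
        child = list(receiver)
        child[rng.randrange(len(child))] = rng.choice(donor)
        return child

    p1, p2 = ["g1", "g2", "g3"], ["g4", "g5"]
    print(cut_and_splice(p1, p2)[0])
    print(inject_gene(p1, p2))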
3.4.3 Comparison Operators
At this stage the operator chooses among parents and offspring the
individuals to be inserted in the next generation. The scheme followed
is based upon the Deterministic Crowding scheme presented in [19].
3.4.4 Unary Genetic Operators
In standard GA the unary operator is the mutation, that simply
inverts randomly one or more bits of the string representing the
chromosome. On the other hand, our implementation of mutation
has the primary goal of reinserting genes previously discarded
and otherwise definitively lost; typically this is the case of little
partitions, whose contribution to the fitness improvement has
been underestimated in the previous phases of the execution. This
contribution can be essential later to achieve the covering of the
whole B. Genes are drawn from an array of genes (containing all
possible genes) and stored in memory so that every gene is chosen
cyclically. Two operators have been created:
. MUTATION 1: This operator compares each gene forming the
chromosome of the individual with the gene g coming from
the array. The gene g substitutes the most similar one in the
chain, that is, the gene that maximizes the intersection between the two genes.
. MUTATION 2: This operator forces gene g to be included,
along with the suppression of those which overlap with it. It
can cause a big fitness worsening but it has the advantage to
increase diversity in the chromosomes as a whole.
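The two mutations can be illustrated with genes represented by the pixel sets they cover (a representation assumption of ours), so that similarity and overlap reduce to set intersection:

    def mutation_1(chromosome, g):
        # Replace the gene most similar to g (largest pixel intersection) with g.
        best = max(range(len(chromosome)), key=lambda k: len(chromosome[k] & g))
        return [g if k == best else gene for k, gene in enumerate(chromosome)]

    def mutation_2(chromosome, g):
        # Force g into the chromosome and drop every gene overlapping with it.
        return [gene for gene in chromosome if not (gene & g)] + [g]

    c = [{(0, 0), (1, 0)}, {(0, 1), (0, 2)}]
    g = {(1, 0), (2, 0)}
    print(mutation_1(c, g))     # the most similar gene is replaced by g
    print(mutation_2(c, g))     # the overlapping gene is suppressed, g is appended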
The complete process is shown in Fig. 5 where a simple selection
(which does not use a tournament scheme) is added. It is possible
to traverse the graph following 2^3 possible paths, and the decisions are made according to the values of the respective probabilities p1, p2, and p3. These parameters are computed at the beginning of every
generation starting from parameters describing the status of the current
generation; they can be regarded as adaptive parameters [20].
4.1 Decomposition of Convex SEs
In accordance with the way the initial population is generated, the
decomposition of a convex SE by means of this approach leads to
the same results discussed in the literature (such as in [8]). This is
due to the fact that the optimal solution is a member of the initial
population, which consists of all possible (i.e., legal) decompositions
of the SE, given an arbitrary set of elementary SEs.
4.2 Decomposition of NonConvex SEs
Let us now consider the decomposition of the following nonconvex
Fig. 3. The generational cycle.
Fig. 4. Examples of the cut and splice (a) and replacing crossover (b) operators.
whose optimal decomposition using the following IS 3
is definitely nontrivial. The stochastic decomposition led to the
result shown in the following:
The dilation of a generic image A with B is then reduced to the expression shown in (24), which, considering the IS shown in (23), takes a total of 50 elementary
dilations and eight logical unions. Had the algorithm run with
the optimization level set to one, (24) could be expressed as a sequence
of 22 elementary dilations and eight logical unions:
where I is the identity image.
In [8] the original SE needs to be convex and it is decomposed
using a given set of factors. Conversely, in [9], a wider class of SEs
is considered: The original SE can also be nonconvex but must be
simply connected and must belong to a specific class of decomposable
SEs. In that paper, the decomposition of a generic SE S is
defined by:
S = A_1 ⊕ A_2 ⊕ ... ⊕ A_n,
where A_i is a 3 × 3 or smaller simply connected factor. This represents
an optimal decomposition when n is minimized, regardless of the
3. This IS has been chosen to reflect the set of operations available on
the PAPRICA system, a special-purpose architecture dedicated to the
execution of morphological processings.
shape of the A i elements. To compare this algorithm with ours, let
us choose a SE belonging to this class, as discussed in [9]:
Hereinafter, when presenting a SE decomposition, we will use the
following notation D_M(B, f), where M indicates the decomposition method used, B is the input SE, and f indicates the function giving the cost associated with each
factor belonging to the IS.
The optimal decomposition of SE H proposed by Park and Chin
in [9] is
where f is the four-connected shift cost function mentioned in [8].
Note that using this technique the morphology of the factors is not
known a priori, but only at the end. This is in contrast with our
approach that requires to specify a set of factors before running
the algorithm. To overcome this we can synthesize every factor in
(28) with factors belonging to a generic set and use the same set
within our program. Obviously, when a factor is not convex with
respect to such IS, Boolean operators must be used (in this case,
logical unions). The IS used here is a modified version of the set
specified in [8]:
The resulting decomposition is:
According to the four-connected shift cost function [8], D PC (H, f)
has cost 14. In the following, the decomposition obtained with our
Fig. 5. A schematic representation of the generational process: at each generation individuals are taken in pairs, and in accordance to the respective
probability values p1, p2 and p3, selection, binary operators and unary operators are applied.
(Anelli, Broggi, Destri, ABD) stochastic approach is presented, 4
where the superscript stands for the optimization level:
with the following partial results:
On the other hand, if the cost is set to one for every factor (cost function g), the total cost of D_PC(H, g) is equal to the cost of D_ABD(H, g), which becomes:
with the following partial results:
leading to a total cost of nine. When operating with level 2 of optimization, A ⊕ H can be easily computed by replacing the identity image I with the image A in (34) and (36). Table 1 summarizes the
cost of the solutions we have obtained for SE H for different optimization
levels, and for cost functions f and g, along with the solution
presented in [9].
Even though we had to rearrange the decompositions given in [9] in order to fit our requirements (thus altering their cost), these two examples show that the two approaches, although not directly
comparable, give solutions with similar cost. In addition, in our
approach the freedom of not knowing a priori the shape of the SEs
composing the IS is replaced with the possibility to decompose
also nonconvex SEs.
5 CONCLUSION
This paper presented a new approach to the decomposition of
arbitrarily shaped binary morphological structuring elements into
chains of elementary factors, using a stochastic technique. The
application of this technique to convex structuring elements leads
to the optimal decompositions discussed in the literature; in addi-
tion, this paper provides a way of decomposing also nonconvex SEs.
4. In these examples, the cost of the union operation has been set to
zero.
Extensive experimentation (not documented here due to space limitations) has shown that the amount of memory required by the system grows with the size of the initial SE and with the number and size of the elementary SEs. Elements up to have been decomposed using ISs composed of eight basic operations on a two-processor HP 9000 with 128 megabytes of RAM; the decompositions took about six hours of CPU time and computed 200 generations starting with an average of 2,000 individuals; the computations required about 80 megabytes of memory.
Due to the extremely high computational load required by this
iterative approach and to the large memory requirements, the
genetic engine is now being ported to the MPI parallel environ-
ment: the decomposition is managed by a "master" process, which
spawns child processes on the different nodes of a cluster of work-
stations: each child process is in charge of a specific portion of the
processing, which is executed in parallel with all others. This parallel
implementation allows to speed up the processing and to
decompose very large SEs. Moreover, a graphical interface is also
being designed to ease the definition of both the initial SE and the
IS, as well as the introduction of parameters. Being based on the
Java programming language, its integration into a Web page is
straightforward, thus allowing remote users to define and run
their own decompositions on our cluster of workstations.
In addition, the first release of the complete tool running on
many different systems (SunOS, AIX, Linux, HP-UX, DOS, and others) will shortly be available as public domain software via
anonymous FTP to researchers working in the mathematical morphology
field.
ACKNOWLEDGMENTS
The authors would like to thank Prof. Aurelio Piazzi for his valuable
suggestions. This work was partially supported by the Italian
National Research Council (CNR) under the frame of the
"Progetto Finalizzato Trasporti 2."
--R
"An Evolutionary Algorithm That Constructs Recurrent Neural Networks,"
"Speeding-Up Mathematical Morphology Computations With Special-Purpose Array Processors,"
"A New Representation and Operators for Genetic Algorithms Applied to Grouping Problems,"
"An Introduction to Simulated Evolutionary Optimiza- tion,"
Genetic Algorithms in Search
"Messy Genetic Algorithms: Motivation, Analysis, and First Results,"
"Messy Genetic Algorithms Revisited: Studies in Mixed Size and Scale,"
"Image Analysis Using Mathematical Morphology,"
Adaptation in Natural and Artificial Systems.
"Crossover Interactions Among Niches,"
Random Sets and Integral Geometry.
Genetic Algorithms
"Optimal Decomposition of Convex Structuring Elements for a 4-Connected Parallel Array Processor,"
"Decomposition of Arbitrarily Shaped Morphological Structuring Elements,"
Image Analysis and Mathematical Morphology.
"Adaptive Probabilities of Crossover and Mutation in Genetic Algorithm,"
"Methods for Fast Morphological Image Transforms Using Bitmapped Binary Images,"
"Theory of Matrix Morphology,"
"Morphological Structuring Element Decomposition,"
--TR
--CTR
Ronaldo Fumio Hashimoto , Junior Barrera , Carlos Eduardo Ferreira, A Combinatorial Optimization Technique for the Sequential Decomposition of Erosions and Dilations, Journal of Mathematical Imaging and Vision, v.13 n.1, p.17-33, August 2000
Frank Y. Shih , Yi-Ta Wu, Decomposition of binary morphological structuring elements based on genetic algorithms, Computer Vision and Image Understanding, v.99 n.2, p.291-302, August 2005
Ronaldo Fumio Hashimoto , Junior Barrera, A Note on Park and Chin's Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.1, p.139-144, January 2002 | genetic algorithms;arbitrarily shaped structuring element decomposition;mathematical morphology |
279272 | A Volumetric/Iconic Frequency Domain Representation for Objects With Application for Pose Invariant Face Recognition. | Abstract: A novel method for representing 3D objects that unifies viewer and model centered object representations is presented. A unified 3D frequency-domain representation (called Volumetric Frequency Representation, VFR) encapsulates both the spatial structure of the object and a continuum of its views in the same data structure. The frequency-domain image of an object viewed from any direction can be directly extracted employing an extension of the Projection Slice Theorem, where each Fourier-transformed view is a planar slice of the volumetric frequency representation. The VFR is employed for pose-invariant recognition of complex objects, such as faces. The recognition and pose estimation are based on an efficient matching algorithm in a four-dimensional Fourier space. Experimental examples of pose estimation and recognition of faces in various poses are also presented. | Introduction
A major problem in 3-D object recognition is the method of representation, which actually
determines, to a large extent, the recognition methodology and approach. The large variety
of representation methods presented in the literature does not provide a direct link between the
3-D object representation and its 2-D views. These representation methods can be divided
into two major categories: object centered and viewer centered (iconic). Detailed discussions
are included in [15] and [12]. An object centered representation describes objects in a coordinate
system attached to objects. Examples of object centered methods of representation are
spatial occupancy by voxels [15], constructive solid geometry (CSG) [15], superquadrics [21]
[2], algebraic surfaces [8], etc. However, object views are not explicitly stored in such representations
and therefore such datasets do not facilitate the recognition process since the
images cannot be directly indexed into such a dataset and need to be matched to views generated
by perspective/orthographic projections. Since the viewpoint of the given image is
a priori unknown, the recognition process becomes computationally expensive. The second
category i.e. viewer centered (iconic) representations, is more suitable for matching a given
image, since the model dataset also is comprised of various views of the objects. Examples
of viewer centered methods of representation are aspect graphs [16], quadtrees [12], Fourier
descriptors [30], moments [13], etc. However, in a direct viewer centered approach, the
huge number of views needed to be stored renders this approach impractical for large object
datasets. Moreover, such an approach does not automatically provide a 3-D description of
the object. For example, in representations by aspect graphs [16], qualitative 2-D model
views are stored in a compressed graph form, but the view retrieval requires additional 3-D
information in order to generate the actual images from different viewpoints. For recognition
purposes, viewer centered representations do not offer a significant advantage over
object centered representations. In summation, viewer centered and object centered representations
have complementary merits that could be augmented in a merged representation
- as proposed in this paper.
A first step in unifying object and viewer centered approaches was provided by our
recently developed iconic recognition method by Affine Invariant Spectral Signatures (AISS)
[6] [5] [4], which was based on an iconic 2-D representation in the frequency domain. However,
the AISS is fundamentally different from other viewer centered representations since each
2-D shape representation encapsulates all the appearances of that shape from any spatial
pose. This also implies that the AISS makes it possible to recognize surfaces which are approximately
planar, invariant to their pose in space. Although this approach is basically viewer centered,
it has the advantage of directly linking 3-D model information with image information, thus
merging object and viewer centered approaches. Hence, to generalize the AISS it is necessary
to extend it from 2-D or flat shapes to general 3-D shapes. Towards this end, we describe in
Section 2, a novel representation of 3-D objects by their 3-D spectral signatures which also
captures all the 2-D views of the object and therefore facilitates direct indexing of a given
image into such a dataset.
As a demonstration of the VFR, it is applied for estimating pose of faces and face recognition
in Section 3. Range image data of a human head is used to construct the VFR model
of a face. We demonstrate that reconstructions from slices of the VFR are accurate
enough to recognize faces from different spatial poses and scales. In Section 3, we describe
the matching technique by means of which a gray scale image of a face is directly indexed
into the 3-D VFR model based on fast matching by correlation in a 4 dimensional Fourier
space. In our experiments (described in Section 5), we demonstrate how the range data
generated from a model is used to estimate the pose of a person's face in various images.
We also demonstrate the robustness of our 2-D slice matching process by recognizing faces
with different poses from a dataset of 40 subjects, and present statistics of the matching
experiments.
2 Volumetric Frequency Representation (VFR)
In this section, we describe a novel formulation that merges the 3-D object centered representation
in the frequency domain to a continuum of its views. The views are also expressed
in the frequency domain. The following formulation describes the basic idea.
Given an object O, which is defined by its spatial occupancy on a discrete 3-D grid as a
set of voxels {V(x, y, z)}, we assume, without loss of generality, that the object is of equal density. Thus, V(x, y, z) = 1 for (x, y, z) ∈ O and V(x, y, z) = 0 otherwise. The 3-D Discrete Fourier Transform (DFT) of the object is given by V(u, v, w) = F{V(x, y, z)}. The
surface of the object is derived from the gradient vector field
∇V(x, y, z) = (∂V/∂x) k_x + (∂V/∂y) k_y + (∂V/∂z) k_z,  (1)
where k_x, k_y and k_z are the unit vectors along the x, y and z axes. The 3-D Discrete Fourier Transform (DFT) of the surface gradient is given by the frequency domain vector field
F{∇V(x, y, z)}.  (2)
Let the object be illuminated by a distant light source 1 with uniform intensity Υ and direction i = i_x k_x + i_y k_y + i_z k_z. We assume that the object O is a regular set [1] and has a constant gradient magnitude K on the object surface, i.e. |∇V| = K. The surface normal is given by ∇V/K. We also assume that O has a Lambertian surface with constant albedo A. Thus points on its surface have a brightness proportional to
B_i(x, y, z) = (ΥA/K) (i_x ∂V/∂x + i_y ∂V/∂y + i_z ∂V/∂z) = B_i^+(x, y, z) − B_i^−(x, y, z),  (3)
where B_i^+ and B_i^− are the positive and negative parts. The function B_i^−(x, y, z) is not a physically realizable brightness and is introduced only for completeness of Eq. (3). The separation
of the brightness function into positive and negative components is used to consider only positive illuminations. The negative components are disregarded in further processing, as this function is separable only in the spatial domain. As elaborated in Section 2.2, B_i^− can be eliminated using a local Gabor transform. In another approach, the side of the object away from the illumination can be considered as planar, and B_i^− becomes a plane with a negative constant value which does not alter the resulting image.
1. Additional light sources can be handled using superposition.
It is also necessary to consider the viewing direction when generating views from the
VFR. The brightness function B i (x; y; z) is decomposed as a 3-D vector field by projecting
onto the surface normal at each point of the surface. This enables the correct projection of
the surface from a given viewpoint. As noted earlier, the surface normal is given by ∇V/K. Thus, the new vectorial brightness function B_i is given by
B_i(x, y, z) = (ΥA/K²) [i · ∇V(x, y, z)] ∇V(x, y, z).  (4)
The 3-D Fourier transform of this model is a complex 3-D vector field V_i(u, v, w) = F{B_i(x, y, z)}. The transform is evaluated as
V_i(u, v, w) ∝ [(i_x u + i_y v + i_z w) V(u, v, w)] ∗ [(u k_x + v k_y + w k_z) V(u, v, w)],  (5)
where ∗ denotes convolution. Variation in illumination only emphasizes the amplitude of V_i in the (i_x, i_y, i_z) direction, but does not change its basic structure. The absolute value of V_i(u, v, w) is defined as the VFR signature.
2.1 Projection Slice Theorem and 2-D Views
The function V i (u; v; w) is easily obtained, given the object O. To generate views of the
object, we resort to 3-D extensions of the Projection-Slice Theorem [14] [24] that projects
the 3-D vector field V i (u; v; w) onto the central slice plane normal to the viewpoint direction.
Fig. 1 illustrates the principle by showing the slice derived from the 3-D DFT of a rectangular
block. Orthographically viewing the object from a direction c = (c_x, c_y, c_z) results in an image I_c which has a 2-D DFT given by I_c(u, v). To find I_c it is necessary to project the vector brightness function B_i along the viewing direction c after removing all the occluded parts from that viewpoint.
Figure 1: The Projection-Slice Theorem: A slice of the 3-D Fourier Transform of a rectangular block (on the right) is equivalent to the 2-D Fourier Transform of the projection of the image of that block (on the left).
The vectorial
decomposition of the brightness function along the surface normals as given by Eq. (4)
compensates for the integration effects of projections of slanted surfaces. This explains the
necessity of using a vectorial frequency domain representation.
Removing the occluded surfaces is not a simple task if the object O is not convex or if the
scene includes other objects that may partially occlude O. For now, we shall assume that O
is convex and is entirely visible. This assumption is quite valid for local image analysis where
a local patch can always be regarded as either entirely occluded or visible. Also, for local analysis, B_i^−(x, y, z) is not a major problem. The visible part of B_i(x, y, z) from direction c, denoted by B_ic(x, y, z), retains B_i only where the surface normal faces the viewer, i.e. where hwr[c · ∇V] > 0, where hwr[α] is the "half wave rectified" value of α: hwr[α] = α for α ≥ 0 and hwr[α] = 0 otherwise. Now V_ic(u, v, w) can be obtained from B_ic(x, y, z) simply by calculating the DFT. The image DFT I_c(u, v) is obtained using the Projection-Slice Theorem [14] [24] by slicing V_ic(u, v, w) through the origin with a plane normal to c, i.e. the plane uc_x + vc_y + wc_z = 0; I_c is derived by sampling V_ic(u, v, w) on this plane. An example
of such a slicing operation is illustrated in Fig. 1. Note that V ic actually encapsulates both
the objects 3-D representation and the continuum of its view-signatures, which are stored as
planar sections of |V_ic|. As we see from Eq. (5), variations in illumination only emphasize the amplitude of V_i in the (i_x, i_y, i_z) direction, but do not change its basic structure. Thus, it is
feasible to recognize objects that are illuminated from various directions by local signature
matching methods as described in Section 2.3, while employing the same signature.
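The scalar form of the Projection-Slice relation used here can be verified numerically with the short NumPy check below; it illustrates Fig. 1 for a plain binary block viewed along the z axis and does not reproduce the full vectorial VFR construction.

    import numpy as np

    vol = np.zeros((32, 32, 32))
    vol[8:20, 10:26, 12:18] = 1.0                 # a small binary block

    proj = vol.sum(axis=2)                        # orthographic projection along z
    F_proj = np.fft.fftn(proj)                    # 2-D DFT of the projected image

    F_vol = np.fft.fftn(vol)                      # 3-D DFT of the volume
    central_slice = F_vol[:, :, 0]                # slice through the origin, normal to z

    print(np.allclose(F_proj, central_slice))     # True: the slice equals the view DFT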
2.2 Local Signature Analysis in 3-D
Local signature analysis is implemented by windowing B_i with a 3-D Gaussian centered at location (x_0, y_0, z_0) and proceeding as in Eq. (4) on the windowed object gradient. Such local frequency analysis is implemented by using the Gabor Transform (GT) instead of the DFT. The transition required from the DFT to the GT is quite straightforward. The object O is windowed with a 3-D Gaussian to give
V_w(x, y, z) = V(x, y, z) exp{ −[(x − x_0)²/2σ_x² + (y − y_0)²/2σ_y² + (z − z_0)²/2σ_z²] }.
The equivalent local VFR is then obtained by applying Eq. (4) and the DFT to the windowed object V_w.
The important outcomes from this are: 1) The Projection-Slice Theorem [14], [24] can still be employed for local space-frequency signatures of object parts. 2) In local space-frequency analysis, the windowed brightness almost always does not contain the problematic B_i^− part, which can be eliminated by the windowing function. We note that for most local surfaces, [B_i · c] ≈ hwr[B_i · c], as the local analysis approximates the hwr[·] function with respect to viewing direction c. Hence, the VFR of B_ic is a general representation of a local surface patch of V(x, y, z).
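The local windowing step can be sketched as below: the volume is multiplied by a 3-D Gaussian centred at the chosen location before the DFT is taken; the window sizes and the magnitude-only output are our illustrative choices.

    import numpy as np

    def local_vfr_signature(vol, center, sigma):
        # Window the volume with a separable 3-D Gaussian (one sigma per axis),
        # then return the magnitude of its 3-D DFT as the local signature.
        grids = np.meshgrid(*[np.arange(n) for n in vol.shape], indexing="ij")
        w = np.exp(-sum(((g - c) ** 2) / (2.0 * s ** 2)
                        for g, c, s in zip(grids, center, sigma)))
        return np.abs(np.fft.fftn(vol * w))

    vol = np.zeros((32, 32, 32))
    vol[8:20, 10:26, 12:18] = 1.0
    sig = local_vfr_signature(vol, center=(14, 18, 15), sigma=(4.0, 4.0, 4.0))
    print(sig.shape, float(sig.max()))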
Figure 2: The frequency domain coordinate system in which the slice plane is defined. [c_x, c_y, c_z] are the direction cosines of the slice plane normal, which has an azimuth α and an elevation ε. Image swing is equivalent to in-plane rotation θ, and viewing distance results in variation in the radial frequency r_f of the VFR function.
2.3 Indexing using the VFR signature
As explained in Section 2.1, the VFR is a continuum of the 2-D DFT of views of the model.
To facilitate indexing into the VFR signature data structure, we consider the VFR signature
slice plane u c_x + v c_y + w c_z = 0, where [c_x, c_y, c_z] are the direction cosines of the slice plane normal. We define a 4-D pose space in the frequency domain which consists of the azimuth α and elevation ε, defining the slice plane normal with respect to the original axes, the in-plane rotation θ of the slice plane, and the scale ρ, which changes with the distance to the viewed object. Fig. 2 illustrates the coordinate system used. [c_x, c_y, c_z] are related to the azimuth α and elevation ε as follows:
[c_x, c_y, c_z]^T = [cos α cos ε, sin α cos ε, sin ε]^T,   −π/2 ≤ α ≤ π/2.
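For reference, a small helper computing the slice-plane normal from (α, ε), with the first component taken as cos α cos ε so that the direction cosines form a unit vector (that choice is an assumption on our part):

import numpy as np

def slice_normal(alpha, eps):
    # Direction cosines of the slice-plane normal for azimuth alpha and
    # elevation eps (both in radians).
    return np.array([np.cos(alpha) * np.cos(eps),
                     np.sin(alpha) * np.cos(eps),
                     np.sin(eps)])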
We note again that slices of the VFR signature are planes which are parallel to the imaging
plane. Thus the image plane normal and the slice plane normal coincide. By using 3-D
coordinate transformations (see Fig. 2) we can transform the frequency domain VFR model
to the 4-D pose space (α, ε, θ, ρ). Let (u, v, w) represent the original VFR coordinate system and (ū, v̄, w̄) be the coordinate system defined by the slice plane. The slice plane is within the 2-D coordinate system (ū, v̄), where w̄ is the normal to the slice plane and corresponds to the viewing direction. The relation between the two systems is given by the 3-D rotation (u, v, w)^T = R(α, ε)(ū, v̄, w̄)^T, where R(α, ε) is the rotation that takes the w̄-axis onto the slice plane normal [c_x, c_y, c_z]^T. VFR signature slices, being 2-D DFTs of model views, are further transformed to polar coordinates by considering the in-plane rotation θ (equivalent to the image swing, or rotation about the optical axis) and the radial frequency r_f.
ū = r_f cos θ,   v̄ = r_f sin θ,   −π/2 ≤ θ ≤ π/2.
The radial frequency r_f is transformed logarithmically to attain exponential variation of r_f, given by ρ = log_a r_f. The full transformation of the coordinate system to the 4-D pose space (Eq. (13)) is obtained by composing this log-polar sampling of the slice with the rotation R(α, ε) above, expressing (u, v, w) in terms of (α, ε, θ, ρ).
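The log-polar resampling of a spectrum slice can be sketched as follows; nearest-neighbour sampling and the parameter names (n_theta, n_rho, base a, r_min) are our choices, not the paper's:

import numpy as np

def log_polar_slice(S, n_theta=64, n_rho=64, a=1.1, r_min=1.0):
    # Resample a centred 2-D spectrum magnitude S on a (theta, rho) grid with
    # rho = log_a(r_f), so in-plane rotation and scale become shifts.
    h, w = S.shape
    cy, cx = h // 2, w // 2
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    r = r_min * a ** np.arange(n_rho)              # exponential radial sampling
    tt, rr = np.meshgrid(thetas, r, indexing="ij")
    u = np.clip(np.rint(cy + rr * np.cos(tt)).astype(int), 0, h - 1)
    v = np.clip(np.rint(cx + rr * np.sin(tt)).astype(int), 0, w - 1)
    return S[u, v]

Because ρ = log_a r_f, a scale change of the view multiplies r_f by a constant and therefore only shifts the signature along the ρ axis, while an in-plane rotation shifts it along the θ axis.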
Thus, the 4-tuple (α, ε, θ, ρ) defines all the points in the 3-D VFR signature frequency space. We observe that the space defined by the 4-tuple (α, ε, θ, ρ) is redundant in the sense that an infinite number of 4-tuples (α, ε, θ, ρ) may represent the same (u, v, w) point. However, this representation has the important advantage that every (α, ε) pair defines a planar slice in V ic (u, v, w). Moreover, every θ defines an image swing and every ρ defines another scale. Thus the (α, ε, θ, ρ) representation significantly simplifies the indexing search for the viewing poses and scales. The indexing can be implemented simply by correlation in the frequency domain, which determines all pose parameters as linear shifts in the pose space. The significance of this transformation to the 4-D pose space lies in the following properties. The polar coordinate transformation within the slice causes rotated image views to have 2-D frequency domain signatures which shift along the θ axis. Similarly, the exponential sampling of the radial frequency r_f causes scale changes to produce linear shifts along the ρ axis. Thus the new coordinate system given by (α, ε, θ, ρ) yields a 2-D frequency domain signature for which changes of viewpoint and scale result only in linear shifts in the 4-D pose space so defined. A particular slice corresponding to a particular viewpoint is easily indexed into the transformed VFR signature by using correlation.
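A minimal sketch of such correlation-based indexing for one (α, ε) slice, using FFT-based cross-correlation over cyclic shifts in (θ, ρ); this is our simplification of the matching step, not the authors' implementation, and it assumes both signatures are sampled on the same (θ, ρ) grid:

import numpy as np

def correlate_signatures(model_slice, image_sig):
    # Cross-correlate two log-polar signatures over cyclic shifts in
    # (theta, rho) using FFTs; returns a roughly normalised score and the shift.
    A = np.fft.fft2(model_slice - model_slice.mean())
    B = np.fft.fft2(image_sig - image_sig.mean())
    cc = np.fft.ifft2(A * np.conj(B)).real
    shift = np.unravel_index(np.argmax(cc), cc.shape)
    denom = (np.linalg.norm(model_slice - model_slice.mean())
             * np.linalg.norm(image_sig - image_sig.mean()))
    return cc[shift] / (denom + 1e-12), shift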
3 Pose Estimation and Recognition of Human Faces
Recognition of human faces is a hard problem for machine vision, primarily due to the
complexity of the shape of a human face. The change in the observed view caused by
variation in facial pose is a continuum which needs large numbers of stored models for every
face. Since the representation of such a continuum of 3-D views is well addressed by our
VFR, we present here the application of our VFR model for pose-invariant recognition of
human faces. First we discuss some of the existing work in face recognition in Section 3.1
followed by our approach to the problem in Section 3.2. We present our results in face
pose estimation (Section 4) and face recognition (Section 5), and compare our results in face
recognition to some other recent works using the same database [20].
Figure
3: Reconstructions of a model face from slices of the VFR are shown for various azimuths
and elevations. Note that all facial features are accurately reconstructed indicating the robustness
of the VFR model.
3.1 Face Recognition: A Literature Survey
Recent works in face recognition have used a variety of representations including parameterized
models like deformable templates of individual facial features [29] [26] [10], 2-D
pictorial or iconic models using multiple views [9] [7], matching in eigenspaces of faces or
facial features [22] and using intensity based low level interest operators in pictures. Other
recent significant approaches have used convolutional neural networks [18] as well as other
neural network approaches like [11] and [28]. Hidden Markov Models [25], modeling faces as
deformable intensity surfaces [19], and elastic graph matching [17] have also been developed
for face recognition.
Parameterized models approaches like that of Yuille et al. [29], use deformable template
models which are fit to preprocessed images by minimizing an energy functional, while
Terzopoulos and Waters [26] used active contour models of facial features. Craw et al. [10]
and others have used global head models from various smaller features. Usually deformable
models are constructed from parameterized curves that outline subfeatures such as the iris or
a lip. An energy functional is defined that attracts portions of the models to pre-processed
versions of the image and model fitting is performed by minimizing the functional. These
models are used to track faces or facial features in image sequences. A variation is the
deformable intensity surface model proposed by Nastar and Pentland [19]. The intensity
is defined as a deformable thin plate with a strain energy which is allowed to deform and
match varying poses for face recognition. A 97% recognition rate is reported for a database
with 200 test images.
Template based models have been used by Brunelli and Poggio [9]. Usually they operate
by direct correlation of image segments and are effective only under invariant conditions of scale, orientation, and illumination. Brunelli and Poggio computed a set of geometrical
features such as nose width and length, mouth position and chin shape. They report 90%
recognition rate on a database of 47 people. Similar geometrical considerations like symmetry
[23] have also been used. A more recent approach by Beymer [7] uses multiple views and a
face feature finder for recognition under varying pose. An affine transformation and image
warping is used to remove distortion and bring correspondence between test images and
model views. Beymer reports a recognition rate of 98% on a database of 62 people, while
using 15 modeling views for each face.
Among the more well known approaches has been the eigenfaces approach [22]. The
principal components of a database of normalized face images are used for recognition. The
results report a 95% recognition rate from a database of 3000 face images of about 200
people. However, it must be noted that the database has several face images of each person
with very little variation in face pose. More recent reports on a fully automated approach
with extensive preprocessing on the FERET database indicate only 1 mistake on a database
of 150 frontal views.
Elastic graph matching using the dynamic link architecture [17] was used quite successfully
for distortion invariant recognition. Objects are represented as sparse graphs. Graph
vertices labeled with multi-resolution spectral descriptions and graph edges associated with
geometrical distances form the database. A recognition rate of 97.3% is reported for a
database of 300 people.
Neural network approaches have also been popular. Principal components generated
using an autoassociative network have been used [11] and classified using a multilayered
perceptron. The database consists of 20 people with no variation in face pose or illumination.
Weng and Huang used a hierarchical neural network [28] on a database of 10 subjects. A
more recent approach uses a hybrid approach using self organizing map for dimensionality
reduction and a convolutional neural network for hierarchical extraction of successively
larger features for classification [18]. The reported results show a 3.8% error rate on the
ORL database using 5 training images per person.
In [25], an HMM-based approach is used on the ORL database. Error rates of 13%
were reported using a top-down HMM. An extension using a pseudo two-dimensional HMM
reduces the error to 5% on the ORL database. 5 training and 5 test images were used for
each of 40 people under various pose and illumination conditions.
3.2 VFR model of faces
In our VFR model, we present a novel representation using dense 3-D data to represent
a continuum of views of the face. As indicated by Eq. (7) in Section 2, the VFR model
encapsulates the information in the 3-D Fourier domain. This has the advantage of 3-D
translation invariance with respect to location in the image coupled with faster indexing
to a view/pose of the face using frequency domain scale and rotation invariant techniques.
Hence, complete 3-D pose invariant recognition can be implemented on the VFR.
Range data of the head is acquired using a Cyberware range scanner. The data consists
of 256 × 512 range measurements from the central axis of the scanned volume. 360° of azimuth is sampled in 512 columns, and heights in the range of 25 to 35 cm are sampled in 256 rows. The data is of the heads of subjects looking straight ahead at 0° azimuth and 0°
latitude corresponding to the x-axis. This model is then illuminated with numerous sources
of uniform illumination thus approximating diffuse illumination. The resulting intensity data
is converted from the cylindrical coordinates of the scanner to Cartesian coordinates and
inserted in a 3-D surface representation of the head surface as given by Eq. (3).
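A sketch of this cylindrical-to-Cartesian conversion into a voxel grid; the grid size, physical ranges, and all names here are illustrative assumptions, not the paper's values:

import numpy as np

def head_volume(rng_img, intensity, grid=64, r_max=0.20, h_range=(0.25, 0.35)):
    # rng_img[row, col] is the scanned radius (metres) at one height/azimuth;
    # each sample is converted to (x, y, z) and its illuminated intensity is
    # deposited into a 3-D voxel grid representing the head surface.
    rows, cols = rng_img.shape
    vol = np.zeros((grid, grid, grid))
    phis = np.linspace(0.0, 2 * np.pi, cols, endpoint=False)
    heights = np.linspace(h_range[0], h_range[1], rows)
    for i in range(rows):
        for j in range(cols):
            r = rng_img[i, j]
            x, y, z = r * np.cos(phis[j]), r * np.sin(phis[j]), heights[i]
            ix = int((x + r_max) / (2 * r_max) * (grid - 1))
            iy = int((y + r_max) / (2 * r_max) * (grid - 1))
            iz = int((z - h_range[0]) / (h_range[1] - h_range[0]) * (grid - 1))
            if 0 <= ix < grid and 0 <= iy < grid and 0 <= iz < grid:
                vol[ix, iy, iz] = intensity[i, j]
    return vol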
The facial region of interest to us is primarily the frontal region consisting of the eyes,
lips and nose. A region corresponding to this area is extracted by windowing the volumetric
surface model with a 3-D ellipsoid centered at the nose with a Gaussian fall-off. The parameters
of the 3-D volumetric mask are adjusted to ensure that the eyes, nose and lips are
contained within it, with the fall off beyond the facial region. The model thus formed is a
complex surface which consists of visible parts of the face over a continuous range of views centered around the x-axis, i.e. the (0°, 0°) viewing direction. The resulting model then corresponds
to Eq. (6) in our VFR model. Applying Eq. (7), the VFR of the face is obtained. The VFR
model is then resampled into the 4-D pose space using Eq. (13) as described in Section 2.3.
Reconstructions of a range of viewpoints from a model head, from the VFR slices are shown
in Fig. 3. We see from the reconstructions, that all relevant facial characteristics are retained
thus justifying our use of the vectorial VFR model. This model is used in the face
pose estimation experiments.
3.3 Indexing images into the VFR signature
Images of human faces are masked with an ellipse with Gaussian fall-off to eliminate background
textures. The resulting image shows the face with the eyes, nose, and lips. The magnitudes of the Fourier transforms of the windowed 2-D face images are calculated. The windowing has the effect of focusing on local frequency components (or foveating) on the face, while retaining the frequency components due to facial features. The Fourier magnitude spectrum makes the spectral signature translation invariant in the 2-D imaging plane. The
spectrum is then sampled in the log-polar scheme similar to the slices of the VFR signa-
ture. As most illumination effects are typically low-frequency, band-pass filtering is used to compensate for illumination.
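Putting the steps of this paragraph together, the following sketch builds the masked, band-passed magnitude spectrum; the band limits and all names are ours. The log-polar resampling sketched after Section 2.3 is then applied to the result to obtain the signature:

import numpy as np

def bandpassed_spectrum(img, center, axes, band=(4, 40)):
    # Elliptical Gaussian mask around the face, Fourier magnitude, then a
    # radial band-pass that suppresses slowly varying illumination.
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = center
    ay, ax = axes                              # Gaussian fall-off along each axis
    mask = np.exp(-(((y - cy) / ay) ** 2 + ((x - cx) / ax) ** 2))
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img * mask)))
    r = np.hypot(y - h / 2.0, x - w / 2.0)
    spec[(r < band[0]) | (r > band[1])] = 0.0
    return spec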
The spectral signatures from the gray scale images are localized (windowed) log-polar
sampled Fourier magnitude spectra. The continuum of slices of the VFR provide all facial
poses, and band-passed Fourier magnitude spectrum provides 2-D translation invariant (in
the imaging plane) signatures. Log-polar sampling of the 2-D Fourier spectrum allows for
scale invariance (translation normal to the imaging plane) and rotation invariance (within
the imaging plane). This is because a scaled image manifests itself in Fourier spectrum
inversely proportional to the scale and a rotated image has a rotated spectrum. Thus scaled
and rotated images have signatures which are only linearly shifted in the log-polar sampled
frequency domain.
The pose of a given image is determined by correlating the intensity image signature
with the VFR in the 4-D pose space. The matching process is based on indexing through
the sampled VFR signature slices and maximizing the correlation coefficient for all the 4
pose parameters. The correlation is performed on the signature gradient which reduces
dependence of actual spectral magnitudes and as it considers only the shape of the spectral
envelope. The results take the form of scale and rotation estimate along with a matching
score from 0 to 1. Similar matching methods have been used very successfully to match Affine
Invariant Spectral Signatures (AISS) [27] [3] [6] [5] [4]. References [27] and [3] already include
detailed noise analysis with white and colored noise which shows robustness to noise levels
of up to 0 dB SNR for these matching methods.
Table
1: Pose estimation errors for faces with known pose. These are the averaged absolute
errors for angles and standard deviation of the ratio of estimated size to true size for scale.
Azimuth Error   Elevation Error   Rotation Error   Scale Std. Dev.
4.05°           5.63°             2.68°            0.0856
4 Face Pose Estimation
To verify the accuracy of the pose estimation procedure, the method is first tested on images
generated from the 3-D face model. 20 images of the face in Fig. 3 are generated using
random viewpoints and scales from uniform distributions. The azimuth and elevation are drawn from a fixed range, the rotation angle from the range [−45°, 45°], and the scale from the range [0.5, 1.5]. These are indexed in the VFR signature pose space. The results are summarized in
Table
1. An example of the correlation peak for the estimated pose in azimuth and elevation
is shown in Fig. 4(b) for the test image in Fig. 4(a). The corresponding reconstructed face
from the VFR signature slice is shown in Fig. 4(c).
Figure
4: (a) A test image with given pose parameters. (b) The correlation maximum in the azimuth-elevation dimensions of the pose space. The peak is quite discriminative, as seen by its relative brightness. (c) The reconstructed image from the slice which maximizes the correlation, shown with its estimated pose parameters.
In addition, we also show the results of pose estimation of face images of the subject with
unknown pose and illumination in Fig. 5.
Figure
5: Using the VFR model, the pose of the face in the above images is estimated
and the faces are recognized. The estimated poses are given in terms of the 4-tuple: azimuth α, elevation ε, relative swing (rotation) θ, and relative scale a^ρ. The results are A: (+15°, ...).
Table
2: Face recognition using the ORL database. Recognition rates are given for 5, 6, 7 and
8 images as VFR signature slices.
Number of Slices 5 6 7 8
Recognition Rate 92.5% 95.6% 96.6% 100%
5 Face Recognition Results
In this section, we describe experiments on face recognition based on the VFR model. The
ORL database [20] is used. The ORL database consists of 10 images of each of 40 people
taken in varying pose and illumination. Thus, there are a total of 400 images in the database.
Figure
Shown are images of a few faces from the set of test images which are used for the
face recognition task using our matching scheme.
We select a number of these images varying from 5 to 8 as model images and the remaining
images form the test set. The model images are windowed with an ellipse with a Gaussian
fall-off. The recognition is robust to the window parameters selected, provided the value of
σ for the Gaussian fall-off is relatively large. The images are 112 × 92 pixels. The window parameters chosen were a fixed length for the longer elliptical axis, aligned vertically, and 22 pixels for the shorter axis, aligned horizontally. Each window is centered at (60,46). This allows for faster processing rather than manually fitting windows to each face
image. Thus, the same elliptical Gaussian window was used on all model and test images
even though its axes do not align accurately with the axes of all the faces. The windowed images are transformed to the Fourier domain and then sampled in a log-polar format; they now correspond to slices in the 4-D VFR signature pose space. The test images are then indexed
into the dataset of slices for each person. The recognition rates using 5, 6, 7 and 8 model
images are summarized in Table 2. As can be seen, a recognition rate of 92.5% is achieved
when using 5 slices. This increases to 100% when using 8 slices in the model. A few of the
test images that are recognized are shown in Fig. 6. Computationally each face indexing
takes about 320 seconds when using 5 slices and up to about 512 seconds when using 8 slices.
The experiments are performed on a 200 MHz Pentium Pro running Linux.
6 Summary and Conclusions
We present a novel representation technique for 3-D objects unifying both the viewer and
model centered object representation approaches. The unified 3-D frequency-domain representation
(called Volumetric Frequency Representation - VFR) encapsulates both the spatial
structure of the object and a continuum of its views in the same data structure. We show that
the frequency-domain representation of an object viewed from any direction can be directly
extracted employing an extension of the Projection Slice theorem. Each view is a planar
slice of the complete 3-D VFR. Indexing into the VFR signature is shown to be efficiently
done using a transformation to a 4-D pose space of azimuth, elevation, swing (in-plane image
rotation) and scale. The actual matching is done by correlation techniques.
The application of the VFR signature is demonstrated for pose-invariant face recognition.
Pose estimation and recognition experiments are carried out using a VFR model constructed
from range data of a person and using gray level images to index into the model. The
pose estimation errors are quite low at about 4.05° in azimuth, 5.63° in elevation, 2.68° in rotation, and 0.0856 standard deviation in scale estimation. The standard deviation in scale
is taken for the ratio of estimated size to true size. Thus it represents the standard deviation
assuming a scale of 1.0. Face recognition experiments are also carried out on a large database
of 40 subjects with face images in varying pose and illumination. The number of model images is varied between 5 and 8. Experimental results indicate a recognition rate of 92.5% using 5 model images, rising to 100% using 8 model images. This compares well with
[25] who reported recognition rates of 87% and 95% using the same database with 5 training
images. The eigenfaces approach [22] was able to achieve a 90% recognition rate on this
database. It also is comparable to the recognition rates of 96.2% reported in [18] again
using 5 training images per person from the same database. These are highest reported
recognition rates for the ORL database in the literature. The VFR model holds promise
as a robust and reliable representation approach that inherits the merits of both the viewer
and object centered approaches. We plan future investigations in using the VFR model for
robust methods in generic object recognition.
--R
"Computer Vision,"
"Superquadrics and Angle Preserving Transformations,"
"Pictorial Recognition Using Affine Invariant Spectral Sig- natures,"
"Affine Invariant Shape Representation and Recognition using Gaussian Kernels and Multi-dimensional Indexing,"
"Iconic Recognition with Affine-Invariant Spectral Signatures,"
"Iconic Representation and Recognition using Affine-Invariant Spectral Signatures,"
"Face Recognition Under Varying Pose,"
"Describing Surfaces"
"Face Recognition: Features versus Templates,"
"Finding face features,"
"Non-linear dimensionality reduction,"
"Object Models and Matching,"
"Visual Pattern recognition by Moment Invariants,"
"Image Reconstruction from Projections,"
"Object Recognition,"
"The Internal Representation of Solid Shape with respect to Vision,"
"Distortion Invariant Object Recognition in the Dynamic Link Architecture,"
"Face recognition: A Convolutional Neural Network Approach,"
Olivetti and Oracle Research Laboratory
"Perceptual Organization and the Representation of Natural Form,"
"View-based and modular eigenspaces for face recognition,"
"Robust detection of facial features by generalized symme- try,"
"Parameterisation of a Stochastic Model for Human Face
"Analysis of Facial Images using Physical and Anatomical Models,"
"SVD and Log-Log Frequency Sampling with Gabor Kernels for Invariant Pictorial Recognition,"
"Learning Recognition and Segmentation of 3- D Objects from 2-D Images,"
"Feature Extraction from Faces using Deformable Templates,"
"Fourier Descriptors for Plane Closed Curves,"
--TR
--CTR
Ching-Liang Su, Robotic Intelligence for Industrial Automation: Object Flaw Auto Detection and Pattern Recognition by Object Location Searching, Object Alignment, and Geometry Comparison, Journal of Intelligent and Robotic Systems, v.33 n.4, p.437-451, April 2002
Ching-Liang Su, Face Recognition by Using Feature Orientation and Feature Geometry Matching, Journal of Intelligent and Robotic Systems, v.28 n.1-2, p.159-169, June 2000
Yoshihiro Kato , Teruaki Hirano , Osamu Nakamura, Fast template matching algorithm for contour images based on its chain coded description applied for human face identification, Pattern Recognition, v.40 n.6, p.1646-1659, June, 2007
Seong G. Kong , Jingu Heo , Besma R. Abidi , Joonki Paik , Mongi A. Abidi, Recent advances in visual and infrared face recognition: a review, Computer Vision and Image Understanding, v.97 n.1, p.103-135, January 2005 | Volumetric frequency representation VFR;pose invariant face recognition;object representation;4D Fourier space;face pose estimation;projection-slice theorem |
279531 | An Efficient Solution to the Cache Thrashing Problem Caused by True Data Sharing. | When parallel programs are executed on multiprocessors with private caches, a set of data may be repeatedly used and modified by different threads. Such data sharing can often result in cache thrashing, which degrades memory performance. This paper presents and evaluates a loop restructuring method to reduce or even eliminate cache thrashing caused by true data sharing in nested parallel loops. This method uses a compiler analysis which applies linear algebra and the theory of numbers to the subscript expressions of array references. Due to this method's simplicity, it can be efficiently implemented in any parallel compiler. Experimental results show quite significant performance improvements over existing static and dynamic scheduling methods. | Introduction
Parallel processing systems with memory hierarchies have become quite common today. Commonly,
most multiprocessor systems have a local cache in each processor to bridge the speed gap between the
processor and the main memory. Some systems use multi-level caches [5, 14]. Very often, a copy-back
snoopy cache protocol is employed to maintain cache coherence in these multiprocessor systems. Certain
supercomputers also use a local memory which can be viewed as a program-controlled cache. When
programs with nested parallel loops are executed on such parallel processing systems, it is important to
assign parallel loop iterations to the processors in such a way that unnecessary data movement between
different caches is minimized. For convenience, in this paper, we call each iteration of a parallel loop a
thread. The following loop nest is an example.
DO I=1,100
  DOALL J=1,100
    DO K=1,100
S:     A(...) = ... A(...) ...
    ENDDO
  ENDDO
ENDDO
In this example, loop J is a parallel loop because, with the value of I fixed, statement S has no
loop-carried dependences, while J varies. Loop J is executed 100 times in the loop nest, creating 10,000
iterations, or 10,000 threads, in total. Each thread addresses 100 elements of array A. Many array
elements are repeatedly accessed by these threads as shown in Table 1 and Figure 1, where T i;j denotes
the thread corresponding to loop index values I = i and J = j, and T i denotes the set of threads
created by the index value I = i. As shown in Figure 2, there exist lists of threads: (T 1;1 ), (T 1;2 , T 2;1 ),
that each thread modifies and reuses most of the
array elements accessed by the neighboring threads in the same list. If the threads in the same list are
assigned to different processors, the data of array A will unnecessarily move back and forth between
different caches in the system, causing a cache thrashing problem due to true data sharing [12].
The nested loop construct shown in the above example is quite common in parallel code used for
scientific computation. J. Fang and M. Lu studied a large number of programs including the LINPACK
benchmarks, the PERFECT Club benchmarks, and programs for mechanical CAE, computational
chemistry, image and signal processing, and petroleum applications [11]. They reported that almost
all of the most time-consuming loop nests contain at least three loop levels, out of which 60% contain
at least one parallel loop. Even after using loop interchange to move parallel loops outwards when
it was legal, they still found 94% of the parallel loops enclosed by sequential loops. Such loop nests
include the cases in which a parallel loop appears in the outermost loop level in a subroutine, but the
subroutine is called by a call-statement which is contained by a sequential loop. Most of these loop
nests are not perfectly nested, i.e. there exist statements right before or after an inner loop. Fang and
Lu proposed a thread alignment algorithm to solve the cache thrashing problem which may be created
by such multi-nested loops. Their algorithm, however, assigns threads to processors either by solving
linear equations at run time or by storing the precomputed numerical solutions to the equations in the
processor's memory. Since storing all the numerical solutions requires quite a large memory space, and
since the exact number of threads often cannot be determined statically due to unknown loop bounds,
they favor the on-line computation approach. In this paper, we present a method to reduce the run-time
overhead in Fang and Lu's algorithm by using a thorough compiler analysis of array references
to derive a closed-form formula that solves the thread assignment equations. The thread assignment
then becomes highly efficient at run time. Previously, we presented preliminary algorithms [22, 23] to
deal with a simple case in which data-dependent array references use the same linear function in the
subscripts. No experimental data were given. In this paper, we extend the work by covering multiple
linear functions and by clarifying the underlying theory. We report experimental results using a Silicon
Graphics (SGI) multiprocessor.
Table
1: The elements of A accessed by each thread.
With our method, the compiler analyzes the data dependences between the threads and uses that
information to restructure the nested loop, perfectly nested or otherwise, in order to reduce or even
eliminate true data sharing, which causes cache thrashing. Our method can be efficiently implemented
in any parallel compiler, and our experimental results show quite significant improvement over existing
static and dynamic scheduling methods.
In what follows, we first address related work. We then introduce basic concepts and assumptions.
After that, we present solutions to the cache thrashing problem due to true data sharing, and lastly we
show the experimental results conducted on an SGI multiprocessor system.
Related Work
Extensive research regarding efficient memory hierarchies has been reported in the literature.
Abu-Sufah, Kuck and Lawrie use loop blocking to improve paging performance by improving the
locality of references [2]. Wolfe proposes iteration space tiling as a way to improve data reuse in a cache
or a local memory [35]. Gallivan, Jalby and Gannon define a reference window for a dependence as the
variables referenced by both the source and the sink of the dependence [15, 16]. After executing the
source of the dependence, they save the associated reference window in the cache until the sink has also
Figure
1: The elements of A accessed by different threads.
been executed, which may increase the number of cache hits. Carr, Callahan and Kennedy [7, 8] discuss
options for compiler control of a uniprocessor's memory hierarchy. Wolf and Lam develop an algorithm
that estimates all temporal and spatial reuse of a given loop permutation [34]. These optimizations all
attempt to maximize the reuse of cached data on a single processor. They also have a secondary effect
of improving multiprocessor performance by reducing the bandwidth requirement of each processor,
thereby reducing contention in the memory system. In contrast, our work considers a multiprocessor
environment where each processor has its own local cache or its own local memory, and where different
processors may share data.
The work by Peir and Cytron [28], Shang and Fortes [30], and by D'Hollander [9] share the common
goal of partitioning an index set into independent execution subsets such that the corresponding loop
iterations can execute on different processors without interprocessor communication. Their methods
apply to a specific type of loop nest called a uniform recurrence or a uniform dependence algorithm, in
which the loops are perfectly nested, the loop bounds are constant, the loop-carried dependences have
constant distances, and the array subscripts are of the form I + c, where I is a loop index and c is an integer constant. Hudak and Abraham [1, 18] develop a static partitioning approach called adaptive data
partitioning (ADP) to reduce interprocessor communication for iterative data-parallel loops. They also
assume perfectly nested loops. The loop body is restricted to update a single data point A(i, j) of a two-dimensional global matrix A. The subscript expressions of right-hand side array references are
restricted to be the sum of a parallel loop index and a small constant, while the subscript expressions of
left-hand array references are restricted to contain the parallel loop indices only. Tomko and Abraham
Figure
2: Lists of threads accessing similar array elements.
[32] develop iteration partitioning techniques for data-parallel application programs. They assume that
there is only one pair of data access functions and that each loop index variable can appear in only one
dimension of each array subscript expression. Agarwal, Kranz, and Natarajan [3] propose a framework
for automatically partitioning parallel loops to minimize cache coherence traffic on shared-memory
multiprocessors. They restrict their discussion to perfectly nested doall loops, and assume rectangular
iteration spaces. Unlike these previous works, our work considers nested loops which are not necessarily
perfectly nested. Loop bounds can be any variables, and array subscript expressions are much more
general. Many researchers have studied the cache false sharing problem [10, 17, 19, 33] in which cache
thrashing occurs when different processors share the same cache line of multiple words, although the
processors do not share the same word. Many algorithms have been proposed to reduce false sharing by
better memory allocation, better thread scheduling, or by program transformations. Our work considers
cache thrashing which is due to the true sharing of data words.
Our work is most closely related to the research done by Fang and Lu [11, 12, 26, 13]. In their work,
the iteration space is partitioned into a set of equivalence classes, and each processor uses a formula to
determine which iterations belong to the same equivalence class at execution time. Each processor then
executes the corresponding iterations so as to reduce or eliminate cache thrashing. These iterations are
the solution vectors of a linear integer system. In Fang and Lu's work, these vectors may either be
computed at run time or may be precomputed and later retrieved at run time when loop bounds are
known before execution. Both approaches require additional execution time when a processor fetches
the next iteration. Unlike Fang and Lu's approaches, we solve the thrashing problem at compile time to
reduce run-time overhead, while we achieve the same effect of reducing cache thrashing. Our new method
restructures the loops at compile time and is based on a thorough analysis of the relationship between
the array element accesses and the loop indices in the nested loop. We have performed experiments on
a commercial multiprocessor, namely a Silicon Graphics Challenge Cluster, thereby obtaining real data
regarding cache thrashing and its reduction. In contrast, previous data were mainly from simulations
[11, 12, 26].
3 Basic Concepts and Assumptions
Data dependences between statements are defined in [6, 24, 4, 25]. If a statement S 1 uses the result
of another statement S 2 , then S 1 is flow-dependent on S 2 . If S 1 can safely store its result only after
fetches the old data stored in that location, then S 1 is anti-dependent on S 2 . If S 1 overwrites the
result of S 2 , then S 1 is output-dependent on S 2 . A dependence within an iteration of a loop is called
a loop-independent dependence. A dependence across the iterations of a loop is called a loop-carried
dependence.
There can be no cache thrashing due to true data sharing if the outermost loop is parallel, because
no data dependences will cross the threads. Therefore, in this paper, we consider only loop nests whose
outermost loops are sequential. To simplify our discussion, we make the following assumptions about
the program pattern: 1) All functions representing array subscript expressions are linear. 2) The loop
construct considered here consists of a sequential loop which embraces one or several single-level parallel
loops. If there exist multilevel parallel loops, only one level is parallelized, as on most commercial
shared-memory multiprocessor systems. Hence, as shown below, a loop nest in our model has three
levels: a parallel loop, its immediately enclosed sequential loop, and its immediately enclosing sequential
loop:
DO I=1,N1
  DOALL J=1,N2
    DO K=1,N3
      ... A( ~h(I,J,K) ) ...
    ENDDO
  ENDDO
ENDDO
Figure
3: The loop nest model.
The subscript functions ~h(I, J, K) = (f(I, J, K), g(I, J, K)) are linear mappings from the iteration space N1 × N2 × N3 to the domain space M1 × M2 of A, and they can be expressed as
f(i, j, k) = a1*i + b1*j + c1*k + e1,   g(i, j, k) = a2*i + b2*j + c2*k + e2,
where the coefficients are integer constants and A is an M1 × M2 array. The loops in the above example are not necessarily perfectly
nested. Our restructuring techniques, to be presented later, assume arbitrary loop bounds, although
we are showing lower bounds of 1 here for simplicity of notation. Multiple array variables and multiple
linear subscript functions may exist in the nested loop.
Since we are considering cache thrashing due to true data sharing, i.e. due to data dependences
between threads, we can also write the loop nest in Figure 3 as:
ENDDO
ENDDO
where A fl m) is an array name appearing in the loop body, ~ h
m) are linear
mappings from iteration space N 1 \Theta N 2 \Theta N 3 to domain space M 1 fl
\Theta M 2 fl
of A fl ,
A
m) are potentially dependent reference pairs, and m is the number of such
pairs. Fang and Lu [11] reported that arrays involved in nested loops are usually two-dimensional or
three-dimensional with a small-sized third dimension. The latter can be treated as a small number of
two-dimensional arrays. Nested loops with the parallel loop at the innermost level are degenerate cases
of the loop nest in Figure 3. Therefore, our loop nest model seems quite general. Our method can also
be applied to a loop nest which contains several separate parallel loops at the middle level. Each of
these parallel loops may be restructured according to its own reference patterns, such that the threads in
different instances of the same parallel loop are aligned. We currently do not align the threads created
by different parallel inner loops. For programs with more complicated loop nests, pattern-matching
techniques can be used to identify a loop subnest that matches the nest shown in Figure 3. Other
outer- or inner- loops that are not a part of the subnest can be ignored, as long as their loop indices do
not appear in the array subscripts.
The compiler analysis is based on a simple multiprocessor model in which the cache memory has the
following characteristics: 1) it is local to a processor; 2) it uses a copy-back snoopy coherence strategy;
and its line size is one word. The transformed code, however, will execute correctly on machines
which have a multiword cache line and multilevel caches. Furthermore, as the experimental results will
show, the performance of the transformed code is quite good on such realistic machines. Our analysis
can also be extended to incorporate more machine parameters such as the cache line size.
Solutions
In this section, we analyze the relationship between linear functions in the array subscripts. Based
on this analysis, we restructure a given loop nest to reduce or eliminate the cache thrashing due to
true data sharing. We consider nested loops which are not necessarily perfectly nested and which may
have variable loop bounds. For clarity of presentation, in Section 4.1 we first discuss how to deal with
dependent reference pairs such that the same subscript function is used in both references in each pair
(Different pairs may use different subscript functions). Later in Section 4.2, we will discuss how to deal
with more general cases by using simple affine transforms to fit to this model.
4.1 The Basic Model
In this subsection, we assume that for each pair of dependent references, the same subscript function is
used in both references. Under this assumption, if we extract the subscript function ~ h fl (I; J; K) from
each pair of dependent references, then a model for a nested loop which has m pairs of dependent
references can be illustrated by the following code segment.
ENDDO
ENDDO
Without loss of generality, suppose that all m linear subscript functions above are different. We assume
that each function ~ h m, is of rank 2 and is in the form of ~ h fl (i; j;
where
Take the following example.
ENDDO
ENDDO
Figure
4: A nested loop with multiple linear subscript functions.
In this example, no data dependences exist within loop J . However, two data dependences exist in
the whole loop nest, one between the references to A, and the other between those to B. We have two
linear functions to consider, one for each dependence:
The iteration subspace N 1 \Theta N 2 is called the reduced iteration space because it omits the K loop. In
order to find the iterations in the reduced iteration space which may access common memory locations
within the corresponding threads, we define a set of elements of array A fl which are accessed within
thread using subscript function ~ h fl .
Definition 1: Given iteration (i in the reduced iteration space, the elements A fl (f fl (i
are accessed within thread
. They are
denoted by A i 0 ;j 0
g.
Definition 2: If we suppose that T i;j and T i 0 ;j 0 are two threads corresponding to (I; J)=(i;
) in the reduced iteration space of the given loop nest such that A i;j
m); we say T has a dependence because of ~ h fl , denoted by T i;j
Definition 3: If there exists fl, 1 - fl - m, such that T i;j
Since both f fl and g fl are linear in terms of i; j and k, the following lemma is obvious.
Lemma 1: In the program pattern described above, if there exist fl, 1 - fl - m, and two iterations
in the iteration space N 1 \Theta N 2 \Theta N 3 , (i, j,
, such that
then for any constant n 0 , we have a series of iterations in the space, (i;
that satisfy the following equations:
The following lemma and two theorems establish the relationship between the loop indexes corresponding
to two inter-dependent threads. We will use this index relationship to stagger the loop
iteration space such that inter-dependent threads can be assigned to the same processors.
Lemma 2: Let T i;j , T i 0 ;j 0 be two threads,
. T i;j exist k, k 0
such that
a
a
Theorem 1: Let b fl;1 a exist k, k 0
b fl;1 a fl;2 \Gamma a fl;1 b fl;2
a fl;2 c fl;1 \Gamma a fl;1 c fl;2
b fl;1 a fl;2 \Gamma a fl;1 b fl;2
The proofs of Lemma 2 and Theorem 1 are obvious from Definition 3. We now consider the case
of b fl;1 a fl;2 \Gamma a fl;1 b assuming that the loop bounds, N 2 and N 3 , are large enough to satisfy the
c fl;1
These assumptions are almost always true in practice [31]. When they are not true, the parallel loops
will be too small to be important. With these assumptions, we have the following theorem.
Theorem 2 [21]: Let b fl;1 a fl;2 \Gamma a fl;1 b The fact that the J loop at the middle level is a
loop guarantees that T i;j
(1) a fl;1 (i 0
(2) a fl;2 (i 0
In order to find the threads which have data dependence relations with thread T i;j , we make the
following definition.
Definition 4: Given iteration (i; j) in the reduced iteration space, we let S i;j denote the following
set of iterations in the space:
r
r
where L fl;1 (L fl;1 6= 0) and L fl;2 (1 - fl - m) are defined as:
and
with GCD fl equal to G:C:D:(b fl;1 c fl;2 \Gammac fl;1 b fl;2 ; a fl;2 c fl;1 \Gammaa fl;1 c fl;2 ; b fl;1 a fl;2 \Gammaa fl;1 b fl;2 ) or equal to \GammaG:C:D:
(b fl;1 c fl;2 - c fl;1 b fl;2 , a fl;2 c fl;1 -a fl;1 c fl;2 , b fl;1 a fl;2 -a fl;1 b fl;2 ) to guarantee L fl;1 ? 0;
(2)
and L
with GCD fl equal to G:C:D:(a fl;1 ; b fl;1 ) or equal to \GammaG:C:D:(a fl;1 ; b fl;1 ) to guarantee L fl;1 ? 0;
and L
with GCD fl equal to G:C:D:(a fl;2 ; b fl;2 ) or equal to \GammaG:C:D:(a fl;2 ; b fl;2 ) to guarantee L fl;1 ? 0;
called the staggering parameter corresponding to linear function ~ h fl . If there exist no
data dependences between the given pair of references, we define the staggering parameter (L fl;1 ; L fl;2 )
as (0; 0).
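A small Python sketch of case (1) of this definition. It assumes that (L_γ,1, L_γ,2) is the primitive (i, j)-component of the dependence direction, i.e. of the integer null vector of the two coefficient rows (their cross product), which is consistent with GCD_γ as stated above and with Theorem 1; the example subscript pair at the end is ours:

from math import gcd

def staggering_parameter(a1, b1, c1, a2, b2, c2):
    # Sketch of Definition 4, case (1), for a dependent reference pair with
    # subscripts f(i,j,k) = a1*i + b1*j + c1*k and g(i,j,k) = a2*i + b2*j + c2*k
    # (constant terms cancel out of the dependence equations).
    di = b1 * c2 - c1 * b2          # i-component of the dependence direction
    dj = a2 * c1 - a1 * c2          # j-component
    dk = b1 * a2 - a1 * b2          # k-component
    g = gcd(gcd(abs(di), abs(dj)), abs(dk))
    if g == 0 or di == 0:
        # degenerate or L1 = 0: cases (2)/(3) of Definition 4 apply instead
        raise ValueError("case (1) of Definition 4 does not apply")
    if di < 0:                      # fix the sign so that L1 > 0
        di, dj = -di, -dj
    return di // g, dj // g

# Illustrative pair, e.g. coefficients (1, 0, -3) and (0, 1, -1): gives (3, 1),
# i.e. thread T(i, j) shares data with thread T(i+3, j+1).
print(staggering_parameter(1, 0, -3, 0, 1, -1))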
The staggering parameters for the example in Figure 4 can be calculated to be: (L 1;1 ; L 1;2
and (L 2;1 ; L 2;2 according to Definition 4(1).
The following theorem, derived from Theorems 1 and 2 and Definition 4, states that we can use the
staggering parameters to uniquely partition the threads into independent sets.
Theorem 3: S i;j as defined above satisfies:
(1) if (i;
The theorem above indicates that S i;j includes all the iterations whose corresponding threads have
a data dependence relation with T i;j . We call S i;j an equivalence class of the reduced iteration space.
In order to eliminate true data sharing, threads in the same equivalence class should be assigned to
the same processor. We want to restructure the reduced iteration space such that threads in the same
equivalence class will appear in the same column. Each staggering parameter (L computed for a
dependent reference pair tells us that if we stagger the (i row in the reduced iteration space
by columns to the right if L 2 ! 0, or to the left if relative to the i-th, then the threads
involved in the dependence pair will be aligned in the same column. Different staggering parameters
may require staggering the iteration space in different ways. However, if these staggering parameters
are in proportion, then staggering by the unified staggering parameter defined below will satisfy all the
requirements simultaneously.
Definition 5: Given staggering parameters (L
, and then we call (g; L1;2
the unified staggering parameter.
Lemma 3 [21]: If the condition L k;1
m) in Definition 5 is true, then (a) the
iterations (i; belong to two different equivalent classes; and (b) the iterations
belong to two different equivalence classes.
Theorem 4 [21]: If the condition L k;1
m) in Definition 5 is true, then the
reduced iteration space must be staggered according to the unified staggering parameter (g; L1;2
L1;1 g) in
order to reduce or eliminate data sharing among the threads, i.e. the (i g)-th row in the reduced
iteration space must be staggered by j L1;2
L1;1 gj columns to the right if L 1;2 ! 0, or to the left if L 1;2 ? 0,
relative to the i-th row.
If a given loop nest satisfies the condition L k;1
m) in Definition 5, then, according
to Theorem 4 above, the reduced iteration space can be transformed into a staggered and reduced
iteration space (SRIS) by leaving the first g rows unchanged,
staggering each of the remaining rows using the unified staggering parameter. There will be no data
dependences between different columns in the SRIS.
However, if the staggering parameters are not in proportion, i.e, if there exist (j; k) such that
, then we can no longer obtain a unique unified staggering parameter.
Moreover, staggering alone is no longer sufficient for eliminating data dependences between the different
columns in the restructured iteration space. This is because some threads in the same equivalence class
are still in different columns. We perform a procedure called compacting which stacks these columns
onto each other. We will discuss staggering first.
Definition Given staggering parameters (L
(L 1;1 , L 2;1 , ., L m;1 ), suppose there exists (j; k) such that 1 -
. According to
the theory of numbers [27], there exist integers a 1 , a 2 , ., am that satisfy
a
a fl L fl;2 . We call (g; g 0
unified staggering parameter.
Note that since the m-tuple (a 1 , a 2 , ., am ) is not necessarily unique, the (g; g 0
may not be
unique either. With Definition 6, a unified staggering parameter (g; g 0
) of the example in Figure 4 is
found to be
After staggering by using any unified staggering parameter (g; g 0
), the resulting SRIS has four
possible shapes, as shown in Figure 5(b \Gamma e). Figure 5(a) shows details of one of these shapes. Figure
6(a) and 6(b) show the reduced iteration space for the example in Figure 4 before and after staggering
with (3,-3) as the unified staggering parameter.
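The coefficients a_1, ..., a_m required by Definition 6 can be obtained with the extended Euclidean algorithm. The following sketch computes one admissible (g, g') for a list of staggering parameters; the coefficients it returns need not all be non-zero, which is what the paper's Algorithm 2 then repairs:

from math import gcd

def ext_gcd(x, y):
    # Extended Euclid: returns (g, s, t) with s*x + t*y == g == gcd(x, y).
    if y == 0:
        return (x, 1, 0)
    g, s, t = ext_gcd(y, x % y)
    return (g, t, s - (x // y) * t)

def unified_staggering_parameter(L):
    # L = [(L1_1, L1_2), (L2_1, L2_2), ...]; compute g = gcd of the first
    # components, coefficients a_k with sum a_k*L_k1 = g, and g' = sum a_k*L_k2.
    firsts = [p[0] for p in L]
    g, coeffs = firsts[0], [1]
    for x in firsts[1:]:
        g2, s, t = ext_gcd(g, x)
        coeffs = [s * c for c in coeffs] + [t]
        g = g2
    g_prime = sum(a * p[1] for a, p in zip(coeffs, L))
    return g, g_prime, coeffs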
Next, we compute the compacting parameter d using Algorithm 1 and 2, to be presented shortly.
We then partition the SRIS into n chunks, where
d
, which is the total number of
columns in the SRIS devided by the compacting parameter d (Figure 5(d \Gamma e)). These d-wide chunks
are stacked onto each other to form a compacted iteration space of width d, as shown in Figure 7. As we
will explain later, the threads in different columns after compacting the SRIS with d are independent.
Moreover, the product of d and g equals the number of equivalence classes. The SRIS shown in Figure
6(b) for our example is transformed by being compacted with into the form shown in Figure 8.
The following algorithm computes the compacting parameter d.
Algorithm 1:
Input: A set of staggering parameters (L
Output: The compacting parameter d.
Step 1: For each 2-element subset, fL i;1 ; L j;1 g, of fL 1;1 ; L 2;1 ; :::; L m;1 g, compute
of all such d 2 hL i;1 ; L j;1 i.
Step 2: For each j-element subset, fL pick any element, say
a) g'<0, g>1
d
d
d) g'<0, g>1 e) g'>0, g>1
Figure
5: SRIS and outlines.
Using the Euclidean Algorithm, compute integers b
Apply Algorithm 2 below to find nonzero integers r 2 ; :::; r j such that
r
a) Original reduced iteration space
(1,1) .
Figure
The reduced iteration space before and after rearrangement.
Let
r
Step 3: For j from 3 to m, compute
Step 4:
As will be established later, d is unique regardless of the choice of L i 1 ;1 in Step 2.
To calculate the compacting parameter d, non-zero integers r need to be found in Algorithm 1
from the integer coefficients b computed by the Euclidean Algorithm. Algorithm 2 is therefore
invoked to derive a group of non-zero integer coefficients from a group of any integer coefficients of a
linear expression.
Algorithm 2:
Input: Non-zero positive integers such that
Output: non-zero integers such that
1: If there are an even number of zero coefficients a
(0 - 2k - p) among
d)
d)
d)
d)
d)
d)
d)
d)
d)
d)
d)
a
Figure
7: Compacted SRIS.
Step 2: If there are an odd number of zero coefficients a i 1
(0
Obviously, the non-zero integers computed by Algorithm 2 satisfy
For the example in Figure 4, since there are only two linear functions in the loop nest, only Step 1
and Step 4 of Algorithm 1 are used to calculate the compacting parameter d, that is,
Next, we need to establish two important facts. First, after compacting with d, the threads in different
columns are independent. Second, the compacting parameter d computed by Algorithm 1 is the
Figure
8: The reduced iteration space after compacting.
largest number of independent columns possible as the result of compacting the SRIS with a constant
value. The first fact is established by Theorem 5, Theorem 6, and Corollary 1. To do so, we introduce
the following definition.
Definition 7: Given an iteration (i; j) in the reduced iteration space, staggering parameters
and the unified staggering parameter (g; g 0
are integers that satisfy
a
a set of iterations S 0
i;j is constructed as follows:
(1) For any integer r, iteration (i
) in the space belongs to S 0
a
(2) If there exist integers r not all zero, integer r, and iterations (i 0
the space, such that
r
r
and
a
The following three lemmas and Theorem 5 show that S 0
i;j is the same as the equivalence class S i;j .
From the process of constructing S 0
i;j , we immediately have the following lemma.
Lemma 4: Given iterations (i;
) in the reduced iteration space, and a unified staggering
parameter (g; g 0
), if there exists integer r such that
Lemma 5 [21]: Given iterations (i 0
) in the reduced iteration space, if there
exist integers r all zero, such that
r fl L fl;2
r
Lemma 6 [21]: Given iterations (i;
) in the reduced iteration space, if (i 0
Theorem 5 [21]: Given staggering parameters (L
for any iteration (i; j) in the reduced iteration space.
Next, we establish that S 0
i;j is the result of staggering with (g; g 0
followed by compacting with d.
This is stated by Corollary 1 below.
Lemma 7 [21]: Given staggering parameters (L
that are integers that satisfy
(1)
r
(2) if there exist integers r 0
satisfying
1 .
For any integers r 00
satisfying
r 00
there exists an integer k - 1 such that r 00
Theorem 6 [21]: If d is the compacting parameter determined by Algorithm 1, and d 0
r fl L fl;2 ,
are integers, not all zero, which satisfy
r
then there exists an integer k such that d 0
Corollary 1: The set S 0
i;j in Definition 7 satisfies
where k and r are integers, (g; g 0
) is the unified staggering parameter in Definition 6, and d is the
compacting parameter computed by Algorithm 1.
From the above result, the threads in different columns after compacting the SRIS with d are inde-
pendent. Next, we establish with Theorem 7 that any two columns which are d columns apart, where d
is computed by Algorithm 1, should be dependent and that, therefore, d is the largest possible number
of independent columns as the result of compacting the SRIS with a constant number.
Theorem 7: Given (i; j), we have
i;j .
Proof: According to how d is computed in Algorithm 1, there exist integers r such that
By the definition of S 0
To further simplify the process of the staggering and the compacting of the reduced iteration space,
the following theorem can be used to replace multiple staggering parameters, which are in proportion,
with a single staggering parameter.
Theorem 8 [21]: Given staggering parameters (L
m) are the staggering parameters satisfying
there exists an integer r satisfying
We now estimate the time needed by the compiler to compute the staggering parameters, a unified
staggering parameter, and the compacting parameter. Suppose there are m reference pairs. The
complexity of determining all the staggering parameters is O(m). A unified staggering parameter
of these staggering parameters can be determined in O(m) with the Euclidean Algorithm. Let
be the number of groups of staggering parameters such that all parameters in the same group are
in proportion. m 0
is very small in practice. According to Theorem 8, we only need to consider one
representative from each group. The complexity of Algorithm 1 and 2 for computing the compacting
parameter is C 2
Lastly, we show the result of restructuring the original loop nest based on staggering and compacting.
Note that if all staggering parameters are in proportion, then compacting is unnecessary for data
dependence elimination. However, to improve load balance, we compact the SRIS by a compacting
factor d, which equals the number of the available processors. The restructured code is parameterized
by the loop bounds and by the number of available processors, which can be obtained by a system call
at runtime. There is no need to recompile for a different number of available processors.
If the given loop nest is perfectly nested, then the resulting code, after staggering and compacting,
is shown in Code Segment 1 listed below.
Code Segment 1 (The result after restructuring the perfectly nested loop with multiple linear
ENDDO
ENDDO
ENDDO
If the given loop nest is not perfectly nested, then the resulting code has two variants, one for
and the other for g 0
=0 . We show the code for g 0
listed below (the code for
=0 is similar [21]). In this code segment, LB 1 and LB 2 are the lower bounds of I and J , UB 1 and
UB 2 are the upper bounds of these two loops, (g; g 0
and d are the unified staggering parameter and
the compacting parameter, respectively, which have been determined above. PSI and PSJ are local
variables that each processor uses to determine the first iteration J 0 of loop J to be executed on it.
Variables J 0 and OFFSET are also local to each processor. PSI and PSJ for each processor are modified
every g iterations of the loop I, according to the staggering and the compacting parameters. The
values of (PSI; PSJ) are initialized for the d different processors to (LB
respectively. We define a function mod* such that x mod*
Code Segment 2 (The restructured code for the case of g 0 6= 0):
endif
endif
ENDDO
ENDDO
ENDDO
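The net effect of Code Segments 1 and 2 can be summarised by the following simplified Python sketch of the thread-to-processor assignment implied by staggering and compacting; it is an illustration of the idea with illustrative parameter values, not a transcription of the restructured FORTRAN:

def owner(i, j, g, g_prime, d, lb1=1, lb2=1):
    # Which processor/column executes thread T(i, j) after staggering rows in
    # blocks of g iterations of I by g' columns and compacting to width d.
    block = (i - lb1) // g            # how many times the stagger has been applied
    return (j - lb2 + block * g_prime) % d

# Illustrative values: (g, g') = (2, -1) and d = 4 processors.
for i in range(1, 7):
    print([owner(i, j, 2, -1, 4) for j in range(1, 7)])

Threads that fall in the same column of the compacted space are mapped to the same processor, so the data they share stays in that processor's cache.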
4.2 An Extended Model
The theory we developed in the previous subsection can be extended to more general cases in which the
subscript functions in the same pair of references are not necessary the same. Suppose the following
two linear functions
~
and
~
belong to the same pair of references. In order to determine which iterations in the reduced iteration
space are dependent due to this reference pair, we consider an affine transformation
such that the linear function ~
h 2 can be expressed as
~
a 2;1
a
a 2;2
which we denote by ~
). In order to use the previous results from Section 4.1, we let ~
be identical to ~
which implies
a 1;1
a 2;1
a 1;2
a 2;2
c 1;1
c 2;1
c 2;2
and
a 2;1
a 2;2
We can now apply the algorithms in Section 4.1 to ~
2 and ~
which yield a staggering parameter,
say For a given iteration (i 0
). The iteration (i; must
have a dependence with (i 00
before the affine transformation if and only if the iteration (i 0
a dependence with (i 00
after the transformation. We denote the distance between (i;
as (L 0
2 ), which can be calculated as:
or
such that L 0
not be constant, meaning that the iterations cannot be
aligned with a constant staggering parameter. In common practice, since loop J is DOALL in our loop
nest model, the two linear functions ~
will have the same cofficients for loop index variables I
and J , which implies that ff 1. In this paper, we will consider the case of ff
We now have
or
We define (L 0
2 ), which are two constants given staggering parameter in this case.
If Equations (1) and (2) have a unique solution for we have a unique staggering
parameter (L 0
On the other hand, if there exist multiple solutions for then the following theorem shows
that under certain conditions, (L 0
determined by different should be in proportion.
Theorem 9: Assume α = 1. If the staggering parameter (L_1, L_2) of the subscript function ~h'_2 after the affine transformation is a solution of Equations (1) and (2), then the corresponding (L'_1, L'_2) is equal to, or in proportion with, the (L'_1, L'_2) obtained from any other solution of Equations (1) and (2).
Proof (sketch): Every solution to Equations (1) and (2) can be written as a given solution plus a solution of the homogeneous system associated with Equations (1) and (2). If a_{1,2} b_{1,1} − b_{1,2} a_{1,1} ≠ 0, dividing the resulting expressions for (L'_1, L'_2) by a_{1,2} b_{1,1} − b_{1,2} a_{1,1} shows that, according to Definition 4, the (L'_1, L'_2) obtained from different solutions are in proportion. For the case a_{1,2} b_{1,1} − b_{1,2} a_{1,1} = 0, as in Theorem 2, the coefficients a_{1,1} and a_{1,2} satisfy the corresponding degenerate conditions and, therefore, according to Definition 4, the resulting (L'_1, L'_2) are again in proportion.
If the condition in Theorem 9 is met, we choose (L'_1, L'_2) as the staggering parameter for the reference pair (~h_1, ~h_2).
Table 2 shows examples of staggering parameters for different subscript functions appearing in the dependent reference pair, where the loop index variables are listed in the order from the outermost loop level to the innermost. If we simultaneously consider two reference pairs involving A(I, J) and B(I, J), such that thread T_{i,j} shares the same array element A(i, j) with thread T_{i+3,j+1} and the same array element B(i, j) with thread T_{i+1,j+3}, then, using Theorem 9, the staggering parameters (L'_1, L'_2) for these two pairs are (3,1) and (1,3), respectively. A unified staggering parameter (g, g') and a compacting parameter d can then be calculated from them as in Section 4.1.
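As a concrete check of this example, assume (hypothetically) that the second reference to A is A(I - 3, J - 1); then iteration (i, j) and iteration (i + 3, j + 1) touch the same element of A, which is exactly the staggering parameter (3,1) quoted above. The short Python check below uses our own helper name same_element:

def same_element(h1, h2, it1, it2):
    # two iterations are dependent through this pair if the subscript
    # functions evaluate to the same array element
    return h1(*it1) == h2(*it2)

a_ref1 = lambda i, j: (i, j)
a_ref2 = lambda i, j: (i - 3, j - 1)        # hypothetical second reference to A
print(same_element(a_ref1, a_ref2, (5, 7), (8, 8)))   # True: (5,7) vs (5+3,7+1)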
Table 2: Examples of different subscript functions in the same dependent reference pair (columns: loop nest, dependent reference pair (~h_1, ~h_2), and staggering parameter; the example rows are not reproduced here).
5 Experimental Results
The thread alignment techniques described in this paper have been implemented as backend optimizations
in KD-PARPRO [20], a knowledge-based parallelizing tool which can perform intra- and inter-procedural
data dependence analysis and a large number of parallelizing transformations, including loop
interchange, loop distribution, loop skewing, and strip mining for FORTRAN programs.
To evaluate the effect of the thread alignment techniques on the performance of multiprocessors with
memory hierarchies, we experimented with three programs from the LINPACK benchmarks on a SGI
Challenge cluster which can be configured to contain up to twenty MIPS 4400 processors. First, the
programs were parallelized and optimized using KD-PARPRO. To reduce or eliminate cache thrashing
due to true data sharing, our tool recognized the nested loops which may cause the thrashing. It
applied the techniques described in the previous section to analyze and restructure the loop nests. The
parallelized programs were then compiled using SGI's f77 compiler with the optimization option -O2.
The sequential versions of the programs were compiled on the same machine using the same optimization
option for f77. The output binary codes were then executed on various configurations with a different
number of processors during dedicated time. Each MIPS 4400 processor has a 16K-byte primary data
cache and a 4M-byte secondary cache. The cache block size is 32 bytes for the primary data cache and
128 bytes for the secondary cache. A fast and wide split transaction bus POWERpath-2 is used as its
coherent interconnect. Cache coherence is maintained with a snoopy write-invalidate strategy.
We compared the results obtained by using our algorithm to align the threads with those obtained
by using four different loop scheduling strategies provided by SGI system software, namely, simple,
interleave, dynamic, and gss. The simple method divides the iterations by the number of processors
and then assigns each chunk of consecutive iterations to one processor. The interleave scheduling
method divides the iterations into chunks of the size specified by the CHUNK option, and execution of
those chunks is statically interleaved among the processes. With dynamic scheduling, the iterations are
also divided into CHUNK-sized chunks. As each process finishes a chunk, however, it enters a critical
section to grab the next available chunk. With gss scheduling [29] , the chunk size is varied, depending
on the number of iterations remaining. None of these SGI-provided methods consider task alignment.
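As a rough illustration of the strategies just described (not SGI's actual implementation), the following Python sketch shows how simple and interleave partition the iterations 0..n-1 of a parallel loop among p processors; chunk stands for the CHUNK option, and the function names are ours.

def simple_sched(n, p, proc):
    # one contiguous block of iterations per processor
    size = (n + p - 1) // p
    return range(proc * size, min((proc + 1) * size, n))

def interleave_sched(n, p, proc, chunk):
    # fixed-size chunks dealt out to the processors round-robin
    its = []
    for start in range(proc * chunk, n, p * chunk):
        its.extend(range(start, min(start + chunk, n)))
    return its

print(list(simple_sched(100, 4, 1)))   # processor 1 gets iterations 25..49

Dynamic scheduling uses the same chunking as interleave but hands chunks out on demand, and gss varies the chunk size with the number of iterations remaining.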
The speedup of a parallel execution on shared memory machines just like the SGI cluster can
be affected by many factors, including: program parallelism, data locality, scheduling overhead, and
load balance. Usually gss, dynamic and interleave schedulings with a small chunk size are supposed
to show better load balance than simple scheduling. On the other hand, they tend to incur more
scheduling overhead than simple. Furthermore, simple captures more data locality in most cases than
other schedulings do.
The programs we selected from LINPACK are SGEFA, SPODI, and SSIFA. SGEFA factors a double
precision matrix by Gaussian elimination. The main loop structure in this program consists of three
imperfectly nested loops. The innermost loop is inside subroutine SAXPY, which multiplies a vector
by a constant and then adds the result to another vector. In order to show the array access pattern
inside the loop body, we inlined the SAXPY; a sketch of the resulting loop structure is given below. However, we kept the subroutine call when we applied our techniques to the program.
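The original Fortran listing is not reproduced above; the following sketch (in Python for illustration, with a and n as placeholder names, and pivoting and scaling omitted) shows the loop structure being described: a sequential elimination loop K, a DOALL column loop J, and the inlined SAXPY as the innermost loop I.

def sgefa_like_update(a, n):
    for k in range(n - 1):              # sequential outer elimination loop
        for j in range(k + 1, n):       # DOALL loop: one thread per column j
            t = a[k][j]
            for i in range(k + 1, n):   # inlined SAXPY: update of column j
                a[i][j] += t * a[i][k]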
The SRIS, after staggering the iterations in the reduced iteration space (K, J), is shown in Figure 9. For this program, we only consider the linear function (I, J). The staggering parameter is (1,0) according to Definition 4(4). The number of processors is used to determine the compacting factor.
Figure 9: SRIS for SGEFA.
SPODI computes the determinant and inverse of a certain double precision symmetric positive-definite
matrix. There are two main loop nests in this program; we restructured both of them. As in SGEFA, the innermost loop is contained in the subroutine SAXPY.
The SRISs, after staggering the iterations in the reduced iteration spaces (K, J) and (J, K) for these two loop nests, respectively, are shown in Figures 10(a) and 10(b). The linear functions we considered are (I, J) and (I, K), respectively. Their staggering parameters are both (1,0), according to Definition 4(4).
Figure 10: SRISs for SPODI: (a) loop nest 1, (b) loop nest 2.
SSIFA factors a double precision symmetric matrix by elimination. In the main loop nest of this program, we view the backward GOTO loop as the outermost sequential loop, within
which the value of kstep may change between 1 and 2 in different iterations, based on the input matrix.
Depending on the value of kstep, one of the two parallel loop nests inside the outermost sequential loop
will be executed for each iteration of the outermost loop. The index step of the outermost loop equals −kstep, i.e., −1 or −2. The array access patterns for these two kstep values are slightly different. The
innermost loop is again inside the SAXPY subroutine.
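The Fortran listing is likewise not reproduced; the following rough control-flow sketch (Python, with hypothetical helper names and loop bounds) reflects only the structure described above: a backward sequential loop over K whose step is -kstep, selecting one of two parallel JJ loop nests depending on kstep.

def ssifa_skeleton(n, choose_kstep, update_1x1, update_2x2):
    k = n
    while k >= 1:                        # backward GOTO loop (sequential)
        kstep = choose_kstep(k)          # 1 or 2, depends on the input matrix
        if kstep == 1:
            for jj in range(1, k):       # parallel JJ loop nest, 1x1 case
                update_1x1(k, jj)
        else:
            for jj in range(1, k - 1):   # parallel JJ loop nest, 2x2 case
                update_2x2(k, jj)
        k -= kstep                       # index step equals -kstep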
The SRISs, after staggering the iterations in the reduced iteration space (K, JJ) for two different ksteps, are shown in Figure 11. For clarity, we use c_kstep to denote the value of kstep in the current K iteration, and we use p_kstep for its value in the previous K iteration. All threads will be aligned well if we properly align the threads created in the current K iteration with those in the previous K iteration. We need to consider four possible cases of dependences among the references to A(I, K − JJ) and the other references to A in the two loop nests. For all of these cases, the staggering parameter is the same value of (L_1, L_2).
Figure 11: SRISs for SSIFA for the two kstep values.
The problem sizes we used in our experiments are n=100 and 1000. The performance of the parallel
codes transformed by our techniques, compared with the performance achieved by the scheduling
methods provided by SGI, are shown in Figures 12-15. Both SGEFA and SSIFA may require pivoting
for non-positive-definite symmetric matrices, but not for positive-definite symmetric matrices. We show
data for SGEFA with pivoting, and we show data both with and without pivoting for SSIFA. Pivoting
may potentially destroy the task alignment.
Figure 12: Gaussian elimination (SGEFA), (a) n=100, (b) n=1000 (curves: interleave with two chunk sizes, dynamic, gss, simple, and our method).
Figure 13: Determinant and inverse of a symmetric positive definite matrix (SPODI), (a) n=100, (b) n=1000 (same curves as Figure 12).
As the figures show, our method always outperforms all of the SGI's scheduling methods, with the
exception of program SGEFA. For this program, our method's performance is almost the same as that
of simple, although our method outperforms simple by 14% on 16 processors with n=1000, which should be attributed to the reduction of cache thrashing due to true data sharing, a problem that tends to
be more severe when more processors are running. The simple scheduling method tends to get better
performance than the dynamic, gss, and interleave methods, because it results in better locality and
less cache thrashing in most cases, and it also incurs less scheduling overhead.
Figure 14: Factorization of a symmetric matrix (SSIFA), (a) n=100, (b) n=1000.
But when the programs do not exhibit a good load balance, like SPODI, SSIFA, and other programs in LINPACK, which deal
with symmetric matrices, simple's performance results degrade substantially. Our method outperforms
simple quite significantly in most cases, especially for SPODI (Figure 13), as well as for SSIFA without
pivoting (Figure 15), where our method beats simple by as much as 105%. We are not able to get much improvement over simple for program SGEFA, where pivoting is much more likely to
destroy the locality we try to keep. For the rest of the programs, we attribute our performance gain
over simple both to the reduction of cache thrashing due to true data sharing and to a better load
balance, although, for SSIFA with pivoting, we believe our method benefits more from load balancing.
We note that the SGI system software cannot pick the right scheduling method automatically to fit the
particular program. On the other hand, our method seems more capable of delivering good performance
for different loop shapes. As the thrashing problem becomes more serious on parallel systems with more
processors and greater communication overhead, our method will likely be even more effective.
6 Conclusions
This paper presents a method in which the reduced iteration space is rearranged according to the staggering
and the compacting parameters. The nested loop (either perfectly nested or imperfectly nested)
is restructured to reduce or even eliminate cache thrashing due to true data sharing. This method can
be efficiently implemented in any parallel compiler. Although the analysis per se is based on a simple
machine model, the resulting code executes correctly on more complex models. Our experimental
results show that the transformed code can perform quite well on a real machine. How to extend the
techniques proposed in this paper to incorporate additional machine parameters is interesting future
work.
--R
On the performance enhancement of paging systems through program analysis and transformations.
Automatic partitioning of parallel loops and data arrays for distributed shared-memory multiprocessors
Automatic loop interchange.
Multilevel cache hierarchies: organizations
Dependence analysis for supercomputing.
Improving register allocation for subscripted variables.
Compiling scientific code for complex memory hierarchies.
Partitioning and labeling of loops by unimodular transformations.
Eliminating False Sharing.
A solution of cache ping-pong problem in RISC based parallel processing systems
Cache or local memory thrashing and compiler strategy in parallel processing systems.
An iteration partition approach for cache or local memory thrashing on parallel processing.
Performance optimizations
On the problem of optimizing data transfers for complex memory systems.
Strategies for cache and local memory management by global program transformation.
Effects of program parallelization and stripmining transformation on cache performance in a multiprocessor.
Compiler techniques for data partitioning of sequentially iterated parallel loops.
Reducing false sharing on shared memory multiprocessors through compile-time data transformations
The design and the implementation of a knowledge-based parallelizing tool
An efficient solution to the cache thrashing problem (Extended Version).
Loop restructuring techniques for the thrashing problem.
Loop staggering
The structure of computers and computations.
Dependence graphs and compiler op- timizations
A solution of the cache ping-pong problem in multiprocessor systems
An introduction to the theory of numbers.
Minimum distance: a method for partitioning recurrences for multiproces- sors
Guided self-scheduling: a practical scheduling scheme for parallel supercomputers
Time optimal linear schedules for algorithms with uniform dependencies.
An empirical study of Fortran programs for parallelizing compilers.
Iteration partitioning for resolving stride conflicts on cache-coherent multiprocessors
False sharing and spatial locality in multiprocessor caches.
A data locality optimizing algorithm.
More iteration space tiling.
--TR | multiprocessors;parallelizing compilers;parallel threads;loop transformations;cache thrashing;true data sharing |
279583 | Modulo Scheduling with Reduced Register Pressure. | AbstractSoftware pipelining is a scheduling technique that is used by some product compilers in order to expose more instruction level parallelism out of innermost loops. Modulo scheduling refers to a class of algorithms for software pipelining. Most previous research on modulo scheduling has focused on reducing the number of cycles between the initiation of consecutive iterations (which is termed II) but has not considered the effect of the register pressure of the produced schedules. The register pressure increases as the instruction level parallelism increases. When the register requirements of a schedule are higher than the available number of registers, the loop must be rescheduled perhaps with a higher II. Therefore, the register pressure has an important impact on the performance of a schedule. This paper presents a novel heuristic modulo scheduling strategy that tries to generate schedules with the lowest II, and, from all the possible schedules with such II, it tries to select that with the lowest register requirements. The proposed method has been implemented in an experimental compiler and has been tested for the Perfect Club benchmarks. The results show that the proposed method achieves an optimal II for at least 97.5 percent of the loops and its compilation time is comparable to a conventional top-down approach, whereas the register requirements are lower. In addition, the proposed method is compared with some other existing methods. The results indicate that the proposed method performs better than other heuristic methods and almost as well as linear programming methods, which obtain optimal solutions but are impractical for product compilers because their computing cost grows exponentially with the number of operations in the loop body. | Introduction
Increasing the instruction level parallelism is an observed trend in the design of current
microprocessors. This requires a combined effort from the hardware and software in order
to be effective. Since most of the execution time of common programs is spent in loops,
many efforts to improve performance have targeted loop nests.
Software pipelining [5] is an instruction scheduling technique that exploits the instruction
level parallelism of loops by overlapping the execution of successive iterations of a
loop. There are different approaches to generate a software pipelined schedule for a loop
[1]. Modulo scheduling is a class of software pipelining algorithms that was proposed at
the beginning of the last decade [23] and has been incorporated into some product compilers
(e.g. [21, 7]). Besides, many research papers have recently appeared on this topic
[11, 14, 25, 13, 28, 12, 26, 22, 29, 17].
The modulo scheduling framework relies on generating a schedule for an iteration of the loop such that, when this same schedule is repeated at regular intervals, no dependence is violated and no resource usage conflict arises. The interval between successive iterations
is termed Initiation Interval (II ). Having a constant initiation interval implies that no
resource may be used more than once at the same time modulo II .
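A small sketch (not from the paper) of how this modulo constraint is usually enforced with a modulo reservation table: an operation can be placed at cycle t only if the resource it needs is free in row t mod II. The names try_place and mrt are ours.

def try_place(mrt, ii, cycle, unit):
    # mrt has II rows; each row records the units already busy at that
    # cycle modulo II
    row = cycle % ii
    if unit in mrt[row]:
        return False                      # resource conflict modulo II
    mrt[row].add(unit)
    return True

mrt = [set() for _ in range(2)]           # II = 2
assert try_place(mrt, 2, 0, "adder")      # cycle 0 is free
assert not try_place(mrt, 2, 4, "adder")  # cycle 4 conflicts: 4 mod 2 == 0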
Most modulo scheduling approaches consist of two steps. First, they compute a schedule trying to minimize the II but without caring about register allocation, and then
variables are allocated to registers. The execution time of a software pipelined loop depends
on the II , the maximum number of live values of the schedule (termed MaxLive) and the
length of the schedule for one iteration. The II determines the issue rate of loop iterations.
Regarding the second factor, if MaxLive is not higher than the number of available registers
then the computed schedule is feasible and MaxLive does not influence the execution time.
Otherwise, some actions must be taken in order to reduce the register pressure. Some
possible solutions outlined in [24] and evaluated in [16] are:
ffl Reschedule the loop with an increased II . In general, increasing the II reduces
MaxLive but it decreases the issue rate, which has a negative effect on the execution
time.
ffl Add spill code. This again has a negative effect since it increases the required memory
bandwidth and it will result in additional memory penalties (e.g. cache misses).
Besides, memory may become the most saturated resource and therefore adding spill
code may require increasing the II .
Finally, the length of the schedule for one iteration determines the cost of the epilogue
that should be executed after the main loop in order to finish the last iterations which
have been initiated in the main loop but have not been completed (see section 2.1). This
cost may be negligible when the iteration count of the loop is high.
Most previous works have focused on reducing the II and sometimes also the length of
the schedule for one iteration, but they have not considered the register requirements of the
proposed schedule, which may have a severe impact on the performance as outlined above.
A current trend in the design of new processors is the increase in the amount of instruction
level parallelism that they can exploit. Exploiting more instruction level parallelism results
in a significant increase in the register pressure [19, 18], which exacerbates the problem of
ignoring its effect on the performance of a given schedule.
In order to obtain more effective schedules, a few recently proposed modulo scheduling
approaches try to minimize both the II and the register requirements of the produced
schedules.
Some of these approaches [10, 9] are based on formulating the problem in terms of an
optimization problem and solve it using an integer linear programming approach. This
may produce optimal schedules but unfortunately, this approach has a computing cost
that grows exponentially with the number of basic operations in the loop body. Therefore,
they are impractical for big loops, which in most cases are the most time consuming parts
of a program and thus, they may be the ones that most benefit from software pipelining.
Practical modulo scheduling approaches used by product compilers use some heuristics
to guide the scheduling process. The two most relevant heuristic approaches proposed
in the literature that try to minimize both the II and the register pressure are: Slack
Scheduling [12] and Stage Scheduling [8].
Slack Scheduling is an iterative algorithm with limited backtracking. At each iteration
the scheduler chooses an operation based on a previously computed dynamic priority. This
priority is a function of the slack of each operation (i.e., a measure of the scheduling
freedom for that operation) and it also depends on how critical the resources used
by that operation are. The selected operation is placed in the partial schedule either as
early as possible or as late as possible. The choice between these two alternatives is made
basically by determining how many of the operation's inputs and outputs are stretchable
and choosing the one that minimizes the involved values' lifetimes. If the scheduler cannot
place the selected operation due to a lack of conflict-free issue slots, then it is forced to a
particular slot and all the conflicting operations are ejected from the partial schedule. In
order to limit this type of backtracking, if operations are ejected too many times, the II is
incremented and the scheduling is started all over again.
Stage Scheduling is not a whole modulo scheduler by itself but a set of heuristic techniques
that reduce the register requirements of any given modulo schedule. This objective
is achieved by shifting operations by multiples of II cycles. The resulting schedule has the
same II but lower register requirements.
This paper presents Hypernode Reduction Modulo Scheduling (HRMS) 1 , a heuristic
modulo scheduling approach that tries to generate schedules with the lowest II , and from
all the possible schedules with such II , it tries to select that with the lowest register
requirements. The main part of HRMS is the ordering strategy. The ordering phase orders
the nodes before scheduling them, so that only predecessors or successors of a node can be
scheduled before it is scheduled (except for recurrences). During the scheduling step the
nodes are scheduled as early/late as possible, if their predecessors/successors have been
previously scheduled. (A preliminary version of this work appeared in [17].)
The performance of HRMS is evaluated and compared with that of a conventional
approach (a top-down scheduler) that does not care about register pressure. For this
evaluation we have used over a thousand loops from the Perfect Club Benchmark Suite
[4] that account for 78% of its execution time. The results show that HRMS achieves an
optimal II for at least 97.5% of the loops and its compilation time is comparable to the
top-down approach whereas the register requirements are lower.
In addition, HRMS has been tested for a set of loops taken from [10] and compared
against two other heuristic strategies. These two strategies are the previously mentioned
Slack Scheduling, and FRLC [27], which is a heuristic strategy that does not take into
account the register requirements. In addition, HRMS is compared with SPILP [10], which
is a linear programming formulation of the problem. Because of the computing requirements
of this latter approach, only small loops are used for this comparison. The results
indicate that HRMS obtains better schedules than the other two heuristic approaches and
its results are very close to the ones produced by the optimal scheduler. The compilation
time of HRMS is similar to the other heuristic methods and much lower than the linear
programming approach.
The rest of this paper is organized as follows. In Section 2, an example is used to
illustrate the motivation for this work, that is, reducing the register pressure in modulo
scheduled loops while achieving near optimal II . Section 3 describes the proposed modulo
scheduling algorithm that is called HRMS. Section 4 evaluates the performance of the
proposed approach, and finally, Section 5 states the main conclusions of this work.
2 Overview of modulo scheduling and motivating ex-
ample
This section includes an overview of modulo scheduling and the motivation for the work
presented in this paper. For a more detailed discussion on modulo scheduling refer to [1].
2.1 Overview of modulo scheduling
In a software pipelined loop the schedule for an iteration is divided into stages so that the
execution of consecutive iterations that are in distinct stages is overlapped. The number
of stages in one iteration is termed the stage count (SC). The number of cycles per stage is
II .
Figure 1 shows the dependence graph for the running example used throughout this section.
In this graph, nodes represent basic operations of the loop and edges represent values
generated and consumed by these operations. For this graph, Figure 2a shows the execution
of the six iterations of the software pipelined loop with an II of 2 and a SC of 5. The
operations have been scheduled assuming a four-wide issue machine, with general-purpose
functional units (fully pipelined with a latency of two cycles). The scheduling of each
iteration has been obtained using a top-down strategy that gives priority to operations in
the critical path, with the additional constraint that no resource can be used more than once at the same cycle modulo II. The figure also shows the corresponding lifetimes of the values generated in each iteration.
Figure 1: A sample dependence graph.
The execution of a loop can be divided into three phases: a ramp up phase that fills
the software pipeline, a steady state phase where the software pipeline achieves maximum
overlap of iterations, and a ramp down phase that drains the software pipeline. The code
that implements the ramp up phase is termed the prologue. During the steady state
phase of the execution, the same pattern of operations is executed in each stage. This is
achieved by iterating on a piece of code, termed the kernel, that corresponds to one stage of the steady state phase. A third piece of code, called the epilogue, is required to drain
the software pipeline after the execution of the steady state phase.
The initiation interval II between two successive iterations is bounded either by loop-carried
dependences in the graph (RecMII ) or by resource constraints of the architecture
(ResMII ). This lower bound on the II is termed the Minimum Initiation Interval (MII = max(ResMII, RecMII)). The reader is referred to [7, 22] for an extensive discussion of how to calculate ResMII and RecMII .
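As a small illustration (standard formulas, not code from the paper), ResMII can be computed per resource class and combined with RecMII as shown below; the names res_mii and mii are ours.

from math import ceil

def res_mii(op_count, unit_count):
    # op_count, unit_count: dicts indexed by resource class
    return max(ceil(op_count[r] / unit_count[r]) for r in op_count)

def mii(op_count, unit_count, rec_mii):
    return max(res_mii(op_count, unit_count), rec_mii)

# Example: 7 operations on 4 general-purpose units, no recurrences
print(mii({"fu": 7}, {"fu": 4}, 0))   # -> 2, as in the example of Figure 1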
Since the graph in Figure 1 has no recurrence circuits, its initiation interval is constrained only by the available resources: MII = ResMII = ⌈7/4⌉ = 2 (number of operations divided by number of resources). Notice that in the scheduling of Figure 2a no dependence is violated and every functional unit is used at most once at all even cycles (cycle modulo II = 0) and at most once at all odd cycles (cycle modulo II = 1).
The code corresponding to the kernel of the software pipelined loop is obtained by
overlapping the different stages that constitute the schedule of one iteration. This is shown
in Figure 2b. The subscripts in the code indicate relative iteration distance in the original
loop between operations. For instance, in this example, each iteration of the kernel executes
an instance of operation A and an instance of operation B of the previous iteration in the
initial loop.
Values used in a loop correspond either to loop-invariant variables or to loop-variant
variables. Loop-invariants are repeatedly used but never defined during loop execution.
Figure 2: (a) Software pipelined loop execution, (b) kernel, and (c) register requirements.
Loop-invariants have a single value for all the iterations of the loop and therefore they require one register each, regardless of the scheduling and the machine configuration.
For loop-variants, a value is generated in each iteration of the loop and, therefore,
there is a different value corresponding to each iteration. Because of the nature of software
pipelining, lifetimes of values defined in an iteration can overlap with lifetimes of
values defined in subsequent iterations. Figure 2a shows the lifetimes for the loop-variants
corresponding to every iteration of the loop. By overlapping the lifetimes of the different
iterations, a pattern of length II cycles that is indefinitely repeated is obtained. This
pattern is shown in Figure 2c. This pattern indicates the number of values that are live
at any given cycle. As shown in [24], the maximum number of simultaneously live values, MaxLive, is an accurate approximation of the number of registers required by the schedule. In this section, the register requirements of a given schedule will be approximated
by MaxLive. However, in the experiments section we will measure the actual register
requirements after register allocation.
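A small illustration (not from the paper) of how MaxLive can be computed from the value lifetimes of the flat schedule: count, for each cycle of the II-cycle steady-state pattern, how many lifetimes cover it, and take the maximum. The name max_live is ours.

def max_live(lifetimes, ii):
    # lifetimes: list of (start_cycle, end_cycle) pairs, end exclusive
    live = [0] * ii
    for start, end in lifetimes:
        for c in range(start, end):
            live[c % ii] += 1
    return max(live)

# Example: three values with lifetimes of 2, 3 and 5 cycles, II = 2
print(max_live([(0, 2), (2, 5), (4, 9)], 2))   # -> 6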
Values with a lifetime greater than II pose an additional difficulty since new values are
generated before previous ones are used. One approach to fix this problem is to provide
some form of register renaming so that successive definitions of a value use distinct registers.
Renaming can be performed at compile time by using modulo variable expansion [15], i.e.,
unrolling the kernel and renaming at compile time the multiple definitions of each variable that exist in the unrolled kernel. A rotating register file can be used to solve this problem without replicating code, by renaming different instantiations of a loop-variant at execution time [6]. (For an extensive discussion of the problem of allocating registers for software-pipelined loops, refer to [24]. The strategies presented in that paper almost always achieve the MaxLive lower bound; in particular, the wands-only strategy using end-fit with adjacency ordering never required more than MaxLive registers.)
2.2 Motivating example
In many modulo scheduling approaches, the lifetimes of some values can be unnecessarily
large. As an example, Figure 2a shows a top-down scheduling, and Figure 3a a bottom-up
scheduling for the example graph of Figure 1 and a machine with four general-purpose
functional units with a two-cycle latency.
In a top-down strategy, operations can only be scheduled if all their predecessors have
already been scheduled. Each node is placed as early as possible in order not to delay any
possible successors. Similarly, in a bottom-up strategy, an operation is ready for scheduling
if all its successors have already been scheduled. In this case, each node is placed as late
as possible in order not to delay possible predecessors. In both strategies, when there are
several candidates to be scheduled, the algorithm chooses the one that is more critical in
the scheduling.
In the top-down scheduling, node E is scheduled before node F. Since E has no predecessors
it can be placed at any cycle, but in order not to delay any possible successor, it is
placed as early as possible. Figure 2a shows the lifetimes of loop variants for the top-down
scheduling assuming that a value is alive from the beginning of the producer operation to
the beginning of the last consumer. Notice that loop variant VE has an unnecessarily large
lifetime due to the early placement of E during the scheduling.
In the bottom-up approach E is scheduled after F, therefore it is placed as late as
possible, reducing the lifetime of VE (Figure 3b). Unfortunately, C is scheduled before B and, in order not to delay any possible predecessor, it is scheduled as late as possible. Notice that VB has an unnecessarily large lifetime due to the late placement of C.
In HRMS, an operation will be ready for scheduling even if some of its predecessors
and successors have not been scheduled. The only condition (to be guaranteed by the
pre-ordering step) is that when an operation is scheduled, the partial schedule contains
only predecessors or successors or none of them, but not both of them (in the absence
of recurrences). The ordering is done with the aim that all operations have a previously
scheduled reference operation (except for the first operation to be scheduled). For instance,
consider that nodes of the graph in Figure 1 are scheduled in the order fA, B, C, D, F,
Gg. Notice that node F will be scheduled before nodes fE, Gg, a predecessor and a
successor respectively, and that the partial scheduling will contain only a predecessor (D)
of F. With this scheduling order, both C and E (the two conflicting operations in the top-down
and bottom-up strategies) have a reference operation already scheduled, when they
are placed in the partial schedule.
Figure 4a shows the HRMS scheduling for one iteration. Operation A will be scheduled
in cycle 0. Operation B, which depends on A, will be scheduled in cycle 2. Then C and later
D, are scheduled in cycle 4.
Figure 3: Bottom-up scheduling: (a) schedule of one iteration, (b) lifetimes of variables, (c) kernel, (d) register requirements.
At this point, operation F is scheduled as early as possible, i.e., at cycle 6 (because it depends on D), but there are no available resources at this cycle,
so it is delayed to cycle 7. Now the scheduler places operation E as late as possible in the
scheduling because there is a successor of E previously placed in the partial scheduling,
thus operation E is placed at cycle 5. And finally, since operation G has a predecessor
previously scheduled, it is placed as early as possible in the scheduling, i.e. at cycle 9.
Figure 4b shows the lifetimes of loop variants. Notice that neither C nor E have
been placed too late or too early because the scheduler always takes previously scheduled
operations as a reference point. Since F has been scheduled before E, the scheduler has a
reference operation to decide a late start for E. Figure 4d shows the number of live values
in the kernel (Figure 4c) during the steady state phase of the execution of the loop. There
are 6 live values in the first row and 5 in the second. In contrast, the top-down schedule has more simultaneously live values, and the bottom-up schedule has 9.
The following section describes the algorithm that orders the nodes before scheduling,
and the scheduling step.
3 Hypernode Reduction Modulo Scheduling
Figure 4: HRMS scheduling: (a) schedule of one iteration, (b) lifetimes of variables, (c) kernel, (d) register requirements.
The dependences of an innermost loop can be represented by a Dependence Graph G = (V, E, δ, λ). V is the set of vertices of the graph G, where each vertex v ∈ V represents an operation of the loop. E is the dependence edge set, where each edge (u, v) ∈ E represents a dependence between two operations u and v. Edges may correspond to any of the following types of dependences: register dependences, memory dependences, or control dependences. The dependence distance δ_(u,v) is a nonnegative integer associated with each edge (u, v) ∈ E. There is a dependence of distance δ_(u,v) between two nodes u and v if the execution of operation v depends on the execution of operation u performed δ_(u,v) iterations before. The latency λ_u is a nonzero positive integer associated with each node u ∈ V and is defined as the number of cycles taken by the corresponding operation to produce a result.
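A possible in-memory representation of such a dependence graph (an assumption for illustration, not the paper's data structure):

from dataclasses import dataclass, field

@dataclass
class DepGraph:
    latency: dict                                # node -> latency lambda_u (cycles)
    edges: dict = field(default_factory=dict)    # (u, v) -> distance delta_(u,v) (iterations)

    def add_dep(self, u, v, distance=0):
        self.edges[(u, v)] = distance

g = DepGraph(latency={"A": 2, "B": 2})
g.add_dep("A", "B", 0)   # B uses the value produced by A in the same iteration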
HRMS tries to minimize the register requirements of the loop by scheduling any operation u as close as possible to its relatives, i.e., the predecessors of u, Pred(u), and the successors of u, Succ(u). Scheduling operations in this way shortens operand lifetimes and therefore reduces the register requirements of the loop.
To software pipeline a loop, the scheduler must handle cyclic dependences caused by
recurrence circuits. The scheduling of the operations in a recurrence circuit must not be stretched beyond Ω × II, where Ω is the sum of the distances in the edges that constitute the recurrence circuit.
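This constraint is what the usual recurrence-constrained lower bound captures; the small illustration below (standard formula, not taken from the paper) computes RecMII for a set of recurrence circuits, with rec_mii as our own name.

from math import ceil

def rec_mii(circuits):
    # circuits: list of circuits, each a list of (latency, distance) pairs,
    # one pair per edge of the circuit
    return max(ceil(sum(l for l, _ in c) / sum(d for _, d in c)) for c in circuits)

# single circuit with latencies 2+2+1 and total distance Omega = 1
print(rec_mii([[(2, 0), (2, 0), (1, 1)]]))   # -> 5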
HRMS solves these problems by splitting the scheduling into two steps: a pre-ordering step that orders the nodes, and the actual scheduling step, which schedules the nodes (one at a time) in the order given by the pre-ordering step.
The pre-ordering step orders the nodes of the dependence graph with the goal of scheduling
the loop with an II as close as possible to MII and using the minimum number of reg-
isters. It gives priority to recurrence circuits in order not to stretch any recurrence circuit.
It also ensures that, when a node is scheduled, the current partial scheduling contains only
predecessors or successors of the node, but never both (unless the node is the last node of
a recurrence circuit to be scheduled).
The ordering step assumes that the dependence graph G = (V, E, δ, λ) is a connected component. If G is not a connected component, it is decomposed into a set of connected components {G_i}; each G_i is ordered separately and, finally, the lists of nodes of all G_i are concatenated, giving a higher priority to the G_i with a more restrictive recurrence circuit (in terms of RecMII).
Next the pre-ordering step is presented. First we will assume that the dependence graph has no recurrence circuits (Section 3.1), and in Section 3.2 we introduce modifications in order to deal with recurrence circuits. Finally, Section 3.3 presents the scheduling step.
Figure 5: Function Pre_Ordering(G, L, h), which pre-orders the nodes in a dependence graph without recurrence circuits. It returns a list with the nodes of G ordered, and takes as input the dependence graph (G), a list of partially ordered nodes (L), and an initial node, i.e., the hypernode (h).
3.1 Pre-ordering of graphs without recurrence circuits
To order the nodes of a graph, an initial node, that we call Hypernode, is selected. In an
iterative process, all the nodes in the dependence graph are reduced to this Hypernode.
The reduction of a set of nodes to the Hypernode consists of: deleting the set of edges
among the nodes of the set and the Hypernode, replacing the edges between the rest of
the nodes and the reduced set of nodes by edges between the rest of the nodes and the
Hypernode, and finally deleting the set of nodes being reduced.
The pre-ordering step (Figure 5) requires an initial Hypernode and a partial list of
ordered nodes. The current implementation selects the first node of the graph (i.e., the node corresponding to the first operation in the program order), but any node of the graph can be taken as the initial Hypernode. (Preliminary experiments showed that selecting different initial nodes produced different schedules that had approximately the same register requirements, with minor differences caused by resource constraints.) This node is inserted in the partial list of ordered
nodes, and then the pre-ordering algorithm sorts the rest of the nodes.
Figure 6: Function Hypernode_Reduction(V', G, h), which creates the subgraph G' formed by the nodes in V' and the edges among them, and reduces G' to the node h in the graph G.
At each step, the predecessors (successors) of the Hypernode are determined. Then the
nodes that appear in any path among the predecessors (successors) are obtained (function
Search All Paths). Once the predecessors (successors) and all the paths connecting them have been obtained, all these nodes are reduced to the Hypernode (see function Hypernode_Reduction in Figure 6), and the subgraph that contains them is topologically sorted.
The topological sort determines the partial order of predecessors (successors), which is
appended to the ordered list of nodes. The predecessors are topologically sorted using the
PALA algorithm. The PALA algorithm is like an ALAP (As Late As Possible) algorithm,
but the list of ordered nodes is inverted. The successors are topologically sorted using an
ASAP (As Soon As Possible) algorithm.
As an example, consider the dependence graph in Figure 7a. Next, we illustrate the
ordering of the nodes of this graph step by step.
1. Initially, the list of ordered nodes is empty (List = fg). We start by designating a
node of the graph as the Hypernode (H in Figure 7). Assume that A is the first node
of the graph. The resulting graph is shown in Figure 7b. Then A is appended to the list of ordered nodes (List = fAg).
2. In the next step the predecessors of H are selected. Since it has no predecessors, the
successors are selected (i.e. the node C). Node C is reduced to H, resulting in the
graph of Figure 7c, and C is added to the list of ordered nodes (List = fA; Cg).
3. The process is repeated, selecting nodes G and H. In the case of selecting multiple
nodes, there may be paths connecting these nodes. The algorithm looks for the
possible paths, and topologically sorts the nodes involved. Since there are no paths
connecting G and H, they are added to the list (List = fA; C; G; Hg), and reduced
to the Hypernode, resulting the graph of Figure 7d.
4. Now H has D as a predecessor, thus D is reduced, producing the graph in Figure 7e,
and appended to the list (List = fA; C; G; H;Dg).
5. Then J, the successor of H, is ordered (List = fA; C; G; H;D;Jg) and reduced,
producing the graph in Figure 7f.
6. At this point H has two predecessors B and I, and there is a path between B and
I that contains the node E. Therefore B, E, and I are reduced to H producing the
graph of Figure 7g. Then, the subgraph that contains B, E, and I is topologically
sorted, and the partially ordered list fI; E; Bg is appended to the list of ordered nodes. Then, we order this dependence graph as shown in Subsection 3.1.
Before presenting the ordering algorithm for recurrence circuits, let us put forward some
considerations about recurrences. Recurrence circuits can be classified as:
ffl Single recurrence circuits (Figure 8a).
Figure 7: Example of reordering without recurrences (panels (a)-(h)).
Figure 8: Types of recurrences (panels (a)-(d)).
ffl Recurrence circuits that share the same set of backward edges (Figure 8b). We call a set of recurrence circuits that share the same set of backward edges a recurrence subgraph. In this way, Figures 8a and 8b are recurrence subgraphs.
ffl Several recurrence circuits can share some of their nodes (Figures 8c and 8d) but
have distinct sets of backward edges. In this case we consider that these recurrence
circuits are different recurrence subgraphs.
All recurrence circuits are identified during the calculation of RecMII . For instance, the
recurrence circuits of the graph of Figure 8b are fA, D, Eg and fA, B, C, Eg. Recurrence
circuits are grouped into recurrence subgraphs (in the worst case there may be a recurrence
subgraph for each backward edge). For instance, the recurrence circuits of Figure 8b are
grouped into the recurrence subgraph fA, B, C, D, Eg. Recurrence subgraphs are ordered
based on the highest RecMII value of the recurrence circuits contained in each subgraph,
in a decreasing order. The nodes that appear in more than one subgraph are removed from
all of them excepting the most restrictive subgraph in terms of RecMII . For instance, the
list of recurrence subgraphs associated with Figure 8c, ffA, C, Dg, fB, C, Egg, will be simplified to the list ffA, C, Dg, fB, Egg.
Figure 9: Procedure Ordering_Recurrences(G, L, List, h), which orders the nodes in recurrence circuits. It takes the dependence graph (G) and the simplified list of recurrence subgraphs (L), and returns a partial list of ordered nodes (List) and the resulting hypernode (h). It uses the auxiliary function Generate_Subgraph(V, G), which takes the dependence graph (G) and a subset of nodes V and returns the graph that consists of all the nodes in V and the edges among them.
The algorithm that orders the nodes of a graph with recurrence circuits (see Figure 9)
takes as input a list L of the recurrence subgraphs ordered by decreasing values of their
RecMII . Each entry in this list is a list of the nodes traversed by the associated recurrence
subgraph. Trivial recurrence circuits, i.e. dependences from an operation to itself, do not
affect the preordering step since they do not impose scheduling constraints, as the scheduler
previously ensured that II ≥ RecMII. The algorithm starts by generating the corresponding subgraph for the first recurrence circuit, but without one of the backward edges that causes the recurrence (we remove the backward edge with the highest δ_(u,v)). Therefore the
resulting subgraph has no recurrences and can be ordered using the algorithm without
recurrences presented in Section 3.1. The whole subgraph is reduced to the Hypernode.
Then, all the nodes in any path between the Hypernode and the next recurrence subgraph
are identified (in order to properly use the algorithm Search All Paths it is required that all
the backward edges causing recurrences have been removed from the graph). After that,
the graph containing the Hypernode, the next recurrence circuit, and all the nodes that are
in paths that connect them are ordered applying the algorithm without recurrence circuits
and reduced to the Hypernode. If there is no path between the Hypernode and the next
recurrence circuit, any node of the recurrence circuit is reduced to the Hypernode, so that
the recurrence circuit is now connected to the Hypernode.
A
F
G
IH
KA
F
G
I
KH
G
I
a) b) c) d) e)
Figure
10: Example for Ordering Recurrences procedure
This process is repeated until there are no more recurrence subgraphs in the list. At this
point all the nodes in recurrence circuits or in paths connecting them have been ordered
and reduced to the Hypernode. Therefore the graph that contains the Hypernode and the
remaining nodes, is a graph without recurrence circuits, that can be ordered using the
algorithm presented in the previous subsection.
For instance, consider the dependence graph of Figure 10a. This graph has two recurrence
subgraphs fA, C, D, Fg and fG, J, Mg. Next, we will illustrate the reduction of the
recurrence subgraphs:
1. The subgraph fA, C, D, Fg is the one with the highest RecMII . Therefore the algorithm
starts by ordering it. By isolating this subgraph and removing the backward
edge we obtain the graph of Figure 10b. After ordering this graph the list of ordered
nodes is (List = fA; C; D;Fg). When the graph of Figure 10b is reduced to the
Hypernode H in the original graph (Figure 10a), we obtain the dependence graph of
Figure 10c.
2. The next step is to reduce the following recurrence subgraph fG, J, Mg. For this
purpose the algorithm searches for all the nodes that are in all possible paths between
H and the recurrence subgraphs. Then, the graph that contains these nodes is
constructed (see Figure 10d). Since backward edges have been removed, this graph
has no recurrence circuits, so it can be ordered using the algorithm presented in the
previous section. When the graph has been ordered, the list of nodes is appended to
the previous one resulting in the partial list (List = fA; C; D;F; I; G; J; Mg). Then,
this subgraph is reduced to the Hypernode in the graph of Figure 10c producing the
graph of Figure 10e.
3. At this point, we have a partial ordering of the nodes belonging to recurrences, and
the initial graph has been reduced to a graph without recurrence circuits (Figure 10e).
This graph without recurrence circuits is ordered as presented in Subsection 3.1. So
finally the list of ordered nodes is List = fA; C; D;F; I; G; J; M;H;E;B; L; Kg.
3.3 Scheduling step
The scheduling step places the operations in the order given by the ordering step. The
scheduling tries to schedule the operations as close as possible to the neighbors that have
already been scheduled. When an operation is to be scheduled, it is scheduled in different
ways depending on the neighbors of these operations that are in the partial schedule.
ffl If an operation u has only predecessors in the partial schedule, then u is scheduled
as early as possible. In this case the scheduler computes the Early Start of u as
Early_Start_u = max_{v ∈ PSP(u)} (t_v + λ_v − δ_(v,u) × II),
where t_v is the cycle where v has been scheduled, λ_v is the latency of v, δ_(v,u) is the dependence distance from v to u, and PSP(u) is the set of predecessors of u that have been previously scheduled. Then the scheduler scans the partial schedule for a free slot for the node u, starting at cycle Early_Start_u until the cycle Early_Start_u + II − 1. Notice that, due to the modulo constraint, it makes no sense to scan more than II cycles.
ffl If an operation u has only successors in the partial schedule, then u is scheduled as
late as possible. In this case the scheduler computes the Late Start of u as
Late_Start_u = min_{v ∈ PSS(u)} (t_v − λ_u + δ_(u,v) × II),
where PSS(u) is the set of successors of u that have been previously scheduled. Then the scheduler scans the partial schedule for a free slot for the node u, starting at cycle Late_Start_u down to the cycle Late_Start_u − II + 1.
ffl If an operation u has predecessors and successors, then the scheduler scans the partial
schedule starting at cycle Early_Start_u until the cycle min(Late_Start_u, Early_Start_u + II − 1).
ffl Finally, if an operation u has neither predecessors nor successors, the scheduler computes
the Early Start of u (there are no constraints from neighbors) and scans the partial schedule for a free slot for the node u from cycle Early_Start_u to cycle Early_Start_u + II − 1.
If no free slots are found for a node, then the II is increased by 1. The scheduling
step is repeated with the increased II , which will result in more opportunities for finding
slots. An advantage of HRMS is that the nodes are ordered only once, even if the
scheduling step has to do several trials.
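A compact sketch of the search-window computation described in this section (our own Python rendering, with sched, lat, and dist as assumed data structures, not the paper's code):

def early_start(u, psp, sched, lat, dist, ii):
    return max(sched[v] + lat[v] - dist[(v, u)] * ii for v in psp)

def late_start(u, pss, sched, lat, dist, ii):
    return min(sched[v] - lat[u] + dist[(u, v)] * ii for v in pss)

def search_window(u, psp, pss, sched, lat, dist, ii):
    # psp/pss: already-scheduled predecessors/successors of u
    if psp and pss:
        lo = early_start(u, psp, sched, lat, dist, ii)
        hi = min(late_start(u, pss, sched, lat, dist, ii), lo + ii - 1)
        return range(lo, hi + 1)
    if psp:
        lo = early_start(u, psp, sched, lat, dist, ii)
        return range(lo, lo + ii)            # lo .. lo + II - 1
    if pss:
        hi = late_start(u, pss, sched, lat, dist, ii)
        return range(hi, hi - ii, -1)        # hi .. hi - II + 1, scanned downwards
    return range(0, ii)                      # no scheduled neighbours

Each candidate cycle in the returned window is then checked against the modulo reservation table; if none is free, the II is incremented and the scheduling step is repeated without re-running the ordering.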
4 Evaluation of HRMS
In this section we present some results of our experimental study. First, the complexity and
performance of HRMS are evaluated for a benchmark suite composed of a large number of
Figure 11: Cumulative distribution of the register requirements of loop variants (HRMS vs. Top-down, for the L4 and L6 configurations).
innermost DO loops in the Perfect Club [4]. We have selected those loops that include a
single basic block. Loops with conditionals in their body have been previously converted to
single basic block loops using IF-conversion [2]. We have not included loops with subroutine
calls or with conditional exits. The dependence graphs have been obtained using the
experimental ICTINEO compiler [3]. A total of 1258 loops, which account for 78% of the total execution time of the Perfect Club (measured on an HP 9000/735 workstation), have been scheduled. For these loops, the
performance of HRMS is compared with the performance of a Top-Down scheduler. Second,
we compare HRMS with other scheduling methods proposed in the literature using a small
set of dependence graphs for which there are previously published results.
4.1 Performance evaluation of HRMS
We have used two machine configurations to evaluate the performance of HRMS. Both
configurations have 2 load/store units, 2 adders, 2 multipliers and 2 Div/Sqrt units. We
assume a unit latency for store instructions, a latency of 2 for loads, a latency of 4 (configuration L4) or 6 (configuration L6) for additions and multiplications, a latency of 17 for divisions, and a higher latency for square roots. All units are fully pipelined except the
Div/Sqrt units which are not pipelined at all.
In order to evaluate performance the execution time (in cycles) of a scheduled loop has
been estimated as the II of this loop times the number of iterations this loop performs (i.e.
the number of times the body of the loop is executed). For this purpose the programs of
the Perfect Club have been instrumented to obtain the number of iterations of the selected
loops.
HRMS achieved an II equal to the MII for at least 97.5% of the loops, which means that it is optimal in terms of II for those loops. On average, the scheduler achieved an II very close to the MII.
Figure 12: Memory traffic with infinite registers, 64 registers, and 32 registers (HRMS vs. Top-down, L4 and L6 configurations).
Figure 13: Cycles required to execute the loops with infinite registers, 64 registers, and 32 registers (HRMS vs. Top-down, L4 and L6 configurations).
Considering dynamic execution time, the scheduled loops would execute at 98.4% of the
maximum performance.
Register allocation has been performed using the wands-only strategy using end-fit with
adjacency ordering. For an extensive discussion of the problem of allocating registers for
software-pipelined loops refer to [24].
Figure 11 compares the register requirements of loop-variants for the two scheduling
techniques (Top-down that does not care about register requirements and HRMS) for the
two configurations mentioned above. This figure plots the percentage of loops that can be scheduled with a given number of registers without spill code. On average, HRMS
requires 87% of the registers required by the Top-down scheduler.
Since machines have a limited number of registers, it is also of interest to evaluate
the effect of the register requirements on performance and memory traffic. When a loop
requires more than the available number of registers, spill code has to be added and the
loop has to be re-scheduled. In [16] different alternatives and heuristics are proposed to
speed-up the generation of spill code. Among them, we have used the heuristic that spills
the variable that maximizes the quotient between lifetime and the number of additional
loads and stores required to spill the variable; this heuristic is the one that produces the
best results.
Figures 12 and 13 show the memory traffic and the execution time, respectively, of the loops scheduled with both schedulers when there are infinite, 64, and 32 registers available.
Notice that in general HRMS requires less memory traffic than Top-down when the number
of registers is limited. The difference in memory traffic requirements between both
schedulers increases as the number of available registers decreases. For instance, for configuration
L6, HRMS requires 88% of the traffic required by the Top-down scheduler if 64
registers are available. If only 32 registers are available, it requires 82.5% of the traffic
required by the Top-down scheduler.
In addition, assuming an ideal memory system, the loops scheduled by HRMS execute
faster than the ones scheduled by Top-down. This is because HRMS gives priority to
recurrence circuits, so in loops with recurrences it usually produces better results than Top-
down. An additional factor that increases the performance of HRMS over Top-down is that
it reduces the register requirements. For instance, for configuration L6, scheduling the loops
with HRMS produces a speed-up over Top-down of 1.18 under the ideal assumption that
an infinite register file is available. The speed-up is 1.20 if the register file has 64 registers
and 1.25 if it has only 32 registers.
Notice that for both schedulers, the most aggressive configuration (L6) requires more registers
than the L4 configuration. This is because the degree of pipelining of the functional
units has an important effect on the register pressure [19, 16]. The high register requirements
of aggressive configurations produce a significant degradation of performance and
memory traffic when a limited number of registers is available [16]. For instance, the loops
scheduled with HRMS require 6% more cycles to execute for configuration L6 than for L4,
if an infinite number of registers is assumed. If only 32 registers are available, L6 requires
16% more cycles than L4.
4.2 Complexity of HRMS
Scheduling our testbench consumed 55 seconds in a Sparc-10/40 workstation. This time
compares to the 69 seconds consumed by the Top-Down scheduler. The break-down of the
scheduler execution time in the different steps is shown in Figure 14. Notice that in HRMS,
computing the recurrence circuits consumed only 7%, the pre-ordering step consumed 66%,
and the scheduling step consumed 27%. Even though most of the time is spent in the preordering
step, the overall time is extremely short. The extra time lost in pre-ordering the
nodes allows for a very simple (and fast) scheduling step. In the Top-Down scheduler, the pre-ordering step consumed a small percentage of the time, but the scheduling step required a lot of time; when the scheduler fails to find a schedule with a given II , the
loop has to be rescheduled again with an increased initiation interval, and Top-Down has
to re-schedule the loops much more often than HRMS.
Figure 14: Time to schedule all 1258 loops for the HRMS and Top-Down schedulers (broken down into finding recurrences and computing the MII, the priority function, and the scheduling).
4.3 Comparison with other scheduling methods
In this section we compare HRMS with three schedulers: a heuristic method that does not
take into account register requirements (FRLC [27]), a life-time sensitive heuristic method
(Slack [12]) and a linear programming approach (SPILP [10]).
We have scheduled 24 dependence graphs for a machine with 1 FP Adder, 1 FP Multiplier, 1 FP Divider and 1 Load/Store unit. We have assumed a unit latency for add,
subtract and store instructions, a latency of 2 for multiply and load, and a latency of 17
for divide.
Table 1 compares the initiation interval II , the number of buffers (Buf) and the total
execution time of the scheduler on a Sparc-10/40 workstation, for the four scheduling
methods. The results for the other three methods have been obtained from [10] and the
dependence graphs to perform the comparison supplied by its authors. The number of
buffers required by a schedule is defined in [10] as the sum of the buffers required by each
value in the loop. A value requires as many buffers as the number of times the producer
instruction is issued before the issue of the last consumer. In addition, stores require one
buffer. In [20], it was shown that the buffer requirements provide a very tight upper bound
on the total register requirements.
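To make the buffer definition above concrete, the following small Python sketch shows how such a count might be computed for a modulo schedule; the function names and the exact rounding convention are ours, not necessarily those of [10], and t_producer and t_last_consumer denote issue cycles within the schedule.

    def value_buffers(t_producer, t_last_consumer, ii):
        # Number of times the producer (issued at t_producer, t_producer+ii, ...) is
        # issued strictly before the last consumer of the value is issued.
        if t_last_consumer <= t_producer:
            return 1  # at least the producing iteration itself
        return -(-(t_last_consumer - t_producer) // ii)  # ceiling division

    def schedule_buffers(values, store_count, ii):
        # values: list of (t_producer, t_last_consumer) pairs; each store adds one buffer.
        return sum(value_buffers(p, c, ii) for p, c in values) + store_count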
Table 1: Comparison of HRMS schedules with other scheduling methods. For each of the four methods (HRMS, SPILP, Slack, FRLC) the table reports, per dependence graph (e.g., Livermore Loop 5, Linpack, Whetstone Cycle 1), the initiation interval II, the number of buffers (Buf), and the scheduler execution time in seconds.

Table 2 summarizes the main conclusions of the comparison. The entries of the table
represent the number of loops for which the schedules obtained by HRMS are better (II <),
equal (II =), or worse (II >) than the schedules obtained by the other methods, in terms
of the initiation interval. When the initiation interval is the same, it also shows the number
of loops for which HRMS requires fewer buffers (Buf <), an equal number of buffers (Buf =),
or more buffers (Buf >). Notice that HRMS achieves the same performance as the SPILP
method both in terms of II and buffer requirements. When compared to the other methods,
HRMS obtains a lower II in about 33% of the loops. For the remaining 66% of the loops
the II is the same, but in many cases HRMS requires fewer buffers, especially when compared
with FRLC.
Finally, Table 3 compares the total compilation time in seconds for the four methods.
Notice that HRMS is slightly faster than the other two heuristic methods; in addition, these
methods perform noticeably worse in finding good schedules. On the other hand, the
linear programming method (SPILP) requires much more time to construct a schedule
that turns out to have the same performance as the schedule produced by HRMS. In
fact, most of the time spent by SPILP is due to Livermore Loop 23, but even without
taking this loop into account, HRMS is over 40 times faster.
Table 2: Comparison of HRMS performance versus the other 3 methods.

                      HRMS    SPILP     Slack    FRLC
    Compilation Time  0.32    290.72    0.93     0.71

Table 3: Comparison of HRMS compilation time (seconds) to the other 3 methods.
Conclusions
In this paper we have presented Hypernode Reduction Modulo Scheduling (HRMS), a novel
and effective heuristic technique for resource-constrained software pipelining. HRMS attempts
to optimize the initiation interval while reducing the register requirements of the
schedule.
HRMS works in three main steps: computation of MII , pre-ordering of the nodes of
the dependence graph using a priority function, and scheduling of the nodes following
this order. The ordering function ensures that when a node is scheduled, the partial
scheduling contains at least a reference node (a predecessor or a successor), except for the
particular case of recurrences. This tends to reduce the lifetime of loop variants and thus
reduce register requirements. In addition, the ordering function gives priority to recurrence
circuits in order not to penalize the initiation interval.
We provided an exhaustive evaluation of HRMS using 1258 loops from the Perfect Club
Benchmark Suite. We have seen that HRMS generates schedules that are optimal in terms
of II for at least 97.4% of the loops. Although the pre-ordering step consumes a high
percentage of the total compilation time, the total scheduling time is smaller than the time
required by a conventional Top-down scheduler. In addition, HRMS provides a significant
performance advantage over a Top-down scheduler when there is a limited number of
registers. This better performance comes from a reduction of the execution time and the
memory traffic (due to spill code) of the software pipelined execution.
We have also compared our proposal with three other methods: the SPILP integer
programming formulation, Slack Scheduling and FRLC Scheduling. Our schedules exhibit
significant improvement in performance in terms of initiation interval and buffer requirements
compared to FRLC, and a significant improvement in the initiation interval when
compared to the Slack lifetime-sensitive heuristic. We obtained similar results to SPILP, which
is an integer linear programming approach that obtains optimal solutions but has a prohibitive
compilation time for real loops.
--R
Software pipelining.
Conversion of control dependence to data dependence.
A uniform representation for high-level and instruction-level transformations
The Perfect Club benchmarks: Effective performance evaluation of supercomputers.
An approach to scientific array processing: The architectural design of the AP120B/FPS-164 family
Overlapped loop support in the Cydra 5.
Compiling for the Cydra 5.
Stage scheduling: A technique to reduce the register requirements of a modulo schedule.
Optimum modulo schedules for minimum register requirements.
Minimizing register requirements under resource-constrained software pipelining
Highly Concurrent Scalar Processing.
Circular scheduling: A new technique to perform software pipelining.
Software pipelining: An effective scheduling technique for VLIW machines.
A Systolic Array Optimizing Compiler.
Reducing the Impact of Register Pressure on Software Pipelined Loops.
Hypernode reduction modulo scheduling.
Register requirements of pipelined loops and their effect on performance.
Register requirements of pipelined processors.
A novel framework of register allocation for software pipelining.
Software pipelining in PA-RISC compilers.
Iterative modulo scheduling: An algorithm for software pipelining loops.
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing.
Register allocation for software pipelined loops.
Parallelisation of loops with exits on pipelined architectures.
Decomposed software pipelining: A new perspective and a new approach.
Enhanced modulo scheduling for loops with conditional branches.
Modulo scheduling with multiple initiation intervals.
--TR
--CTR
Spyridon Triantafyllis , Manish Vachharajani , Neil Vachharajani , David I. August, Compiler optimization-space exploration, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California
David López, Josep Llosa, Mateo Valero, Eduard Ayguadé, Widening resources: a cost-effective technique for aggressive ILP architectures, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.237-246, November 1998, Dallas, Texas, United States
David López, Josep Llosa, Mateo Valero, Eduard Ayguadé, Cost-Conscious Strategies to Increase Performance of Numerical Programs on Aggressive VLIW Architectures, IEEE Transactions on Computers, v.50 n.10, p.1033-1051, October 2001
Josep Llosa, Eduard Ayguadé, Antonio González, Mateo Valero, Jason Eckhardt, Lifetime-Sensitive Modulo Scheduling in a Production Environment, IEEE Transactions on Computers, v.50 n.3, p.234-249, March 2001 | software pipelining;register allocation;register spilling;loop scheduling;instruction scheduling |
279588 | Algorithms for Variable Length Subnet Address Assignment. | AbstractIn a computer network that consists of M subnetworks, the L-bit address of a machine consists of two parts: A prefix si that contains the address of the subnetwork to which the machine belongs, and a suffix (of length L$-$ |si|) containing the address of that particular machine within its subnetwork. In fixed-length subnetwork addressing, |si| is independent of i, whereas, in variable-length subnetwork addressing, |si| varies from one subnetwork to another. To avoid ambiguity when decoding addresses, there is a requirement that no si be a prefix of another sj. The practical problem is how to find a suitable set of sis in order to maximize the total number of addressable machines, when the ith subnetwork contains ni machines. Not all of the ni machines of a subnetwork i need be addressable in a solution: If $n_i > 2^{L-|s_i|},$ then only $2^{L-|s_i|}$ machines of that subnetwork are addressable (none is addressable if the solution assigns no address si to that subnetwork). The abstract problem implied by this formulation is: Given an integer L, and given M (not necessarily distinct) positive integers $n_1, \cdots, n_M,$ find M binary strings $s_1, \cdots, s_M$ (some of which may be empty) such that 1) no nonempty string si is prefix of another string sj, 2) no si is more than L bits long, and 3) the quantity $\sum \nolimits _{|s_k|\ne0} \min \left\{ n_k, 2^{L-|s_k|} \right\}$ is maximized. We generalize the algorithm to the case where each ni also has a priority pi associated with it and there is an additional constraint involving priorities: Some subnetworks are then more important than others and are treated preferentially when assigning addresses. The algorithms can be used to solve the case when L itself is a variable; that is, when the input no longer specifies L but, rather, gives a target integer for the number of addressable machines, and the goal is to find the smallest L whose corresponding optimal solution results in at least addressable machines. | Introduction
This introduction discusses the connection between computer networking and the abstract problems
for which algorithms are subsequently given. It also introduces some terminology.
In a computer network that consists of M subnetworks, the L-bit address of a machine consists
of two parts: A prefix that contains the address of the subnetwork to which the machine belongs,
and a suffix containing the address of that particular machine within its subnetwork. In the case
where the various subnetworks contain roughly the same number of machines, a fixed partition of
the L bits into a t-bit prefix, t = ⌈log M⌉, and an (L − t)-bit suffix, works well in practice: Each
subnetwork can then contain up to 2^{L−t} addressable machines; if it contains more, then only 2^{L−t}
of them will have an address and the remaining ones will be unsatisfied, in the sense that they will
have no address. If, in a fixed length partition scheme, some machines are unsatisfied, then the only
way to satisfy them is to increase the value of L. However, a fixed length scheme can be wasteful
if the M subnetworks consist of (or will eventually consist of) different numbers of machines, say,
n_i machines for the ith subnetwork. In such a case, the fixed scheme can leave many machines
unsatisfied (for that particular value of L) even though the variable length partition scheme that
we describe next could satisfy all of them without having to increase L.
In a variable partition scheme, the length of the prefix containing the subnetwork's address
varies from one subnetwork to another. In other words, if we let s i be the prefix that is the address
of the ith subnetwork, then we can now have |s_i| ≠ |s_j|. However, to avoid ambiguity (or having to
store and transmit |s_i|), there is a requirement that no s_i be a prefix of another s_j . Variable length
subnetwork addressing is easily shown to satisfy a larger total number of addressable machines than
the fixed length scheme: There are examples where fixed length subnetwork addressing cannot
satisfy all of the N = n_1 + · · · + n_M machines whereas variable length subnetwork addressing
can. Furthermore, we are also interested in the cases where even variable length addressing cannot
satisfy all of the N machines: In such cases we want to use the L bits available as effectively as
possible, i.e., in order to satisfy as many machines as possible. Of course an optimal solution might
then leave unsatisfied all the machines of, say, the ith subnetwork; this translates into s i being the
empty string, i.e., |s_i| = 0. A solution therefore consists of determining M binary strings s_1 , . . . , s_M
that maximize the sum Σ_{|s_k| ≠ 0} min{ n_k , 2^{L−|s_k|} }.
A solution completely satisfies the ith subnetwork if it satisfies all of the machines of that
subnetwork, i.e., if |s_i| ≠ 0 and n_i ≤ 2^{L−|s_i|} . If |s_i| = 0 then no machine of the ith subnetwork is
satisfied, and we then say that the ith network is completely unsatisfied. If the solution satisfies
some but not all the machines of the ith subnetwork, then that subnetwork is partially satisfied;
this happens when |s_i| ≠ 0 and 2^{L−|s_i|} < n_i , in which case only 2^{L−|s_i|} of the machines of that subnetwork
are satisfied. An optimal solution can leave some of the subnetworks completely satisfied, others
completely unsatisfied, and others partially satisfied.
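As a concrete illustration of these definitions, the following Python sketch (our own naming, not from the paper) checks that a candidate assignment is prefix-free and evaluates the objective Σ_{|s_k| ≠ 0} min{ n_k , 2^{L−|s_k|} }.

    def solution_value(L, n, s):
        # n: list of subnetwork sizes; s: list of binary prefix strings ('' = no address)
        nonempty = [x for x in s if x]
        for i, a in enumerate(nonempty):
            for b in nonempty[i + 1:]:
                if a.startswith(b) or b.startswith(a):
                    raise ValueError("one subnetwork address is a prefix of another")
        if any(len(x) > L for x in nonempty):
            raise ValueError("an address is longer than L bits")
        # each addressed subnetwork is satisfied completely if n_k <= 2^(L-|s_k|), partially otherwise
        return sum(min(nk, 2 ** (L - len(sk))) for nk, sk in zip(n, s) if sk)

For example, solution_value(3, [5, 3], ['0', '10']) returns 4 + 2 = 6: the first subnetwork is only partially satisfied, the second completely.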
The prioritized version of the problem models the situation where some subnetworks are more
important than others. We use the following priority policy.
Priority Policy: "The number of satisfied machines of a subnetwork is the same as if all lower-priority
subnetworks did not exist."
The next section proves some useful properties for a subset of the optimal solutions. We assume
the unprioritized case, and leave the prioritized case until the end of the paper.
Before proceeding with the technical details of our approach, we should stress that in the above
we have provided only enough background and motivation to make this paper self-contained. The
reader interested in more background than we provided can find, in references [11, 8, 9, 10, 6, 4, 12],
the specifications for standard subnet addressing, and other related topics. For a more general
discussion of hierarchical addressing, its benefits in large networks, and various lookup solution
methods (e.g., digital trees), see [7, 5]. Finally, what follows assumes the reader is familiar with
basic techniques and terminology from the text algorithms and data structures literature - we
refer the reader to, for example, the references [1, 2, 3].
Preliminaries
The following definitions and observations will be useful later on. We assume, without loss of
generality, that n_1 ≥ n_2 ≥ · · · ≥ n_M . Since the case when n_1 ≥ 2^L admits a trivial solution (2^L machines
are satisfied, all from subnetwork 1), from now on we assume that n_1 < 2^L . All
logarithms are to the base 2.
Lemma 1 Let S be any solution (not necessarily optimal). Then there exists a solution S 0 that
satisfies the same number of machines as S, uses the same set of subnetwork addresses as S, and
in which the completely unsatisfied subnetworks (if there are any) are those that have the k lowest
n_i values, for some integer k. In other words, |s_{M−k+1}| = · · · = |s_M| = 0.
Proof: Among all such solutions that satisfy the same number of machines as S, consider one that
has the smallest number of offending pairs, defined as pairs (i, j) such that n_i > n_j , i is completely
unsatisfied, and j is not completely unsatisfied. We claim that the number of such pairs is zero:
Otherwise interchanging the roles of subnetworks i and j in that solution does not decrease the
total number of satisfied machines, a contradiction since the resulting solution has at least one
fewer offending pair. 2
On the other hand, there does not necessarily exist an S 0 of equal value to S and in which all
of the (say, k) completely satisfied subnetworks are those that have the k highest n_i values. If,
in the optimal solution we seek, we go through the selected subnetworks by decreasing n i values,
then we initially encounter a mixture of completely satisfied and partially satisfied subnetworks,
but once we get to a completely unsatisfied one then (by the above lemma) all the remaining ones
are completely unsatisfied.
Lemma 2 Let S be any solution (not necessarily optimal). There exists a solution S 0 that satisfies
as many machines as S, uses the same set of subnetwork addresses as S, and is such that n_i > n_j implies |s_i| ≤ |s_j| .
Proof: Among all such solutions that satisfy the same number of machines as S, consider one which
has the smallest number of offending pairs, defined as pairs i, j such that n_i > n_j and |s_i| > |s_j| .
We claim that the number of such pairs is zero: Otherwise interchanging the roles of
subnetworks i and j in that solution does not decrease the total number of satisfied machines, a
contradiction since the resulting solution has at least one fewer offending pair. 2
Let T be a full binary tree of height L, i.e., T has 2^L leaves and 2^{L+1} − 1 nodes. For any
solution S, one can map each nonempty s i to a node of T in the obvious way: The node v i of T
corresponding to subnetwork i is obtained by starting at the root of T and going down as dictated
by the bits of the string s i (where a 0 means "go to the left child" and a 1 means "go to the right
child"). Note that the depth of v i in T (its distance from the root) is js i j, and that no v i is ancestor
of another v j in T (because of the requirement that no nonempty s i is a prefix of another s j ). For
any node w in T , we use parent(w) to denote the parent of w in T , and we use l(w) to denote the
number of leaves of T that are in the subtree of w; hence l(v_i) = 2^{L−|s_i|} . A solution
completely satisfies subnetwork i iff n_i ≤ l(v_i), in which case we can extend our terminology by
saying that "node v i is completely satisfied by S" rather than the more accurate "the subnetwork
i corresponding to node v i is completely satisfied by S."
Lemma 3 Let S = {v_1 , . . . , v_k} be any solution that satisfies Lemmas 1 and 2. Then there is a
solution S' = {v'_1 , . . . , v'_k} that satisfies as many machines as S, has each v'_i at the same depth as v_i ,
and is such that i < j implies that v'_i has smaller preorder number in T than v'_j (which is equivalent
to saying that s'_i is lexicographically smaller than s'_j ).
Proof: S' can be obtained from S by a sequence of "interchanges" of various subtrees of T , as
follows. Let T' be initially a copy of T , and repeat the following for i = 1, 2, . . . , k:
1. Perform an "interchange" in T' of the subtree rooted at node v_i with the subtree rooted at
the leftmost node of T' having the same depth as v_i ; v'_i is simply the new position occupied by
v_i after the "interchange".
2. Delete from T' the subtree rooted at v'_i .
Performing in T the interchanges done on T' gives a new T where the v'_i 's have the desired property.
The "interchange" operations used to prove the above lemma will not be actually performed by
our algorithm - their only use is for the proof of the lemma.
Lemma 4 Let S be any solution (not necessarily optimal) that satisfies the properties of Lemmas 1-
3. There exists a solution S 0 that satisfies as many machines as S, that also satisfies the properties of
Lemmas 1-3, and is such that any v_i that is not the root of T has l(parent(v_i )) > n_i . Furthermore,
the nonempty s_i 's of such an S' are a subset of the nonempty s_i 's of S.
Proof: Among all solutions that satisfy the same number of machines as S, let S' be
one that maximizes the integer i for which all of v_1 , . . . , v_i satisfy the lemma's property,
i.e., they have l(parent(v_j )) > n_j . We claim that such an S'
already satisfies the lemma. Suppose to the contrary that i < k, i.e., that l(parent(v_{i+1} )) ≤ n_{i+1} . Then v_{i+1}
cannot be completely satisfied since that would imply that l(v_{i+1} ) ≥ n_{i+1} and hence l(parent(v_{i+1} )) = 2 l(v_{i+1} ) > n_{i+1} .
Hence v_{i+1} is only partially satisfied, i.e., l(v_{i+1} ) < n_{i+1} . Let
z be the parent of v_{i+1} and y be the sibling of v_{i+1} in T ; y must be to the right of v_{i+1} since
otherwise v_i is at y and v_i too has l(parent(v_i )) ≤ n_i , which contradicts the definition of i. Also
note that the fact that l(z) ≤ n_{i+1} implies that n_{i+1} − l(v_{i+1} ) ≥ l(y), i.e., the number of unsatisfied
machines in subnetwork i + 1 is at least l(y). Modify S' by promoting v_{i+1} , by "moving it to its parent",
one level up the tree T , thus (i) replacing the old s i+1 by a new (shorter) one obtained by dropping
the rightmost bit of the old s i+1 , and (ii) deleting from S 0 all of the s j that now have the new
s i+1 as a prefix. Note that, for each s j so removed, its corresponding v j was in the subtree of y,
hence the removal of these s j 's results in at most l(y) machines becoming unsatisfied, but that is
compensated for by l(y) machines of subnetwork that have become newly satisfied as a result
of v i+1 's promotion, implying that the new solution S 00 has value that is no less than that of S 0 .
However, a v j so deleted from the subtree of y can cause S 00 to no longer satisfy the property of
Lemma 1 because of a surviving v t to the right of z having an n t ! n j . We next describe how to
modify S 00 so it does satisfy Lemma 1. In the rest of the proof S 0 refers to the solution we started
with, before v i+1 was moved up by one level, and S 00 refers to the solution after v i+1 was moved.
Let (v denote the set of the deleted v j 's (who were in y's
subtree in the original S 0 but are not in S 00 ). If are in S 00 and are
to the right of z, hence we need to "repair" S 00 to restore the property of Lemma 1 (if on the other
then no such repair is needed). This is done as follows. Simultaneously for each
of the elements of the sequence (v do the following: In the tree T , place the element
considered (say, v j ) at the place previously (in the original S 0 ) occupied by v j+l+1 (if
then that v j cannot be placed and the new solution leaves completely unsatisfied). The S 00 so
modified satisfies the same number of machines as the original one, still satisfies Lemmas 1-3, but
has "moved" v_{i+1} one level up the tree T . This can be repeated until v_{i+1} is high enough that l(parent(v_{i+1} )) > n_{i+1} ,
but that is a contradiction to the definition of integer i. Hence it must be
the case that S' has i = k, i.e., S' satisfies the lemma. 2
Lemma 5 There exists an optimal solution S that satisfies the properties of Lemma 4 and in which
every subnetwork i with nonempty s_i has an s_i of length equal to either L − ⌈log n_i⌉ or L − ⌈log n_i⌉ + 1.
Proof: Let S be an optimal solution satisfying Lemma 4. First, we claim that there is such an
S in which every s_i satisfies |s_i| ≥ L − ⌈log n_i⌉. Suppose to the contrary that, in S, some s_i has
length less than L − ⌈log n_i⌉. Then moving v_i from its current position, say node y in T , to a
descendant of y whose depth equals L − ⌈log n_i⌉, would leave subnetwork i completely satisfied
without affecting the other subnetworks. Repeating this for all i gives a solution in which every
s_i has length ≥ L − ⌈log n_i⌉. Of course moving a v_i down to (say) y's left subtree leaves a ``hole''
in y's right subtree in the sense that the right subtree of y is unutilized in the new solution.
The resulting S might have many such unutilized subtrees of T : It is easy to "move them to the
right" so that they all lie to the right of the utilized subtrees of T (the details are easy and are
omitted). Hence we can assume that S is such that |s_i| ≥ L − ⌈log n_i⌉ for every i. (Note that the above does
not introduce any violation of the properties of Lemma 4.)
To complete the proof we must show that js implies that
Taking logarithms on both sides gives:
which completes the proof. 2
The observations we made so far are enough to easily solve in O(M log M) time the following
(easier) version of the problem: Either completely satisfy all M subnetworks, or report that it is
not possible to do so. It clearly suffices to find a v i in T for each subnetwork i (since the v i 's
uniquely determine the s i 's). This is done in O(M log M) time by the following greedy algorithm,
which operates on only that portion of T that is above the v i 's:
1. Sort the n_i 's in decreasing order, say n_1 ≥ n_2 ≥ · · · ≥ n_M . Time: O(M log M) (the log M factor
goes away if the n_i 's can be sorted in linear time, e.g., if they are integers smaller than M^{O(1)} ).
2. For each n_i , compute the depth d_i of v_i in T : d_i = L − ⌈log n_i⌉. Time: O(M).
3. Repeat the following for i = 1, . . . , M : position v_i on the leftmost node of T that is at depth
d_i and has none of v_1 , . . . , v_{i−1} as an ancestor (if no such node exists then stop and output "No
Solution Exists"). Time: O(M) by implementing this step as a construction and (simultane-
ously) preorder traversal of the relevant portion of T — call it T' : we start at the root
and stop at the first preorder node of depth d_1 , label it v_1 and consider it a leaf of T' , then
resume until the preorder traversal reaches another node of depth d_2 , which is labeled v_2 and
considered to be another leaf of T' , etc. Note that in the end the leaves of T' are the v_i 's in
left to right order.
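A compact Python sketch of this greedy algorithm follows; the coding details are ours. It exploits the fact that, because the n_i 's are processed in decreasing order, the blocks of 2^{⌈log n_i⌉} leaves can simply be packed left to right (alignment is automatic since the block sizes are non-increasing powers of two). The function returns the prefixes in sorted-size order, or None in the "No Solution Exists" case.

    def greedy_assign(L, n):
        sizes = sorted(n, reverse=True)
        prefixes, next_leaf = [], 0
        for ni in sizes:
            depth = L - (ni - 1).bit_length()       # d_i = L - ceil(log2 n_i)
            if depth < 0:
                return None
            block = 1 << (L - depth)                # leaves consumed by this subnetwork
            if next_leaf + block > (1 << L):
                return None                         # "No Solution Exists"
            # next_leaf is a multiple of block, so the block starting here is exactly
            # the set of leaves under one node of depth `depth`
            prefixes.append(format(next_leaf >> (L - depth), 'b').zfill(depth) if depth else '')
            next_leaf += block
        return prefixes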
Theorem 1 Algorithm greedy solves the problem of finding an assignment of addresses that completely
satisfies all subnetworks when such an assignment exists. Its time complexity is O(M) if the
n_i 's are given in sorted order, and O(M log M) if it has to sort the n_i 's.
Proof: The time complexity was argued in the exposition of the algorithm. Correctness of the
algorithm follows immediately from Lemmas 1-5. 2
Theorem 2 An assignment that completely satisfies all subnetworks exists if and only if 2^L ≥ Σ_{i=1}^{M} 2^{⌈log n_i⌉} .
Proof: Observe that algorithm greedy succeeds in satisfying all subnetworks if and only if the
inequality is satisfied. 2
Corollary 1 Whether there is an assignment that completely satisfies all subnetworks can be determined
in O(M) time, even if the n i 's are not given in sorted order.
Proof: The right-hand side of the inequality in the previous theorem can be computed in O(M)
time. 2
Would the greedy algorithm solve the problem of satisfying the largest number of machines
when it cannot satisfy all of them? That is, when it cannot assign a v i to a node (in Step 3),
instead of saying "No Solution Exists", can it accurately claim that the solution produced so far is
optimal? The answer is no, as can be seen from a simple example (in that
example the greedy algorithm satisfies 5 machines whereas it is possible to satisfy
7 machines). However, the following holds.
Observation 1 The solution returned by the greedy algorithm satisfies a number of machines that
is no less than half the number satisfied by an optimal solution.
be the number of subnetworks completely satisfied by greedy. Observe that
since if we had would have put v i at a greater depth than its
current position. Therefore an optimal solution could, compared to greedy, satisfy no more than
an additional
machines, which is less than
the number satisfied by greedy. However, we need not resort to approximating an optimal solution, since the next section will
give an algorithm for finding an optimal solution.
3 Algorithm for the Unprioritized Case
We assume throughout this section that the greedy algorithm described earlier has failed to satisfy
all the machines. The goal then is to satisfy as many machines as possible.
We call level ℓ the 2^ℓ nodes of T whose depth (distance from the root) is ℓ. We number the
nodes of level ℓ as follows: (ℓ, 1), (ℓ, 2), . . . , (ℓ, 2^ℓ), where (ℓ, k) is the kth leftmost node of level ℓ.
Lemma 5 says that v_i is either at a depth of d_i or of d_i + 1. This limits
the number of choices for where to place v_i to 2^{d_i} choices at depth d_i , and 2^{d_i +1} choices at depth
d_i + 1. For every i, 1 ≤ i ≤ M , and every j, 1 ≤ j ≤ 2^{d_i} , define C(i, j) to be the maximum
number of machines of subnetworks 1, . . . , i that can be satisfied by using only the portion of T
having preorder numbers ≤ the preorder number of (d_i , j), and subject to the constraint that v_i
is placed at node (d_i , j). C'(i, j) is defined analogously but with (d_i + 1, j) playing the role that
(d_i , j) played in the definition of C(i, j). The C(i, j)'s and C'(i, j)'s will play an important role
in the algorithm: Clearly, if we had these quantities for all i and j, then we could easily obtain
the number of machines satisfied by an optimal solution, simply by choosing the maximum among
them: max_{i,j} max{ C(i, j), C'(i, j) }.
Another notion used by the algorithm is that of the ℓ-predecessor of a node v of T , where ℓ is
an integer no greater than v's depth: It is the node of T at level ℓ that is immediately to the left of
the ancestor of v at level ℓ (if no such node exists then v has no ℓ-predecessor). In other words, if
w is the ancestor of v at level ℓ (possibly w = v itself), then the ℓ-predecessor of v is the rightmost node
to the left of w at level ℓ. The algorithms will implicitly make use of the fact that the ℓ-predecessor
of a given node v can be obtained in constant time: If v is represented as a pair (a, b) where a is
v's depth and b is the left-to-right rank of v at that depth (i.e., v is the bth leftmost node at depth
a), then the ℓ-predecessor of (a, b) is (ℓ, c) where c = ⌈b/2^{a−ℓ}⌉ − 1 (undefined when c = 0).
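Under this representation the ℓ-predecessor can be computed with one shift; the closed form in the sketch below is our reconstruction of the constant-time rule (the original formula did not survive extraction).

    def pred(l, a, b):
        # l-predecessor of node (a, b); (-1, -1) when it does not exist (l > a, or nothing to the left)
        if l > a:
            return (-1, -1)
        ancestor = (b + (1 << (a - l)) - 1) >> (a - l)   # index of (a, b)'s ancestor at level l
        return (l, ancestor - 1) if ancestor > 1 else (-1, -1)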
The following algorithm preliminary will later be modified into a better algorithm. The input
to the algorithm is L and the n i 's. The output is a placement of the v i 's in T ; recall that this is
equivalent to computing the s_i 's because the s_i 's can easily be obtained from the v_i 's (in fact each s_i can
be obtained from v_i in constant time, as will be pointed out later). We assume that a preprocessing
step has already computed the d_i 's. We use pred(ℓ, v) or pred(ℓ, a, b) interchangeably, to denote
the ℓ-predecessor of a node v = (a, b), with the convention that pred(ℓ, a, b) is (−1, −1) when it is
undefined, i.e., when ℓ > a or (a, b) has no ℓ-predecessor.
1. For to M in turn, do the following:
(a) For
with the convention that are 0.
be the node of T that "gives C(i; b) its value" in the above maximization,
that is, f(i; b) is
pred(d
pred(d
(b) For
with the convention that are 0.
be the node of T that "gives C 0 (i; b) its value" in the above maximization,
that is, f
pred(d
pred(d
2. Find the largest, over all i and b, of the C(i; b)'s and C 0 (i; b)'s computed in the previous
step: Suppose it is C(k; b) (respectively, C 0 (k; b)). Then C(k; b) (respectively, C 0 (k; b)) is the
maximum possible number of machines that are satisfied by an optimal solution v
To generate a set of assignments that correspond to that optimal solution (rather than just its
value), we use the f and f 0 functions obtained in the previous step: Starting at node (d
(respectively, we "trace back" from there, and output the nodes of the optimal
solution as we go along (in the order v k ; v The details of this "tracing back" are
as follows:
(a) k. If the largest of the C(i; b)'s and C 0 (i; b)'s computed in the previous step was
Then repeat the following until
(b) Output "v equal to either f(i; fi) (in case or to f 0 (i; fi)
(in case
Note. To output the string s_i corresponding to a v_i node, rather than the pair (a, b)
describing that v_i , we modify the above Step 2(b) as follows: If v_i =
(a, b) then s_i is the binary string consisting of the rightmost a digits in the binary
representation of the integer 2^a + b − 1. (Note that 2^a + b − 1 is the breadth-first number
of the node (a, b), and that an empty string corresponds to the root since 2^0 + 1 − 1 = 1.)
This implies that s_i can be computed from the pair (a, b) in constant time.
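The conversion in the note above can be written directly, as in this small Python sketch (naming is ours):

    def node_to_prefix(a, b):
        # s_i for node (a, b): the rightmost a bits of the breadth-first number 2^a + b - 1
        return format((1 << a) + b - 1, 'b')[-a:] if a > 0 else ''

For example, node_to_prefix(2, 3) returns '10', the address of the third leftmost node at depth 2, and node_to_prefix(0, 1) returns the empty string for the root.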
Correctness of the above algorithm preliminary follows from Lemmas 1 - 5.
The time complexity of preliminary is unsatisfactory because it can depend on the size of T as
well as M , making the worst case take O(M 2^L ) time. However, the following simple modification
results in an O(M^2 )-time algorithm: In Steps 1(a) and (respectively) 1(b), replace "For b = 1, 2, . . ."
by "For b = max{1, 2^{d_i} − M}, . . ." (respectively "For b = max{1, 2^{d_i +1} − M}, . . ."), so that only the
rightmost values of b at each level are considered (the upper iteration
bounds for b remain unchanged, at 2^{d_i} for 1(a) and 2^{d_i +1} for 1(b)). Before arguing the correctness
of this modified algorithm, we observe that its time complexity is O(M^2 ), since we now iterate
over only O(M^2 ) distinct (i, b) pairs. (Note: The relevant C(i, b)'s need not be explicitly
initialized, they can implicitly be assumed to be zero initially; this works because of the particular
order in which Step 1 computes them.) Correctness follows from the claim (to be proved next) that
there is an optimal solution that, of the 2^a nodes of any level a, does not use any node (a, b) with
b < 2^a − M . Let S be an optimal solution that has the smallest possible number
(call it t) of violations of the claim, i.e., the smallest number of nodes (a, b) where b < 2^a − M and
some v_i is at (a, b). We prove that t = 0 by contradiction: Suppose that t > 0, and let a be the
smallest depth at which the claim is violated. Let (a, b) be a node of level a that violates the claim,
i.e., b < 2^a − M and some v_i is placed at (a, b) by optimal solution S. Since there are more than M
nodes to the right of v_i at level a, the value of S would surely not decrease if we were to modify S
by re-positioning all of v_i , v_{i+1} , . . . , v_M in the subtrees of the rightmost M nodes of level a
(without changing their depth). Such a modification, however, would decrease t, contradicting the
definition of S. Hence t must be zero, and the claim holds.
The following summarizes the result of this section.
Theorem 3 The unprioritized case can be solved in O(M 2 ) time.
4 Algorithm for the Prioritized Case
Let the priorities be p_{k_1} ≥ p_{k_2} ≥ · · · ≥ p_{k_M} , where p_{k_i} is the priority of subnetwork k_i . In the rest of
this section we assume that L is not large enough to completely satisfy all of the M subnetworks
(because in the other case, where L is large enough, the priorities do not play a role and Theorem 1 applies).
Use greedy (or, alternatively, Corollary 1) in a binary search for the largest i (call it ī) such
that the subnetworks k_1 , . . . , k_i can be completely satisfied; each "comparison" in the binary search
corresponds to a call to greedy (or, alternatively, to Corollary 1) — of course it ignores the priorities
of the subnetworks k_{i+1} , . . . , k_M . This takes total time O(M log M) even though we may use greedy
a logarithmic number of times, because we sort by decreasing n_j 's only once, which makes each
subsequent execution of greedy cost O(M) time rather than O(M log M). Let S be the solution,
returned by greedy, in which all of subnetworks k_1 , . . . , k_ī are completely satisfied. By the definition
of ī, it is impossible to completely satisfy all of subnetworks k_1 , . . . , k_{ī+1} . Our task is to modify S
so as to satisfy as many of the machines of subnetworks k_{ī+1} , . . . , k_M as possible without violating
the priority policy (hence keeping subnetworks k_1 , . . . , k_ī completely satisfied).
This is done as follows:
1. set the depth of each k i , 1, to be dlog n k i e.
2. Use greedy log log n k j times to binary search for the smallest depth (call it d) at which v k j
can be placed without resulting in the infeasibility (as tested by greedy) of (i) placing all
of subnetworks k at their previously fixed depths and (ii) placing k j at depth d
(there are log n k j possible values for d, which implies the log log n k j iterations of the binary
search). If no such d exists (i.e., if any placement of k j prevents the required placement of
proceed to Step 3. If the binary search finds such a d then fix the depth of
v j to be d (it stays d in all future iterations), set 2.
3. The solution is described by the current depths of k These fixed depths are then
used by a preorder traversal of (part of) T to position v k 1
in T .
That the above algorithm respects the priority policy follows from the way we fix the depth of
each v_{k_j} : Subnetworks of lower priority do not interfere with it (because they are considered
later in the iteration). The time complexity is easily seen to be O(M^2 log L), since n_{k_j} ≤ 2^L .
The following summarizes the result of this section.
Theorem 4 The prioritized case can be solved in O(M 2 log L) time.
5 Further Remarks
What if L itself is a variable? That is, consider the situation where instead of specifying L the
input specifies a target integer γ for the number of addressable machines; the goal is then to find
the smallest L that is capable of satisfying at least γ machines. The algorithms we gave earlier (and
that assume a fixed L) can be used as subroutines in a "forward" binary search for the optimal
(i.e., smallest) value of L (call it L̄) that satisfies at least γ machines: We can use them log L̄ times
in a "forward" binary search for L̄. So it looks like there is an extra multiplicative log L̄ factor
if L is itself a variable that we seek to minimize, as opposed to the version of the problem that
fixes L ahead of time. However, Theorem 2 implies that there is no such log L̄ factor
in the important case where we seek the smallest L that satisfies all
the machines: This version of the problem can be solved just as fast as the one where L is fixed
and we seek to check whether it can completely satisfy all M subnetworks.
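In that special case the smallest such L can even be read off the feasibility condition of Theorem 2 directly, as in the following Python sketch (which assumes the inequality as reconstructed above):

    def smallest_L_all_satisfied(n):
        # smallest L with 2^L >= sum over i of 2^ceil(log2 n_i)
        need = sum(1 << (ni - 1).bit_length() for ni in n)
        return (need - 1).bit_length()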
Acknowledgement
. The authors are grateful to three anonymous referees for their helpful comments
on an earlier version of this paper.
--R
Combinatorial Algorithms on Words
Introduction to Algorithms
Algorithms
"Class A Subnet Experiment"
"Parallel searching techniques for routing table lookup,"
"Class A Subnet Experiment Results and Recommendations"
"Fast routing table lookup using CAMs,"
"Internet standard subnetting procedure"
"Broadcasting Internet datagrams in the presence of subnets"
"Internet subnets"
"Variable Length Subnet Table For IPv4"
"On the Assignment of Subnet Numbers"
--TR | prefix codes;addressing;algorithms;computer networks |
279589 | Analysis of Cache-Related Preemption Delay in Fixed-Priority Preemptive Scheduling. | AbstractWe propose a technique for analyzing cache-related preemption delays of tasks that cause unpredictable variation in task execution time in the context of fixed-priority preemptive scheduling. The proposed technique consists of two steps. The first step performs a per-task analysis to estimate cache-related preemption cost for each execution point in a given task. The second step computes the worst case response time of each task that includes the cache-related preemption delay using a response time equation and a linear programming technique. This step takes as its input the preemption cost information of tasks obtained in the first step. This paper also compares the proposed approach with previous approaches. The results show that the proposed approach gives a prediction of the worst case cache-related preemption delay that is up to 60 percent tighter than those obtained from the previous approaches. | INTRODUCTION
In real-time systems, tasks have timing constraints that must be satisfied for correct op-
eration. To guarantee such timing constraints, extensive studies have been performed on
schedulability analysis [1, 2, 3, 4, 5, 6]. They, in many cases, make a number of assumptions
to simplify the analysis. One such simplifying assumption is that the cost of task preemption
is zero. In real systems, however, task preemption incurs additional costs to process
interrupts [7, 8, 9, 10], to manipulate task queues [7, 8, 10], and to actually perform context
switches [8, 10]. Many of such direct costs are addressed in a number of recent studies that
focus on practical issues related to task scheduling [7, 8, 9, 10].
However, in addition to the direct costs, task preemption introduces indirect costs due to
cache memory, which is used in almost all computing systems today. In computing systems
with cache memory, when a task is preempted, a large number of memory blocks 1 belonging
to the task are displaced from the cache memory between the time the task is preempted
and the time the task resumes execution. When the preempted task resumes its execution,
it spends a substantial amount of time to reload the cache with the previously displaced
memory blocks. Such cache reloading greatly increases the task execution time, which may
invalidate the result of schedulability analysis that overlooks the cache-related preemption
costs.
To rectify this problem, recent studies addressed the issue of incorporating cache-related
preemption costs into schedulability analysis [12, 13]. These studies assume that each cache
block used by a preempting task replaces from the cache a memory block that is needed by
a preempted task. This pessimistic assumption leads to a loose estimation of cache-related
preemption delay since the replaced memory block may not be useful to any preempted
task. For example, it is possible that the replaced memory block is one that is no longer
1 A block is the minimum unit of information that can be either present or not present in the cache-main
memory hierarchy [11]. We assume without loss of generality that memory references are made in the unit
of blocks.
needed or one that will be replaced without being re-referenced even when there were no
preemptions.
In this paper, we propose a schedulability analysis technique that considers the usefulness
of cache blocks in computing cache-related preemption delay. The goal is to reduce the
prediction inaccuracy resulting from the above pessimistic assumption. The proposed technique
consists of two steps. In the first step, we perform a per-task analysis to compute
the number of useful cache blocks at each execution point in a given task, where a useful
cache block at an execution point is defined as a cache block that contains a memory block
that may be re-referenced before being replaced by another memory block. The number of
useful cache blocks at an execution point gives an upper bound on cache-related preemption
cost that is incurred when the task is preempted at that point. The results of this
per-task analysis are given in a table that specifies the (worst case) preemption cost for a
given number of preemptions of the task. From this table, the second step derives the worst
case response times of tasks using a linear programming technique [14] and the worst case
response time equation [2, 6].
This paper is organized as follows: In Section II, we survey the related work. Section III
describes our overall approach to schedulability analysis that considers cache-related pre-emption
cost. Sections IV and V detail the two steps of the proposed schedulability analysis
technique focusing on direct-mapped instruction cache memory. Section VI presents the
results of our experiments to assess the effectiveness of the proposed approach. Section VII
describes extensions of the proposed technique to set-associative cache memory and also to
data cache memory. Finally, we conclude this paper in Section VIII.
II. RELATED WORK
A. Schedulability Analysis in Fixed-priority Scheduling
A large number of schedulability analysis techniques have been proposed within the fixed-priority
scheduling framework [2, 3, 4, 6]. Liu and Layland [4] show that the rate monotonic
priority assignment where a task with a shorter period is given a higher priority is optimal
when task deadlines are equal to their periods. They also give the following sufficient
condition for schedulability for a task set consisting of n periodic tasks
where C i is a worst case execution time (WCET) estimate of - i and T i is its period 2 . This
condition states that if the total utilization of the task set (i.e.,
) is lower than the
given utilization bound (i.e., n (2 1=n \Gamma 1)), the task set is guaranteed to be schedulable
under the rate monotonic priority assignment. Later, Lehoczky et al. develop a necessary
and sufficient condition for schedulability based on utilization bounds [3].
Another approach to schedulability analysis is the worst case response time approach [2, 6].
The approach uses the following recurrence equation to compute the worst case response
where hp(i) is the set of tasks whose priorities are higher than that of - i . In the equation,
the term
j2hp(i) d R i
eC j is the total interferences from higher priority tasks during R i and
C i is - i 's own execution time. The equation can be solved iteratively and the iteration
terminates when R i converges at a value. This R i value is compared against - i 's deadline
These notations will be used throughout this paper along with D i that denotes the deadline of - i where
We assume without loss of generality that - i has a higher priority than -
to determine the schedulability of - i .
Recently, Katcher et al. [10] and Burns et al. [7, 8] provided a methodology for incorporating
the cost of preemption into schedulability analysis. In these approaches, preemption costs
arising from interrupt handling, task queue manipulation, and context-switching are taken
into account in the schedulability analysis.
In this paper, we are also interested in incorporating the cost of preemption into schedulability
analysis. However, unlike the above studies, our main focus is on indirect preemption
costs due to cache memory, which is increasingly being used in real-time computing systems.
B. Caches in Real-time Systems
Cache memory is used in almost all computing systems today to bridge the ever increasing
speed gap between the processor and main memory. However, due to its unpredictable per-
formance, cache memory has not been widely used in real-time computing systems where the
guaranteed worst case performance is of great importance. The unpredictable performance
comes from two sources: intra-task interference and inter-task interference.
Intra-task interference occurs when a memory block of a task conflicts in the cache with
another block of the same task. Recently, there has been considerable progress on the
analysis of intra-task interference due to cache memory and interested readers are referred
to [15, 16, 17, 18, 19].
Inter-task interference, which is the main focus of this paper, occurs when memory blocks
of different tasks conflict with one another in the cache. There are two ways to address
the unpredictability resulting from inter-task interference. The first way is to use cache
partitioning where cache memory is divided into disjoint partitions and one or more partitions
are dedicated to each real-time task [20, 21, 22, 23]. In these techniques, each task
is allowed to access only its own partitions and thus we need not consider inter-task in-
terference. There are two different approaches to cache partitioning: hardware-based and
software-based.
In hardware-based approaches, extra address-mapping hardware is placed between the
processor and cache memory to limit the cache access by each task to its own partitions
[20, 21, 22]. On the other hand, in software-based approaches a specialized compiler
and linker are used to map each task's code and data only to its assigned cache
partitions [23]. Cache partitioning improves the predictability of the system by removing
cache-related inter-task interference, but has a number of drawbacks. One common drawback
of both the hardware and software-based approaches is that they require modification
of existing hardware or software. Another common drawback is that they limit the amount
of cache memory available to individual tasks. Finally, in the case of the hardware-based
approach, the extra address-mapping hardware may stretch the processor cycle time, which
affects the execution time of every instruction.
The other way to address the unpredictability resulting from inter-task interference is to
devise an efficient method for analyzing its timing effects. In [12], Basumallick and Nilsen
extend the rate monotonic analysis explained in the previous subsection to take into account
the inter-task interference. In this approach, the WCET estimate of a task τ_i is modified
as C'_i = C_i + γ_i , where C_i
is the original WCET estimate of τ_i computed assuming that the
task executes without preemption and γ_i is the worst case preemption cost that the task τ_i
might impose on preempted tasks. This modification is based on a pessimistic assumption
that each cache block used by a preempting task replaces from the cache a memory block
that is needed by a preempted task. In the approach, the total utilization of a given task set
computed from the modified WCET estimates is compared against the utilization bound
given by Equation (1) to determine the schedulability of the task set.
One drawback of this approach is that it suffers from a pessimistic utilization bound given
by Equation (1), which approaches 0.693 for a large n [4]. Many task sets that have total
utilization higher than this bound can be successfully scheduled by the rate monotonic
priority assignment [3]. To rectify this problem, Busquets-Mataix et al. in [13] propose a
technique based on the response time approach. This technique makes the same pessimistic
assumption that each cache block used by a preempting task replaces from the cache a
memory block that is needed by a preempted task. This assumption leads to the following
equation for computing the worst case response time of a task:

    R_i = C_i + Σ_{j ∈ hp(i)} ⌈ R_i / T_j ⌉ × (C_j + γ_j )

where γ_j
is the cache-related preemption cost that task τ_j might impose on lower priority
tasks. The term γ_j is computed by multiplying the number of cache blocks used by task τ_j
and the time needed to refill a cache block.
Both the utilization bound based and response time based approaches assume that each
cache block used by a preempting task replaces from the cache a memory block that is
needed by a preempted task. This pessimistic assumption leads to a loose estimation of
cache-related preemption delay since it is possible that the replaced memory block is one
that is no longer needed or one that will be replaced without being re-referenced even when
the lower priority task is executed without preemption.
III. OVERALL APPROACH
This section overviews our proposed schedulability analysis technique that aims to minimize
the overestimation of cache-related preemption delay due to the pessimistic assumption
explained in the previous section. For this purpose, the response time equation is augmented
as follows in the proposed approach:

    R_i = C_i + Σ_{j ∈ hp(i)} ⌈ R_i / T_j ⌉ C_j + PC_i (R_i )

where PC_i (R_i ) is the total cache-related preemption delay of task τ_i during R_i , i.e., the
total cache reloading times of τ_1 , . . . , τ_i during R_i .
Fig. 1. Example of PC_i (R_i ).
The meaning of PC i (R i ) can be best explained by an example such as the one given in
Figure 1. In the example, there are three tasks, τ_1 , τ_2 , and τ_3 .
Each arrow in the figure denotes a
point where a task is preempted and each shaded rectangle denotes cache reloading after
the corresponding task resumes execution. With these settings, PC 3 (R 3 ), which is the total
cache-related preemption delay of task τ_3 during R_3 , is the total sum of cache reloading
times of τ_1 , τ_2 and τ_3 during R_3 , which corresponds to the sum of shaded rectangles in the
figure.
The augmented response time equation can be solved iteratively as follows.

    R_i^0 = C_i
    R_i^{k+1} = C_i + Σ_{j ∈ hp(i)} ⌈ R_i^k / T_j ⌉ C_j + PC_i (R_i^k )        (4)

As before, this iterative procedure terminates when R_i^{m+1} = R_i^m for some m, and the
converged R_i value is compared against τ_i 's deadline to determine the schedulability of τ_i .
To compute PC_i (R_i^k ) at each iteration, we take the following two-step approach.
1. Per-task analysis: We statically analyze each task to determine the cache-related
preemption cost at each execution point. This is the cost the task pays when it is
preempted at the execution point and is upper-bounded by the number of useful cache
blocks at that execution point. Based on this information and information about the
worst case visit counts of execution points, we construct the following preemption cost
table for each task.
# of preemptions    1      2           · · ·    k                   · · ·
cost                f_1    f_1 + f_2   · · ·    f_1 + · · · + f_k   · · ·
In the table, f_k is the k-th marginal preemption cost, that is, the cost the task pays in
the worst case for its k-th preemption over and above the cost of the preceding k − 1 preemptions.
2. Preemption delay analysis: We use a linear programming technique to compute
(R k
i ) from the preemption cost tables of tasks and a set of constraints on the
number of preemptions of a task by higher priority tasks.
The following two sections detail the two steps.
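The iteration of equation (4) has the same shape as the plain response time iteration; the sketch below is only a placeholder for PC_i(R_i): instead of the linear programming formulation used in this paper, it charges every release of a higher priority task one preemption whose cost is bounded by a single per-preemption bound f_bound[i] (for example, the largest useful-cache-block count among the affected tasks times the block refill time). This simplification and all names are ours.

    def response_time_with_pc(i, C, T, f_bound, D):
        # crude stand-in for equation (4): PC_i(R) <= (number of higher priority releases) * f_bound[i]
        R = C[i]
        while True:
            releases = sum((R + T[j] - 1) // T[j] for j in range(i))
            nxt = C[i] + sum((R + T[j] - 1) // T[j] * C[j] for j in range(i)) + releases * f_bound[i]
            if nxt == R:
                return R
            if nxt > D:
                return None
            R = nxt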
IV. PER-TASK ANALYSIS OF USEFUL CACHE BLOCKS
In this section, we describe a per-task analysis technique to obtain the preemption cost table
of each task. We initially focus on the case of direct-mapped 3 instruction cache memory
in this section. In Section VII, we discuss extensions for set-associative cache memory and
also for data cache memory.
As an example of cache-related preemption cost, consider a direct-mapped cache with four
cache blocks. Assume that the cache contains memory blocks m_0 , m_5 , m_6 , and m_3 at time t in cache blocks c_0 ,
c_1 , c_2 , and c_3 , respectively. Further assume that the memory block references
made after t are as described below.
3 In a direct-mapped cache, each memory block can be placed exactly in one cache block whose index is
given by memory block number modulo number of blocks in the cache.
In this example, the useful cache blocks at time t are cache blocks c 1 and c 2 since they contain
memory blocks m 5 and m 6 , respectively, that are re-referenced before being replaced.
On the other hand, cache blocks c 0 and c 3 are not useful at time t since they have m 0 and
m_3 that are replaced by m_4 and m_7 without being re-referenced. If a preemption occurs at
time t, the memory blocks m 5 and m 6 contained in cache blocks c 1 and c 2 may be replaced
by memory blocks of intervening tasks and thus need to be reloaded into the cache after
resumption. The additional time to reload these useful cache blocks is the cache-related
preemption cost at time t. Note that this additional cache reload time is not needed if the
task is not preempted. In the following, we explain a technique for estimating the number
of useful cache blocks at any point in a program.
A. Estimation of the Number of Useful Cache Blocks
Our technique for estimating the number of useful cache blocks is based on data flow
analysis [24] over the task's program expressed in the control flow graph 4 (CFG). To give
4 In a CFG, each node represents a basic block, while edges represent potential flow of control between
basic blocks [25].
Fig. 2. Analysis on the usefulness of cache blocks: (a) control flow to and from p; (b) cache state at p.
an intuitive idea about this data flow analysis, consider the CFG given in Figure 2. In the
figure, a pair (c; m) denotes a reference to memory block m that is mapped to cache block
c. The CFG has two incoming paths to the execution point p, i.e., in 1 and in 2 , and two
outgoing paths from p, i.e., out 1 and out 2 . If the control flow came through incoming path
in 1 , cache block c i would contain memory block m a at point p since m a is the last reference
to cache block c i before reaching p. Similarly, cache block c i would have memory block m b
at point p if the control has come through the incoming path in 2 . Thus, either m a or m b
may reside in cache block c i at p depending on the incoming path. If either of them is the
first reference to cache block c i in an outgoing path from p, the cache block may be reused
and thus is defined as being useful at point p. The outgoing path out 2 is such a path and
thus cache block c i is defined to be useful at p.
For a more formal description, we define reaching memory blocks (RMBs) and live memory
blocks (LMBs) for each cache block that are similar to reaching definitions and live variables
used in traditional data flow analysis [24]. The set of reaching memory blocks of cache block
c at point p, denoted by RMB_c^p , contains all possible states of cache block c at point p, where
a possible state corresponds to a memory block that may reside in the cache block at the
point. For a memory block to reside in cache block c, first, it should be mapped to cache
block c. Furthermore, it should be the last reference to the cache block in some execution
path reaching p. The set of live memory blocks of cache block c at point p, denoted by LMB_c^p ,
is defined similarly and is the set of memory blocks that may be the first reference
to cache block c after p.
With these definitions, a useful cache block at point p can be defined as a cache block
whose RMBs and LMBs have at least one common memory block. In Figure 2, RMB_{c_i}^p
is {m_a , m_b} and LMB_{c_i}^p is {m_a}. Thus, cache block c_i is defined to be useful at point p.
In the following, we explain how to compute RMBs of cache blocks at various execution
points of a given program. We initially focus on RMBs at the beginning and end points
of basic blocks 5 . The RMBs at other points can easily be derived from the RMBs at the
basic block boundaries as we will see later.
To formulate the problem of computing RMBs as a data flow problem, we define a set
gen c [B]. This set is either null or contains a single memory block. It is null if basic block
does not have any reference to memory blocks mapped to cache block c. On the other hand,
if the basic block B has at least one reference to a memory block mapped to c, gen c [B]
contains as its unique element the memory block that is the last reference to the cache
block c in the basic block. Note that in the latter case the memory block in gen c [B] is the
one that will reside in the cache block c at the end of the basic block B. Also note that
gen c [B] defined in this manner can be computed locally for each basic block.
As an example, consider the CFG given in Figure 3. The CFG shows instruction memory
block references made in each basic block. Assuming that the instruction cache is direct
mapped and has two blocks, gen_{c_0} [B_1 ] is {m_2} since m_2 is the memory block whose reference
is the last reference to c 0 in B 1 . The gen c [B] sets for other basic blocks and cache blocks
can be computed similarly and are given in Figure 3.
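Computing gen_c[B] is a single pass over the references of a basic block; a small Python sketch (our own helper, with the direct-mapped placement m mod number-of-cache-blocks) is:

    def gen_sets(refs, n_cache_blocks):
        # refs: memory block numbers referenced in basic block B, in program order.
        # Each later reference overwrites earlier ones, so the surviving entry for a
        # cache block is the last reference to it -- gen_c[B] for the RMB problem.
        gen = {}
        for m in refs:
            gen[m % n_cache_blocks] = m
        return gen

Keeping the first reference instead (e.g., with gen.setdefault(m % n_cache_blocks, m)) gives the gen_c[B] sets used for the LMB problem introduced later.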
With gen_c [B] defined in this manner, the RMBs of c just before the beginning of B and
just after the end of B, which are denoted by RMB_c^IN [B] and RMB_c^OUT [B], respectively,
can be computed from the following two equations.

    RMB_c^IN [B] = ∪_{P a predecessor of B} RMB_c^OUT [P ]                    (5)

    RMB_c^OUT [B] = gen_c [B]        if gen_c [B] is not null
                  = RMB_c^IN [B]     otherwise
The first equation states that the memory blocks that reach the beginning of a basic block B
can be derived from those that reach the ends of the predecessors of B. The second equa-
5 A basic block is a sequence of consecutive instructions in which flow of control enters at the beginning
and leaves at the end without halt or possibility of branching except at the end [24].
Fig. 3. Example of gen_c [B].
tion states that RMB c
OUT [B] is equal to gen c [B] if gen c [B] is not null and RMB c
IN [B]
otherwise 6 . These data flow equations can be solved using a well-known iterative approach
[24]. It starts with RMB c
and iteratively converges to the desired values of RMB c
IN 's and RMB c
OUT 's. The iterative
process can be described procedurally as follows.
Algorithm 1: Find RMBs of cache blocks at the beginning and end of each basic block
assuming that gen_c[B] has been computed for each basic block B and cache block c.

    /* initialize RMB_c_IN[B] and RMB_c_OUT[B] for all B's and c's */
    for each basic block B do
        for each cache block c do
        begin
            RMB_c_IN[B]  := {};
            RMB_c_OUT[B] := gen_c[B];
        end
    change := true;
    while change do
    begin
        change := false;
        for each basic block B do
            for each cache block c do
            begin
                RMB_c_IN[B] := Union over P a predecessor of B of RMB_c_OUT[P];
                oldout := RMB_c_OUT[B];
                if gen_c[B] is not null then RMB_c_OUT[B] := gen_c[B]
                else RMB_c_OUT[B] := RMB_c_IN[B];
                if RMB_c_OUT[B] ≠ oldout then change := true
            end
    end

^6 This equation can be rewritten as RMB_c_OUT[B] = gen_c[B] ∪ (RMB_c_IN[B] − kill_c[B]),
where the set kill_c[B] is the set of reaching memory blocks of cache block c killed in basic block B. The set
kill_c[B] is obtained as follows: (1) it is null if gen_c[B] is null, and (2) it is M_c − gen_c[B] if gen_c[B] is not
null, where M_c is the set of all memory blocks mapped to c in the program. This rewritten form is more
commonly used in traditional data flow analysis.
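The following C sketch is one possible rendering of Algorithm 1. Purely for compactness, it
assumes that memory block numbers are smaller than 64 so that an RMB set fits in a 64-bit
mask, and that the CFG is available as predecessor lists; the type and array names are ours,
not part of the analysis as published.

    #include <stdint.h>

    #define MAX_BB 64
    #define MAX_CB 16
    #define NONE   (-1)

    typedef struct {
        int npred;
        int pred[MAX_BB];    /* indices of predecessor basic blocks           */
        int gen[MAX_CB];     /* gen_c[B]: a memory block number, or NONE      */
    } BasicBlock;

    uint64_t rmb_in[MAX_BB][MAX_CB], rmb_out[MAX_BB][MAX_CB];

    /* Iterative solution of Equation (5); each RMB set is a bitmask over
       memory block numbers (assumed < 64 in this sketch).                */
    void solve_rmb(const BasicBlock *bb, int nbb, int ncb)
    {
        for (int b = 0; b < nbb; b++)
            for (int c = 0; c < ncb; c++) {
                rmb_in[b][c]  = 0;
                rmb_out[b][c] = (bb[b].gen[c] == NONE) ? 0 : (1ULL << bb[b].gen[c]);
            }
        int change = 1;
        while (change) {
            change = 0;
            for (int b = 0; b < nbb; b++)
                for (int c = 0; c < ncb; c++) {
                    uint64_t in = 0;
                    for (int p = 0; p < bb[b].npred; p++)
                        in |= rmb_out[bb[b].pred[p]][c];
                    rmb_in[b][c] = in;
                    uint64_t out = (bb[b].gen[c] != NONE)
                                 ? (1ULL << bb[b].gen[c])   /* gen kills everything else  */
                                 : in;                      /* nothing generated: pass in */
                    if (out != rmb_out[b][c]) { rmb_out[b][c] = out; change = 1; }
                }
        }
    }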
As we indicated earlier, the RMBs at other points within a basic block can be computed
from the RMBs at the beginning of the basic block. Assume that the basic block has the
following sequence of instruction memory block references: (c_1, m_1), (c_2, m_2), ..., (c_k, m_k),
where (c_i, m_i) denotes a reference to memory block m_i mapped to cache block c_i.
The references are processed sequentially starting from (c_1, m_1). It is clear that
m_1 is in cache block c_1 at the point following the reference (c_1, m_1). No other conflicting memory
blocks can be in c_1 at this point. Therefore, the RMB of c_1 just after the reference (c_1, m_1)
is simply {m_1}. However, the RMBs of other cache blocks are the same as those just before
(c_1, m_1), i.e., RMB_c_IN[B]. In general, the RMB of c_i after (c_i, m_i) is {m_i}, and
those of other cache blocks are the same as those before (c_i, m_i).
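Continuing the bitmask representation of the previous sketch, the RMBs at the point just
after the j-th reference of a basic block can be obtained as follows (again an illustration of
ours, not code from the analysis itself):

    /* RMBs at the point just after the j-th reference of a basic block:
       start from the RMBs at the beginning of the block and apply the
       references (c_1, m_1) ... (c_j, m_j) in order.                   */
    void rmb_at_point(const int *refs, int j, int ncache_blocks,
                      const uint64_t *rmb_in_b, uint64_t *rmb_at)
    {
        for (int c = 0; c < ncache_blocks; c++)
            rmb_at[c] = rmb_in_b[c];
        for (int i = 0; i < j; i++) {
            int c = refs[i] % ncache_blocks;
            rmb_at[c] = 1ULL << refs[i];   /* only c_i changes: it now surely holds m_i */
        }
    }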
The problem of computing LMBs can be formulated similarly to the case of RMBs. The
difference is that the LMB problem is a backward data flow problem [24] in that the in sets
(i.e., LMB_c_IN[B]) are computed from the out sets (i.e., LMB_c_OUT[B]), whereas the RMB
problem is a forward data flow problem [24] in that the out sets (i.e., RMB_c_OUT[B]) are
computed from the in sets (i.e., RMB_c_IN[B]). In the LMB problem, the set gen_c[B] is
either a set with only one element corresponding to the memory block whose reference is
the first reference to cache block c in basic block B, or null if none of the references from
B are to memory blocks mapped to c.
Using gen_c[B] defined in this manner, the following two equations relate LMB_c_IN[B]
and LMB_c_OUT[B].

    LMB_c_OUT[B] =  Union over S a successor of B of LMB_c_IN[S]                 (6)
    LMB_c_IN[B]  =  gen_c[B]        if gen_c[B] is not null
                    LMB_c_OUT[B]    otherwise

An iterative algorithm similar to the one for computing RMBs can be used to solve this
backward data flow problem. The difference is that the algorithm starts with
LMB_c_OUT[B] = {} and LMB_c_IN[B] = gen_c[B] for all B's and c's and uses the above two
equations instead of those given in Equation (5).
After we compute LMBs at the beginning and end of each basic block, the LMBs at
other points can be computed analogously to the case of RMBs. The difference is that
the processing of references is backward, starting from the end of the basic block rather
than forward starting from the beginning. In the LMB problem, the LMB of c_i just before
a reference (c_i, m_i) is {m_i}, and those of other cache blocks are the same as those just after (c_i, m_i).
After the usefulness of each cache block is determined at each point by computing the
intersection of the cache block's RMBs and LMBs at the point, it is trivial to calculate
the total number of useful cache blocks at the point; we simply have to count the useful
cache blocks at that point. By multiplying this total number of useful cache blocks and the
time to refill a cache block, the worst case cache-related preemption cost at the point can
be computed.
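Keeping the same set representation as in the earlier sketches, the preemption cost at a point
then reduces to one intersection test per cache block, as below (the refill time is passed in as
a parameter; this is our illustration, not code from the paper):

    /* A cache block is useful at a point if its RMB and LMB sets share at
       least one memory block; the worst case preemption cost at the point
       is the number of useful blocks times the cache block refill time.  */
    int preemption_cost_at_point(const uint64_t *rmb, const uint64_t *lmb,
                                 int ncache_blocks, int refill_cycles)
    {
        int useful = 0;
        for (int c = 0; c < ncache_blocks; c++)
            if (rmb[c] & lmb[c])
                useful++;
        return useful * refill_cycles;
    }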
B. Derivation of the Preemption Cost Table
This subsection explains how to construct the preemption cost table of a task, whose k-th
entry is the additional cost the task pays in the worst case for its k-th preemption over the
(k-1)-th preemption. The preemption cost table is constructed from two types of information:
(1) the preemption cost at each point, and (2) the worst case visit count of each point,
which can be directly derived from the CFG of the given program and the loop bound of
each loop in the program. The construction assumes the worst case preemption scenario
since we cannot predict, in advance, where preemptions will actually occur. The worst
case preemption scenario occurs when the first preemption is at the point with the largest
preemption cost (i.e., the point with the largest number of useful cache blocks), and then
the second preemption at the point with the next largest preemption cost, and so on. This
worst case preemption scenario should be assumed for our analysis to be safe.
From the above worst case preemption scenario, the entries of the preemption cost table
are filled in as follows. First, we pick a point p_1 that has the largest preemption cost. We
then fill in the first entry up to the v_p1-th entry with that preemption cost, where v_p1 is
the worst case visit count of p_1. After that, we pick the point that has the second largest
preemption cost and perform the same steps starting from the (v_p1 + 1)-th entry. This
process is repeated until the number of entries in the preemption cost table is exhausted.
Assuming that the number of entries in the table is K, the K'-th marginal preemption
cost, where K' > K, can be conservatively estimated to be the same as the K-th marginal
preemption cost since the marginal preemption cost is non-increasing.
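A sketch of this construction is given below. It assumes that the per-point costs and worst
case visit counts have already been computed and are handed over as an array; the Point
structure and the zero-filling of any entries left over once all visits are exhausted are our own
choices for the sketch.

    #include <stdlib.h>

    typedef struct { int cost; int visits; } Point;   /* preemption cost and worst case visit count */

    static int by_cost_desc(const void *a, const void *b)
    {
        return ((const Point *)b)->cost - ((const Point *)a)->cost;
    }

    /* table[k] receives the marginal cost of the (k+1)-th preemption under the
       worst case preemption scenario: points are consumed in order of
       decreasing cost, each contributing as many entries as its visit count. */
    void build_cost_table(Point *pts, int npts, int *table, int K)
    {
        qsort(pts, npts, sizeof(Point), by_cost_desc);
        int k = 0;
        for (int p = 0; p < npts && k < K; p++)
            for (int v = 0; v < pts[p].visits && k < K; v++)
                table[k++] = pts[p].cost;
        while (k < K)
            table[k++] = 0;   /* more table entries than total visits */
    }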
By applying the per-task analysis explained in this section to all the tasks in the task set,
we can obtain the following set of preemption cost tables, one for each task, where f_{i,j} is
the j-th marginal preemption cost of task tau_i.

    tau_1:  # of preemptions:  1      2      3      ...
            cost:              f_1,1  f_1,2  f_1,3  ...
    tau_2:  # of preemptions:  1      2      3      ...
            cost:              f_2,1  f_2,2  f_2,3  ...
    tau_3:  # of preemptions:  1      2      3      ...
            cost:              f_3,1  f_3,2  f_3,3  ...
    ...
    tau_n:  # of preemptions:  1      2      3      ...
            cost:              f_n,1  f_n,2  f_n,3  ...
V. CALCULATION OF THE WORST CASE
PREEMPTION DELAYS OF TASKS
In this section, we explain how to compute a safe upper bound of PC_i(R_i^k), used in Equation
(4) in Section III, from the preemption cost table. We formulate this problem as an
integer linear programming problem with a set of constraints.
We first define g_{j,l} as the number of invocations of task tau_j that are preempted at least l times
during a given response time R_i^k. As an example, consider Figure 4 where task tau_j is invoked
three times during the given R_i^k. The first invocation of task tau_j, i.e., tau_j1, is preempted
three times, and both the second and third invocations of tau_j, i.e., tau_j2 and tau_j3, are preempted
once. From the definition of g_{j,l}, g_{j,1} is 3, g_{j,2} is 1, and g_{j,3} is 1. Note that since the highest
priority task tau_1 cannot be preempted, g_{1,l} = 0 for all l.
Fig. 4. Definition of g_{j,l}.
If we assume that we know the g_{j,l} values that give the worst case preemption scenario
among tasks, we can calculate the worst case cache-related preemption delay of tau_i during R_i^k as

    PC_i(R_i^k) = sum over j = 1, ..., i of ( sum over l = 1, ..., N_j of g_{j,l} * f_{j,l} )

where f_{j,l} is the l-th marginal preemption cost of tau_j. Note that this total cache-related
preemption delay of tau_i includes all the delay due to the preemptions of tau_i and those of
higher priority tasks during R_i^k.
In general, however, we cannot determine exactly which g_{j,l} combination will give the worst
case preemption delay to task tau_i. For our analysis to be safe, we should conservatively
assume a scenario that is guaranteed to be worse than any actual preemption scenario.
Such a conservative scenario can be derived from constraints that any valid g_{j,l} combination
should satisfy. We give a number of such constraints on the g_{j,l}'s in the following. First, g_{j,l}
for a given interval R_i^k cannot be larger than the number of invocations of tau_j during that
interval. Thus, we have

    g_{j,l} <= ceil( R_i^k / T_j )    for 1 <= l <= N_j.
In the formulation, N_j is the maximum number of preemptions that a single invocation of
tau_j can experience during R_i^k. An upper bound of such an N_j value can be calculated as

    N_j = sum over a = 1, ..., j-1 of ceil( R_j / T_a )    for j < i,  and
    N_i = sum over a = 1, ..., i-1 of ceil( R_i^k / T_a )                              (9)

where the R_j (1 < j < i) are the worst case response times of the higher priority tasks tau_j,
which should be available when the worst case response time of tau_i is computed. From this,
the index l of g_{j,l} can be bounded by N_j in the formulation.
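Under the reading of Equation (9) given above, this bound amounts to a few ceiling divisions,
for example as in the following helper (our own sketch; tasks are indexed from 1, T[] holds
the periods, and the caller passes R_j for j < i or R_i^k for j = i):

    /* Upper bound on the number of preemptions that a single invocation of
       task j can experience: one preemption per arrival of a higher priority
       task during the response time passed in.                              */
    int max_preemptions(int j, const long *T, long response_time)
    {
        long n = 0;
        for (int a = 1; a < j; a++)
            n += (response_time + T[a] - 1) / T[a];   /* ceil(response_time / T_a) */
        return (int)n;
    }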
Second, the number of preemptions of task tau_j during the given interval R_i^k cannot be larger
than the total number of invocations of tau_1, ..., tau_{j-1} during that interval, since only the
arrivals of tasks with priorities higher than that of tau_j can preempt tau_j. Thus, we have

    sum over l = 1, ..., N_j of g_{j,l}  <=  sum over a = 1, ..., j-1 of ceil( R_i^k / T_a ).

More generally, the total number of preemptions of tau_j, ..., tau_i during the given interval R_i^k
cannot be larger than the total number of invocations of tau_1, ..., tau_{j-1} during that interval.
Thus, we have

    sum over b = j, ..., i of ( sum over l = 1, ..., N_b of g_{b,l} )
        <=  sum over a = 1, ..., j-1 of ceil( R_i^k / T_a )    for 2 <= j <= i.

Note that this constraint subsumes the previous constraint.
The maximum value of PC_i(R_i^k) over all g_{j,l}'s satisfying the above constraints is a safe upper
bound on the total cache-related preemption delay of task tau_i during R_i^k. This problem can
be formulated as an integer linear programming problem as follows:

    maximize    PC_i(R_i^k) = sum over j = 1, ..., i of ( sum over l = 1, ..., N_j of g_{j,l} * f_{j,l} )
    subject to
      Constraint 1:  g_{j,l} <= ceil( R_i^k / T_j )                                  for all j, l
      Constraint 2:  sum over b = j, ..., i of ( sum over l = 1, ..., N_b of g_{b,l} )
                         <=  sum over a = 1, ..., j-1 of ceil( R_i^k / T_a )         for 2 <= j <= i

At each iteration of the iterative procedure explained in Section III, this integer linear
programming problem is solved to compute the PC_i(R_i^k) term. An example application of this
iterative procedure is given in the Appendix.
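To make the shape of the problem concrete, the sketch below assembles and prints one such
ILP instance in plain algebraic form from the preemption cost tables. The index ranges follow
the reconstruction of the constraints given above, and handing the printed problem to an
actual integer programming solver is left out of the sketch.

    #include <stdio.h>

    #define MAXL 64

    /* Print the ILP for PC_i(R) in algebraic form.  f[j][l] is the l-th marginal
       preemption cost of task j, N[j] the bound on l, T[j] the period (1-based). */
    void print_ilp(int i, long R, const long *T, const int *N, long f[][MAXL])
    {
        printf("maximize  ");
        for (int j = 2; j <= i; j++)                /* tau_1 cannot be preempted */
            for (int l = 1; l <= N[j]; l++)
                printf("+ %ld g_%d_%d ", f[j][l], j, l);
        printf("\nsubject to\n");
        for (int j = 2; j <= i; j++)                /* Constraint 1 */
            for (int l = 1; l <= N[j]; l++)
                printf("  g_%d_%d <= %ld\n", j, l, (R + T[j] - 1) / T[j]);
        for (int j = 2; j <= i; j++) {              /* Constraint 2 */
            long rhs = 0;
            for (int a = 1; a < j; a++)
                rhs += (R + T[a] - 1) / T[a];
            printf("  sum_{b=%d..%d} sum_l g_b_l <= %ld\n", j, i, rhs);
        }
        printf("  all g_j_l integer and >= 0\n");
    }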
VI. EXPERIMENTAL RESULTS
To assess the effectiveness of the proposed approach, we predicted the worst case response
times of tasks from sample task sets using the proposed technique and compared them with
those predicted using previous approaches. For validation purposes, the predicted worst
case response times were also compared with measured response times.
Our target machine is an IDT7RS383 board with a 20 MHz R3000 RISC CPU, R3010 FPA
(Floating Point Accelerator), and an instruction cache and a data cache of 16 Kbytes each.
Both caches are direct mapped and have block sizes of 4 bytes. SRAM (static RAM) is used
as the target machine's main memory and the cache refill time is 4 cycles. Although the
target machine has a timer chip that provides user-programmable timers, their resolution is
too low for our measurement purposes. To accurately measure the execution and response
times of tasks, we built a daughter board that implements a timer with a resolution of one
machine cycle.
For our experiments, we also implemented a simple fixed-priority scheduler based on the
tick scheduling explained in [7]. The scheduler manages two queues: run queue and delay
queue. The run queue maintains tasks that are ready to run and its tasks are ordered by
their priorities. The delay queue maintains tasks that are waiting for their next periods and
its tasks are ordered by their release times. The scheduler is invoked by timer interrupts
that occur every 160,000 machine cycles. When invoked the scheduler scans the delay
queue and all the tasks in the delay queue with release times at or before the invocation
time of the scheduler are moved to the run queue. If one of the newly moved tasks has a
higher priority than the currently running task, the scheduler performs a context switch
between the currently running task and the highest priority task. When a task completes its
execution, it is placed into the delay queue and the next highest priority task is dispatched
from the run queue.
To take into account the overheads associated with the scheduler, we used the analysis
technique explained in [7]. In this technique, the scheduler overhead S_i during response
time R_i is computed from the following quantities:
- the number of scheduler invocations during R_i,
- the number of times that the scheduler moves a task from the delay queue to the run queue during R_i,
- C_int, the time needed to service a timer interrupt (413 machine cycles in our experiments),
- C_ql, the time needed to move the first task from the delay queue to the run queue (142 machine cycles in our experiments),
- C_qs, the time needed to move each additional task from the delay queue to the run queue (132 machine cycles in our experiments).

TABLE I
Task set specifications.
[The table lists, for each task set, the tasks it contains and, for each task, its period and WCET
(in machine cycles), the total number of instruction memory blocks, and the maximum number of
useful cache blocks.]

A detailed explanation of this overhead analysis is beyond the scope of this paper and interested
readers are referred to [7].
We used three sample task sets in our experiments and their specifications are given in
Table
I. The first column of the table is the task set name and the second column lists
the tasks in the task set. Four different tasks were used: FFT, LUD, LMS, and FIR. The
task FFT performs the FFT and the inverse FFT operations on an array of 8 floating
point numbers using the Cooley-Tukey algorithm [26]. LUD solves 10 simultaneous linear
equations by the Doolittle's method of LU decomposition [27] and FIR implements a 35
point Finite Impulse Response (FIR) filter [28] on a generated signal. Finally, LMS is a 21
point adaptive FIR filter where the filter coefficients are updated on each input signal [28].
[Figure 5 depicts the instruction cache and memory layout for task set T_3: the code of the
scheduler and of the tasks FFT, LUD, LMS, and FIR is placed so that their instruction memory
blocks conflict in the cache, while data sections are mapped to a non-cacheable area.]
Fig. 5. Code placement for task set T_3.
The table also gives in the third and fourth columns the period and the WCET of each
task in the task set, respectively. We used the measured execution times of tasks as their
WCETs since tight prediction of tasks' WCETs and accurate estimation of cache-related
preemption delay are two orthogonal issues. The measured execution time of a task was
obtained by executing the task without preemption. This execution time includes the time
for initializing the task and also the time for two context switches: one context switch to
the task itself and the other from the task to another task upon completion. The table also
gives the total number of instruction memory blocks and the maximum number of useful
cache blocks of each task in the fifth and sixth columns, respectively.
In the experiments, we intentionally placed code for tasks in such a way that caused conflicts
among memory blocks from different tasks although the instruction cache in the target
machine is large enough to hold all the code used by the tasks. This is because we expect
that such a case is typical of large-scale real-time systems. Figure 5 shows such a code
placement for task set T 3 . Furthermore, since we consider the preemption delay related to
instruction caching only (cf. Section VII), we disabled data caching by mapping data and
stack segments of tasks to non-cacheable area.

TABLE II
Worst case response time predictions and measured response times (unit: machine cycles).
[For each task set, the table reports the worst case response time predicted by methods A, C, U,
and P, the corresponding cache-related preemption delays, and the measured response time.]
Table
II shows the predicted worst case response time of the lowest priority task in each
task set. Four different methods were used to predict the worst case response time of the
task: A is the method where the worst case preemption cost is assumed to be the cost to
completely refill the cache. C is the method explained in [13]. U is the method where the
worst case preemption cost is assumed to be the cost to completely reload the code used
by a preempted task. Finally, P is the method proposed in this paper where the worst case
preemption cost is assumed to be the cost to reload the maximum number of useful cache
blocks.
In the table, the worst case response time predictions by the above four methods are denoted
by R_A, R_C, R_U, and R_P, respectively. Also denoted by Delta_M is the predicted worst case cache-related
preemption delay in method M. It is the difference between the worst case response
time predictions by method M with and without cache-related preemption costs.
The results show that the proposed technique gives significantly tighter predictions for
cache-related preemption delay than the previous approaches. This results from the fact
that, unlike the other approaches, the proposed approach considers only useful cache blocks
when computing cache-related preemption costs. In one case (task set T 1 ), the proposed
technique gives a prediction that is 60% tighter than the best of the previous approaches
(1304 cycles vs. 3392 cycles).
However, there is still a non-trivial difference between R P and the measured response time.
This difference results from a number of sources. First, contrary to our pessimistic assumption
that all the useful cache blocks of a task are replaced from the cache between the
time the task is preempted and the time the task resumes execution, not all of them were
replaced on a preemption during the actual execution. Second, many of actual preemptions
occurred at execution points other than the execution point with the maximum number of
useful cache blocks. Finally, the worst case preemption scenario assumed in deriving the
upper bound on the cache-related preemption delay by the linear programming technique
did not occur during the actual execution.
Another point we can note from the results is that the cache-related preemption delay
(i.e., Delta) occupies only a small portion of the worst case response time (less than 1% for
most cases). This results from the following two reasons. First, the WCETs of tasks were
unrealistically large in our experiments since we disabled data caching. This diminished
the relative impact of the cache-related preemption delay on the worst case response time.
Second, since the target machine uses SRAM as its main memory, the cache refill time is
much smaller than that of most current computing systems, which ranges from 8 cycles
to more than 100 cycles when DRAM is used as main memory [11]. If DRAM were used
instead, the worst case cache-related preemption delay would have occupied a much greater
portion of the worst case response time. Furthermore, since the speed improvement of
processors is much faster than that of DRAMs [11], we expect that the worst case cache-related
preemption delay will occupy an increasingly large portion of the worst case response
time in the future.
To assess the impact of the cache-related preemption delay on the worst case response time
in a more typical setting, we predicted the worst case response time for task set T 1 as we
increase the cache refill time while enabling data caching. Figures 6-(a) and (b) show \Delta and
\Delta=(W orst Case Response Time), respectively, for this new setting. The results show
that as the cache refill time increases, \Delta increases linearly for all the four methods. This
results in a wider gap between the cache-related preemption delay predicted by method P
Fig. 6. Cache refill time vs. Delta (a) and Delta/(Worst Case Response Time) (b) for methods A, C, U, and P.
and those by the other methods as the cache refill time increases. As a result, the task set is
deemed unschedulable by methods A, C, and U when the cache refill time is more than 40,
190, and 210 cycles, respectively. On the other hand, the task set is schedulable by P even
when the cache refill time is more than 300 cycles. For methods C and U , there are sudden
jumps in Delta when the cache refill time is about 120 cycles. These jumps occur when an
increase in the worst case response time due to increased cache refill time causes additional
invocations of higher priority tasks. The results also show that as the cache refill time
increases the cache-related preemption delay takes a proportionally larger percentage in
the worst case response time. As a result, even for method P , the cache-related preemption
delay takes about 10% of the worst case response time when the cache refill time is 100
cycles. For other methods, the cache preemption delay takes a much higher percentage of
the worst case response time.
VII. EXTENSIONS
A. Set Associative Caches
In computing the number of useful cache blocks in Section IV, we considered only the
simplest cache organization called the direct-mapped cache organization where each memory
block can be placed in only one cache block. In a more general cache organization called
the n-way set-associative cache organization, each memory block can be placed in any one
of the n blocks in the mapped set whose index is given by memory block number modulo
number of sets in the cache. This set-associative cache organization requires a policy called
the replacement policy that decides which block to replace to make room for a new block
among the blocks in the mapped set. The least recently used (LRU) policy, which replaces
the block that has not been referenced for the longest time, is typically used for that purpose.
In the following, we explain how to compute the maximum number of useful cache blocks
for set-associative caches assuming the LRU replacement policy.
According to our definition in Section IV, the set RMB_c(p) contains all possible states of
cache block c at execution point p. In the case of direct-mapped caches, a possible state
corresponds to a memory block that cache block c may have at execution point p. This
interpretation of a state needs to be extended for set-associative caches since they are
indexed in the unit of cache sets rather than in the unit of cache blocks. A state of a
cache set for an n-way set-associative cache is defined by a vector (m_1, m_2, ..., m_n), where
m_1 is the least recently referenced block and m_n the most recently referenced block. In
the following, we formulate the problem of computing RMBs for set-associative caches in
data flow analysis terms. As for direct-mapped caches, we initially focus on RMBs at the
beginnings and ends of basic blocks.
We define the sets RMB_c_IN[B] and RMB_c_OUT[B] as the sets of all possible states of cache set
c at the beginning and end of basic block B, respectively. The set gen_c[B] contains the state
of cache set c generated in basic block B. Its element has up to n distinct memory blocks
that are referenced in basic block B and are mapped to cache set c. More specifically, it is
either empty (when none of the memory blocks mapped to cache set c are referenced in basic
block B) or a singleton set whose only element is a vector (gen_c_1[B], gen_c_2[B], ..., gen_c_n[B]).
In the vector, the component gen_c_n[B] is the memory block whose last reference in basic
block B is the last reference to the cache set c in B. Similarly, the component gen_c_{n-1}[B]
is the memory block whose last reference in B is the last reference to c in B excepting the
references to the memory block gen_c_n[B]. In general, gen_c_j[B] is
the memory block whose last reference in B is the last reference to c in B excepting the
references to memory blocks gen_c_{j+1}[B], ..., gen_c_n[B].
As an example, consider a cache with two sets and assume that a sequence of memory block
references is made to cache set 0 in a basic block B. According to the definition of gen_c[B],
when each cache set has four blocks (i.e., a 4-way set-associative cache), the set gen_c0[B] is a
singleton whose element is the vector of the last four distinct memory blocks referenced in B,
ordered from least to most recently referenced. Similarly, when each cache set has eight blocks
(i.e., an 8-way set-associative cache), the set gen_c0[B] is a singleton whose element is padded
with null in the leading (least recently referenced) positions when fewer than eight distinct
memory blocks are referenced.
With this definition of gen_c[B], the sets RMB_c_IN[B] and RMB_c_OUT[B], whose elements are
now vectors with n memory blocks, are related as follows.

    RMB_c_IN[B]  =  Union over P a predecessor of B of RMB_c_OUT[P]                   (12)
    RMB_c_OUT[B] =  { (gen_c_1[B], ..., gen_c_n[B]) }
                        if gen_c[B] contains n memory blocks
                    { (r_{j+1}, ..., r_n, gen_c_{n-j+1}[B], ..., gen_c_n[B]) : (r_1, ..., r_n) in RMB_c_IN[B] }
                        if gen_c[B] contains j (0 < j < n) memory blocks
                    RMB_c_IN[B]
                        if gen_c[B] is empty
As in the case of direct-mapped caches, the RMBs of cache set c at points other than the
beginning and end of basic block B can be derived from RMB_c_IN[B] and the memory block
references within the basic block. Assume that the basic block has the following sequence
of memory block references: (c_1, m_1), (c_2, m_2), ..., (c_k, m_k), where (c_i, m_i) denotes a
reference to memory block m_i that is mapped to cache set c_i. As before,
the references are processed sequentially starting from (c_1, m_1). The processings needed for
a reference (c_i, m_i) are as follows. For each element (rmb_1, ..., rmb_n) in the RMB of c_i, if
m_i is equal to some rmb_j, the element is updated to (rmb_1, ..., rmb_{j-1}, rmb_{j+1}, ..., rmb_n, m_i)
since m_i is now the most recently referenced memory block in the cache set. On the other
hand, if m_i does not appear in the element, the element is updated to (rmb_2, ..., rmb_n, m_i).
Note that it is only the RMB of c_i that needs to be updated by the reference (c_i, m_i);
the reference does not affect the states of any other cache sets.
The set LMB_c(p) for set-associative caches contains all possible reference sequences to cache
set c after p, and each reference sequence has sufficient information to determine for each
block in cache set c whether it is re-referenced before being replaced. For an n-way set-associative
cache with the LRU replacement policy, such information corresponds to n
distinct memory blocks referenced after p. For this reason, the set gen_c[B] for the LMB
problem is defined to be either empty or a singleton set {(gen_c_1[B], ..., gen_c_n[B])}
whose components are the first n distinct memory blocks referenced in basic block B and
mapped to cache set c. More specifically, gen_c_1[B] is the memory block whose first reference
in B is the first reference to c in B, and gen_c_2[B] is the memory block whose first reference
in B is the first reference to c in B excepting the references to memory block gen_c_1[B], and
so on.
The sets LMB_c_IN[B] and LMB_c_OUT[B], which correspond to the sets of all possible reference
sequences to cache set c after the beginning and end of basic block B, respectively, are
related as follows.

    LMB_c_OUT[B] =  Union over S a successor of B of LMB_c_IN[S]
    LMB_c_IN[B]  =  { (gen_c_1[B], ..., gen_c_n[B]) }
                        if gen_c[B] contains n memory blocks
                    { (gen_c_1[B], ..., gen_c_j[B], l_1, ..., l_{n-j}) : (l_1, ..., l_n) in LMB_c_OUT[B] }
                        if gen_c[B] contains j (0 < j < n) memory blocks
                    LMB_c_OUT[B]
                        if gen_c[B] is empty
After the LMBs at the beginnings and ends of basic blocks are computed, the LMBs at
other points within basic blocks can be computed in an analogous manner to the case of
RMBs.
Once the sets RMB_c(p) and LMB_c(p) are computed for each execution point p, the calculation
of the maximum number of useful blocks in cache set c at p is straightforward. For each
pair (rmb, lmb) with rmb in RMB_c(p) and lmb in LMB_c(p), we compute the number of cache hits that would
result if references to the memory blocks in lmb are applied to the cache set state defined
by rmb. We then pick the pair (rmb, lmb) that yields the largest number of
cache hits, which gives the maximum number of useful cache blocks in cache set c at p.
The total number of useful cache blocks at p is computed by summing up the maximum
numbers of useful cache blocks of all the cache sets in the cache. From this information,
the preemption cost table can be constructed as in the case of direct-mapped cache.
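The hit counting for a single (rmb, lmb) pair amounts to simulating an LRU set, as in the
following sketch. States are arrays of length n with index 0 the least recently used position
and -1 marking an empty or unknown entry; this representation and the limit of 32 ways are
assumptions of the sketch, not of the analysis.

    /* Number of useful blocks in one cache set contributed by the pair
       (rmb, lmb): apply the up-to-n imminent references in lmb, in order,
       to the LRU state rmb and count how many of them hit.               */
    int count_useful(const int *rmb, const int *lmb, int n)   /* n <= 32 */
    {
        int state[32], hits = 0;
        for (int k = 0; k < n; k++)
            state[k] = rmb[k];
        for (int r = 0; r < n; r++) {
            int m = lmb[r];
            if (m < 0)
                break;                        /* fewer than n known references */
            int pos = -1;
            for (int k = 0; k < n; k++)
                if (state[k] == m)
                    pos = k;
            if (pos >= 0)
                hits++;                       /* m is still in the set: a useful block */
            /* LRU update: remove m (or evict the LRU block on a miss) and
               place m in the most recently used position.                 */
            for (int k = (pos >= 0 ? pos : 0); k < n - 1; k++)
                state[k] = state[k + 1];
            state[n - 1] = m;
        }
        return hits;
    }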
B. Data Cache Memory
Until now, we have focused on preemption costs resulting from the use of instruction cache
memory. In this subsection, we explain an extension of the proposed technique needed for
data cache memory.
Unlike instruction references, some data references have addresses that are not fixed at
compile-time. For example, references from a load/store instruction used to implement an
array access have different addresses. These data references complicate a direct application
of the proposed technique to data cache memory since the technique requires that the
addresses of references from each basic block be fixed. Such references also complicate
the WCET analysis of tasks and most WCET analysis techniques take a very conservative
approach to them. Fortunately, this conservative approach greatly simplifies the adaptation
of the proposed technique for data cache memory. We take the extended timing schema
approach [19] as our example in the following discussion.
In the WCET analysis based on extended timing schema approach, if a load/store instruction
references more than one memory block, it is called a dynamic load/store instruction
[29] and two cache miss penalties are assumed for each reference from it; one cache
miss penalty is because the reference may miss in the cache and the other because it may
replace a useful cache block. In the analysis of preemption costs resulting from the use of
data cache memory, if a load/store instruction is not dynamic, references from it can be
handled in exactly the same way as in the case of instruction references since their addresses
in a CFG are fixed. Also, because the extended timing schema approach assumes that all
the references from a dynamic load/store instruction miss in the cache, they cannot contribute
useful cache blocks. Furthermore, since the approach conservatively assumes that
every one of them replaces a useful cache block in deriving the WCET estimate, we can
completely ignore them when computing RMBs and LMBs.
VIII. CONCLUSION
Cache memory introduces unpredictable variation to task execution time when it is used
in real-time systems where preemptions are allowed among tasks. We have proposed a new
schedulability analysis technique that takes such execution time variation into account. The
proposed technique proceeds in two steps. In the first step, a per-task analysis technique
constructs for each task a table called the preemption cost table. This table gives for a
given number of preemptions an upper bound on the cache-related delay caused by them.
Then, the second step computes the worst case response time of each task using a linear
programming technique that takes as its input the preemption cost table obtained in the
first step. Our experimental results showed that the proposed technique gives a prediction of
the worst case cache-related preemption delay that is up to 60% tighter than that obtained
from previous approaches. This improved prediction accuracy results from the fact that
the proposed technique considers only useful cache blocks in deriving the worst case cache-related
preemption delay.
A number of extensions are possible for the analysis technique explained in this paper. For
example, the per-task preemption cost information can be made more accurate. In the
per-task analysis in Section IV, a cache block is considered as useful if it is useful in at least
one path. Many such paths, however, cannot be taken simultaneously as the example in
Figure
shows. In the example, cache block c i is useful only when the flow of control is
from in 1 to out 2 . On the other hand, cache block c j is useful only when the flow of control
is from in 2 to out 1 . These two flows of control are not compatible with each other and
only one of the two cache blocks can be useful at any one time. Nevertheless, both cache
blocks are considered as useful in our data flow analysis. In order to rectify this problem,
preemption cost should be computed on a path basis. Our initial attempt based on this
idea is described in [30].
Another interesting extension to our proposed analysis technique is to consider the intersection
of cache blocks used by a preempted task and those used by the higher priority
tasks that are released while the former task is preempted [12, 13]. For this purpose, the
proposed technique can be augmented as follows: (1) perform the data flow analysis explained
in Section IV for the preempted task and (2) count only the useful cache blocks
that are mapped to the intersection of cache blocks used by the preempted task and those
used by the higher priority tasks released during the preemption. Although this approach
is more accurate than the approach explained in this paper, it requires a large number of
analyses, i.e., one analysis for each preemption instance. We are currently working on an
approximate technique that is similar to the above approach but trades accuracy for low
analysis complexity.
Appendix
Consider a task set consisting of three tasks tau_1, tau_2, and tau_3 whose preemption cost tables are
given by

    tau_2:  # of preemptions:  1      2      3      ...
            cost:              f_2,1  f_2,2  f_2,3  ...
    tau_3:  # of preemptions:  1  2  3  4  5  6  7
            cost:              6  5  4  4  3  3  2

Note that we do not need the preemption cost table for the highest priority task tau_1 since it
cannot be preempted.
The worst case response time of the lowest priority task tau_3 is computed with the iterative
procedure of Section III, starting from an initial value R_3^0 and evaluating PC_3(R_3^k) at each
iteration. The term PC_3(R_3^0) can be computed by solving the following integer linear programming
problem.

    Maximize    PC_3(R_3^0) = sum over j = 2, 3 of ( sum over l = 1, ..., N_j of g_{j,l} * f_{j,l} )
    subject to
      Constraint 1:  g_{j,l} <= ceil( R_3^0 / T_j )      for j = 2, 3 and 1 <= l <= N_j
      Constraint 2:  sum over l of g_{2,l} + sum over l of g_{3,l} <= ceil( R_3^0 / T_1 )
                     sum over l of g_{3,l} <= ceil( R_3^0 / T_1 ) + ceil( R_3^0 / T_2 )
In the above problem formulation, we use the fact that since task tau_1 is the highest priority
task, it cannot be preempted and thus g_{1,l} = 0 for all l. This also gives N_1 = 0. N_2,
which is the maximum number of preemptions a single invocation of task tau_2 can experience,
can be computed by dividing the worst case response time of tau_2 by the period of task tau_1. The
worst case response time of tau_2, which is equal to 49, must have been computed beforehand
and thus is available when we compute the worst case response time of task tau_3. This gives
N_2 = 2. N_3, which is the maximum number of preemptions of task tau_3, can be computed by
dividing R_3^0 by the periods of tasks tau_1 and tau_2 (cf. Equation (9)).
After solving this integer linear programming problem, we have the value of PC_3(R_3^0), and
this gives R_3^1. This R_3^1 value is used in the next iteration to compute R_3^2; PC_3(R_3^1)
is obtained by solving the following integer linear programming problem.

    Maximize    PC_3(R_3^1) = sum over j = 2, 3 of ( sum over l = 1, ..., N_j of g_{j,l} * f_{j,l} )
                (the last term being g_{3,4} * f_{3,4})
    subject to
      Constraint 1:  g_{j,l} <= ceil( R_3^1 / T_j )      for j = 2, 3 and 1 <= l <= N_j
      Constraint 2:  sum over l of g_{2,l} + sum over l of g_{3,l} <= ceil( R_3^1 / T_1 )
                     sum over l of g_{3,l} <= ceil( R_3^1 / T_1 ) + ceil( R_3^1 / T_2 )

The solution to this integer linear programming problem gives PC_3(R_3^1) and this, in turn,
gives R_3^2. When we repeat the same procedure with R_3^2, we have R_3^3 = R_3^2. Thus, the procedure
converges and R_3^2 is a safe upper bound on the worst case response time of
task tau_3. Since this worst case response time is smaller than task tau_3's deadline (= 400), task
tau_3 is schedulable even when cache-related preemption delay is considered.
Acknowledgments
The authors are grateful to Jos'e V. Busquets-Mataix for helpful suggestions and comments
on an earlier version of this paper.
--R
"Some Results of the Earliest Deadline Scheduling Al- gorithm,"
"Finding Response Times in a Real-Time System,"
"The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior,"
"Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment,"
"Dynamic Scheduling of Hard Real-Time Tasks and Real-Time Threads,"
"An Extendible Approach for Analyzing Fixed Priority Hard Real-Time Tasks,"
"Effective Analysis for Engineering Real-Time Fixed Priority Schedulers,"
"The Impact of an Ada Run-time System's Performance Characteristics on Scheduling Models,"
"Accounting for Interrupt Handling Costs in Dynamic Priority Task Systems,"
"Engineering and Analysis of Fixed Priority Schedulers,"
Computer Architecture A Quantitative Approach.
"Cache Issues in Real-Time Systems,"
"Adding Instruction Cache Effect to Schedulability Analysis of Preemptive Real-Time Systems,"
Linear and Nonlinear Programming.
"Bounding Worst-Case Instruction Cache Performance,"
"Integrating the Timing Analysis of Pipelining and Instruction Caching,"
"Worst Case Timing Analysis of RISC Processors: R3000/R3010 Case Study,"
"Efficient Microarchitecture Modeling and Path Analysis for Real-Time Software,"
"An Accurate Worst Case Timing Analysis Technique for RISC Pro- cessors,"
"SMART (Strategic Memory Allocation for Real-Time) Cache Design,"
"SMART (Strategic Memory Allocation for Real- Time) Cache Design Using the MIPS R3000,"
"Allocating SMART Cache Segments for Schedulability,"
"Software-Based Cache Partitioning for Real-time Applications,"
High Performance Compilers for Parallel Computing.
DFT/FFT and Convolution Algorithm: Theory
Elementary Numerical Analysis.
C Algorithms for Real-Time DSP
"Efficient Worst Case Timing Analysis of Data Caching,"
"Calculating the Worst Case Preemption Costs of Instruction Cache,"
--TR
--CTR
Anupam Datta , Sidharth Choudhury , Anupam Basu, Using Randomized Rounding to Satisfy Timing Constraints of Real-Time Preemptive Tasks, Proceedings of the 2002 conference on Asia South Pacific design automation/VLSI Design, p.705, January 07-11, 2002
Yudong Tan , Vincent J. Mooney, III, WCRT analysis for a uniprocessor with a unified prioritized cache, ACM SIGPLAN Notices, v.40 n.7, July 2005
M. Kandemir , G. Chen , W. Zhang , I. Kolcu, Data Space Oriented Scheduling in Embedded Systems, Proceedings of the conference on Design, Automation and Test in Europe, p.10416, March 03-07,
Hiroshi Nakashima , Masahiro Konishi , Takashi Nakada, An accurate and efficient simulation-based analysis for worst case interruption delay, Proceedings of the 2006 international conference on Compilers, architecture and synthesis for embedded systems, October 22-25, 2006, Seoul, Korea
Mahmut Kandemir , Guilin Chen, Locality-Aware Process Scheduling for Embedded MPSoCs, Proceedings of the conference on Design, Automation and Test in Europe, p.870-875, March 07-11, 2005
Accounting for cache-related preemption delay in dynamic priority schedulability analysis, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
Hemendra Singh Negi , Tulika Mitra , Abhik Roychoudhury, Accurate estimation of cache-related preemption delay, Proceedings of the 1st IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis, October 01-03, 2003, Newport Beach, CA, USA
Hiroyuki Tomiyama , Nikil D. Dutt, Program path analysis to bound cache-related preemption delay in preemptive real-time systems, Proceedings of the eighth international workshop on Hardware/software codesign, p.67-71, May 2000, San Diego, California, United States
I. Kadayif , M. Kandemir , I. Kolcu , G. Chen, Locality-conscious process scheduling in embedded systems, Proceedings of the tenth international symposium on Hardware/software codesign, May 06-08, 2002, Estes Park, Colorado
Sheayun Lee , Sang Lyul Min , Chong Sang Kim , Chang-Gun Lee , Minsuk Lee, Cache-Conscious Limited Preemptive Scheduling, Real-Time Systems, v.17 n.2-3, p.257-282, Nov. 1999
Yudong Tan , Vincent J. Mooney III, Timing Analysis for Preemptive Multi-Tasking Real-Time Systems with Caches, Proceedings of the conference on Design, automation and test in Europe, p.21034, February 16-20, 2004
Yudong Tan , Vincent Mooney, Timing analysis for preemptive multitasking real-time systems with caches, ACM Transactions on Embedded Computing Systems (TECS), v.6 n.1, February 2007
Jan Staschulat , Rolf Ernst, Scalable precision cache analysis for preemptive scheduling, ACM SIGPLAN Notices, v.40 n.7, July 2005
Zhang , Chandra Krintz, Adaptive code unloading for resource-constrained JVMs, ACM SIGPLAN Notices, v.39 n.7, July 2004
Johan Strner , Lars Asplund, Measuring the cache interference cost in preemptive real-time systems, ACM SIGPLAN Notices, v.39 n.7, July 2004
Jan Staschulat , Rolf Ernst, Multiple process execution in cache related preemption delay analysis, Proceedings of the 4th ACM international conference on Embedded software, September 27-29, 2004, Pisa, Italy
Sungpack Hong , Sungjoo Yoo , Hoonsang Jin , Kyu-Myung Choi , Jeong-Taek Kong , Soo-Kwan Eo, Runtime distribution-aware dynamic voltage scaling, Proceedings of the 2006 IEEE/ACM international conference on Computer-aided design, November 05-09, 2006, San Jose, California
Chang-Gun Lee , Kwangpo Lee , Joosun Hahn , Yang-Min Seo , Sang Lyul Min , Rhan Ha , Seongsoo Hong , Chang Yun Park , Minsuk Lee , Chong Sang Kim, Bounding Cache-Related Preemption Delay for Real-Time Systems, IEEE Transactions on Software Engineering, v.27 n.9, p.805-826, September 2001
Nikil Dutt , Alex Nicolau , Hiroyuki Tomiyama , Ashok Halambi, New directions in compiler technology for embedded systems (embedded tutorial), Proceedings of the 2001 conference on Asia South Pacific design automation, p.409-414, January 2001, Yokohama, Japan | cache memory;fixed-priority scheduling;preemption;schedulability analysis;real-time system |
279654 | Checkpointing Distributed Shared Memory. | Distributed shared memory (DSM) is a very promising programming model for exploiting the parallelism of distributed memory systems, because it provides a higher level of abstraction than simple message passing. Although the nodes of standard distributed systems exhibit high crash rates, only very few DSM environments have some kind of support for fault-tolerance. In this article, we present a checkpointing mechanism for a DSM system that is efficient and portable. It offers some portability because it is built on top of MPI and uses only the services offered by MPI and a POSIX compliant local file system. As far as we know, this is the first real implementation of such a scheme for DSM. Along with the description of the algorithm we present experimental results obtained in a cluster of workstations. We hope that our research shows that efficient, transparent and portable checkpointing is viable for DSM systems. | INTRODUCTION
Distributed Shared Memory (DSM) systems provide the shared memory programming
model on top of distributed memory systems (i.e. distributed memory multiprocessors or
networks of workstations). DSM is appealing because it combines the performance and
scalability of distributed memory systems with the ease of programming of shared-memory
machines.
Distributed Shared Memory has received much attention in the past decade and several DSM
systems have been presented in the literature [Eskicioglu95][Raina92][Nitzberg91]. However,
most of the existing prominent implementations of DSM systems do not provide any support
for fault-tolerance [Carter91] [Keleher94] [Li89] [Johnson95]. This is a limitation that we
wanted to overcome in our DSM system.
When using parallel machines and/or workstation clusters the user should be aware that the
likelihood of a processor failure increases with the number of processors, and the failure of
just one process(or) will lead to the crash or hang-up of the whole application. Distributed
systems represent a cost-effective solution for running scientific computations, but at the same
time they are more vulnerable to the occurrence of failures. A study presented in [Long95]
shows that we can expect (in average) a failure every 8 hours in a typical distributed system
composed by 40 workstations, where each machine exhibits an MTBF of 13 days. If a
program that runs on such a system takes more than 8 hours to execute then it would be very
difficult to finish its execution, unless there is some fault tolerance support to assure the
continuity of the application. Parallel machines are considerably more stable, but even so they
present a relatively low MTBF. For instance, the MTBF of a large parallel machine like the
Intel Paragon XP/S 150 (with 1024 processors) from Oak Ridge National Laboratory is in the
order of 20 hours [ORNL95].
For the case of long-running scientific applications it is essential to have a checkpointing
mechanism that would assure the continuity of the application despite the occurrence of
failures. Without such mechanism the application would have to be restarted from scratch and
this can be very costly for some applications.
In this paper, we will present a checkpointing scheme for DSM systems that despite being
transparent to the application it is quite general, portable and efficient. The scheme is quite
general because it does not depend of any specific feature of our DSM system. In fact, our
system implements several protocols of consistency and models of consistency, but the
checkpointing scheme was made independent of the protocols.
The scheme is also quite portable, since it was implemented on top of MPI and nothing was
changed inside the MPI layer. This is why we call it DSMPI [Silva97]. The scheme only
requires a POSIX compliant file system, and makes use of the likckpt tool [Plank95] for taking
the local checkpoints of processes. That tool works for standard UNIX machines.
Finally, DSMPI is an efficient implementation of checkpointing. Usually, efficiency depends
not only on the write latency to the stable storage but also on the characteristics of the checkpointing
protocol. While the first feature mainly depends on the underlying system, the second one is
under our control. We have implemented a non-blocking coordinated checkpointing
algorithm. It does not freeze the whole application while the checkpoint operation is being
done, as blocking algorithms do. The only problem of non-blocking algorithms over the
blocking ones is the need to record in stable storage some of the in-transit messages that cross
the checkpoint line. However, we have exploited the semantics of DSM messages and we
achieved an important optimization: no in-transit message has to be logged in stable storage.
Some results were taken using a distributed stable storage and we have observed a maximum
overhead of 6% for an extremely short interval between checkpoints of 2 minutes. With a
more realistic interval, in the order of tens of minutes or even hours, the overhead would fall to
insignificant values.
The rest of the paper is organized as follows: section 2 describes the general organization of
DSMPI and its protocols. Section 3 presents the transparent scheme that is based on a non-blocking
coordinated checkpointing algorithm. Section 4 compares our algorithm with other
schemes. Section 5 presents some performance results, and finally section 6 concludes the
paper.
2.
OVERVIEW
OF DSMPI
This section gives a brief description about DSMPI [Silva97].
2.1 Main Features
DSMPI is a parallel library implemented on top of MPI [MPI94]. It provides the abstraction
of a globally accessed shared memory: the user can specify some data-structures or variables
to be shared and that shared data can be read and/or written by any process of an MPI
application.
The most important guidelines that we took into account during the design of DSMPI were:
1. assure full portability of DSMPI programs;
2. provide an easy-to-use and flexible programming interface;
3. support heterogeneous computing platforms;
4. optimize the DSM implementation to allow execution efficiency.
5. provide support for checkpointing.
For the sake of portability DSMPI does not use any memory-management facility of the
operating system, neither requires the use of any special compiler or linker. All shared data
and the read/write operations should be declared explicitly by the application programmer.
The sharing unit is a program variable or a data structure. DSMPI can be classified as a
structure-based DSM as opposed to page-based DSM systems, like IVY [Li89].
It does not incur in the problem of false sharing because the unit of shared data is completely
related to existing objects (or data structures) of the application. It allows the use of
heterogeneous computing platforms since the library knows the exact format of each shared
data object. Most of the other DSM systems are limited to homogeneous platforms.
DSMPI allows the coexistence of both programming models (message passing and shared
data) within the same MPI application. This has been considered as a promising solution for
parallel programming [Kranz93].
Concerning absolute performance, we can expect applications that use DSM to perform
worse than their message passing counterparts. However, this is not always true. It really
depends on the memory-access pattern of the application and on the way the DSM system
manages the consistency of replicated data.
We tried to optimize the accesses to shared data by introducing three different protocols of
data replication and three different models of consistency that can be adapted to each
particular application in order to exploit its semantics. With such facilities we expect DSM
programs to be competitive with MPI programs in terms of performance. Some performance
results collected so far corroborate this expectation [Silva97].
2.2 Internal Structure
In DSMPI there are two kinds of processes: application processes and daemon processes.
The latter ones are responsible for the management of replicated data and the protocols of
consistency. Since the current implementations of MPI are not thread-safe we had to
implement the DSMPI daemons as separate processes. This is a limitation of the current
version of DSMPI that will be relaxed as soon as there is some thread-safe implementation of
MPI. All the communication between daemons and application processes is done by message
passing. Each application process has access to a local cache that is located in its own address
space where it keeps the copies of replicated data objects. The daemon processes maintain the
master copies of the shared objects. DSMPI maintains a two-level memory hierarchy: a local
cache and a remote shared memory that is located in and managed by the daemons. The
ownership of the data objects is implemented through a static distributed scheme.
2.3 DSM Protocols
We provided some flexibility in the accesses to shared data by introducing three different
protocols of data replication and three different models of consistency that can be adapted to
each particular application in order to exploit its semantics.
Shared objects can be classified in two main classes: single-copy or multi-copy. The multicopy
class replicates the object among all the processes that perform some read request on it.
In order to assure consistency of replicated data the system can use a write-invalidate protocol
or a write-update protocol [Stumm90A]. This is a parameter that can be tuned by the
application programmer.
In order to exploit execution efficiency, we have also implemented three different models of
consistency:
1. Sequential Consistency (SC), as proposed in the IVY system [Li89];
2. Release Consistency (RC), which implements the protocol of the DASH multiprocessor;
3. Lazy Release Consistency (LRC), which implements a protocol similar to the one proposed in the
TreadMarks system [Keleher94].
It has been shown that the LRC protocol is able to introduce some significant improvements
over the other two models. The flexibility provided by DSMPI in terms of different models
and protocols is an important contribution to the overall performance of DSMPI.
2.4 Programming Interface
The library provides a C interface and the programmer calls the DSMPI functions in the
same way it calls any MPI routine. The complete interface is composed of a small set of routines: it
includes routines for initialization and clean termination, object creation and declaration, read
and write operations, and routines for synchronization like semaphores, locks and barriers.
The full interface is described in [Silva97].
3. TRANSPARENT CHECKPOINTING ALGORITHM
When devising the transparent checkpointing algorithm for our DSM system we tried to
achieve some objectives, like transparency, portability, low performance overhead, low
memory overhead and the ability to tolerate partial and total failures of the system.
Satisfying all the previous guidelines is not an easy task and has not been possible in other
proposals. Most of the existing checkpointing schemes for DSM are mainly concerned with
transparency. Transparent recovery implemented at the operating system level or at the DSM-
layer is an attractive idea. However, those schemes involve significant modifications in the
system which make them very difficult to port to other systems. For the sake of portability,
checkpointing should be made independent of the underlying system as much as possible.
3.1 Motivations
Our checkpointing scheme is meant to be quite general because it does not depend on any
specific feature of our DSM system. Our system implements several protocols of consistency
and models of consistency, but the checkpointing scheme was made independent of the
protocols.
The scheme itself offers some degree of portability. It was implemented on top of MPI that
per se already provides a high-level of portability since it was accepted as the standard for
message-passing. Nothing was changed inside the MPI layer. The scheme only requires a
POSIX compliant file system, and makes use of the libckpt tool [Plank95] for taking the local
checkpoints of processes. That tool works for standard UNIX machines.
We are taking transparent checkpoints and this means that it is not possible to assure
checkpoint migration between machines with different architectures. However, the checkpoint
mechanism itself can be ported to other DSM environments.
3.2 Coordinated Checkpointing
Our guideline was to adapt some of the checkpointing techniques used in message passing
systems, since checkpointing mechanisms have been widely studied in message-passing
environments. Two methods for taking checkpoints are commonly used: coordinated
checkpointing and independent checkpointing. In the first method, processes have to
coordinate between themselves to ensure that their local checkpoints form a consistent system
state. Independent checkpointing requires no coordination between processes but it can result
in some rollback propagation. To avoid the domino effect and to reduce the rollback
propagation message logging is used together with independent checkpointing.
Independent checkpointing and message logging was not a very encouraging option, because
DSM systems generate more messages than message-passing programs. Thus, we have chosen
a coordinated checkpointing strategy for DSMPI. The reasons were manifold: it minimizes the
overhead during failure-free operation since it does not need to log messages; it limits the
rollback to the previous checkpoint; it avoids the domino-effect; it uses less space in stable
storage; it does not require a complex garbage-collection algorithm to discard obsolete
checkpoints; and finally, it is the most suitable solution to support job-swapping.
It was shown in [Elnozahy92][Plank94] that coordinated checkpointing is a very effective
solution for message-passing systems. Some experimental results have shown that the
overhead of synchronizing the local checkpoints is negligible when compared with the
overhead of writing the checkpoints to disk.
Implementing the checkpointing algorithm underneath the DSM layer and in a transparent
way to that layer is a possible alternative to provide fault-tolerance as was suggested in
[Carter93]. However, this would be a very simplistic approach since the DSM system
exchanges several messages not related to the application. Such approach would result in
extra overhead and it would not exploit the characteristics of the DSM system.
3.3 System Model
We assume that the DSM system only uses message passing to implement its protocols.
There is no notion of global time and there is no guarantee that processor clocks are
synchronized in some way.
Processors are assumed to be fail-stop: there is no provision to detect or tolerate any kind of
malicious failures. When a processor fails it stops sending messages and does not respond to
other parts of the system. Processor failures are detected by the underlying communication
system (MPI) through the use of time-outs. When one of the processes fails, the MPI layer
sends a SIGKILL to all the other processes (application and daemons) that will terminate the
complete application. MPI makes extensive use of static groups (in fact, the whole set of
processes belong at the beginning to a MPI_WORLD_COMM group). If some process fails,
some collective operations that involve the participation of the processes will certainly hang-up
the application. Thus, it makes sense that a failure of just one process should result in the
rollback of the entire application.
Communication failures are also dealt by the communication system (MPI). The underlying
message-passing system provides a reliable and FIFO point-to-point communication service.
Stable storage is implemented on a shared disk that is assumed to be reliable. If the disk is
attached to a central file server all the checkpoints become available to the working processors
of the system. If stable storage is implemented on the local disk of every processor (assuming
that each host has a disk) then the application can only recover if the failed host is also able to
recover.
3.4 Checkpoint Contents
A checkpoint in a DSM system should include: the DSM data (e.g. pages/objects and the
DSM directories) and the private data of each application.
Some schemes do not save the private data of processes and thus are not able to recover the
whole state of the computation [Stumm90B]. They leave to the application programmer the
responsibility of saving the computation state to assure the continuity of the application. Other
schemes assume, for the sake of simplicity, that private data and shared data are allocated in
DSM [Kermarrec95]. We depart from that assumption and consider that private data is not
allocated in the DSM system. Besides, some of the private data like processor registers,
program counter and process stack are certainly not part of the DSM data.
In our scheme we have to checkpoint the application processes as well as the DSM daemons
since they maintain most of the DSM relevant data. DSM daemons save the shared objects
and the associated DSM directories. Saved directories reflect the location of the default and
current owners of the shared objects. Some optimizations are made by the system: read-only
objects are checkpointed only once, and replicated shared objects are checkpointed only by
one daemon (the one that maintains its ownership).
Each process (application or daemon) saves its checkpoint into a separate file. A global
checkpoint is composed by N different checkpoint files and a status_file that keeps the status
of the checkpoint protocol execution. This file is maintained by the checkpointing coordinator
and is used during the phase of recovery to determine the last committed checkpoint, as well
as to ensure the atomicity in the operation of writing a global checkpoint.
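One simple way to obtain the atomicity mentioned above, sketched below with plain POSIX
calls, is to write the new status into a temporary file and then rename it over the old one, so
that a crash during the write leaves the previously committed status intact. The file layout
and names are our own illustration, not the actual DSMPI format.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Atomically record that global checkpoint 'cn' reached 'phase'
       (e.g. 0 = started, 1 = committed) in the status_file.          */
    int update_status_file(const char *dir, int cn, int phase)
    {
        char tmp[256], final[256], line[64];
        snprintf(tmp, sizeof tmp, "%s/status_file.tmp", dir);
        snprintf(final, sizeof final, "%s/status_file", dir);
        int n = snprintf(line, sizeof line, "checkpoint %d phase %d\n", cn, phase);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, line, n) != n || fsync(fd) != 0) {   /* force the data to disk */
            close(fd);
            return -1;
        }
        close(fd);
        return rename(tmp, final);   /* atomic replacement on a POSIX file system */
    }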
3.5 Checkpointing Algorithm
Since checkpointing schemes for message-passing systems are well established, we
decided to adopt one of the most widely used techniques to implement non-blocking global
checkpointing [Elnozahy92][Silva92].
The main difficulty of implementing non-blocking coordinated checkpoint is to guarantee
that the global saved state is consistent. Messages in-transit at the time of a global checkpoint
are the main concern for saving a consistent snapshot of a distributed application.
For instance, messages that are sent after the checkpoint of the sending process and
received before the checkpoint of the receiver are called orphan messages [Silva92]. Such
messages violate the consistency of the global checkpoint, and thus should be avoided by
the algorithm. Messages that are sent before the checkpoint of the sender and
received after the checkpoint of the destination process are called missing messages. Usually
the algorithm should keep track of their occurrence and replay them during the recovery
operation.
However, we have considered some features of the DSM system that were important for the
implementation of the algorithm, namely: the interaction between processes is not done by
explicit messages but through object invocations that are RPC-like interactions; there is both
shared and replicated data; the DSM system exchanges some additional messages that are only
related to the DSM protocols and do not affect the application directly; and finally, there is a
DSM directory that is maintained throughout the system.
These features were taken into account and some of them were exploited to introduce some
optimizations. The resulting scheme presents a novel but important feature over those non-blocking
algorithms oriented to message-passing: it does not need to record any cross-
checkpoint message in stable storage.
The operation of checkpointing is triggered periodically by a timer mechanism. One of the
daemons acts like the coordinator (the Master daemon) and is responsible for initiating a
global checkpoint and coordinating the steps of the protocol. Only one process is given the
right to initiate a checkpointing session in order to avoid multiple sessions and an
uncontrolled high-frequency of checkpoint operations. Since there is always one DSMPI
daemon that is elected as the Master (during the startup phase) we can guarantee that if this
daemon is the checkpoint coordinator, checkpointing will be adequately spaced in time.
Each global checkpoint is identified by a monotonically increasing number - the Checkpoint
Number (CN). In the first phase of the protocol, the coordinator daemon increments its own
CN and broadcasts a "TAKE_CHKP" message to all the other daemons and application
processes. Upon receiving this message each of the other processes takes a tentative
checkpoint, increments the local CN and sends a message "TOOK_CHKP" to the coordinator.
After taking the tentative checkpoint, every process is allowed to continue with its
computation. The application does not need to be frozen during the execution of the
checkpointing protocol. This is an important feature to avoid interference with the application
and to reduce the checkpointing overhead.
In the second phase of the protocol, the daemon broadcasts a "COMMIT" message after
receiving all the responses (i.e. "TOOK_CHKP") from the participants. Upon receiving a
"COMMIT" message the tentative checkpoints are transformed into permanent checkpoints and
the previous checkpoint files are deleted. All these phases are recorded into the status_file by
the Master Daemon.
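The two phases can be outlined as follows; the sketch assumes hypothetical transport and status_file objects standing in for the corresponding MPI calls and file operations, so it is only a schematic view of the protocol, not DSMPI code.

    def coordinator_checkpoint_round(cn, peers, transport, status_file):
        # Phase 1: start a new global checkpoint.
        cn += 1
        status_file.write(cn, "STARTED")
        for p in peers:
            transport.send(p, ("TAKE_CHKP", cn))
        # Participants checkpoint tentatively and keep computing; the
        # coordinator only waits for their acknowledgements.
        pending = set(peers)
        while pending:
            sender, (tag, msg_cn) = transport.recv()
            if tag == "TOOK_CHKP" and msg_cn == cn:
                pending.discard(sender)
        # Phase 2: make the tentative checkpoints permanent.
        for p in peers:
            transport.send(p, ("COMMIT", cn))
        status_file.write(cn, "COMMITTED")
        return cn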
Usually, the broadcast message "TAKE_CHKP" is received by all the processes in some
order that preserves the causality. However, due to the asynchrony of the system it is possible
that some situations may violate the causal consistency.
An important aspect is that every shared object is tagged with the local CN value and every
message sent by the DSM system is tagged with the CN of the sender.
The CN value piggybacked in the DSM messages prevents the occurrence of orphan
messages: if a daemon receives a message with higher CN than the local one, then it has to
take a tentative checkpoint before consuming that message and changing its internal state. If
later on, it receives a "TAKE_CHKP" message tagged with an equal CN, then it discards that
message since the corresponding tentative checkpoint was already taken.
The CN value is also helpful in identifying missing messages: if an incoming message
carries a CN value lower than the current local one, it means it was sent in the previous
checkpoint interval and is a potential missing message.
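The CN comparison rule amounts to a few lines of logic; in the sketch below the process object and its take_tentative_checkpoint() and deliver() routines are assumed names, used only to make the orphan/missing classification explicit.

    def on_dsm_message(proc, msg_cn, msg):
        if msg_cn > proc.cn:
            # Potential orphan: the sender already belongs to checkpoint
            # interval msg_cn, so checkpoint first, then consume the message.
            proc.take_tentative_checkpoint()
            proc.cn = msg_cn
        elif msg_cn < proc.cn:
            # Potential missing message, sent in a previous interval.
            # DSMPI does not log it; the DSM protocol semantics make a
            # replay unnecessary (see Cases 1-3 below).
            pass
        proc.deliver(msg)

    def on_take_chkp(proc, msg_cn):
        # Discard the order if the CN rule above already forced the checkpoint.
        if msg_cn > proc.cn:
            proc.take_tentative_checkpoint()
            proc.cn = msg_cn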
The checkpointing algorithm has to distinguish between the messages used by the read/write
operations and the other messages used by the DSM protocols. Using the semantics of DSM
protocols we can avoid the unnecessary logging of some potential missing messages.
Daemon processes run in a cycle and when they receive a "TAKE_CHKP" message they take
a local snapshot of their internal state, including the DSM directories. Application processes
get "TAKE_CHKP" orders when they execute DSMPI routines: whenever they read from the
local cache, perform some remote object invocation, or access some synchronization variable.
All the invocations on shared objects or synchronization variables involve a two-way
interaction: invocation and response. During the period that a process is waiting for a response
it remains blocked and thus does not change its internal state. The only interactions that do not
fit in this structure are messages related to the DSM protocols. Usually these messages are
originated by the daemon processes and do not require an RPC-like interaction. For the sake
of clarity, let us distinguish between three different cases:
Case 1: Messages Process-to-Daemon
These interactions are started by an application process that wants to perform a read/write
operation into a shared object or gain access to a synchronization variable. Each process
maintains a directory with the location of the owners of the objects, locks, semaphores and
barriers. If it wants to access any of them it sends an invocation message to the respective
daemon. Then it blocks while waiting for the response. Let us consider two different scenarios
that should be handled by the algorithm:
1.1 - The process (P i ) is running in the checkpoint interval N, but the daemon (D k ) is still
running in the previous checkpoint interval (i.e. its CN equals N-1);
1.2 - The process (P i ) has its CN equal to (N-1) and the daemon has already taken its N th
tentative checkpoint (CN=N);
An example of the first scenario is illustrated in Figure 1. Process P i already took its N th
checkpoint and performs a read access to a remote object that is owned by daemon D k that has
not yet received the corresponding "TAKE_CHKP" message. The "READ" message carries CN
equal to N: the daemon checks that and takes a local checkpoint before consuming the
message. Instead of a READ operation, it can be a WRITE, LOCK, UNLOCK, WAIT, SIGNAL
or BARRIER invocation. All these operations have an associated acknowledge or reply
message.
Figure 1: Forcing a checkpoint in the DSM daemon.
Figure 2 represents the second scenario, where the daemon has already taken its Nth
checkpoint but the process started a read transaction in the previous checkpoint interval.
When it receives the "READ_REPLY" the process realizes it has to take a local checkpoint
and increment its local CN before consuming the message and continue with the computation.
Figure 2: Forcing a checkpoint in the application process.
However, this operation is not enough since the checkpoint CP(i,N) does not record the
"READ" invocation message. During recovery the process has to repeat that read transaction
again. To do that, the checkpoint routine has to record the "READ" invocation message in the
contents of the checkpoint. In some sense, we can say that there is a logical checkpoint
immediately before the sending of the "READ" message as represented in Figure 3. At the
time of the checkpoint, that invocation message belongs to the address space of the process.
Therefore, it is already included in the checkpoint. The only concern was to re-direct the
starting point after recovery to some point in the code immediately before the sending of the
message. For that purpose, a label was included in every DSMPI routine. After returning from
a restart operation the control flow jumps to that label, and the invocation message is re-sent.
Thus, we can assure that the read transaction is repeated from its beginning.
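Python has no labels, so the sketch below emulates the same restart point with a loop: the invocation is (re)built and (re)sent at the top of each iteration, which plays the role of the label placed before the send in each DSMPI routine. The owner_of helper and the proc methods are assumed names.

    def dsm_read(proc, object_id):
        # The top of this loop corresponds to the logical checkpoint of
        # Figure 3: after a rollback the routine is re-entered here, the READ
        # invocation is re-sent, and the whole transaction is repeated.
        while True:
            proc.send(owner_of(proc, object_id), ("READ", proc.cn, object_id))
            reply = proc.recv()                   # blocks; state does not change
            if reply is None:                     # e.g. interrupted by a rollback
                continue
            tag, msg_cn, payload = reply
            if msg_cn > proc.cn:                  # reply from a later interval
                proc.take_tentative_checkpoint()  # forced checkpoint (Figure 2)
                proc.cn = msg_cn
            return payload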
Figure 3: The notion of a logical checkpoint.
Case 2: Messages Daemon-to-Process
These messages are started by the daemon processes and are related to the DSM protocols.
Usually they are INVALIDATE or UPDATE messages, depending on the replication protocol.
These messages do not follow an RPC-like structure and thus do not block the sending
daemon. Application processes consume this sort of message when they execute the cache
refresh procedure. We can also identify two different scenarios:
2.1- A protocol message that can be a potential orphan message;
2.2- A protocol message that can be a potential missing message.
The first situation is represented in Figure 4: process P i receives a message that carries a
higher CN than the local one, and is forced to take a checkpoint before proceeding.
Figure 4: Potential orphan message.
The second scenario is illustrated in Figure 5: the INVALIDATE or UPDATE message is
sent in the previous checkpoint interval and received by the application process after taking its
local checkpoint. This is, theoretically, the example of a missing message and in the normal
case it would have to be recorded in stable storage in order to be replayed in case of recovery.
However, we do not need to log these missing INVALIDATE/UPDATE messages. There is
absolutely no problem if the message is not replayed in case of a rollback. The reason is
simple: during the recovery procedure, every application process has to clean its local cache.
After that, if the process accesses some shared object it has to perform a remote operation to
the owner daemon from where it gets the most up-to-date version of the object.
Figure 5: Example of a missing message.
Case 3: Messages Daemon-to-Daemon
When using a static distributed ownership scheme, daemons only communicate among
themselves during the startup phase and during the creation of new objects, which also happens
during the startup phase. With a dynamic ownership scheme, messages between daemons are
sent all the time since the ownership of an object can move from daemon to daemon. The
current version of DSMPI follows a static distributed scheme but the next version will provide
a dynamic distributed scheme as well.
We will consider a dynamic distributed ownership scheme similar to the one presented in
[Li89] where each shared data object has an associated owner that is changing as data
migrates throughout the system. When a process wants to write into a migratory object the
system changes its location to the daemon associated with that process. As object location can
change frequently during the program execution, every process and daemon has to maintain a
guess of the probable owner for each shared object. When a process needs a copy of the data
it sends a request to the probable owner. If this daemon has that data object it returns the data,
otherwise it forwards the request to the new owner. This forwarding can go further until the
current owner is found in the chain of the probable owners. The current owner will send the
reply to the asking process that receives the data and updates its value for the probable owner
if the reply was not received from the expected daemon.
Sometimes this scheme can be inefficient since the request may be forwarded many times
before reaching the current owner. This inefficiency can be reduced if all the daemons
involved in forwarding a request are given the identity of the current owner. We will consider
this optimization, which involves the sending of ownership update messages from the current
owner to the daemons belonging to the forward chain.
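A sketch of the forwarding chain with the ownership-update optimization is given below; the daemon attributes (owned, probable_owner, rank) and the message tags are assumptions used only to illustrate the flow of requests and updates.

    def handle_request(daemon, obj_id, requester, chain):
        if obj_id in daemon.owned:
            # Current owner found: reply to the requester and tell every daemon
            # in the forwarding chain who the real owner is (the optimization).
            daemon.send(requester, ("REPLY", obj_id, daemon.owned[obj_id]))
            for d in chain:
                daemon.send(d, ("OWNER_UPDATE", obj_id, daemon.rank))
        else:
            # Forward towards the probable owner, extending the chain.
            nxt = daemon.probable_owner[obj_id]
            daemon.send(nxt, ("REQUEST", obj_id, requester, chain + [daemon.rank]))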
Let us now see what are the implications of this forward-based scheme to the checkpointing
algorithm. The messages exchanged between daemons can be divided into three different
classes:
3.1- forward messages that are sent on behalf of read/write transactions;
3.2- owner_update messages to announce the current owner of some object;
3.3- the transfer of some object from a current owner to the next owner.
The first kind of messages follows the same rule stated for Case 1 (i.e. transactions started
by a process). That rule was explained with the help of Figures 1, 2 and 3. The only difference
is that all the daemon processes involved in the forwarding chain have to apply that rule.
The case of owner_update messages are treated in a similar way to Case 2: if the
owner_update message carries a CN higher than the CN at the destination daemon then that
message is a potential orphan message and should force a checkpoint at the destination before
proceeding. The rule for case 2.1 (explained in Figure 4) applies to this case.
If the owner_update message carries a lower CN than the destination daemon it corresponds
to potential missing message (like in Figure 5). In the normal case, it should be logged to be
replayed in case of recovery. However, once again we realized that there is no problem if we
do not log messages. The system will still be able to ensure a consistent recovery of the
application. During the recovery procedure the distributed directory will be reconstructed
through the use of broadcast to update the current ownership of the objects. This means that
those owner_update messages can be lost during recovery. The object directory will be
updated in any case.
The transfer of data from one daemon to the new owner is included in the forward protocol:
when the reply is sent from the current owner to the original process a copy of the data is also
sent (or is first sent) to the daemon associated to that process. This daemon will be the new
owner. The rules stated previously are also used in this case.
Since object ownership can change frequently during the execution of the checkpoint
protocol, care must be taken to avoid a shared object being checkpointed more than
once. We solve this problem in a very simple way: the current owner is responsible for
checkpointing the shared object. If it transfers the object to another daemon after taking its
checkpoint, that object will not be checkpointed again. The CN value tagged with each object
is used to prevent a migratory object from being checkpointed more than once.
To summarize, our checkpointing algorithm follows a non-blocking coordinated strategy. It
avoids the occurrence of orphan messages and detects the potential missing messages. We do
not log any missing message in stable storage but the system will still ensure a consistent
recovery of the application. To achieve this optimization we have exploited some of the
characteristics of the DSM protocols.
Our scheme works for sequential consistency and relaxed consistency models. It is also
independent of the replication protocol (write-update or write-invalidate). This fact allows a
wide applicability of this algorithm to other different DSM systems.
3.6 Recovery Procedure
In our case, the application recovery involves the roll back of all the processes to the
previous checkpoint. We do not see this as a drawback, but rather as an imposition of the
underlying communication system (MPI). Nevertheless, it suits well our goals: to use
checkpointing for job-swapping as well, and to tolerate any number of failures. Thus, the
recovery procedure is quite simple: all the processes have to roll back to the previously
committed checkpoint.
The last committed checkpoint is determined from the status_file. After
restoring the local checkpoints on each process, they still have to perform some actions before
re-starting the execution:
(i) the object location directory is constructed and updated through all the processes. In the
case of a static distribution, this operation can be bypassed;
(ii) for every shared object defined as multi-copy the owner daemon resets its associated
copy-set list;
(iii) each application process cleans its local private cache and updates the object location
directory, if necessary.
Only after these steps are processes allowed to resume their computation. Cleaning the
private cache of the processes during recovery does not introduce a visible overhead, and
allows a simpler operation during the checkpoint operation since some potential missing
messages exchanged on behalf of the DSM protocols do not need to be logged.
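The recovery steps (i)-(iii) map onto a short routine such as the following sketch, in which the status_file, the broadcast that rebuilds the directory, and the other attributes are assumed names.

    def recover(proc, status_file):
        cn = status_file.last_committed()
        proc.restore_checkpoint(cn)            # roll back to the committed checkpoint
        if proc.dynamic_ownership:
            proc.broadcast_ownership()         # (i) rebuild the object location directory
        if proc.is_daemon:
            for obj in proc.multi_copy_objects:
                proc.copy_set[obj].clear()     # (ii) reset the copy-set lists
        else:
            proc.cache.clear()                 # (iii) clean the private cache; missing
                                               # INVALIDATE/UPDATE messages need no replay
        proc.resume()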
4. Comparison with other schemes
Some other coordinated checkpointing algorithms have been proposed in the literature. The
algorithm presented in [Janakiraman94] extends the checkpoint/rollback operations only to
the processes that have communicated directly or indirectly with the initiator process. That
algorithm uses a 2-phase commit protocol during which all the processes participating in the
checkpoint session have to suspend their computations, and all the messages in-transit have to
be flushed to their destinations. Their algorithm waits for the completion of all on-going
read/write operations before proceeding with the checkpointing protocol. Only after all the
pending read/write operations have terminated do the processors begin sending their
checkpoints to stable storage. This may result in a higher checkpoint latency and performance
overhead since they use a blocking strategy.
An implementation of consistent checkpointing in a DSM system is presented in
[Cabillic95]. Their approach relies on the integration of global checkpoints with synchronization
barriers of the application. The scheme was implemented on top of the Intel Paragon and
several optimizations were included, like incremental, non-blocking and pre-flushing
checkpointing techniques. They have shown that copy-on-write checkpointing can be an
important optimization to reduce the checkpointing overhead. In the recovery operation of that
scheme all the processes are forced to roll back to the last checkpoint, as in our case. The only
limitation of this scheme is that it does not work with all applications: if there is no barrier()
within an application the system is never able to checkpoint.
[Costa96] also presents a similar checkpointing scheme that relies on the garbage-collection
mechanism to achieve a globally consistent state of the system. It is based on a fully blocking
checkpointing approach.
A globally consistent checkpointing mechanism for the Orca parallel language was
presented in [Kaashoek92]. It was easy to implement because that DSM implementation is based
on total-order broadcast communication. All the processes receive all broadcast messages in
the same order to assure consistency of updates in replicated objects. The checkpointing
messages are also broadcasted and inserted in the total order of messages. This ensures the
consistency of the global checkpoint. Unfortunately, MPI does not have that characteristic.
[Choy95] presented a definition for consistent global states in sequentially consistent shared
memory systems. They have also presented a lazy checkpoint protocol that assures global
consistency. However, lazy checkpointing schemes may result in a high checkpoint latency,
which is not desirable for job swapping purposes.
Other different recovery schemes not based on coordinated checkpointing were also
presented in the literature. Some of them [Wu89][Janssens93] were based on communication-
induced checkpointing: every process is allowed to take checkpoints independently but, before
communicating with another one, they are forced to checkpoint in order to avoid rollback
propagation and inconsistencies. Communication-induced checkpointing is sensitive to the
frequency of inter-process communication or synchronization in the application. This may
introduce a high performance overhead and an uncontrolled checkpoint frequency.
Another solution for recovery is based on independent checkpointing and message logging
[Richard93]. However, we did not find this option very encouraging because DSM systems
generate more messages than message passing programs. Even considering some possible
optimizations [Suri95], message logging would incur a significant additional performance
and memory overhead.
A considerable set of proposals [Wilkinson93][Neves94][Stumm90B][Brown94]
[Kermarrec95] are only able to tolerate single processor failures in the system. While this
goal is meaningful for distributed systems, where we can expect that machine failures are
uncorrelated, the same is not true for parallel machines where total or multiple failures are as
likely as partial failures. We require our checkpointing mechanism to be able to tolerate any
number of failures.
Although those different approaches could be interesting for other systems, we did not find
them the most suitable for our system and we decided to adopt a coordinated checkpointing
strategy.
5. Performance Results
In this section we present some results about the performance and memory overhead of our
transparent checkpointing scheme. The results were collected in a distributed system
composed of 4 Sun Sparc4 workstations connected by a 10 Mb/s Ethernet.
5.1 Parallel Applications
To conduct the evaluation of our algorithm we used the following six typical parallel
applications
. TSP: solves the Traveling Salesman Problem using a branch-and-bound algorithm.
. NQUEENS: solves the placement problem of N-queens in a N-size chessboard.
. SOR: solves Laplace's equation on a regular grid using an iterative method.
. GAUSS: solves a system of linear equations using the method of Gauss-elimination.
. ASP: solves the All-pairs Shortest Paths problem using Floyd's algorithm.
. NBODY: this program simulates the evolution of a system of many bodies under the
influence of gravitational forces.
1 For lack of space we refer the interested reader to [Silva97] for more details about the applications.
5.2 Performance Overhead
We have made some experiments with the transparent checkpointing algorithm in a
dedicated network of Sun Sparc workstations. Every processor has a local disk and access to a
central file server through an Ethernet network. To take a local checkpoint of each process we
used the libckpt tool in its fully transparent mode [Plank95]. None of the optimizations of that
tool were used.
Two levels of stable storage were used: the first level used the local disks of the processors,
while the second level used a central server that is accessible to all the processors through the
NFS protocol. Writing checkpoints to the local disks is expected to be much faster than
writing to a remote central disk. However, the first scheme of stable storage is only able to
recover from transient processor failures. If a processor fails in a permanent way and is not
able to restart, then its checkpoint can not be accessed by any other processor of the network
and recovery becomes impossible. The central disk does not have this problem (assuming the
disk itself is reliable).
Considering that stable storage is implemented on a central file server, Table 1 shows the
time to commit and the corresponding overhead per checkpoint for all the applications.
Usually, the time it takes to commit a global checkpoint is higher than the overhead produced.
This is because the algorithm follows a non-blocking approach and the application processes
do not need to wait for the completion of the protocol. If the algorithm were based on a
blocking approach, the overhead per checkpoint would be roughly equal to the whole time it
takes to commit. So, in the Table we can observe that, in the overall, the non-blocking nature
of the algorithm allows some reduction in the checkpoint overhead.
Application      Chkp Size (Kbytes)   Time to Commit (sec)   Overhead (sec)
GAUSS (1024)     8500                 130.763                128.832
Table 1: Time to commit and overhead per checkpoint using the central disk.
The time to take a checkpoint depends basically on four factors: (i) the size of the
checkpoint; (ii) the access time to stable storage; (iii) the synchronization structure of the
application; (iv) and the granularity of the tasks.
Checkpoint operations are only performed inside DSMPI routines. This means that if an
application is very asynchronous and coarse-grain it takes some time more to perform a global
checkpoint when compared with a more synchronous application. These factors are important
but, in practice, the dominant factor is actually the operation of writing the checkpoint files to
stable storage. Reducing the size of the checkpoints is a promising solution to attenuate the
performance overhead. Another way is to use a stable storage with faster access.
Table
2 shows the overhead per checkpoint considering the two different levels of stable
storage. As can be seen, the difference between the figures is considerable: in some cases it is
more than one order of magnitude. Using the Ethernet and the NFS central file server is really
a bottleneck for the checkpointing operation. Nevertheless, it ensures a global accessible
stable storage device where checkpoints can be made available even in the occurrence of a
permanent failure of some processor.
Application      Chkp Size (Kbytes)   Overhead (sec), local disk   Overhead (sec), central disk
GAUSS (1024)     8500                 1.186                        128.832
GAUSS (2048)     35495                4.284                        1127.654
Table 2: Overhead per checkpoint (local vs. central disk).
Table 3 shows the difference in the overall performance overhead considering the two levels
of stable storage and different intervals between checkpoints. We present the results for the
SOR application, which was executed for an average time of 4 hours.
Table 3: Total performance overhead (local vs. central disk). (Columns: application, interval
between checkpoints, and total overhead for the local and the central disk.)
The average overhead for checkpointing can be tuned by changing the checkpoint interval. In
Table 3 we can see that the maximum overhead observed when using the local disk was 6.4%.
The corresponding overhead with the central file server was up to 332%. This shows that if
we consider a distributed stable storage scheme the performance can become interesting.
Nevertheless, two minutes is a very conservative interval between checkpoints. Long-running
applications do not need to be checkpointed so often and 20 minutes is a more
acceptable interval. For this case, the performance overhead when using the local disks was
0.6%, which is a very small value. The same interval with the central disk as stable storage
presented an overhead of 30.7 %.
An interesting strategy would be the integration of both stable storage levels: that is, the
application is checkpointed periodically to the central server, and in the meantime it can also
be checkpointed to the local disks of the processors. If the application fails due to a transient
perturbation and all the processors are able to restart, then they can recover from the
checkpoints saved in each local disk (if these correspond to the last committed checkpoint).
If one of the processors is affected by a permanent outage, then the application can be
restarted from the last checkpoint located in the central disk.
A possible solution to make the distributed stable storage scheme resilient to a permanent
failure of one processor is to implement a logical ring in which each processor copies its
local checkpoint file to the next processor's disk. This can be done concurrently, after the
global checkpoint has been committed. This lazy update scheme would not introduce any delay
in the commit operation: only some additional traffic in the network, which can be regulated
if we use a token-based policy and perform each remote checkpoint file copy sequentially.
Obviously, if we want to tolerate n permanent processor failures, we have to replicate each
checkpoint file on n+1 local disks of the network.
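A lazy ring replication of the checkpoint files could be implemented along the lines of the sketch below; the file naming scheme, the degree of replication n and the disk_path_of mapping are assumptions.

    import shutil, os

    def replicate_checkpoint(rank, nprocs, cn, n, local_dir, disk_path_of):
        # After global checkpoint 'cn' is committed, copy this process's
        # checkpoint file to the disks of the next n processors in the ring.
        src = os.path.join(local_dir, "chkp_%d_%d.ckp" % (rank, cn))
        for step in range(1, n + 1):
            neighbour = (rank + step) % nprocs
            dst_dir = disk_path_of(neighbour)   # e.g. an NFS mount of that host's disk
            shutil.copy(src, os.path.join(dst_dir, os.path.basename(src)))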
We measured the performance overhead when using both levels of stable storage and some
of the results are presented in Figure 6. For each checkpoint in the central disk we performed
K checkpoints to the local disks. The factor K was changed from 0 up to 10. Figure 6 shows
the overhead reduction for the SOR application with 512x512 grid points.
Figure 6: Two-level stable storage (SOR 512); overhead (%) plotted against the factor K.
For instance, if the user wants an overhead lower than 5% then the factor K should be 9, 3, 1
and 0 when using a checkpoint interval of 2, 5, 10 and 20 minutes, respectively.
If a permanent failure occurs in one of the processors of the system then in the worst case the
application will lose approximately 20 minutes of computation in any of the four previous
cases. The advantage still lies with an interval of 2 minutes and K equal to 9, since in the
occurrence of a transient failure it will lose less computation.
Figure 7 shows the corresponding values for the SOR application with 1024x1024 grid points.
Figure 7: Two-level stable storage (SOR 1024); overhead (%) plotted against the factor K.
The same analysis can be done, considering a watermark of 10% for the performance
overhead: when checkpointing the application with an interval of 2, 5, 10 and 20 minutes the
factor K should be 11, 4, 1 and 0, respectively. If the user requires an overhead lower than 5%
then K should be 8, 4 and 1, with an interval of 5,10 and 20 minutes, respectively.
6. Conclusions
As far as we know, this is the first implementation of a non-blocking coordinated algorithm
in a real DSM system. DSMPI provides different protocols and models of consistency, and
our algorithm works with all of them. The checkpointing scheme is general-purpose and can
be adapted to other DSM systems that use any protocol of replication or model of consistency.
Some results were taken considering a distributed stable storage scheme and we have
observed a maximum overhead of 6% for an interval between checkpoints of 2 minutes.
With a checkpoint interval of 20 minutes the performance overhead was 0.6%. The same
interval with the stable storage implemented in a central NFS-file server presented an
overhead of 30.7 %.
The algorithm herein presented offers an interesting level of portability and efficiency.
Still, we plan to enhance some of the features of DSMPI in the next release, which will be
implemented on MPI-2. We look forward to a thread-safe version of MPI in order to re-design
the DSMPI daemons and implement some of the optimization techniques proposed in
[Cabillic95]. We hope that this line of research will contribute to a standard
and flexible checkpointing tool that can be used in real production codes.
Acknowledgments
The work herein presented was conducted when the first author was a visitor at EPCC
(Edinburgh Parallel Computing Centre). The visit was made possible due to the TRACS
programme. The first author was supported by JNICT on behalf of the "Programa Ciência"
(BD-2083-92-IA).
7. References
--R
"Dynamic Snooping in a Fault-Tolerant Distributed Shared Memory"
"The Performance of Consistent Checkpointing in Distributed Shared Memory Systems"
"Implementation and Performance of Munin"
"Network Multicomputer Using Recoverable Distributed Shared Memory"
"On Distributed Object Checkpointing and Recovery"
"Lightweight Logging for Lazy Release Consistency Consistent Distributed Shared Memory"
"The Performance of Consistent Checkpointing"
"A Comprehensive Bibliography of Distributed Shared Memory"
"Coordinated Checkpointing-Rollback Error Recovery for Distributed Shared Memory Multicomputers"
"Relaxing Consistency in Recoverable Distributed Shared Memory"
"CRL: High-Performance All-Software Distributed Shared Memory"
"Transparent Fault-Tolerance in Parallel Orca Programs"
"TreadMarks: Distributed Shared Memory on Standard Workstations and Operating Systems"
"A Recoverable Distributed Shared Memory Integrating Coherence and Recoverability"
"Integrating Message-Passing and Shared- Memory: Early Experience"
"The Directory-based Cache Coherence Protocol for the DASH Multiprocessor"
"Memory Coherence in Shared Virtual Memory Systems"
"A Longitudinal Survey of Internet Host Reliability"
"A Message Passing Interface Standard"
"A Checkpoint Protocol for an Entry Consistent Shared Memory System"
"Distributed Shared Memory: A Survey of Issues and Algorithms"
data available in: http://www.
"Performance Results of ickp - A Consistent Checkpointer on the iPSC/860"
"Libckpt: Transparent Checkpointing Under Unix"
"Virtual Shared Memory: A Survey of Techniques and Systems"
"Using Logging and Asynchronous Checkpointing to Implement Recoverable Distributed Shared Memory"
"Global Checkpoints for Distributed Programs"
"Implementation and Performance of DSMPI"
"Algorithms Implementing Distributed Shared Memory"
"Fault-Tolerant Distributed Shared Memory Algorithms"
"Reduced Overhead Logging for Rollback Recovery in Distributed Shared Memory"
"Implementing Fault-Tolerance in a 64-bit Distributed Operating System"
"Recoverable Distributed Shared Virtual Memory: Memory Coherence and Storage Structures"
--TR
Memory coherence in shared virtual memory systems
Algorithms Implementing Distributed Shared Memory
Distributed Shared Memory
Implementation and performance of Munin
Transparent fault-tolerance in parallel Orca programs
Integrating message-passing and shared-memory
A checkpoint protocol for an entry consistent shared memory system
CRL
On distributed object checkpointing and recovery
Lightweight logging for lazy release consistent distributed shared memory
The directory-based cache coherence protocol for the DASH multiprocessor
The performance of consistent checkpointing in distributed shared memory systems
A longitudinal survey of Internet host reliability
A Recoverable Distributed Shared Memory Integrating Coherence and Recoverability
Reduced Overhead Logging for Rollback Recovery in Distributed Shared Memory
Virtual Shared Memory: A Survey of Techniques and Systems | portability;fault-tolerance;checkpointing;distributed shared memory |
280273 | Using permutations in regenerative simulations to reduce variance. | We propose a new estimator for a large class of performance measures obtained from a regenerative simulation of a system having two distinct sequences of regeneration times. To construct our new estimator, we first generate a sample path of a fixed number of cycles based on one sequence of regeneration times, divide the path into segments based on the second sequence of regeneration times, permute the segments, and calculate the performance on the new path using the first sequence of regeneration times. We average over all possible permutations to construct the new estimator. This strictly reduces variance when the original estimator is not simply an additive functional of the sample path. To use the new estimator in practice, the extra computational effort is not large since all permutations do not actually have to be computed as we derive explicit formulas for our new estimators. We examine the small-sample behavior of our estimators. In particular, we prove that for any fixed number of cycles from the first regenerative sequence, our new estimator has smaller mean squared error than the standard estimator. We show explicitly that our method can be used to derive new estimators for the expected cumulative reward until a certain set of states is hit and the time-average variance parameter of a regenerative simulation. | INTRODUCTION
The regenerative method is a simulation-output-analysis technique for estimating
certain performance measures of regenerative stochastic systems; see [Crane and
Iglehart 1975]. The basis of the approach is to divide the sample path into i.i.d.
segments (cycles), where the endpoints of the segments are determined by a sequence
of stopping times. Many stochastic systems have been shown to be regenerative
[Shedler 1993], and the regenerative method results in asymptotically valid
confidence intervals.
In this paper we propose a new simulation estimator for a performance measure
of a regenerative process having two different sequences of regeneration times, and
study its small-sample behavior. The idea of our approach is as follows. First simulate
a fixed number of regenerative cycles from the first sequence of regeneration
times, and compute one estimate. We construct another estimator by dividing up
the original sample path into segments with endpoints given by the second sequence
of regeneration times, and creating a new sample path by permuting the segments
(except for the initial and final segments). We then compute a second estimate of α
from the new permuted path. We show that this estimate has the same distribution
as the original one. Our new estimator is finally constructed as the average of the
estimates over all possible permutations. This strictly reduces variance when the
estimator is not a purely additive function of the sample path. We show that to
compute our new estimators, one does not have to actually calculate all permutations
and the average over all of them. Instead, we derive formulas for the new
estimators, where the expressions can be easily computed by accumulating some
extra quantities during the simulation. The storage requirements of our methods
are fixed and do not grow as the simulation run length increases. Hence, there is
little extra computational effort or storage needed to construct our new estimators.
For a run length of any fixed number of cycles from the first regenerative se-
quences, the new estimator has the same expected value as the standard estimator
and lower variance; thus, it has lower mean squared error. While it turns out that
our method has no effect on the standard regenerative ratio estimator for certain
steady-state performance measures, the basic technique can still be beneficially applied
to a rich class of other performance measures, and in this paper, we consider
three specific examples.
First, we derive a new estimator for the second moment of the cumulative reward
during a regenerative cycle. We show that the standard regenerative variance
estimator fits into this framework. Hence, our estimator will result in a variance
estimator having no more variability than the standard one. This is important
because one measure of the quality of an output-analysis methodology is the variability
of the half-width of the resulting confidence interval [Glynn and Iglehart
1987], which is largely influenced by the variance of the variance estimator.
We also construct a new estimator for the cumulative reward until some set of
states is first hit, which includes the mean time to failure as a special case. Here,
the performance measure can be expressed as a ratio of expectations, and we apply
our technique to the numerator and denominator separately.
In some sense our method reuses the collected data to construct a new estimator,
and as such, it is related to other statistical techniques. For example, the bootstrap
[Efron 1979] takes a given sample and resamples the data with replacement. In
contrast, one can think of our approach as resampling the data without replacement
(i.e., permuting the data), and then averaging over all possible resamples. Other
related methods include U-statistics (Chapter 5 of [Serfling 1980]), V -statistics [Sen
1977], and permutation tests (e.g., [Conover 1980]).
The rest of the paper is organized as follows. In Section 2, we discuss our assumptions
and the standard estimator of a generic performance measure α. We present
the basic idea of how to construct our new estimator using a simple example in
Section 3. Section 4 contains a more formal description of our method. Section 5
describes the new estimator for the second moment of the cumulative reward over a
regenerative cycle and shows how these results can be used to derive a new estimator
of the variance parameter arising in a regenerative simulation. We also discuss
here the special case of continuous-time Markov chains. In Section 6 we derive
new estimators for the expected cumulative reward until some set of states is hit.
We analyze the storage and computational costs of our new estimator in Section 7.
We present in Section 8 the results of some simulation experiments comparing our
new estimators with the standard ones. Section 9 discusses directions for future re-
search. Most of the proofs are collected in Appendix A. Also, we give pseudo-code
for one of our estimators in Appendix B. (Calvin and Nakayama [1997] present the
basic ideas of our approach, without proofs, in the setting of discrete-time Markov
chains.)
2. GENERAL FRAMEWORK
Let X = {X(t) : t ≥ 0} be a continuous-time stochastic process having sample paths that are right
continuous with left limits on a state space S ⊂ ℝ^d. Note that we can handle
discrete-time processes {Z_n : n = 0, 1, 2, ...} in this framework by letting
X(t) = Z_⌊t⌋ for all t ≥ 0, where ⌊a⌋ is the greatest integer less than or equal to a.
Let T = {T(i) : i ≥ 0} be an increasing sequence of nonnegative finite
stopping times. Consider the random pair (X, T) and the shift θ_s(X, T), which relabels time so that s becomes the new time origin.
We define the pair (X, T) to be a regenerative process (in the classic sense) if
(i). {θ_{T(i)}(X, T) : i ≥ 0} are identically distributed;
(ii). for any i ≥ 0, θ_{T(i)}(X, T) does not depend on the "prehistory" {X(s) : 0 ≤ s < T(i)} and T(0), ..., T(i).
See p. 19 of [Kalashnikov 1994] for more details. This definition allows for so-called
delayed regenerative processes (e.g., Section 2.6 of [Kingman 1972]).
Now let T_1 = {T_1(i) : i ≥ 0} and T_2 = {T_2(i) : i ≥ 0}
be two distinct increasing sequences of nonnegative finite stopping times such that
(X, T_1) and (X, T_2) are both regenerative processes. For example, if X is an ir-
reducible, positive-recurrent, discrete-time or continuous-time Markov chain on a
countable state space S, then we can define T_1 and T_2 to be the sequences of hitting
times to the states v ∈ S and w ∈ S, respectively, where we assume that X(0) = v
and w ≠ v.
Our goal is to estimate some performance measure α, which we will do by generating
a sample path segment X̃ = {X(t) : 0 ≤ t ≤ T_1(m_1)} consisting of a fixed number m_1
of regenerative 1-cycles of our regenerative process. Here, we use the terminology
"1-cycles" to denote cycles determined by the sequence T_1; i.e., the ith 1-cycle is
the path segment {X(t) : T_1(i − 1) ≤ t < T_1(i)}. We similarly define "2-cycles"
relative to the sequence T_2. Now we define the standard estimator of α based on
the sample path X̃ consisting of m_1 1-cycles to be
α̂(X̃) = h(X̃),   (1)
where h ≡ h_{m_1} is some function. This general framework includes many performance
measures of interest.
Example 1. Suppose
α = E[ ( ∫ from T_1(0) to T_1(1) of g(X(s)) ds )^p ]
for some function g : S → ℝ, where p ≥ 1. Then we can define h(X̃) by
h(X̃) = (1/m_1) Σ_{k=1}^{m_1} Y(g, k)^p,
where
Y(g, k) = ∫ from T_1(k − 1) to T_1(k) of g(X(s)) ds
for k ≥ 1. Note that here α̂(X̃) is an unbiased estimator of α. We will examine
this example with p = 2 in Section 5.
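For concreteness, a minimal sketch of the standard estimator of Example 1 is given below; representing each 1-cycle as a list of (holding time, g value) pairs is our own assumption about the data layout and is not prescribed by the general framework.

    def cycle_reward(cycle):
        # cycle: list of (duration, g_value) pairs within one 1-cycle
        return sum(dt * g for dt, g in cycle)

    def standard_estimator(cycles, p):
        # (1/m_1) * sum over k of Y(g,k)^p for the m_1 simulated 1-cycles
        m1 = len(cycles)
        return sum(cycle_reward(c) ** p for c in cycles) / m1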
Example 2. Suppose that
α = σ² = E[ (Y(f, 1) − r τ(1))² ] / E[ τ(1) ],   (3)
where for k ≥ 1, τ(k) = T_1(k) − T_1(k − 1), Y(f, k) is defined as in Example 1 with g replaced by a cost function f : S → ℝ, and r = E[Y(f, 1)] / E[τ(1)];
σ² now is the variance parameter arising from a regenerative simulation. (More details
are given in Section 5.1.) Then we can define h(X̃) by
h(X̃) = σ̂²(X̃) = ( Σ_{k=1}^{m_1} (Y(f, k) − r̂ τ(k))² ) / ( Σ_{k=1}^{m_1} τ(k) ),   (4)
where
r̂ = ( Σ_{k=1}^{m_1} Y(f, k) ) / ( Σ_{k=1}^{m_1} τ(k) ).
Note that σ̂²(X̃) is the standard regenerative estimator of σ². We will return to this
example in Section 5.1.
Example 3. Suppose we are interested in computing
η = E[ ∫ from 0 to Γ_F of g(X(s)) ds ],
where Γ_F = inf{t ≥ 0 : X(t) ∈ F} for some set of states F ⊂ S. Thus, η is the
expected cumulative reward until hitting F, starting from a regeneration point of the sequence T_1, and the mean
time to failure is a special case. It can be shown that
η = μ / γ,   (6)
where
μ = E[ ∫ from T_1(0) to min(Γ_F, T_1(1)) of g(X(s)) ds ]
and
γ = P( Γ_F < T_1(1) ),
with Γ_F as above; see [Goyal et al. 1992]. To estimate η, we generate
sample paths X̃_1 and X̃_2, each consisting of m_1 1-cycles, and we use X̃_1 to estimate
μ and X̃_2 to estimate γ. We can either let X̃_2 be independent of X̃_1 or take X̃_2 = X̃_1.
We examine the estimation of the numerator and denominator in (6) separately.
First, if we want to estimate α = μ, then we define the function h by
h(X̃) = μ̂(X̃) = (1/m_1) Σ_{k=1}^{m_1} Z(k),
where
Z(k) = ∫ from T_1(k − 1) to min(Γ_F(k), T_1(k)) of g(X(s)) ds
with Γ_F(k) = inf{t ≥ T_1(k − 1) : X(t) ∈ F}. On the other hand, if we want to
estimate α = γ, then we define the function h by
h(X̃) = γ̂(X̃) = (1/m_1) Σ_{k=1}^{m_1} I(k),
where I(k) = 1{Γ_F(k) < T_1(k)}
and 1{·} is the indicator function of the event {·}. Thus, the standard estimator
of η is
η̂ = μ̂(X̃_1) / γ̂(X̃_2).
We will return to this example in Section 6.
3. BASIC IDEA
Our goal now is to create a new estimator for α. We begin by giving a heuristic
explanation of how it is constructed by considering the simple example illustrated
in Figure 1. For simplicity, we depict a continuous sample path on a continuous
state space S. The T 1 sequence corresponds to hits to the state v, and the T 2
sequence corresponds to hits to state w. The top graph shows the original sample
path generated, having m_1 = 5 regenerative 1-cycles. For this path, there are
M_2 = 4 occurrences of stopping times from sequence T_2. To make it easier to
see the individual 2-cycles, each is depicted using a different line style. Now we
can construct new sample paths from the original path by permuting the 2-cycles,
resulting in (M_2 − 1)! = 6 possible paths. The second graph shows one such
permuted path. Here, the original third 2-cycle is now first, the original first 2-cycle
is now second, and the original second 2-cycle is now third. The new 1-cycle times
are T_1′(0), ..., T_1′(5),
and the new 2-cycle times are T_2′(0), ..., T_2′(3). The
third graph contains another permuted path, in which the original second 2-cycle
is now first, the original third 2-cycle is now second, and the original first 2-cycle
is now third. The 1-cycle times are now T_1″(0), ..., T_1″(5),
and the new 2-cycle
times are T_2″(0), ..., T_2″(3). Note that for each new path, the number of 1-cycles
is the same as in the original path, but the paths of some of the 1-cycles have
changed. We show in Theorem 1 that all of the paths have the same distribution.
For each possible path, we can compute an estimator of α based on the m_1 new
1-cycles by applying the performance function h j hm1 to it. Our new estimator
is then the average over all estimators constructed.
It turns out that we do not actually have to construct all permuted paths to
calculate the value of our new estimator. The basic reason for this is that we can
break up any sample path into a collection of segments of different types. After
any permutation, the path changes, but the collection of segments does not. To
calculate our new estimator for a given (original) path, we need to determine the
different ways the segments can be put together when 2-cycles are permuted. In
particular, since we form an estimator based on the 1-cycles for every permutation,
we want to understand how 1-cycles are formed from the segments.
Another key factor that will allow us to explicitly compute our new estimator
without actually computing all permutations is that for the performance measures
we consider, the contribution of each 1-cycle to the overall estimator can be expressed
as a function of the segments in the cycle. For instance, in Example 1 with
p = 2, we can express the contribution from each 1-cycle as the square of the sum
of contributions of the segments in the cycle.
We now examine more closely the four different types of path segments that can
arise. We focus on the example in Figure 1.
(1) The first type is a 1-cycle that does not contain a hit to w. The segments of this
type in the original path of the figure are the first and third 1-cycles; i.e., the
segment from T 1 (0) to T 1 (1) and the segment from T 1 (2) to T 1 (3). Segments of
this type never change under permutation, although they may occur at different
times. For example, the third 1-cycle in the original path appears as the fourth
1-cycle in the second permuted path. This segment is the third 1-cycle in the
first permuted path, but it occurs at a different time. The first 1-cycle in our
example always appears in the same place in all permutations.
Fig. 1. A sample path and some corresponding permuted paths.
(2) Now consider any 2-cycle in which state v is not hit, such as the third 2-cycle
in the original path in the figure. After any permutation this 2-cycle will be in
the interior of some 1-cycle. For example, the third 2-cycle in the original path
is in the interior of the fifth 1-cycle in the original path, and in the interior of
the second (resp., third) 1-cycle in the first (resp., second) permuted path.
(3) The next type of segment goes from w to v before hitting w again. No matter
how the 2-cycles are permuted, this type of segment is always the end of some
1-cycle. For example, consider the path segment from T 2 (0) to T 1 (2) in the
original sample path. In this path, the segment is the end of the second 1-
cycle. In the first permuted path, this segment is again the end of the second
1-cycle, but this new second 1-cycle is different from that in the original path.
On the other hand, this segment in the second permuted path is the end of the
third 1-cycle. In general, any segment that goes from w to v before hitting w
again will be the end of some 1-cycle in any permuted path.
(4) The final type of segment goes from v to w before hitting v again. In any
permutation, this segment will be the beginning of a 1-cycle. For example,
consider the path segment from T 1 (3) to T 2 (1) in the original sample path. In
this path, the segment is the beginning of the fourth 1-cycle. In the second
permuted path this segment is the beginning of the fifth 1-cycle. In the first
permuted path, the segment is again the beginning of the fourth 1-cycle. In
general, any segment that goes from v to w before hitting v again will be the
beginning of some 1-cycle in any permuted path.
Note that the original sample path in Figure 1 consists of segments of types
appearing in the following order: 1, 4, 3, 1, 4, 3, 4, 2, 3. In any permutation,
the segments will appear in a different order, but the collection of segments never
changes.
Recall that for each permuted path, we compute an estimate of the performance
measure α based on the m_1 1-cycles. So now we examine how 1-cycles can be
constructed from the permutations. For the original path, we divide up the 1-
cycles into those that hit state w and those that do not. The ones that do not hit
w are type-1 segments and are unaffected by permutations.
Now we examine how permutations affect 1-cycles that hit w. These cycles always
start with a type-4 segment, followed by some number (possibly zero) of segments
of type 2, and end with a type-3 segment. For example, the fifth 1-cycle in the
original path starts with the type-4 segment from T 1 (4) to T 2 (2), is followed by the
type-2 segment from T_2(2) to T_2(3), and concludes with the type-3 segment from
T_2(3) to T_1(5). Also, the fourth 1-cycle in the original path begins with a type-
4 segment from T 1 (3) to T 2 (1) and terminates with a type-3 segment from T 2 (1)
to T 1 (4). This characterization of 1-cycles holds not only for the original sample
path, but also for any permuted path. Moreover, for any 2-cycle that hits v in the
original path, the type-3 segment and type-4 segment in it will always be in the
same 2-cycle in any permutation, and so these two segments can never be in the
same 1-cycle since the type-3 segment will always be the end of one 1-cycle and the
type-4 segment will always be the beginning of the following 1-cycle. For example,
the type-3 segment from T 2 (0) to T 1 (2) and the type-4 segment from T 1 (3) to T 2 (1)
are always in the same 2-cycle in any permutation, and as such, they are always
in successive 1-cycles. Also, in our example the first type-4 segment from T_1(1) to
T_2(0) and the last type-3 segment from T_2(3) to T_1(5) will never be in the same
1-cycle. Any other pair of type-3 segment and type-4 segment will be in the same
1-cycle in some permutation. Thus, to construct all 1-cycles that hit w that are
possible under permutations of 2-cycles, we have to consider all valid pairings of
the type-4 and type-3 segments, and allocate the type-2 segments among the pairs.
The proofs in Sections A.2 and A.3 basically use this reasoning.
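The classification into the four types depends only on which sequence delimits each side of a segment, so it can be computed directly from the two lists of regeneration epochs. In the sketch below the epoch times are invented solely to reproduce the interleaving of Figure 1.

    def segment_types(t1_times, t2_times):
        # Classify the segments delimited by consecutive regeneration epochs
        # (both sequences merged):  T1->T1 : type 1,  T1->T2 : type 4,
        #                           T2->T1 : type 3,  T2->T2 : type 2.
        events = sorted([(t, 'T1') for t in t1_times] + [(t, 'T2') for t in t2_times])
        table = {('T1', 'T1'): 1, ('T1', 'T2'): 4, ('T2', 'T1'): 3, ('T2', 'T2'): 2}
        return [table[(a[1], b[1])] for a, b in zip(events, events[1:])]

    print(segment_types([0, 1, 2.5, 3, 4.2, 6], [1.7, 3.8, 4.8, 5.4]))
    # -> [1, 4, 3, 1, 4, 3, 4, 2, 3], matching the order listed for Figure 1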
4. FORMAL DEVELOPMENT OF GENERAL METHOD
Now we more formally show how to construct our new estimator. We begin with
some new notation. Let 𝒳 be the space of paths x : [0, ι(x)) → S that are
right continuous with left limits, where ι(x) denotes the time length of the path x. For x_1, x_2 ∈ 𝒳, define a new element x_1 ⊕ x_2 ∈ 𝒳 by ι(x_1 ⊕ x_2) = ι(x_1) + ι(x_2)
and (x_1 ⊕ x_2)(t) = x_1(t) for 0 ≤ t < ι(x_1), and (x_1 ⊕ x_2)(t) = x_2(t − ι(x_1)) for ι(x_1) ≤ t < ι(x_1) + ι(x_2).
Thus, the new path x_1 ⊕ x_2 is obtained by concatenating x_2 onto the end of x_1.
Given the original sample path X̃, which consists of m_1 1-cycles, we begin by
constructing a new sample path X̃′ from X̃ such that X̃′ has the same distribution as X̃.
This is done by first taking the original sample path X̃ and
determining the number of times M_2 that the stopping times from the sequence
T_2 occur during the m_1 1-cycles. Note that if M_2 ≤ 1, then the path X̃ has
no 2-cycles. If M_2 = 2, then there is only one 2-cycle. Assume now that M_2 ≥ 3.
Then for the given path X̃, we can now look at the (M_2 − 1) 2-cycles in the path.
We generate a uniform random permutation of the (M_2 − 1) 2-cycles within the
path X̃, and this gives us our new sample path X̃′, which also has m_1 1-cycles.
More specifically, define M_2 to be the number of indices i ≥ 0 with T_2(i) ≤ T_1(m_1),
i.e., the number of T_2 stopping times occurring in X̃. If M_2 ≥ 3, then we break up the path X̃ as
X̃ = ξ_0 ⊕ ξ_1 ⊕ ⋯ ⊕ ξ_{M_2 − 1} ⊕ ξ_{M_2},
where
ξ_0 = {X(t) : 0 ≤ t < T_2(0)}
is the initial path segment until the first time a stopping time from sequence T_2
occurs,
ξ_{M_2} = {X(t) : T_2(M_2 − 1) ≤ t ≤ T_1(m_1)}
is the final path segment from the last time a stopping time from sequence T_2 occurs
until the end of the path, and
ξ_k = {X(t) : T_2(k − 1) ≤ t < T_2(k)}
is the kth 2-cycle of the original path X̃, for k = 1, ..., M_2 − 1. Let (π(1), ..., π(M_2 −
1)) be a uniform random permutation of (1, ..., M_2 − 1). Then we define our new
path X̃′ to be
X̃′ = ξ_0 ⊕ ξ_{π(1)} ⊕ ⋯ ⊕ ξ_{π(M_2 − 1)} ⊕ ξ_{M_2},
which is the original path X̃ with the 2-cycles permuted. Note that X̃ and X̃′ have
the same number m_1 of 1-cycles, and we prove in Section A.1 that X̃′ has the same distribution as X̃.
Now for the new sample path X̃′, we can calculate
α̂(X̃′) = h(X̃′),
which is just the estimator obtained from the new sample path X̃′ and is based
on m_1 1-cycles (recall that h ≡ h_{m_1}). The number of possible paths X̃′ we can
construct from X̃ is N(X̃), equal to (M_2 − 1)! when M_2 ≥ 3 and to 1 otherwise, which depends on X̃ and is therefore
random. We label these paths X̃^(1), X̃^(2), ..., X̃^(N), each of which has the
same distribution as X̃, and for each one we construct α̂(X̃^(i)). We finally define
our new estimator for α to be
α̃(X̃) = (1/N(X̃)) Σ_{i=1}^{N(X̃)} α̂(X̃^(i)) = (1/N(X̃)) Σ_{i=1}^{N(X̃)} h(X̃^(i)).   (11)
Another way of looking at our new estimator is as follows. We first generate the
original path X̃ and use it to construct the N(X̃) new paths X̃^(1), ..., X̃^(N). We
then choose one of the new paths at random uniformly from X̃^(1), ..., X̃^(N); let
this be X̃′. Since X̃′ has the same distribution as X̃, we can think of α̂(X̃′) as a standard estimator of α since
it has the same distribution as α̂(X̃). Then we construct our new estimator α̃(X̃)
to be the conditional expectation of α̂(X̃′) with respect to the uniform random
choice of X̃′, given the original path X̃. That is, if E denotes expectation with
respect to choosing X̃′ from the uniform distribution on {X̃^(1), ..., X̃^(N)}, then we write
α̃(X̃) = E[ α̂(X̃′) | X̃ ].
Assuming that E[α̂(X̃)] exists, the new estimator has the same mean as the
original since
E[ α̃(X̃) ] = E[ E[ α̂(X̃′) | X̃ ] ] = E[ α̂(X̃′) ] = E[ α̂(X̃) ],
because X̃′ has the same distribution as X̃. Moreover, decomposing the variance by conditioning on X̃ gives
us
Var[ α̂(X̃′) ] = Var[ E[ α̂(X̃′) | X̃ ] ] + E[ Var[ α̂(X̃′) | X̃ ] ] = Var[ α̃(X̃) ] + E[ Var[ α̂(X̃′) | X̃ ] ],
which implies that the variance of the new estimator
α̃(X̃) is no greater than that of the original estimator α̂(X̃). This
calculation, combined with the fact that X̃′ has the same distribution as X̃ (which will be proved in Section
A.1), establishes the following theorem.
Theorem 1. Let T_1 and T_2 be two distinct sequences of stopping times, and
construct the estimator α̃(X̃) defined by (11). Assume that E[α̂(X̃)] exists. Then E[ α̃(X̃) ] = E[ α̂(X̃) ], and
Var[ α̃(X̃) ] ≤ Var[ α̂(X̃) ],   (12)
and so the mean squared error of our new estimator α̃(X̃) is no greater than
that of the original estimator α̂(X̃). Strict inequality is obtained in (12) unless
Var[ α̂(X̃′) | X̃ ] = 0 almost surely, that is, unless α̂ takes the same value on every permuted path X̃^(1), ..., X̃^(N).
In Theorem 1 we see that there is no variance reduction when for every possible
original sample path X̃, the value of the function h in (1) is unaffected by permutations
of the 2-cycles. For example, this is the case in Example 1 with p = 1 since
h(X̃′) = (1/m_1) Σ_{k=1}^{m_1} ∫ from T_1′(k − 1) to T_1′(k) of g(X′(t)) dt
 = (1/m_1) ∫ from T_1′(0) to T_1′(m_1) of g(X′(t)) dt
 = (1/m_1) ∫ from T_1(0) to T_1(m_1) of g(X(t)) dt
 = (1/m_1) Σ_{k=1}^{m_1} ∫ from T_1(k − 1) to T_1(k) of g(X(t)) dt = h(X̃),
and so α̃(X̃) = α̂(X̃). Similarly, by choosing g(x) ≡ 1, we see that permuting
2-cycles does not alter the estimator for E[τ(1)]. Thus, our method has no effect
on the standard ratio estimator for steady-state performance measures α that can
be expressed as
α = E[ ∫ from T_1(0) to T_1(1) of g(X(s)) ds ] / E[ T_1(1) − T_1(0) ].
However, for p > 1 in Example 1, we have in general that h(X̃′) ≠ h(X̃), and
so typically α̃(X̃) ≠ α̂(X̃). Also, we usually have that the standard time-average
variance estimator in Example 2 for a regenerative simulation will differ from the
new estimator defined by (11). Finally, applying the above idea separately to the
numerator and denominator in the ratio expression for the mean cumulative reward
until hitting some set of states F as in Example 3 will result in a new estimator.
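For small M_2, definition (11) can be evaluated literally by enumerating the permutations. The sketch below does this for estimators that depend on the path only through the 1-cycle rewards (such as Example 1), representing the path by its classified segments and their reward contributions; the per-segment rewards in the example are made up. It is intended only as a check of the definition, since the closed-form expressions derived next avoid the enumeration.

    from itertools import permutations

    def one_cycle_rewards(segments):
        # segments: list of (type, reward); types 1 and 3 end at a T1 epoch.
        cycles, acc = [], 0.0
        for typ, r in segments:
            acc += r
            if typ in (1, 3):
                cycles.append(acc)
                acc = 0.0
        return cycles

    def split_two_cycles(segments):
        # Initial segment: up to (and including) the first piece ending at a
        # T2 epoch; each 2-cycle: the run between consecutive T2 epochs;
        # final segment: everything after the last T2 epoch.
        pieces, cur = [], []
        for seg in segments:
            cur.append(seg)
            if seg[0] in (2, 4):        # segment ends at a T2 epoch
                pieces.append(cur)
                cur = []
        return pieces[0], pieces[1:], cur

    def permutation_average(segments, h):
        head, two_cycles, tail = split_two_cycles(segments)
        vals = []
        for perm in permutations(two_cycles):
            path = head + [s for c in perm for s in c] + tail
            vals.append(h(one_cycle_rewards(path)))
        return sum(vals) / len(vals)

    # Example: h for Example 1 with p = 2, applied to the Figure 1 layout
    # with made-up per-segment rewards.
    h2 = lambda Y: sum(y * y for y in Y) / len(Y)
    segs = list(zip([1, 4, 3, 1, 4, 3, 4, 2, 3],
                    [1.0, 0.5, 2.0, 1.5, 0.3, 0.7, 0.4, 1.1, 0.6]))
    print(permutation_average(segs, h2))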
5. ESTIMATING THE SECOND MOMENT OF CUMULATIVE CYCLE REWARD
For our new estimator α̃(X̃) to be computationally efficient, we need to calculate
explicitly the conditional expectation in (11) without having to construct all
possible permutations. We first do this for Example 1 with p = 2, so that
α = E[ Y(g, 1)² ],   (13)
and our standard estimator of α is
α̂(X̃) = (1/m_1) Σ_{k=1}^{m_1} Y(k)²,   (14)
where we have dropped the dependence of Y on g to simplify the notation. Our
new estimator of α is then
α̃(X̃) = E[ (1/m_1) Σ_{k=1}^{m_1} Y′(k)² | X̃ ],   (15)
where Y′(k) is the same as Y(k) except that it is computed from the sample path X̃′ rather
than X̃.
Now to explicitly calculate (15) in this particular setting, we will divide up the original path into segments using the approach described in Sections 3 and 4. We need some new notation to do this. For our two sequences of stopping times, let H(1, 2) denote the set of indices of the 1-cycles in which a T2 stopping time occurs, and define the complementary set J(1, 2) = {1, ..., m1} \ H(1, 2); specifically, H(1, 2) = {k : T2(l) ∈ [T1(k−1), T1(k)) for some l}. We analogously define the sets H(2, 1) and J(2, 1) with the roles of the two stopping-time sequences interchanged. For k ∈ H(1, 2), consider the first occurrence of a stopping time from sequence T2 after the (k−1)st stopping time from the sequence T1, and similarly the last occurrence of a stopping time from sequence T2 before the kth occurrence of the stopping-time sequence T1. Then, for k ∈ H(1, 2), we let Y12(k) be the integral of g(X(t)) from T1(k−1) up to that first T2 occurrence, which is the contribution to Y(k) until a stopping time from sequence T2 occurs, and we let Y21(k) be the integral of g(X(t)) from that last T2 occurrence up to T1(k), which is the contribution to Y(k) from the last occurrence of a stopping time from sequence T2 in the kth 1-cycle until the end of the cycle. Also, for l ∈ J(2, 1), let Y22(l) be the integral of g(X(t)) over the lth 2-cycle, in which there is no occurrence of a stopping time from sequence T1. We now define B_k ⊆ J(2, 1) to be the set of indices of those 2-cycles that do not contain any occurrences of the stopping times from the sequence T1 and that are between the first and last T2 stopping times in the kth 1-cycle. It then follows that for k ∈ H(1, 2),

    Y(k) = Y12(k) + Σ_{l ∈ B_k} Y22(l) + Y21(k).
Hence,

    α̂(X̃) = (1/m1) [ Σ_{k ∈ J(1,2)} Y(k)² + Σ_{k ∈ H(1,2)} ( Y12(k) + Σ_{l ∈ B_k} Y22(l) + Y21(k) )² ].   (16)
In the last expression, the first term does not change if we replace the original sample path X̃ with the new sample path X̃′, but the last term does change. In Section A.2, we compute explicitly the conditional expectation of (16), when X̃ is replaced with X̃′, with respect to a random permutation given the original path X̃.
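For concreteness, the following sketch (ours; it assumes that every inter-event segment of the path has already been reduced to its integrated reward) organizes a path's rewards into exactly these pieces, returning Y(k) for k ∈ J(1,2) and the triple (Y12(k), {Y22(l) : l ∈ B_k}, Y21(k)) for k ∈ H(1,2).

    def segment_rewards(events):
        """events: list of (label, reward) pairs in path order, where label is '1'
        or '2' for a stopping time from T1 or T2, and reward is the integral of
        g(X(t)) over the segment ending at that stopping time.  The path is
        assumed to start at a T1 stopping time.  Returns two dicts:
          y_J[k] = Y(k)                    for 1-cycles with no T2 event,
          y_H[k] = (Y12, [Y22,...], Y21)   for 1-cycles containing T2 events,
        so that Y(k) = Y12 + sum(Y22 list) + Y21 for k in y_H."""
        y_J, y_H = {}, {}
        k, last, y12, y22s = 1, '1', None, []
        for label, reward in events:
            if label == '2':
                if last == '1':
                    y12 = reward           # head of the current 1-cycle: Y12(k)
                else:
                    y22s.append(reward)    # a complete 2-cycle inside this 1-cycle
                last = '2'
            else:                          # a T1 stopping time closes 1-cycle k
                if last == '2':
                    y_H[k] = (y12, y22s, reward)   # reward here is Y21(k)
                else:
                    y_J[k] = reward                # whole-cycle reward Y(k)
                k, last, y12, y22s = k + 1, '1', None, []
        return y_J, y_H

    # Example: two 1-cycles, the first in H(1,2) and the second in J(1,2).
    print(segment_rewards([('2', 1.0), ('2', 0.5), ('1', 2.0), ('1', 3.0)]))
    # -> ({2: 3.0}, {1: (1.0, [0.5], 2.0)})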
The expression for this involves some more notation. Let h denote the cardinality of the set H(1, 2). Define β_l to be the lth smallest element of the set H(1, 2) for l = 1, ..., h, and define β_0 = β_h. Then ψ(k) is the index in H(1, 2) that occurs just before k if k is not the first index and is the last element in H(1, 2) if k is the first element (so that ψ(β_l) = β_{l−1}). The following theorem is proved in Section A.2. (Pseudo-code for our estimator is given in Appendix B.)
Theorem 2. Suppose we want to estimate α defined in (13), and assume that the conditions of Theorem 1 hold. Then our new estimator is given by α̃(X̃) = α̂(X̃) when h ≤ 2, and otherwise by

    α̃(X̃) = [equation (17): an explicit function of the sums of the Y(k)² over J(1, 2); of the Y12(k), Y21(k), and their squares over H(1, 2); of the cross products Y12(k)Y21(ψ(k)); and of the Y22(l) and Y22(l)² over J(2, 1); see Section A.2].   (17)

The estimator satisfies E[α̃(X̃)] = E[α̂(X̃)] and Var[α̃(X̃)] ≤ Var[α̂(X̃)], where α̂(X̃) is the standard estimator of α as defined in (14).
5.1 A New Estimator for the Time-Average Variance
We can use Theorem 2 to construct a new estimator for the variance parameter in a regenerative simulation of the process X. We start by first giving a more complete explanation of Example 2 in Section 2. Let f be a cost function. Define

    Z(f; t) = ∫_0^t f(X(s)) ds.
Since X is a regenerative process, there exists some constant r such that Z(f; t)/t → r as t → ∞ (e.g., Theorem 2.2 of [Shedler 1993]). Also, r satisfies the ratio formula r = E[Z(f; τ(1))]/E[τ(1)]. Assuming that Z(f; τ(1)) and τ(1) have finite second moments, there exists a finite positive constant σ such that

    t^{1/2} ( Z(f; t)/t − r ) ⇒ σ N(0, 1)   (18)

as t → ∞. The constant σ² is called the time-average variance of X and is given in (3). Given the central limit theorem described by (18), construction of confidence intervals for r therefore effectively reduces to developing a consistent estimator for σ². The quality of the resulting confidence interval is largely dependent upon the quality of the associated time-average variance estimator.

The standard consistent estimator of σ² is σ̂²(X̃) defined in (4). Note that σ̂²(X̃) can be expressed as

    σ̂²(X̃) = Σ_{k=1}^{m1} Y(f − r̂; k)² / Σ_{k=1}^{m1} τ(k),

where r̂ = Z(f; T1(m1))/T1(m1) is the standard point estimator of r.
Now we define our new estimator σ̃²(X̃) to be the conditional expectation of σ̂²(X̃′) with respect to a random permutation of 2-cycles, given the original sample X̃. Hence, letting r̂′, Y′(f − r̂′; k), and τ′(k) be the corresponding values of r̂, Y(f − r̂; k), and τ(k) for the sample path X̃′, we get that

    σ̃²(X̃) = E′[ Σ_{k=1}^{m1} Y′(f − r̂′; k)² | X̃ ] / Σ_{k=1}^{m1} τ(k),   (19)

since Σ_{k=1}^{m1} τ′(k) = Σ_{k=1}^{m1} τ(k) is independent of the permutation of 2-cycles. Also, observe that

    r̂′ = Z(f; T1(m1)) / T1(m1) = r̂

is independent of the permutation of 2-cycles, so

    σ̃²(X̃) = E′[ Σ_{k=1}^{m1} Y′(f − r̂; k)² | X̃ ] / Σ_{k=1}^{m1} τ(k);

i.e., we can replace r̂′ with r̂. The following is a direct consequence of Theorem 2.
Corollary 3. Suppose we want to estimate σ² defined in (3), and assume that the conditions of Theorem 2 hold with the function g = f − r̂. Then our new estimator σ̃²(X̃) is given by (19), where the numerator is as in (17) with the function g replaced by f − r̂. The estimator satisfies E[σ̃²(X̃)] = E[σ̂²(X̃)] and Var[σ̃²(X̃)] ≤ Var[σ̂²(X̃)].
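As a reference point for Section 5.1, here is a minimal sketch (ours) of the standard regenerative variance estimate computed from per-cycle data, using the form discussed above; the permuted estimator of Corollary 3 replaces only the numerator.

    def regen_variance_estimate(Yf, tau):
        """Standard regenerative estimate of the time-average variance sigma^2
        from per-cycle rewards Yf[k] (integral of f over 1-cycle k) and cycle
        lengths tau[k], in the form discussed in Section 5.1."""
        total_time = sum(tau)
        r_hat = sum(Yf) / total_time                           # point estimate of r
        centered_sq = sum((y - r_hat * t) ** 2 for y, t in zip(Yf, tau))
        return centered_sq / total_time

    # Example with three cycles (made-up numbers).
    print(regen_variance_estimate([2.0, 3.5, 1.0], [1.0, 2.0, 0.5]))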
5.2 Continuous-Time Markov Chains
We now consider the special case of an irreducible, positive-recurrent, continuous-time Markov chain X = (X(t) : t ≥ 0) on a countable state space S having generator matrix Q, where λ(x) denotes the total transition rate out of state x. Let V = (Vn : n ≥ 0) be the embedded discrete-time Markov chain, and let W = (Wn : n ≥ 0) be the sequence of random holding times of the continuous-time Markov chain; i.e., Wn is the time between the nth and (n+1)st transitions of X. Define A_0 = 0 and A_n = W_0 + ⋯ + W_{n−1}, which is the time of the nth transition. It is well known that, conditional on V, the holding time in state Vn is exponentially distributed with mean 1/λ(Vn) and that Wi and Wj are (conditionally) independent for i ≠ j.
Assume that the sequences of stopping times T1 and T2 correspond to hitting times to fixed states v ∈ S and w ∈ S, respectively, with w ≠ v, and assume that X(0) = v. Specifically, define π1(0) = 0 and π1(k) = min{n > π1(k−1) : Vn = v}, which is the sequence of hitting times to state v for the discrete-time Markov chain. Similarly, define π2(k) to be the successive hitting times to state w for the discrete-time chain, and set T1(k) = A_{π1(k)} and T2(k) = A_{π2(k)}. Suppose that we want to estimate α as defined in (13), and our standard estimator of α is given in (14). Now note that
    Y(g; k) = ∫_{T1(k−1)}^{T1(k)} g(X(t)) dt = Σ_{n=π1(k−1)}^{π1(k)−1} g(Vn) Wn.

Using discrete-time conversion [Hordijk et al. 1976; Fox and Glynn 1990] gives us E[Wn | V] = 1/λ(Vn) and E[Wn² | V] = 2/λ(Vn)², and the Wi and Wj are conditionally independent given V. For a function f on S, let

    Ỹ(f; k) = Σ_{n=π1(k−1)}^{π1(k)−1} f(Vn),

which is the cumulative reward over the kth cycle for the discrete-time chain V, and define the functions g1 and g2 to be g1(x) = g(x)/λ(x) and g2(x) = g(x)²/λ(x)². Therefore, we get

    E[Y(g; k)² | V] = Ỹ(g1; k)² + Ỹ(g2; k),

and the corresponding discrete-time-conversion estimator of α,

    (1/m1) Σ_{k=1}^{m1} [ Ỹ(g1; k)² + Ỹ(g2; k) ].   (20)

To create our new estimator of α, we then compute the conditional expectation of this estimator with respect to a random uniform permutation of 2-cycles given the original path X̃. The last term in (20) is independent of permutations of the 2-cycles, and so we get the following expression for the new estimator, which follows from Theorems 1 and 2.
Theorem 4. Suppose X is an irreducible, positive-recurrent, continuous-time Markov chain on a countable state space S, and we want to estimate α defined in (13). Assume that T1 and T2 correspond to the hitting times to states v and w, respectively, with w ≠ v, and that the conditions of Theorem 2 hold. Then our new estimator is defined by (21), which can be computed from (17) with the function g replaced by g1, together with the permutation-invariant last term of (20). The estimator is unbiased for α and has variance no greater than that of α̂(X̃), the standard estimator of α as defined in (14).

If we had instead first converted to discrete time and then computed the standard estimator for the discrete-time Markov chain and its conditional expectation with respect to the permutation, we would have obtained a different estimator for α. However, since the expectation of that discrete-time estimator differs from α whenever the function g is not identically 0, the resulting permuted estimator is biased. On the other hand, our estimator above is unbiased.
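The discrete-time conversion used above is easy to check numerically. The sketch below (ours; the rate and reward functions are arbitrary toy choices) compares Ỹ(g1)² + Ỹ(g2), with g1 = g/λ and g2 = g²/λ² as sketched above, against a Monte Carlo average of (Σ g(Vn)Wn)² over independently drawn exponential holding times.

    import random

    def second_moment_dtc(states, lam, g):
        """Conditional second moment E[(sum g(V_n) W_n)^2 | V] via discrete-time
        conversion: equals Ytilde(g1)^2 + Ytilde(g2) with g1 = g/lam, g2 = g^2/lam^2."""
        y1 = sum(g(v) / lam(v) for v in states)
        y2 = sum(g(v) ** 2 / lam(v) ** 2 for v in states)
        return y1 ** 2 + y2

    def second_moment_mc(states, lam, g, reps=100000, seed=1):
        rng, acc = random.Random(seed), 0.0
        for _ in range(reps):
            y = sum(g(v) * rng.expovariate(lam(v)) for v in states)
            acc += y * y
        return acc / reps

    states = [0, 1, 2, 1]                 # a short embedded-chain segment
    lam = lambda v: 1.0 + v               # holding rate in state v (toy choice)
    g = lambda v: float(v)                # reward rate in state v (toy choice)
    print(second_moment_dtc(states, lam, g), second_moment_mc(states, lam, g))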
6. EXPECTED CUMULATIVE REWARD UNTIL HITTING A SET
Recall that we can express the expected cumulative reward until a hitting time, given in (5), as the ratio in (6), and the standard estimator of η is defined in (7). Also, recall that the numerator μ is estimated using the sample path X̃1 and the denominator γ is estimated from the path X̃2. We will examine both the case when X̃1 = X̃2 and the case when they are independent.

In the context of estimating the mean time to failure of highly reliable Markovian systems, Goyal, Shahabuddin, Heidelberger, Nicola, and Glynn [1992] and Shahabuddin [1994] estimate μ and γ independently; i.e., X̃1 and X̃2 are independent. This is useful because then different sampling techniques can be applied to estimate the two quantities. In particular, γ is the probability of a rare event and so it is estimated using importance sampling. On the other hand, we can efficiently estimate μ using naive simulation (i.e., no importance sampling). Below, we do not apply importance sampling to estimate γ, but one can also derive a new estimator of γ when using importance sampling.

Our new estimator of η is defined as

    η̃(X̃1, X̃2) = μ̃(X̃1) / γ̃(X̃2),   (22)
where μ̃(X̃1) = E′[μ̂(X̃1′) | X̃1] and γ̃(X̃2) = E′[γ̂(X̃2′) | X̃2] are the conditional expectations of the standard estimators under random uniform permutations of the 2-cycles of the respective paths.

Now to explicitly calculate the numerator and denominator, we will divide up the original path into segments using the approach described in Sections 3 and 4. We need some new notation to do this. For k ∈ H(1, 2), let I12(k) be the indicator of whether X(t) ∈ F for some t in the initial 1-2 segment of the kth 1-cycle, and let I21(k) be the corresponding indicator for the final 2-1 segment of that 1-cycle; for l ∈ J(2, 1), let I22(l) be the corresponding indicator for the lth 2-cycle. Hence, I12(k) (resp., I21(k)) is the indicator of whether the set F is hit in the initial 1-2 segment (resp., final 2-1 segment) of the 1-cycle with index k ∈ H(1, 2). Also, I22(l) is the indicator of whether the set F is hit in the 2-cycle with index l ∈ J(2, 1).
We first consider the denominator γ. To derive the new estimator of γ from permuting the 2-cycles, we first write

    γ̂(X̃2) = (1/m1) [ Σ_{k ∈ J(1,2)} 1{F is hit in 1-cycle k} + Σ_{k ∈ H(1,2)} 1{F is hit in 1-cycle k} ].   (23)

The first term on the right-hand side is independent of permutations of the 2-cycles. For the second term we note that for k ∈ H(1, 2),

    1{F is hit in 1-cycle k} = max( I12(k), max_{l ∈ B_k} I22(l), I21(k) ).

Thus,

    γ̃(X̃2) = (first term of (23)) + (1/m1) Σ_{k ∈ H(1,2)} E′[ max( I12(k), max_{l ∈ B′_k} I22(l), I21(ρ(k)) ) | X̃2 ],

where ρ(k) is the index of the I21 variable that follows the I12(k) variable after a permutation of the 2-cycles, and B′_l is the same as B_l except that B′_l is after a permutation. We work out in Section A.3 the conditional expectation appearing above.
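To fix ideas for the denominator, the sketch below (ours; it assumes γ is estimated by the fraction of 1-cycles that hit F, as in the ratio representation just given) shows the standard estimate of γ together with the "max of segment indicators" representation used above for a cycle in H(1, 2).

    def gamma_hat(cycle_hits):
        """Standard estimate of gamma: cycle_hits[k] is 1 if 1-cycle k hits F, else 0."""
        return sum(cycle_hits) / len(cycle_hits)

    def cycle_hit_from_segments(i12, i22_list, i21):
        """A 1-cycle in H(1,2) hits F iff any of its segments does: the max of
        I12(k), the I22(l) for l in B_k, and I21(k)."""
        return max([i12] + list(i22_list) + [i21])

    print(gamma_hat([0, 1, 0, 0, 1]))                # 0.4
    print(cycle_hit_from_segments(0, [0, 1], 0))     # 1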
We now examine the estimation of μ. Note that the standard estimator of μ satisfies

    μ̂(X̃1) = (1/m1) [ Σ_{k ∈ J(1,2)} (reward in 1-cycle k accumulated up to the time F is first hit, if it is) + Σ_{k ∈ H(1,2)} (the corresponding quantity for the 1-cycles containing T2 events) ].   (24)

The first term is not affected by permuting the 2-cycles, but the second term is. For k ∈ H(1, 2), the contribution of the kth 1-cycle can be written by accumulating reward segment by segment until F is first hit. To express this, define D12(k) to be the integral of g(X(t)) over the initial 1-2 segment of 1-cycle k, stopped at the time F is hit if F is hit in that segment, and define D22(l) for l ∈ J(2, 1) and D21(k) analogously for the 2-cycles and the final 2-1 segments. With this notation, the contribution of 1-cycle k ∈ H(1, 2) is

    D12(k) + (1 − I12(k)) [ Σ_{l ∈ B_k} D22(l) Π_{j ∈ B_k, j before l} (1 − I22(j)) + Π_{l ∈ B_k} (1 − I22(l)) · D21(k) ],   (25)

where the 2-cycles in B_k are taken in the order they appear in the path. Hence, to compute the new estimator, we need to compute the conditional expectation of the second term in (24), which we can do by using the representation (25); this is done in Section A.3.

To present what the new estimator actually works out to, we need some more notation. Define the number of 2-cycles in J(2, 1) that hit F,

    Σ_{l ∈ J(2,1)} I22(l),

and also define

    Σ_{l ∈ J(2,1)} D22(l) I22(l),

the total reward accumulated, up to hitting F, in those 2-cycles of J(2, 1) that hit F. Then we have the following result, whose proof is given in Section A.3.
Theorem 5. Suppose we want to estimate η in (6), and assume that appropriate moment conditions hold. Then our new estimator is given by (22), where

(i). μ̃(X̃1) is obtained from μ̂(X̃1) by replacing the second term of (24) by its conditional expectation under a random permutation of the 2-cycles of X̃1, worked out explicitly in Section A.3 in terms of the D12(k), D22(l), D21(k), the indicators, and the counts defined above; and

(ii). γ̃(X̃2) is obtained from γ̂(X̃2) by replacing the second term of (23) by the corresponding conditional expectation for the path X̃2.

The estimators satisfy E[μ̃(X̃1)] = E[μ̂(X̃1)] and Var[μ̃(X̃1)] ≤ Var[μ̂(X̃1)], as well as E[γ̃(X̃2)] = E[γ̂(X̃2)] and Var[γ̃(X̃2)] ≤ Var[γ̂(X̃2)], where μ̂(X̃1) and γ̂(X̃2) are the standard estimators of μ and γ, respectively.

In Theorem 5 the variables used in part (i) are defined for the sample path X̃1, and the variables in part (ii) are for the sample path X̃2. For example, h12 in part (i) is the cardinality of the set H(1, 2) for the path X̃1, whereas in part (ii) it is the same but instead for the path X̃2.
Theorem 5 shows that our new estimator for η has unbiased and lower-variance estimators for both the numerator and denominator, but the effect on the resulting ratio estimator is more difficult to analyze rigorously. Instead, we now heuristically examine the bias and variance of the ratio estimator.

To do this, we generically let μ̄, γ̄, and η̄ = μ̄/γ̄ be estimators of μ, γ, and η, respectively. Then using first- and second-order Taylor-series expansions, we have the following approximations for the bias and variance of η̄:

    E[η̄] − η ≈ (1/γ²) ( (μ/γ) Var[γ̄] − Cov[μ̄, γ̄] )   (26)

and

    Var[η̄] ≈ (1/γ²) ( Var[μ̄] − 2 (μ/γ) Cov[μ̄, γ̄] + (μ/γ)² Var[γ̄] );   (27)

see p. 181 of [Mood et al. 1974].
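The two delta-method approximations can be packaged as follows (a sketch of the standard formulas, in the forms given above; when the numerator and denominator estimators are independent, the covariance argument is simply zero).

    def ratio_bias_approx(mu, gamma, var_gamma, cov):
        """Second-order (delta-method) approximation to E[mu_bar/gamma_bar] - mu/gamma."""
        return ((mu / gamma) * var_gamma - cov) / gamma ** 2

    def ratio_var_approx(mu, gamma, var_mu, var_gamma, cov):
        """First-order (delta-method) approximation to Var[mu_bar/gamma_bar]."""
        r = mu / gamma
        return (var_mu - 2.0 * r * cov + r * r * var_gamma) / gamma ** 2

    # Independent numerator and denominator estimators: cov = 0 (made-up numbers).
    print(ratio_bias_approx(mu=3.0, gamma=0.2, var_gamma=0.01, cov=0.0))
    print(ratio_var_approx(mu=3.0, gamma=0.2, var_mu=0.5, var_gamma=0.01, cov=0.0))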
We now use these approximations to analyze the standard and new estimators for η. First, consider the case when X̃1 and X̃2 are independent. Then μ̂(X̃1) and γ̂(X̃2) are independent, so Cov[μ̂(X̃1), γ̂(X̃2)] = 0. Similarly, μ̃(X̃1) and γ̃(X̃2) are independent, so Cov[μ̃(X̃1), γ̃(X̃2)] = 0. Hence, it follows from Theorem 5 and (26) and (27) that

    | E[η̃(X̃1, X̃2)] − η | ≲ | E[η̂(X̃1, X̃2)] − η |

and

    Var[η̃(X̃1, X̃2)] ≲ Var[η̂(X̃1, X̃2)],

where we use the notation a ≲ b to mean that a is approximately no greater than b. Hence, the mean square error of η̃(X̃1, X̃2) is approximately no greater than that of η̂(X̃1, X̃2).

In the case when X̃1 = X̃2, the estimators of the numerator and denominator are no longer independent. Also, we have that the covariances Cov[μ̃(X̃1), γ̃(X̃1)] and Cov[μ̂(X̃1), γ̂(X̃1)] enter the approximations (26) and (27), by Lemma 2.1.1 of [Bratley et al. 1987]. But since only the variances of the new estimators of the numerator and denominator are known to be smaller than those for the original estimators, we cannot compare the biases and variances of η̂(X̃1, X̃1) and η̃(X̃1, X̃1) in general. However, we examine this case empirically in Section 8 and find that there is a variance reduction and smaller mean squared error.
7. STORAGE AND COMPUTATION COSTS
We now discuss the implementation issues associated with constructing our new estimator α̃(X̃) given in (17) for the case when α is defined in (13). First note that the first term in the second line of (17), excluding its factor of the form 2/(h12 − 1), equals

    ( Σ_{k ∈ H(1,2)} Y12(k) ) ( Σ_{k ∈ H(1,2)} Y21(k) ) − Σ_{k ∈ H(1,2)} Y12(k) Y21(ψ(k)).

Also, the last term in the last line of (17) satisfies

    Σ_{l ≠ m} Y22(l) Y22(m) = ( Σ_{k ∈ J(2,1)} Y22(k) )² − Σ_{k ∈ J(2,1)} Y22(k)².

Hence, to construct our estimator α̃(X̃), we need to calculate the following quantities during the simulation:

—the sum of the Y(k)² over the 1-cycles k ∈ J(1, 2);
—the sums of the Y12(k), Y21(k), Y12(k)², and Y21(k)² over the 1-cycles k ∈ H(1, 2);
—the sum of the Y12(k)Y21(ψ(k)) over the 1-cycles k ∈ H(1, 2);
—the sums of the Y22(k) and Y22(k)² over the 2-cycles k ∈ J(2, 1).

To compute these quantities in a simulation, we do not have to store the entire sample path but rather only need to keep track of the various cumulative sums as the simulation progresses. Also, the amount of storage required is fixed and does not increase with the simulation run length. Therefore, compared to the standard estimator, the new estimator can be constructed with little additional computational effort and storage. (Pseudo-code for this estimator is given in Appendix B.)
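The bookkeeping just described fits in a handful of running sums. The sketch below (ours, not the authors' pseudo-code; see Appendix B for that) performs the one-pass updates, pairing each Y12(k) with the previously completed Y21 value to accumulate Σ Y12(k)Y21(ψ(k)) and resolving the wrap-around pair for the first element of H(1, 2) at the end of the run.

    class Accumulators:
        """Running sums needed for the estimator of Theorem 2 (one pass, O(1) memory)."""
        def __init__(self):
            self.sum_ysq = 0.0      # sum of Y(k)^2 over k in J(1,2)
            self.sum_y12 = self.sum_y21 = self.sum_y12sq = self.sum_y21sq = 0.0
            self.sum_y22 = self.sum_y22sq = 0.0
            self.sum_cross = 0.0    # sum of Y12(k) * Y21(psi(k)) over H(1,2)
            self.h = 0              # |H(1,2)|
            self._last = '1'        # type of the last stopping time seen
            self._y12 = None        # Y12 of the current 1-cycle, if any
            self._prev_y21 = None   # Y21 of the previous 1-cycle in H(1,2)
            self._first_y12 = None  # Y12 of the first 1-cycle in H(1,2)

        def segment(self, label, reward):
            """Call once per stopping time ('1' or '2'), with the integrated
            reward since the previous stopping time."""
            if label == '2':
                if self._last == '1':
                    self._y12 = reward                      # Y12 of current cycle
                else:
                    self.sum_y22 += reward                  # a 2-cycle in J(2,1)
                    self.sum_y22sq += reward * reward
            else:                                           # a T1 time closes a 1-cycle
                if self._last == '2':                       # cycle is in H(1,2)
                    self.h += 1
                    self.sum_y12 += self._y12; self.sum_y12sq += self._y12 ** 2
                    self.sum_y21 += reward;    self.sum_y21sq += reward ** 2
                    if self._prev_y21 is None:
                        self._first_y12 = self._y12         # psi wraps around later
                    else:
                        self.sum_cross += self._y12 * self._prev_y21
                    self._prev_y21 = reward
                else:                                       # cycle is in J(1,2)
                    self.sum_ysq += reward * reward
                self._y12 = None
            self._last = label

        def finish(self):
            if self.h >= 1:                                 # close the wrap-around pair
                self.sum_cross += self._first_y12 * self._prev_y21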
A similar situation holds when estimating η using the estimator defined in Theorem 5.

We conclude this section with a rough comparison of the work required for the new estimator with that for the standard regenerative method when estimating α given in (13). Let W_s be the (random) amount of work to generate a particular sample path of m1 1-cycles in a discrete-event simulation, where W_s includes the work for the random-variate generation, determining transitions, and appropriately updating data structures needed in the sample-path generation. This quantity is the same for our new method and the standard method.

We now study the work needed for the output analysis required by the standard regenerative method. After every transition in the simulation, we need to update the value of the current cycle-quantity Y(g; k); see (2). Let ℓ1 denote this (deterministic) amount of work, and if there are N total transitions in the sample path, then the total work for updating the Y(g; k) during the entire simulation is Nℓ1. At the end of every 1-cycle, we have to square the current cycle quantity Y(g; k) and add it to its accumulator; see (14). Let ℓ2 denote this (deterministic) amount of work required at the end of each 1-cycle, and since there are m1 1-cycles along the path, the total work for accumulating the sum of the Y(g; k)² is m1 ℓ2. Therefore, the cumulative work (including sample-path generation and output analysis) for the standard regenerative method is W_s + Nℓ1 + m1 ℓ2.

Now we determine the amount of work needed for the output analysis of our new permutation method. By examining the pseudo-code in Appendix B, we see that after every transition, a single accumulator is updated. Every time a stopping time from either sequence T1 or T2 occurs, we compute either a square or a product of two terms and update at most three accumulators, and the amount of work for this is essentially at most 3ℓ2. Since the number of times this needs to be done is the total number of occurrences of stopping times from the two sequences, the cumulative work for our permutation method is at most W_s + Nℓ1 + 3(m1 + M2)ℓ2, where M2 is the number of T2 stopping times along the path. Therefore, the ratio of cumulative work of our permutation method relative to the standard regenerative method is

    R_W = ( W_s + Nℓ1 + 3(m1 + M2)ℓ2 ) / ( W_s + Nℓ1 + m1 ℓ2 ).

Typically in a regenerative simulation, the amount of time W_s required to generate the sample path is much greater than the time needed to perform the output analysis, and so R_W will usually be close to 1. Hence, the overhead of using our method in a simulation will most likely be very small, and this is what we observed in our experimental results in the next section.
8. EXPERIMENTAL RESULTS
Our example is based on the Ehrenfest urn model. The transition probabilities for this discrete-time Markov chain are given by

    P(s, s + 1) = 1 − s/N  and  P(s, s − 1) = s/N  for s = 0, 1, ..., N.

In our experiments we take N = 8. The stopping-time sequences T1 and T2 for our regenerative simulation correspond to hitting times to the states v and w, respectively, and so state v is the return state for the regenerative simulation.
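A direct simulation of this chain is straightforward; the sketch below (ours) generates regenerative 1-cycles for a given return state v and records, for each cycle, its length and whether state w was visited.

    import random

    def ehrenfest_step(s, rng, n=8):
        """One transition of the Ehrenfest urn chain on {0,...,n}:
        move up with probability 1 - s/n, down with probability s/n."""
        return s + 1 if rng.random() < 1.0 - s / n else s - 1

    def simulate_cycles(v, w, num_cycles, seed=0, n=8):
        """Simulate num_cycles regenerative 1-cycles (returns to v), recording for
        each cycle its length and whether state w was visited."""
        rng, s, cycles = random.Random(seed), v, []
        for _ in range(num_cycles):
            length, hit_w = 0, False
            while True:
                s = ehrenfest_step(s, rng, n)
                length += 1
                hit_w = hit_w or (s == w)
                if s == v:
                    break
            cycles.append((length, hit_w))
        return cycles

    print(simulate_cycles(v=1, w=4, num_cycles=5))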
We ran several simulations of this system to estimate two different performance measures: σ², which is the time-average variance constant from Section 5.1, and η, which is the mean hitting time to a set F from Section 6. For each performance measure, we ran our experiments with several different choices for v, and for each v, we examined all possible choices for w. Choosing w = v has no effect on the resulting estimator, so this corresponds to the standard estimator. We ran 1,000 independent replications for each choice of v and w. Tables 1–3 and 5–6 present the results from estimating the two performance measures, giving the sample average and sample variance of our new estimator over the 1,000 replications. The average cycle lengths change with different choices of v; in order to make the results somewhat comparable across the tables, we changed the number of simulated cycles for each case so that the total expected number of simulated transitions remains the same. For Table 1, corresponding to v = 1, we simulated 1,000 cycles, and a greater number for the other tables. For example, the expected cycle length is 3.5 times as long for state 1 as for state 2, so in Table 2 we simulated 3,500 cycles. Since our new estimator reduces the variance but at the cost of extra computational effort, we also compare the efficiencies (inverse of the product of the variance and the time to generate the estimator) of our new estimator and the standard one, as suggested by Hammersley and Handscomb [1964] and Glynn and Whitt [1992].
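The efficiency comparison reported in Table 4 can be computed as follows (a sketch with made-up numbers; relative work and relative variance are measured against the standard regenerative method).

    def relative_efficiency(time_new, var_new, time_std, var_std):
        """Efficiency is the inverse of (work x variance); the relative efficiency
        of the new estimator versus the standard one is the ratio below, matching
        the columns of Table 4."""
        relative_work = time_new / time_std
        relative_var = var_new / var_std
        return 1.0 / (relative_work * relative_var)

    print(relative_efficiency(time_new=1.05, var_new=0.5, time_std=1.0, var_std=1.0))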
8.1 Results from Estimating Variance
We first examine the results from estimating the time-average variance σ² for a fixed cost function f. We performed 3 experiments, corresponding to return states v = 1, 2, and 4, and these results are given in Tables 1–3, respectively. The transition probabilities are symmetric around state 4 (the mean of the binomial stationary distribution), so our first choice of return state, v = 1, is fairly far from the mean. Notice that the variability of the variance estimator is smaller with w near the mean state 4, and that the variance reduction is greater the farther w is from v. The reason for this is that the excursions from v that go below 1 have little variability; because of the strong restoring force of the Ehrenfest model, such excursions tend to be very brief. On the other hand, excursions that get as far as the mean are likely to be quite long (and thus the contribution to the variance estimator tends to have large variability). In the second table we ran the same experiment with v = 2 and obtained similar results.
In Table 3 we examine the same model, but now with our return state v chosen to be the stationary mean, 4. The first thing to notice is that, compared with the other choices of the return state, the variance reduction is relatively small. State 4 is the best return state in the sense of minimizing the variance of the regenerative-variance estimator. Therefore, for this example, it appears that our estimator is a significant improvement over the standard regenerative estimator if the standard regenerative estimator is based on a relatively "bad" return state. However, if one is able to choose a near-optimal return state to begin with, our estimator yields a modest improvement. (Unfortunately, there are no reliable rules for choosing a
priori a good return state.) Comparing the three tables, we see that the minimum variability does not change much across tables (0.17 for Table 3, and 0.18 for the other tables). This example suggests that it may be possible to compensate for a bad choice of v by an appropriate choice of w.

Table 1. Estimating variance with v = 1.
w | Avg. of σ̃² | Sample Var.

Table 2. Estimating variance with v = 2.
w | Avg. of σ̃² | Sample Var.
Finally, we illustrate the computational burden of our method. Table 4 shows the work required for the results obtained in Table 1. The first column gives, for each choice of w, the relative work (CPU time for generating the sample path and output analysis) required for our new estimator; that is, the CPU time with our new estimator divided by the CPU time for the standard regenerative method. (The row with w = v corresponds to the standard regenerative method, so all of its entries are 1; the other entries are normalized with respect to these.) The second column gives the relative variance; that is, the sample variance of our new estimator divided by the sample variance for the standard regenerative estimator. The last column gives the relative efficiency; that is, the inverse of the product of the relative work from column 1 and the relative variance from the second column.

Table 3. Estimating variance with v = 4.
w | Avg. of σ̃² | Sample Var.

Table 4. Comparison of efficiency, v = 1.
w | Relative Work | Relative Var. | Relative Efficiency
Notice that in the cases where the variance reduction is small, the increase in
work is also small. More work is needed when there is a larger variance reduction,
but this is still no more than a few percent increase. Note that in the best case
the efficiency was improved by nearly a factor of eight. Because little
additional work is needed by our method, within each of the other tables, the run
times are approximately the same for the different values of w. Therefore, one can
roughly approximate the relative efficiencies for Tables 2 and 3 as the ratio of the
sample variances for states v and w.
It should also be noted that the Ehrenfest model considered here is very simple
compared with typical simulation models. The additional work required to compute
our new estimators is independent of the model, and so if the work to generate the
sample path is much larger than for the Ehrenfest model, the relative increase in
work would be correspondingly small.
8.2 Results from Estimating Hitting Times to a Set
We now consider estimating η, which is the mean hitting time to a set of states F starting from a state v, for our Ehrenfest model with reward function g(x) ≡ 1. We take F = {7}, which is hit infrequently. Tables 5 and 6 show our results from generating 1,000 independent replications for return states v = 4 and v = 2, respectively. In each replication, we generated a sample path which we used to compute the new estimators for both the numerator and denominator. Hence, using the terminology of Section 6, we let X̃1 = X̃2. For each path, we generated 1,000 cycles for v = 2 and 2,500 cycles for v = 4, so that the expected sample path length is the same in each case. We calculated (i.e., not using simulation) the theoretical value of η for each starting state, so in addition to examining the variance of our new estimator, we can also study the mean squared error. Note that the theoretical value of η depends on the starting state v.
First observe that there is no change in the estimates of η and their variances for certain choices of w. This is due to the fact that permuting w-cycles in these cases has no effect on the estimators of either the numerator or denominator. For the other values of w, the (relative) variance reduction is significantly greater when v = 2 (Table 6) than when v = 4 (Table 5). In addition, although the absolute bias is greatest for the same choice of w for both values of v, its magnitude is quite small, and when examining the mean squared error, the variance reduction overwhelms the effect of the bias.

Table 5. Estimating expected hitting time to state 7 with v = 4.
w | Avg. of η̃ | Sample Var. | MSE

Table 6. Estimating expected hitting time to state 7 with v = 2.
w | Avg. of η̃ | Sample Var. | MSE
We now explain why the choice of w that results in the most variance reduction is w = 6. In the original sample path, without permuting the w-cycles, the set F is hit a certain number of times. By permuting the w-cycles, we get a variance reduction if there are some v-cycles that hit w but not F and if, within a particular v-cycle that hits F, there is more than one w-cycle that hits F and does not hit v. Permuting the w-cycles then can distribute the hits to F to more of the v-cycles. The amount of variance reduction in estimating γ is largely determined by the difference between the maximum and minimum number of v-cycles that hit F obtainable from permuting the w-cycles. Choosing w ≥ 7 results in no variance reduction because we are working with a birth-death process, and so the process always hits F no later than it hits w within a v-cycle; permuting w-cycles therefore has no effect. Of the remaining choices for w, the choice w = 6 maximizes the number of w-cycles that hit F, hence the largest variance reduction. Therefore, in general, we suggest that the state w should be chosen so that w ∉ F and it is as "close" as possible to the set F, to maximize the number of w-cycles that hit F.
9. DIRECTIONS FOR FUTURE RESEARCH
We are currently investigating how to construct confidence intervals based on our
new permuted estimators. This is a difficult problem because of the complexity of
the estimators. Another area on which we are currently working is determining how
to choose the two sequences of regenerative times T 1 and T 2 when there are more
than two possibilities. For example, this arises when simulating a Markov chain,
since successive hits to any fixed state form a regenerative sequence. We explored
this to some degree experimentally in Section 8, but further study is needed.
ACKNOWLEDGMENTS
The authors would like to thank the Editor in Chief and the two anonymous referees
for their helpful comments on the paper.
A. PROOFS
A.1 Proof of Theorem 1
We need only prove that X̃′ has the same distribution as X̃; that is, the paths have the same distribution when 2-cycles are permuted. Recall that M2 is the number of times that stopping times in the sequence T2 occur by time T1(m1), and recall the decomposition of the path into an initial 1-2 segment, the 2-cycles, and a final 2-1 segment as in (8)–(10). For the path X̃, also count the number of times a stopping time from sequence T1 occurs within the 2-cycles of X̃ and the number of times that stopping times from the sequence T1 occur outside of the 2-cycles (we do not need to bother with the case M2 = 0, since then we do not change the path).

If f is a (measurable) function mapping sample paths to nonnegative real values, then f(X̃) can be written as a sum, over the possible numbers of 1-cycles and 2-cycles, of terms applied to the individual path segments, for some (measurable) functions f_{n,i}; and so it suffices to show that the probability that the initial 1-2 segment, the 2-cycles, and the final 2-1 segment lie in given (measurable) sets A_i is invariant under 2-cycle permutations. Note that this probability can be evaluated by first conditioning on M2 and the counts above. Given these, the initial 1-2 segment and the final 2-1 segment are conditionally independent of the 2-cycles, so the last probability can be written as the product of a factor involving only the initial and final segments and a factor involving only the 2-cycles. In examining the effect of permutations of the 2-cycles, we need consider only the latter factor, which we rewrite as a ratio; since we are interested only in the effect of permutations, we look at the numerator of that expression, and we are finally left with the task of showing that, for any permutation σ of the 2-cycle indices,

    P( 2-cycle 1 ∈ A_{σ(1)}, ..., 2-cycle M2 ∈ A_{σ(M2)} | M2, counts ) = P( 2-cycle 1 ∈ A_1, ..., 2-cycle M2 ∈ A_{M2} | M2, counts ).

But this follows from the fact that, conditional on this information, the 2-cycles of X̃ are exchangeable. Therefore, X̃′ has the same distribution as X̃, and the theorem is proved. Notice that we only used the conditional exchangeability of the cycles, and not the full independence.
A.2 Proof of Theorem 2
Recall that in Section 5 we derived the expression (16) for α̂(X̃) for the original sample path X̃. Using a permuted path X̃′ instead of the original path X̃ in (16), we get

    α̂(X̃′) = (1/m1) [ Σ_{k ∈ J′(1,2)} Y′(k)² + Σ_{k ∈ H′(1,2)} ( Y′12(k) + Σ_{l ∈ B′_k} Y′22(l) + Y′21(k) )² ],   (28)

where the primed Y′, H′, J′, B′ variables and sets are the same as the Y, H, J, B variables and sets, respectively, with X̃′ replacing X̃ in (16). Recall that E′ is the conditional expectation operator corresponding to a random (uniform) permutation of 2-cycles (as was done when constructing the path X̃′ from X̃) given the original sample path X̃. Also, recall that we define our new estimator to be

    α̃(X̃) = E′[ α̂(X̃′) | X̃ ],

which we will now show is equivalent to (17).
First note that by our construction of the path X̃′ from X̃, the first term in (28) does not change when replacing X̃ with X̃′:

    Σ_{k ∈ J′(1,2)} Y′(k)² = Σ_{k ∈ J(1,2)} Y(k)²,   (29)

and similarly the sums of the Y′12(k)² and Y′21(k)² over H′(1, 2) equal the corresponding sums for the original path, since the permutation only rearranges these segments. Now we compute the conditional expectation of the second term of (28). We can assume that h12 ≥ 3. Let ρ(k) be the index of the Y21 segment that follows the Y12(k) segment after a permutation of the 2-cycles. Note that ρ(k) ≠ ψ(k), since Y12(k) and Y21(ψ(k)) are always in the same 2-cycle, no matter how the 2-cycles are permuted. Any of the other indices from H(1, 2) is equally likely, however, so that

    E′[ Y12(k) Y21(ρ(k)) | X̃ ] = Y12(k) · (1/(h12 − 1)) Σ_{j ∈ H(1,2), j ≠ ψ(k)} Y21(j).   (30)

For the second summand in the second term of (28), each 2-cycle in J(2, 1) is equally likely to end up in any of the h12 new 1-cycles that contain T2 events, so that

    E′[ Σ_{l ∈ B′_k} Y22(l) | X̃ ] = (1/h12) Σ_{l ∈ J(2,1)} Y22(l),

which, after summing over k ∈ H(1, 2) together with the accompanying Y12 and Y21 factors, yields (31).
For the third summand in the second term of (28), we have

    E′[ Σ_{k ∈ H′(1,2)} ( Σ_{l ∈ B′_k} Y22(l) )² | X̃ ] = E′[ Σ_k Σ_{l ∈ B′_k} Y22(l)² | X̃ ] + E′[ Σ_k Σ_{l ≠ m; l,m ∈ B′_k} Y22(l) Y22(m) | X̃ ].

Now note that

    Σ_k Σ_{l ∈ B′_k} Y22(l)² = Σ_{l ∈ J(2,1)} Y22(l)²,

since every 2-cycle in J(2, 1) falls in exactly one of the sets B′_k. Also,

    E′[ Σ_k Σ_{l ≠ m} Y22(l) Y22(m) 1{l ∈ B′_k, m ∈ B′_k} | X̃ ] = Σ_{l ≠ m} Y22(l) Y22(m) · P′( l and m fall in the same B′_k ).   (32)

Now use the fact that this probability is 2/(q + 2), which follows from

Lemma 6. Suppose that p white balls, numbered 1 to p, are placed along with q black balls into p + q boxes arranged in a line, with each box getting exactly one ball. Apply a uniformly chosen random permutation to the balls. Then the probability that ball 1 and ball 2 are not separated by a black ball is 2/(q + 2).

To apply this lemma to (32), we let q be the number of 2-cycles that include one of the T1 stopping times, and p be the number of remaining 2-cycles.
Proof. Let D be the number of boxes in between the boxes containing ball 1 and ball 2, and let L_i be the box number containing ball i, i = 1, 2. Given that D = d, the probability that balls 1 and 2 are not separated by at least one black ball is the probability that all q black balls are chosen from the p + q − 2 − d boxes that are not between ball 1 and ball 2, which is C(p + q − 2 − d, q)/C(p + q − 2, q). Averaging over the distribution of D, the desired probability works out to 2/(q + 2).
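Lemma 6 is easy to confirm by simulation; the sketch below (ours) estimates the probability that the two tagged white balls are not separated by a black ball and compares it with 2/(q + 2).

    import random

    def mc_not_separated(p, q, reps=200000, seed=2):
        """Monte Carlo check of Lemma 6: p white balls (two of them tagged) and
        q black balls are placed uniformly at random in p+q boxes; estimate the
        probability that no black ball lies between the two tagged white balls."""
        rng, hits = random.Random(seed), 0
        balls = ['W1', 'W2'] + ['w'] * (p - 2) + ['B'] * q
        for _ in range(reps):
            rng.shuffle(balls)
            i, j = sorted((balls.index('W1'), balls.index('W2')))
            if 'B' not in balls[i + 1:j]:
                hits += 1
        return hits / reps

    p, q = 5, 3
    print(mc_not_separated(p, q), 2.0 / (q + 2))   # the two numbers should agree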
Hence,

    E′[ Σ_k Σ_{l ≠ m} Y22(l) Y22(m) 1{l ∈ B′_k, m ∈ B′_k} | X̃ ] = (2/(q + 2)) Σ_{l ≠ m} Y22(l) Y22(m).   (33)

Finally, putting together (29), (30), (31), and (33), we get that α̃(X̃) is as in (17). The unbiasedness and variance reduction follow directly from Theorem 1.
A.3 Proof of Theorem 5
We first need the following result.

Lemma 7. Suppose that q black balls, r white balls, and p red balls are placed at random in q + p + r boxes arranged in a line (one ball per box). The probability that there are no white balls in an interval formed by two particular black balls (or the start or end of the boxes) is q/(q + r).

Proof. Count boxes from the left until a non-red ball is encountered. The desired probability is the probability that the first non-red ball is a black ball. Since of the non-red balls q are black and r are white, this probability is q/(q + r).
Now we prove Theorem 5. First recall our definitions of I12(k), I22(l), I21(k), D12(k), D22(l), and D21(k) given in Section 6. Also recall (23). The first term in (23) is independent of the permutations, and so we now consider the second term. Note that I12(i), I22(j), and I21(k) are independent for any i, j, k. Then the conditional expectation

    E′[ max( I12(k), max_{l ∈ B′_k} I22(l), I21(ρ(k)) ) | X̃ ]

can be computed explicitly by conditioning on which 2-cycles fall between the relevant segments after the permutation, where we have used Lemma 7.
We now examine the estimation of μ. Recall (24) and (25). The first term in (24) is not affected by permuting the 2-cycles, but the second term is, and so we now examine the second term. First note that the contributions of the D12(k) are unchanged, since the D12(k) are unaffected by permutations of 2-cycles; the terms involving D21(k) and the indicators can be handled as for the denominator above. Finally, after the permutation of the 2-cycles, define R(k) to be the number of 2-cycles in J(2, 1) that immediately follow the path segment corresponding to D12(k), and let δ_1(k), ..., δ_{R(k)}(k) be the indices of those 2-cycles in J(2, 1) that immediately follow the path segment corresponding to D12(k), in the order they appear. Then the contribution of these 2-cycles to the reward accumulated before F is hit is

    Σ_{l=1}^{R(k)} D22(δ_l(k)) Π_{j < l} ( 1 − I22(δ_j(k)) ),

whose conditional expectation given X̃ is evaluated with the help of the following lemma.
Lemma 8. The conditional expectation, given X̃, of Σ_{l} D22(δ_l(k)) Π_{j<l}(1 − I22(δ_j(k))) can be written explicitly in terms of the number of 2-cycles in J(2, 1) that hit F and the sums of the D22(l) over the hitting and non-hitting 2-cycles, as defined in Section 6.

Proof. Suppose m balls are placed in n ≥ m boxes in a line. Let Z be the number of empty boxes on the left end. Then

    P(Z ≥ k) = C(n − k, m) / C(n, m)  for k = 0, 1, ..., n − m,

and

    E[Z] = Σ_{k ≥ 1} P(Z ≥ k) = (n − m)/(m + 1)

(substitute j = n − k and use the identity Σ_{j=m}^{n−1} C(j, m) = C(n, m + 1)). To get the mean number of D22's that do not hit F, we use the above formula with m equal to the number of 2-cycles in J(2, 1) that hit F and n equal to the total number of 2-cycles in J(2, 1).
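The box-counting formula used in the proof above can likewise be checked by simulation (a sketch, ours): the mean number of empty boxes to the left of the leftmost of m randomly placed balls in n boxes should be (n − m)/(m + 1).

    import random

    def mc_empty_left(n, m, reps=200000, seed=3):
        """Monte Carlo check: m balls are placed uniformly at random into n >= m
        boxes in a line (at most one per box); Z is the number of empty boxes to
        the left of the leftmost ball, and E[Z] should equal (n - m)/(m + 1)."""
        rng, total = random.Random(seed), 0
        for _ in range(reps):
            occupied = rng.sample(range(n), m)
            total += min(occupied)          # boxes 0..min-1 are empty
        return total / reps

    n, m = 10, 3
    print(mc_empty_left(n, m), (n - m) / (m + 1))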
Finally, putting together Lemma 8 and (34), (35), and (36), we get our new estimator for μ. The unbiasedness and variance reduction of our two estimators follow directly from Theorem 1.
B. PSEUDO-CODE
B.1 Estimator in Theorem 2
Below is the pseudo-code for the estimator in Theorem 2. (The pseudo-code for the estimator in Theorem 5 is similar.) Note that it is written for a discrete-event simulation of a continuous-time process; discrete-time processes can be handled by letting the inter-event time Δ always be 1. The estimator is denoted by alpha.
m ← number of regenerative 1-cycles to simulate;
k ← 0;              // counter for number of regenerative 1-cycles
h ← 0;              // cardinality of H(1,2)
sumy12 ← 0;         // sum of the Y12(k) over H(1,2)
sumy21 ← 0;         // sum of the Y21(k) over H(1,2)
sumy22 ← 0;         // sum of the Y22(l) over J(2,1)
sumysq ← 0;         // sum of the Y(k)^2 over J(1,2)
sumy12sq ← 0;       // sum of the Y12(k)^2 over H(1,2)
sumy21sq ← 0;       // sum of the Y21(k)^2 over H(1,2)
sumy22sq ← 0;       // sum of the Y22(l)^2 over J(2,1)
sumy12y21 ← 0;      // sum of the Y12(k) Y21(psi(k)) over H(1,2)
accum ← 0;          // accumulator for Y over the current segment
laststoptime ← 1;   // type (1 or 2) of last stopping time to occur
generate initial state x;
do while (k < m)
    generate inter-event time Δ;
    accum ← accum + g(x) · Δ;
    generate next state x;
    if (a stopping time from T2 occurs at current time t) then
        if (laststoptime = 1) then
            lasty12 ← accum;                       // Y12 of the current 1-cycle
        else
            sumy22 ← sumy22 + accum;               // a 2-cycle in J(2,1)
            sumy22sq ← sumy22sq + accum^2;
        endif
        laststoptime ← 2;  accum ← 0;
    endif
    if (a stopping time from T1 occurs at current time t) then
        k ← k + 1;
        if (laststoptime = 2) then                 // this 1-cycle is in H(1,2)
            h ← h + 1;
            sumy12 ← sumy12 + lasty12;  sumy12sq ← sumy12sq + lasty12^2;
            sumy21 ← sumy21 + accum;    sumy21sq ← sumy21sq + accum^2;
            if (h = 1) then firsty12 ← lasty12;    // paired with the last Y21 below
            else sumy12y21 ← sumy12y21 + lasty12 · lasty21;
            endif
            lasty21 ← accum;
        else                                       // this 1-cycle is in J(1,2)
            sumysq ← sumysq + accum^2;
        endif
        laststoptime ← 1;  accum ← 0;
    endif
enddo
if (h ≥ 1) then sumy12y21 ← sumy12y21 + firsty12 · lasty21; endif   // psi wrap-around
if (h = 0) then
    alpha ← sumysq/m;                              // no T2 stopping times occurred
else
    alpha ← value of (17) (or of (14) when h ≤ 2) computed from m, h, and the accumulated sums;
endif
REFERENCES
A Guide to Simulation.
A new variance-reduction technique for regenerative simulations of Markov chains.
Practical Nonparametric Statistics.
Simulating stable stochastic systems.
Bootstrap methods: Another look at the jackknife.
A joint central limit theorem for the sample mean and regenerative variance estimator.
The asymptotic efficiency of simulation estimators.
Monte Carlo Methods.
Topics on Regenerative Processes.
Regenerative Phenomena.
Introduction to the Theory of Statistics.
Some invariance principles relating to jackknifing and their role in sequential analysis.
Approximation Theorems of Mathematical Statistics.
Importance sampling for highly reliable Markovian systems.
Regenerative Stochastic Simulation.
280293 | Intelligent Adaptive Information Agents. | Adaptation in open, multi-agent information gathering systems is important for several reasons. These reasons include the inability to accurately predict future problem-solving workloads, future changes in existing information requests, future failures and additions of agents and data supply resources, and other future task environment characteristic changes that require system reorganization. We have developed a multi-agent distributed system infrastructure, RETSINA (REusable Task Structure-based Intelligent Network Agents) that handles adaptation in an open Internet environment. Adaptation occurs both at the individual agent level as well as at the overall agent organization level. The RETSINA system has three types of agents. Interface agents interact with the user receiving user specifications and delivering results. They acquire, model, and utilize user preferences to guide system coordination in support of the users tasks. Task agents help users perform tasks by formulating problem solving plans and carrying out these plans through querying and exchanging information with other software agents. Information agents provide intelligent access to a heterogeneous collection of information sources. In this paper, we concentrate on the adaptive architecture of the information agents. We use as the domain of application WARREN, a multi-agent financial portfolio management system that we have implemented within the RETSINA framework. | Introduction
Adaptation is behavior of an agent in response to unexpected
(i.e., low probability) events or dynamic environments. Examples
of unexpected events include the unscheduled failure
of an agent, an agent's computational platform, or underlying
information sources. Examples of dynamic environments include
the occurrence of events that are expected but it is not
known when (e.g., an information agent may reasonably expect
to become at some point overloaded with information re-
quests), events whose importance fluctuates widely (e.g., price
information on a stock is much more important while a trans-action
is in progress, and even more so if certain types of news
become available), the appearance of new information sources
and agents, and finally underlying environmental uncertainty
(e.g., not knowing beforehand precisely how long it will take
to answer a particular query).
We have been involved in designing, building, and analyzing
multi-agent systems that exist in these types of dynamic
and partially unpredictable environments. These agents handle
adaptation at several different levels, from the high-level
multi-agent organization down to the monitoring of individual
method executions. In the next section we will discuss the
individual architecture of these agents. Then, in the section
entitled "Agent Adaptation" we will discuss the problems and
solutions to agent adaptation at the organizational, planning,
scheduling, and execution monitoring levels. In particular, we
will discuss how our architecture supports organizational and
planning-level adaptation currently and what areas are still under
active investigation. We will discuss schedule adaptation
only in passing and refer the interested reader to work else-
where. Finally, we will present a detailed model and some
experiments with one particular behavior, agent self-cloning,
for execution-level adaptation.
Agent Architecture
Most of our work in the information gathering domain to
date has been centered on the most basic type of intelligent
agent: the information agent, which is tied closely to a single
data source. The dominant domain level behaviors of
an information agent are: retrieving information from external
information sources in response to one shot queries (e.g.
"retrieve the current price of IBM stock"); requests for periodic
information (e.g. "give me the price of IBM every
monitoring external information sources for
the occurrence of given information patterns, called
monitoring requests, (e.g. "notify me when IBM's price increases
by 10% over $80"). Information originates from external
sources. Because an information agent does not have
control over these external information sources, it must ex-
tract, possibly integrate, and store relevant pieces of information
in a database local to the agent. The agent's information
processing mechanisms then process the information in the
local database to service information requests received from
other agents or human users. Other simple behaviors that
are used by all information agents include advertising their
capabilities, managing and rebuilding the local database when
necessary, and polling for KQML messages from other agents.
An information agent's reusable behaviors are facilitated by
its reusable agent architecture, i.e. the domain-independent
abstraction of the local database schema, and a set of generic
software components for knowledge representation, agent
control, and interaction with other agents. The generic software
components are common to all agents, from the simple
information agents to more complex multi-source information
agents, task agents, and interface agents. The design of
useful basic agent behaviors for all types of agents rests on
a deeper specification of agents themselves, and is embodied
in an agent architecture. Our current agent architecture is
an instantiation of the DECAF (Distributed, Environment-Centered
Agent Framework) architecture (Decker et al.
1995).
Control: Planning, Scheduling, and Action Execution
The control process for information agents includes steps for
planning to achieve local or non-local objectives, scheduling
the actions within these plans, and actually carrying out these
actions. In addition, the agent has a shutdown and an initialization
process. The agent executes the initialization process
upon startup; it bootstraps the agent by giving it initial objectives
to poll for messages from other agents and to advertise its
capabilities. The shutdown process is executed when the agent
either chooses to terminate or receives an uncontinueable error
signal. The shutdown process assures that messages are
sent from the terminating agent asserting goal dissolution to
client agents and requesting goal dissolution to server agents
(see the section on planning adaptation).
The agent planning process (see Figure 1) takes as input the
agent's current set of goals G (including any new, unplanned-
for goals G n ), and the set of current task structures (plan in-
stances) T . It produces a new set of current task structures
(Williamson, Decker, & Sycara 1996).
ffl Each individual task T represents an instantiated approach
to achieving one or more of the agent's goals G-it is a
unit of goal-directed behavior. Every task has an (optional)
deadline.
ffl Each task consists of a partially ordered set of subtasks
and/or basic actions A. Currently, tasks and actions are
related by how information flows from the outcomes of one
task or action to the provisions of anther task or action. Sub-tasks
may inherit provisions from their parents and provide
outcomes to their parents. Each action also has an optional
deadline and an optional period. If an action has both a period
and a deadline, the deadline is interpreted as the one
for the next periodic execution of the basic action.
The most important constraint that the planning/plan retrieval
algorithm needs to meet (as part of the agent's overall
properties) is to guarantee at least one task for every goal until
the goal is accomplished, removed, or believed to be unachievable
(Cohen & Levesque 1990). For information agents, a
common reason that a goal in unachievable is that its specification
is malformed, in which case a task to respond with
the appropriate KQML error message is instantiated. An information
agent receives in messages from other agents three
important types of goals:
1. Answering a one-shot query about the associated database.
2. Setting up a periodic query on the database, that will be
run repeatedly, and the results sent to the requester each
time (e.g., "tell me the price of IBM every
3. Monitoring a database for a change in a record, or the addition
of a new record (e.g., "tell me if the price of IBM
drops below $80 within 15 minutes of its occurrence").
The agent scheduling process in general takes as input the
agent's current set of task structures T , in particular, the set
of all basic actions, and decides which basic action, if any, is
to be executed next. This action is then identified as a fixed
intention until it is actually carried out (by the execution com-
ponent). Constraints on the scheduler include:
ffl No action can be intended unless it is enabled.
ffl Periodic actions must be executed at least once during their
period (as measured from the previous execution instance)
(technically, this is a max invocation separation constraint,
not a "period").
ffl Actions must begin execution before their deadline.
ffl Actions that miss either their period or deadline are considered
to have failed; the scheduler must report all failed
actions. Sophisticated schedulers will report such failures
(or probable failures) before they occur by reasoning about
action durations (and possibly commitments from other
agents) (Garvey & Lesser 1995).
ffl The scheduler attempts to maximize some predefined utility
function defined on the set of task structures. For the
information agents, we use a very simple notion of utility-
every action needs to be executed in order to achieve a task,
and every task has an equal utility value.
In our initial implementation, we use a simple earliest-
deadline-first scheduling heuristic. A list of all actions is constructed
(the schedule), and the earliest deadline action that
is enabled is chosen. Enabled actions that have missed their
deadlines are still executed but the missed deadline is recorded
and the start of the next period for the task is adjusted to help
it meet the next period deadline. When a periodic task is
chosen for execution, it is reinserted into the schedule with a
deadline equal to the current time plus the action's period.
The execution monitoring process takes the agent's next intended
action and prepares, monitors, and completes its exe-
cution. The execution monitor prepares an action for execution
by setting up a context (including the results of previous
actions, etc.) for the action. It monitors the action by optionally
providing the associated computation-limited resources-
for example, the action may be allowed only a certain amount
of time and if the action does not complete before that time is
up, the computation is interrupted and the action is marked
as having failed. Upon completion of an action, results are
recorded, downstream actions are passed provisions if so indi-
cated, and runtime statistics are collected.
Planner Scheduler Execution
Monitor
Task Structures
Plan Library
query
task
montr
task
montr
task
run-query
run-query
send-results
register-trigger
register-trigger
Schedule
I.G.
task poll-for-msgs
Current
Action
run-query
Control Flow
Data Flow
(ask-all .)
(DB-monitor .)
(DB-monitor .)
site specific
external
interface code
Mirror of External DB
extra attributes
Registered triggers
Goals/Requests
Current Activity Information
Figure
1: Overall view of data and control flow in an information agent.
Agent Adaptation
In this section we briefly consider several types of adaptation
supported by this individual agent architecture in our current
and previous work. These types include organizational,
planning, scheduling, and execution-time adaptation. We
are currently actively involved in expanding an agent's adaptation
choices at the organizational and planning levels-in
this short paper we will only describe how our architecture
supports organizational and planning-level adaptation, what
we have currently implemented, and what directions we are
currently pursuing. We have not, in our current work, done
much with schedule adaptation; instead we indicate future
potential by pointing to earlier work within this general architecture
that addresses precisely schedule adaptation. Fi-
nally, we present a fairly comprehensive account of one type
of execution-time adaptation ("self-cloning").
Organizational Adaptation
It has been clear to organizational theorists since at least the
60's that there is no one good organizational structure for human
organizations (Lawrence & Lorsch 1967). Organizations
must instead be chosen and adapted to the task environment
at hand. Most important are the different types and qualities
of uncertainty present in the environment (e.g., uncertainty
associated with inputs and output measurements, uncertainty
associated with causal relationships in the environment, the
time span of definitive feedback after making a decision (Scott
1987)). Recently, researchers have proposed that organizations
grow toward, and structure themselves around, sources
of information that are important to them because they are
sources of news about how the future is (evidently) turning
out (Stinchcombe 1990).
In multi-agent information systems, one of the most important
sources of uncertainty revolves around what information
is available from whom (and at what cost). We have
developed a standard basic advertising behavior that allows
agents to encapsulate a model of their capabilities and send
it to a "matchmaker" information agent (Kuokka & Harada
1995). Such a matchmaker agent can then be used by a multi-agent
system to form several different organizational struc-
tures(Decker, Williamson, & Sycara 1996):
Uncoordinated Team: agents use a basic shared behavior for
asking questions that first queries the matchmaker as to
who might answer the query, and then chooses an agent
randomly for the target query. Very low overhead, but potentially
unbalanced loads, reliability limited by individual
data sources, and problems linking queries across multiple
ontologies. Our initial implementation used this organization
exclusively.
Federations: (e.g., (Wiederhold, Wegner, & Cefi 1992;
Genesereth & Katchpel 1994; Finin et al. 1994)) agents
give up individual autonomy over choosing who they will
do business with to a locally centralized "facilitator" (an
extension of the matchmaker concept) that "brokers" re-
quests. Centralization of message traffic potentially allows
greater load balancing and the provision of automatic translation
and mediation services. We have constructed general
purpose brokering agents, and are currently conducting an
empirical study of matchmaking vs. brokering behavior.
Of course, a hybrid organization is both possible and compelling
in many situations.
Economic Markets: (e.g., (Wellman 1993)) agents use price,
reliability, and other utility characteristics with which to
choose another agent. The matchmaker can supply to each
agent the appropriate updated pricing information as new
agents enter and exit the system, or alter their advertise-
ments. Agents can dynamically adjust their organization
as often as necessary, limited by transaction costs. Potentially
such organizations provide efficient load balancing
and the ability to provide truly expensive services (expen-
sive in terms of the resources required). Both brokers and
matchmakers can be used in market-based systems (corre-
sponding to centralized and decentralized markets, respec-
tively).
Bureaucratic Functional Units: Traditional
manager/employee groups of a single multi-source information
agent (manager) and several simple information
agent (employees). By organizing into functional units, i.e.,
related information sources, such organizations concentrate
on providing higher reliability (by using multiple underlying
sources), simple information integration (from partially
overlapping information), and load balancing. "Manag-
ing" can be viewed as brokering with special constraints
on worker behavior brought about by the manager-worker
authority relationship.
This is not an exhaustive list. Our architecture has supported
other explorations into understanding the effects of
organizational structures (Decker 1996).
Planning Adaptation
The "planner" portion of our agent architecture consists of
a new hierarchical task network based planner using a plan
formalism that admits sophisticated control structures such
as looping and periodic tasks (Williamson, Decker, & Sycara
1996). It has features derived from earlier classical planning
work, as well as task structure representations such as
TCA/TCX (Simmons 1994) and T-MS (Decker & Lesser
1995). The focus of planning in our system is on explicating
the basic information flow relationships between tasks, and
other relationships that affect control-flow decisions. Most
control relationships are derivative of these more basic rela-
tionships. Final action selection, sequencing, and timing are
left up to the agent's local scheduler (see the next subsection).
Some types of adaptation expressed by our agents at this level
in our current implementation include:
Adapting to failures: At any time, any agent in the system
might be unavailable or might go off-line (even if you are
in the middle of a long term monitoring situation with that
agent). Our planner's task reductions handle these situations
so that such failures are dealt with smoothly. If alternate
agents are available, they will be contacted and the
subproblem restarted (note that unless there are some sort
of partial solutions, this could still be expensive). If no alternate
agent is available, the task will have to wait. In the
future, such failures will signal the planner for an opportunity
to replan.
Multiple reductions: Each task can potentially be reduced in
several different ways, depending on the current situation.
Thus even simple tasks such as answering a query may be
result in very different sequences of actions (looking up an
agent at the matchmaker; using a already known agent, using
a cached previous answer).
Interleaved planning and execution: The reduction of some
tasks can be delayed until other, "information gathering"
tasks, are completed.
Previous work has focussed on coordination mechanisms
alone. In particular, the Generalized Partial Global Planning
family of coordination mechanisms is a domain-independent
approach to multi-agent scheduling- and planning-level coordination
that works in conjunction with an agent's existing
local scheduler to adapt a plan by adding certain constraints
(Decker & Lesser 1995). These include commitments to do a
task with a minimum level of quality, or commitments to do
a task by a certain deadline. If the resulting plan can be successfully
scheduled, these local commitments can be communicated
to other agents where they become non-local commitments
to those agent's local schedulers. Not all mechanisms
are needed in all environments. Nagendra-Prasad has begun
work on learning which mechanisms are needed in an environment
automatically (Prasad & Lesser 1996).
Scheduling Adaptation
In our current work, we have been using a fairly simple earliest
deadline first scheduler that does little adaptation besides
adjusting the deadlines of periodic (technically "max invocation
separation constrained") actions that miss or are about
to miss their initial deadlines. Also, agents can dynamically
change their information request periods which affect only
the scheduling of the related actions.
Earlier work within this architecture has used a more sophisticated
"Design-to-Time" scheduling algorithm, which
adapts the local schedule in an attempt to maximize schedule
quality while minimizing missed deadlines (Garvey & Lesser
1995; Decker & Lesser 1995). In doing so, the scheduler
may choose from both "multiple methods" (different algorithms
that represent difference action duration/result quality
tradeoffs) and anytime algorithms (summarized by du-
ration/quality probability distribution tables (Zilberstein &
Russell 1992)).
Execution Adaptation
Within this architecture, previous execution-time adaptation
has focussed on monitoring actions (Garvey &
Lesser 1995). Recently, we have begun looking at load-
balancing/rebalancing behaviors such as agent cloning.
Cloning Cloning is one of an information agent's possible
responses to overloaded conditions. When an information
agent recognizes via self-reflection that it is becoming over-
loaded, it can remove itself from actively pursuing new queries
("unadvertising" its services in KQML) and create a new information
agent that is a clone of itself. To do this, it uses
a simple model of how it's ability to meet new deadlines is
related to the characteristics of it's current queries and other
tasks. It compares this model to a hypothetical situation that
describes the effect of adding a new agent. In this way, the
information agent can make a rational meta-control decision
about whether or not it should undertake a cloning behavior.
This self-reflection phase is a part of the agent's execution
monitoring process. The start and finish time of each action is
recorded as well as a running average duration for that action
class. A periodic task is created to carry out the calculations
required by the model described below.
The key to modeling the agent's load behavior is its current
task structures. Since one-shot queries are transient, and
simple repeated queries are just a subcase of database monitoring
queries, we focus on database monitoring queries only.
Each monitoring goal is met by a task that consists of three
activities: run-query, check-triggers, and send-results. Run-
query's duration is mostly that of the external query interface
function. Check-triggers, which is executed whenever the local
DB is updated and which thus is an activity shared by
all database monitoring tasks, takes time proportional to the
number of queries. Send-results takes time proportional to
the number of returned results. Predicting performance of an
information agent with n database monitoring queries would
thus involve a quadratic function, but we can make a simplification
by observing that the external query interface functions
in all of the information agents we have implemented so far
using the Internet (e.g., stock tickers, news, airfares) take an
order of magnitude more time than any other part of the system
(including measured planning and scheduling overhead).
If we let E be the average time to process an external query,
then with n queries of average period p, we can predict an idle
percentage of:

    I% = (p - En) / p = 1 - En/p.    (1)
We validate this model in the next section.
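To make the model concrete, the short sketch below (Python; the function and variable names are our own illustration, not part of the implemented agents) computes the predicted idle fraction of Equation 1 and the corresponding saturation point:

```python
def predicted_idle(n, E, p):
    """Predicted idle fraction for n periodic queries of period p seconds,
    each spending E seconds in the external query interface (Equation 1)."""
    return max(0.0, 1.0 - (n * E) / p)

def saturation_point(E, p):
    """Number of queries at which the agent saturates (idle time reaches zero)."""
    return p / E

# Example: 10-second external queries with a 60-second period saturate at 6 queries.
print(predicted_idle(4, 10, 60), saturation_point(10, 60))   # 0.333..., 6.0
```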
When an information agent gets cloned, the clone could be
set up to use the resources of another processor (via an 'agent
server', or a migratable Java or Telescript program). However,
in the case of information agents that already spend the majority
of their processing time in network I/O wait states, an
overhead proportion O < 1 of the En time units each period
is available for processing (see Footnote 1). Thus, as a single agent becomes
overloaded as it reaches p/E queries, a new agent can
be cloned on the same system to handle a certain number of additional
queries (more when the second agent runs on a separate processor).
This can continue, with the i-th agent on the same
processor handling progressively fewer additional queries (note the diminishing
returns). We also demonstrate this experimentally in
the next section. For two agents, the idle percentage should
then follow the model
I 1+2
(2)
It is important to note how our architecture supports this
type of introspection and on-the-fly agent creation. The execution
monitoring component of the architecture computes
and stores timing information about each agent action, so that
the agent learns a good estimate for the value of E. The scheduler,
even the simple earliest-deadline-first scheduler, knows
the actions and their periods, and so can compute the idle
percentage I%. In the systems we have been building, new
queries arrive slowly and periods are fairly long, in comparison
to E, so the cloning rule waits until there are
queries before cloning. In a faster environment, with new
queries arriving at a rate r and with cloning taking duration
C, the cloning behavior should be begun when the number
of queries reaches
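One way to operationalize this cloning rule is sketched below; the threshold expression is our own illustration (the exact expression in the text is elided), under the assumption that the agent should clone before queries arriving during the cloning operation push it past saturation:

```python
def should_clone(n, E, p, r=0.0, C=0.0):
    """Decide whether an information agent should clone itself.
    n: current number of monitoring queries; E: average external query time;
    p: average query period; r: arrival rate of new queries; C: cloning duration.
    p/E is the saturation point; r*C anticipates queries that will arrive while
    the clone is being created (illustrative threshold only)."""
    return n >= p / E - r * C
```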
Execution Adaptation: Experimental Results
We undertook an empirical study to measure the baseline performance
of our information agents, and to empirically verify
the load models presented in the previous section for both a
single information agent without the cloning behavior, and
an information agent that can clone onto the same processor.
We also wanted to verify our work in the context of a real
application (monitoring stock prices).
Our first set of experiments were oriented toward the
measurement of the baseline performance of an information
agent.
Figure 2 shows the average idle percentage, and the average
percentage of actions that had deadlines and that missed them,
for various task loads.
(Footnote 1: Another way to recoup this time is to run the blocking external
query in a separate process, breaking run-query into two parts.
We are currently comparing the overhead of these two different uni-processor
solutions; in any case, we stress that both behaviors are
reusable and can be used by any existing information agent without
reprogramming. Cloning to another processor still has the desired effect.)
The query period was held fixed, and the external query time was fixed at 10 seconds (but
nothing else within the agent was fixed). Each experiment was
run for 10 minutes and repeated 5 times. As expected, the idle
time decreases and the number of missed deadlines increases,
especially after the predicted saturation point (n = 6). The
graph also shows the average amount of time by which an action
missed its deadline.
The next step was to verify our model of single information
agent loading behavior (Equation 1). We first used a partially
simulated information agent to minimize variation factors external
to the information agent architecture. Later, we used a
completely real agent with a real external query interface (the
Security APL stock ticker agent).
On the left of Figure 3 is a graph of the actual and predicted
idle times for an information agent that monitors a simulated
external information source that takes a constant 10 seconds. 2
The information agent being examined was given tasks by a
second experiment-driver agent. Each experiment consisted
of a sequence of tasks (n) given to the information
agent at the start. Each task had a period of 60 seconds,
and each complete experiment was repeated 5 times. Each
experiment lasted 10 minutes. The figure clearly shows how
the agent reaches saturation after the 6th task as predicted by
the model (n = 6). The idle time never quite drops below
10% because the first minute is spent idling between startup
activities (e.g., making the initial connection and sending the
batch of tasks). After adding in this extra base idle time, our
model predicts the actual utilization quite well, as measured by R^2
(R^2 is a measure of the total variance explained by the model).
We also ran this set of experiments using a real external in-
terface, that of the Security APL stock ticker. The results are
shown graphically on the right in Figure 3. Five experiments
were again run with a period of 60 seconds (much faster than
normal for such tasks). Our utilization
model also correctly predicted the performance of this real system,
as measured by R^2, and the differences between the model
and the experimental results were not significant by either t-tests
or non-parametric signed-rank tests. The odd utilization
results that occurred while testing were caused
by network delays that significantly changed the average value
of E (the duration of the external query). However, since the
agent's execution monitor measures this value during problem
solving, the agent can still react appropriately (the model still
fits fine).
Finally, we extended our model to predict the utilization for
a system of agents with the cloning behavior, as indicated in
the previous section. Figure 4 shows the predicted and actual
results over loads of 1 to 10 tasks with periods of 60 seconds,
repeated as before. Agent 1 clones itself onto the same
processor when n > 5. In this case, the model again fits well.
(Footnote 2: All the experiments described here were done on a standard
timesharing Unix workstation while connected to the network.)
Figure 2: A graph of the average percentage idle time and average percentage of actions with deadlines that missed them for various
loads (left Y axis). Superimposed on the graph, and keyed to the right axis, are the average number of seconds by which a missed
deadline is missed.
Figure 3: On the left, a graph of predicted and actual utilization for an information agent with a simulated external query
interface. On the right, the same graph for the Security APL stock ticker agent.
The differences between the model and the measured values
are not significant by t-test or signed-ranks. The same graph
shows the predicted curve for one agent (from the left side of
Figure 3) as a comparison. (Since the potential second agent would, if it existed, be totally
idle below the cloning point, the idle curve differs there in the cloning case.)
Figure 4: Predicted idle percentages for a single non-cloning
agent, and an agent with the cloning behavior across various
task loads. Plotted points are the measured idle percentages
from experimental data including cloning agents.
Current & Future Work
This paper has discussed adaptation in a system of intelligent
agents at four different levels: organizational, planning,
scheduling, and execution. Work at the organizational and
planning levels is a current, active pursuit; we expect to return
to schedule adaptation as time and resources permit. Cur-
rently, we are conducting an empirical study into matchmak-
ers, brokers, and related hybrid organizations.
This paper also discussed a fairly detailed model of, and
experimentation with, a simple cloning behavior we have im-
plemented. Several extensions to this cloning model are being
considered. In particular, there are several more intelligent
ways with which to divide up the tasks when cloning occurs
in order to use resources more efficiently (and to keep queries
balanced after a cloning event occurs). These include:
- Partitioning existing tasks by time/periodicity, so that the
resulting agents have a balanced, schedulable set of tasks.
- Partitioning tasks by client: all tasks from agent 1 end up
at the same clone.
- Partitioning tasks by class/type/content: all tasks about one
subject (e.g., the stock price of IBM) end up at the same
clone.
- For multi-source information agents, partitioning tasks by
data source: all tasks requiring the use of source A end up
at the same clone.
Acknowledgements
The authors would like to thank the reviewers for their helpful
comments. This work has been supported in part by
ARPA contract F33615-93-1-1330, in part by ONR contract
N00014-95-1-1092, and in part by NSF contract IRI-
9508191.
References
Intention is choice with commitment.
Designing a family of coordination algorithms.
MACRON: an architecture for multi-agent cooperative information gathering.
Modeling information agents: Advertisements.
Task environment centered simulation. In Prietula.
KQML as an agent communication language.
Representing and scheduling satisficing tasks.
Software agents. Communications of the ACM.
On using KQML for matchmaking.
Organization and Environment.
In AAAI Spring Symposium on Adaptation.
Organizations: Rational.
Structured control for autonomous robots.
Information and Organizations. University of California Press.
A market-oriented programming environment and its application to distributed multicommodity flow problems.
Toward megaprogramming.
Unified information and control flow in hierarchical task networks.
Constructing utility-driven real-time systems using anytime algorithms.
3-D Scene Data Recovery Using Omnidirectional Multibaseline Stereo

Abstract: A traditional approach to extracting geometric information from a large scene is to compute multiple 3-D depth maps from stereo pairs or direct range finders, and then to merge the 3-D data. However, the resulting merged depth maps may be subject to merging errors if the relative poses between depth maps are not known exactly. In addition, the 3-D data may also have to be resampled before merging, which adds additional complexity and potential sources of errors. This paper provides a means of directly extracting 3-D data covering a very wide field of view, thus by-passing the need for numerous depth map merging. In our work, cylindrical images are first composited from sequences of images taken while the camera is rotated 360° about a vertical axis. By taking such image panoramas at different camera locations, we can recover 3-D data of the scene using a set of simple techniques: feature tracking, an 8-point structure from motion algorithm, and multibaseline stereo. We also investigate the effect of median filtering on the recovered 3-D point distributions, and show the results of our approach applied to both synthetic and real scenes.

1 Introduction
A traditional approach to extracting geometric information from a large scene is to compute multiple
(possibly numerous) 3-D depth maps from stereo pairs, and then to merge the 3-D data [Ferrie
and Levine, 1987; Higuchi et al., 1993; Parvin and Medioni, 1992; Shum et al., 1994]. This is not
only computationally intensive, but the resulting merged depth maps may be subject to merging errors,
especially if the relative poses between depth maps are not known exactly. The 3-D data may
also have to be resampled before merging, which adds additional complexity and potential sources
of errors.
This paper provides a means of directly extracting 3-D data covering a very wide field of view,
thus by-passing the need for numerous depth map merging. In our work, cylindrical images are
first composited from sequences of images taken while the camera is rotated 360° about a vertical
axis. By taking such image panoramas at different camera locations, we can recover 3-D data of
the scene using a set of simple techniques: feature tracking, 8-point direct and iterative structure
from motion algorithms, and multibaseline stereo.
There are several advantages to this approach. First, the cylindrical image mosaics can be built
quite accurately, since the camera motion is very restricted. Second, the relative pose of the various
camera locations can be determined with much greater accuracy than with regular structure from
motion applied to images with narrower fields of view. Third, there is no need to build or purchase a
specialized stereo camera whose calibration may be sensitive to drift over time-any conventional
video camera on a tripod will suffice. Our approach can be used to construct models of building
interiors, both for virtual reality applications (games, home sales, architectural remodeling), and
for robotics applications (navigation).
In this paper, we describe our approach to generate 3-D data corresponding to a very wide field
of view (specifically 360°), and show results of our approach on both synthetic and real scenes.
We first review relevant work in Section 2 before delineating our basic approach in Section 3. The
method to extract wide-angle images (i.e., panoramic images) is described in Section 4. Section 5
reviews the 8-point algorithm and shows how it can be applied for cylindrical panoramic images.
Section 6 describes two methods of extracting 3-D point data: the first relies on unconstrained tracking
and using 8-point data input, while the second constrains the search for feature correspondences
to epipolar lines. We briefly outline our approach to modeling the data in Section 7; details
are given elsewhere [Kang et al., 1995a]. Finally, we show results of our approach in Section 8 and
close with a discussion and conclusions.
2 Relevant work
There is a significant body of work on range image recovery using stereo (a comprehensive survey
is given in [Barnard and Fischler, 1982]). Most work on stereo uses images with limited fields of
view. One of the earliest works to use panoramic images is the omnidirectional stereo system of
Ishiguro [Ishiguro et al., 1992], which uses two panoramic views. Each panoramic view is created
by one of the two vertical slits of the camera image sweeping around 360°; the cameras (which
are displaced in front of the rotation center) are rotated by very small angles, typically about 0.4°.
One of the disadvantages of this method is the slow data accumulation, which takes about 10 minutes.
The camera angular increments must be approximately 1/f radians, and are assumed to be known
a priori.
Murray [Murray, 1995] generalizes Ishiguro et al.'s approach by using all the vertical slits of
the image (except in the paper, he uses a single image raster). This would be equivalent to structure
from known motion or motion stereo. The advantage is more efficient data acquisition, done at
lower angular resolution. The analysis involved in this work is similar to Bolles et al.'s [Bolles et
al., 1987] spatio-temporal epipolar analysis, except that the temporal dimension is replaced by that
of angular displacement.
Another related work is that of plenoptic modeling [McMillan and Bishop, 1995]. The idea is to
composite rotated camera views into panoramas, and based on two cylindrical panoramas, project
disparity values between these locations to a given viewing position. However, there is no explicit
3-D reconstruction.
Our approach is similar to that of [McMillan and Bishop, 1995] in that we composite rotated
camera views to panoramas as well. However, we are going a step further in reconstructing 3-D
feature points and modeling the scene based upon the recovered points. We use multiple panoramas
for more accurate 3-D reconstruction.
Figure 1: Generating scene model from multiple 360° panoramic views.
3 Overview of approach
Our ultimate goal is to generate a photorealistic model to be used in a variety of scenarios. We are
interested in providing a simple means of generating such models. We also wish to minimize the
use of CAD packages as a means of 3-D model generation, since such an effort is labor-intensive.
In addition, we impose the requirement that the means of generating models from real scene be
done using commercially available equipment. In our case, we use a workstation with framegrabber
(real-time image digitizer) and a commercially available 8-mm camcorder.
Our approach is straightforward: at each camera location in the scene, capture sequences of
images while rotating the camera about the vertical axis passing through the camera optical center.
Composite each set of images to produce panoramas at each camera location. Use stereo to extract
3-D data of the scene. Finally, model the scene using these 3-D data input and render it with the
texture provided by the input 2-D image. This approach is summarized in Figure 1.
By using panoramic images, we can extract 3-D data covering a very wide field of view, thus
by-passing the need for numerous depth map merging. Multiple depth map merging is not only
computationally intensive, but the resulting merged depth maps may be subject to merging errors,
especially if the relative poses between depth maps are not known exactly. The 3-D data may also
have to be resampled before merging, which adds additional complexity and potential sources of
errors.

Figure 2: Compositing multiple rotated camera views into a panorama. The '×' marks indicate the
locations of the camera optical and rotation center.
Using multiple camera locations in stereo analysis significantly reduces the number of ambiguous
matches and also has the effect of reducing errors by averaging [Okutomi and Kanade, 1993;
Kang et al., 1995b]. This is especially important for images with very wide fields of view, because
depth recovery is unreliable near the epipoles (Footnote 1), where the looming effect takes place, resulting in
very poor depth cues.
4 Extraction of panoramic images
A panoramic image is created by compositing a series of rotated camera images, as shown in
Figure 2. In order to create this panoramic image, we first have to ensure that the camera is rotating
about an axis passing through its optical center, i.e., we must eliminate motion parallax when panning
the camera around. To achieve this, we manually adjust the position of the camera relative to an
X-Y precision stage (mounted on the tripod) such that the motion parallax effect disappears when
the camera is rotated back and forth about the vertical axis [Stein, 1995].
Prior to image capture of the scene, we calibrate the camera to compute its intrinsic camera
parameters (specifically its focal length f, aspect ratio r, and radial distortion coefficient κ). The
camera is calibrated by taking multiple snapshots of a planar dot pattern grid with known depth
separation between successive snapshots. We use an iterative least-squares algorithm (Levenberg-Marquardt)
to estimate the camera intrinsic and extrinsic parameters (except for κ) [Szeliski and Kang,
1994]. κ is determined using 1-D search (Brent's parabolic interpolation in 1-D [Press et al., 1992])
with the least-squares algorithm as the black box.

(Footnote 1: For a pair of images taken at two different locations, the epipoles are the locations on the image planes which are
the intersections between these image planes and the line joining the two camera optical centers. An excellent description
of stereo vision is given in [Faugeras, 1993].)

Figure 3: Example undistorted image sequence (of an office), consisting of Image 1, Image 2, ..., Image (N-1), Image N.
The steps involved in extracting a panoramic scene are as follows:
- At each camera location, capture an image sequence while panning the camera around 360°.
- Using the intrinsic camera parameters, correct the image sequence for r, the aspect ratio, and
κ, the radial distortion coefficient.
- Convert the (r, κ)-corrected 2-D flat image sequence to cylindrical coordinates, with the focal
length f as its cross-sectional radius. An example of a sequence of corrected images (of an
office) is shown in Figure 3.
- Composite the images (with only x-directional DOF, which is equivalent to motion in the angular
dimension of cylindrical image space) to yield the desired panorama [Szeliski, 1994].
The relative displacement of one frame to the next is coarsely determined by using phase correlation
[Kuglin and Hines, 1975] (see the sketch after this list). This technique estimates the 2-D translation between a
pair of images by taking 2-D Fourier transforms of both images, computing the phase difference
at each frequency, performing an inverse Fourier transform, and searching for a peak
in the magnitude image. Subsequently, the image translation is refined using local image
registration by directly comparing the overlapped regions between the two images [Szeliski,
1994].
- Correct for slight errors in the resulting length (which in theory equals 2πf) by propagating
residual displacement error equally across all images and recompositing. The error in length
is usually within a percent of the expected length.
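A minimal NumPy sketch of the phase-correlation step used for the coarse translation estimate (the function name and the wrap-around handling of the peak are our own; the actual system also performs the subsequent local registration refinement):

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Coarse 2-D translation estimate between two equal-size grayscale
    images via phase correlation (Kuglin & Hines, 1975)."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase difference
    corr = np.abs(np.fft.ifft2(cross_power))     # correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peaks beyond half the image size as negative shifts.
    if dy > img1.shape[0] // 2:
        dy -= img1.shape[0]
    if dx > img1.shape[1] // 2:
        dx -= img1.shape[1]
    return dx, dy
```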
Figure 4: Panorama of office scene after compositing.
An example of a panoramic image created from the office scene in Figure 3 is shown in Figure 4.
5 Recovery of epipolar geometry
In order to extract 3-D data from a given set of panoramic images, we have to first know the relative
positions of the camera corresponding to the panoramic images. For a calibrated camera, this is
equivalent to determining the epipolar geometry between a reference panoramic image and every
other panoramic image.
The epipolar geometry dictates the epipolar constraint, which refers to the locus of possible
image projections in one image given an image point in another image. For planar image planes,
the epipolar constraint is in the form of straight lines. The interested reader is referred to [Faugeras,
1993] for details.
We use the 8-point algorithm [Longuet-Higgins, 1981; Hartley, 1995] to extract what is called
the essential matrix, which yields both the relative camera placement and epipolar geometry. This is
done pairwise, namely between a reference panoramic image and another panoramic image. There
are, however, four possible solutions [Hartley, 1995]. The solution that yields the most positive
projections (i.e., projections away from the camera optical centers) is chosen.
5.1 8-point algorithm: Basics
We briefly review the 8-point algorithm here: If the camera is calibrated (i.e., its intrinsic parameters
are known), then for any two corresponding image points u and u' (at two different camera placements)
in 3-D, we have

    u'^T E u = 0.    (1)

The matrix E is called the essential matrix, and is of the form [t]_x R, where R and t are the
rotation matrix and translation vector, respectively, and [t]_x is the matrix form of the cross product
with t.
If the camera is not calibrated, we have a more general relation between two corresponding
image points (on the image plane) (u, v, 1)^T and (u', v', 1)^T, namely

    (u', v', 1) F (u, v, 1)^T = 0.    (2)

F is called the fundamental matrix and is also of rank 2, being of the form [t]_x A, where A is an arbitrary 3 × 3
matrix. The fundamental matrix is the generalization of the essential matrix E, and is usually employed
to establish the epipolar geometry and to recover projective depth [Faugeras, 1992; Shashua,
1994].
In our case, since we know the camera parameters, we can recover E. Let e be the vector comprising
the nine entries E_ij of E, where E_ij is the (i,j)th element of E. Then each point match gives, from (1),
one homogeneous linear constraint on e, from which we get a set of linear equations of the form

    A e = 0.    (4)

If the number of input points is small, the output of the algorithm is sensitive to noise. On the other
hand, it turns out that normalizing the 3-D point location vector on the cylindrical image reduces the
sensitivity of the 8-point algorithm to noise. This is similar in spirit to Hartley's application of
isotropic scaling [Hartley, 1995] prior to using the 8-point algorithm. The 3-D cylindrical points
are normalized according to the relation v̂ = v / ||v||, i.e., they are scaled to unit length.
With N panoramic images, we solve for (N - 1) sets of linear equations of the form (4). The kth
set corresponds to the panoramic image pair 1 and (k + 1). Notice that the solution of e is defined
only up to an unknown scale. In our work, we measure the distance between camera positions; this
enables us to recover the scale. However, we can relax this assumption by carrying out the following
steps:
- Fix the camera distance of the first pair (pair 1) to, say, unit distance. Assign camera distances for
all the other pairs to be the same as the first.
- Calculate the essential matrices for all the pairs of panoramic images, assuming unit camera
distances.
- For each pair, compute the 3-D points.
- To estimate the relative distances between camera positions for pair j ≠ 1 (i.e., not the
first pair), find the scale of the 3-D points corresponding to pair j that minimizes the distance
error to those corresponding to pair 1. Robust statistics is used to reject outliers; specifically,
only the best 50% are used.
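To make the linear estimation step concrete, the following sketch (NumPy; function and variable names are our own, and the normalization and degeneracy handling of the actual system may differ) estimates E from eight or more corresponding viewing rays and projects the result onto the space of valid essential matrices:

```python
import numpy as np

def essential_from_rays(v1, v2):
    """Linear 8-point estimate of the essential matrix from corresponding
    viewing rays v1[i], v2[i] (each N x 3, N >= 8), as in Equation (4)."""
    v1 = v1 / np.linalg.norm(v1, axis=1, keepdims=True)
    v2 = v2 / np.linalg.norm(v2, axis=1, keepdims=True)
    # Each correspondence v2^T E v1 = 0 gives one row of A (flattened outer product).
    A = np.einsum('ni,nj->nij', v2, v1).reshape(-1, 9)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null vector of A, reshaped into a 3x3 matrix
    # Project onto the essential manifold: two equal singular values, one zero.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```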
5.2 Tracking features for 8-point algorithm
The 8-point algorithm assumes that feature point correspondences are available. Feature tracking is
a challenge in that purely local tracking fails because the displacement can be large (of the order of
about 100 pixels, in the direction of camera motion). The approach that we have adopted comprises
spline-based tracking, which attempts to globally minimize the image intensity differences. This
yields estimates of optic flow, which in turn is used by a local tracker to refine the amount of feature
displacement.
The optic flow between a pair of cylindrical panoramic images is first estimated using spline-based
image registration between the pair [Szeliski and Coughlan, 1994; Szeliski et al., 1995]. In
this image registration approach, the displacement fields u(x; y) and v(x; y) (i.e., displacements in
the x- and y- directions as functions of the pixel location) are represented as two-dimensional splines
controlled by a smaller number of displacement estimates which lie on a coarser spline control grid.
Once the initial optic flow has been found, the best candidates for tracking are then chosen. The
choice is based on the minimum eigenvalue of the local Hessian, which is an indication of local
image texturedness. Subsequently, using the initial optic flow as an estimate displacement field,
we use the Shi-Tomasi tracker [Shi and Tomasi, 1994] with a window of size 25 pixels × 25 pixels
to further refine the displacements of the chosen point features.
Why did we use the approach of applying the spline-based tracker before using the Shi-Tomasi
tracker? This approach is used to take advantage of the complementary characteristics of these two
trackers, namely:
1. the spline-based image registration technique is capable of tracking features with larger dis-
placements. This is done through coarse-to-fine image registration; in our work, we use 6
levels of resolution. While this technique generally results in good tracks (sub-pixel accu-
racy) [Szeliski et al., 1995], poor tracks may result in areas in the vicinity of object occlu-
sions/disocclusions.
2. the Shi-Tomasi tracker is a local tracker that fails at large displacements. It performs better
for a small number of frames and for relatively small displacements, but deteriorates at large
numbers of frames and in the presence of rotation on the image plane [Szeliski et al., 1995].
We are considering a small number of frames at a time, and image warping due to local image
plane rotation is not expected. The Shi-Tomasi tracker is also capable of sub-pixel accuracy.
The approach that we have undertaken for object tracking can be thought of as a "fine-to-finer"
tracking approach. In addition to feature displacements, the measure of reliability of tracks is available
(according to match errors and local texturedness, the latter indicated by the minimum eigenvalue
of the local Hessian [Shi and Tomasi, 1994; Szeliski et al., 1995]). As we'll see later in Section
8.1, this is used to cull possibly bad tracks and improve 3-D estimates.
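The "local texturedness" measure used to select candidate features (the minimum eigenvalue of the 2×2 matrix of summed gradient products, the "local Hessian" referred to above) can be sketched as follows; the window size and box-filter smoothing are illustrative choices, not necessarily those of the implemented tracker:

```python
import numpy as np
from scipy.ndimage import convolve

def min_eigenvalue_map(gray, win=25):
    """Shi-Tomasi texturedness score: the smaller eigenvalue of the local
    2x2 gradient matrix, accumulated over a win x win window."""
    Iy, Ix = np.gradient(gray.astype(np.float64))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    k = np.ones((win, win)) / (win * win)
    # Box-filter the products of gradients (simple 2-D convolution).
    Sxx, Syy, Sxy = (convolve(a, k, mode='nearest') for a in (Ixx, Iyy, Ixy))
    # Closed-form smaller eigenvalue of [[Sxx, Sxy], [Sxy, Syy]].
    tr = Sxx + Syy
    disc = np.sqrt((Sxx - Syy) ** 2 + 4.0 * Sxy ** 2)
    return 0.5 * (tr - disc)
```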
Once we have extracted point feature tracks, we can then proceed to recover 3-D positions corresponding
to these feature tracks. 3-D data recovery is based on the simple notion of stereo.
6 Omnidirectional multibaseline stereo
The idea of extracting 3-D data simultaneously from more than the theoretically sufficient number
of two camera views is founded on two simple tenets: statistical robustness from redundancy
and disambiguation of matches due to overconstraints [Okutomi and Kanade, 1993; Kang et al.,
1995b]. The notion of using multiple camera views is even more critical when using panoramic
images taken at the same vertical height, which results in the epipoles falling within the images. If
only two panoramic images are used, points that are close to the epipoles will not be reliable. It is
also important to note that this problem will persist if all the multiple panoramic images are taken at
camera positions that are collinear. In the experiments described in Section 8, the camera positions
are deliberately arranged such that all the positions are not collinear. In addition, all the images are
taken at the same vertical height to maximize view overlap between panoramic images.
We use three related approaches to reconstruct 3-D from multiple panoramic images. 3-D data
recovery is done either by (1) using just the 8-point algorithm on the tracks and directly recovering
the 3-D points, or (2) proceeding with an iterative least-squares method to refine both camera pose
and 3-D feature location, or (3) going a step further to impose epipolar constraints in performing a
full multiframe stereo reconstruction. The first approach is termed as unconstrained tracking and
3-D data merging while the second approach is iterative structure from motion. The third approach
is named constrained depth recovery using epipolar geometry.
6.1 Reconstruction Method 1: Unconstrained feature tracking and 3-D data
merging
In this approach, we use the tracked feature points across all panoramic images and apply the 8-
point algorithm. From the extracted essential matrix and camera relative poses, we can then directly
estimate the 3-D positions.
The sets of 2-D image data are used to determine (pairwise) the essential matrix. The recovery
of the essential matrix turns out to be reasonably stable; this is due to the large (360°) field of view.
A problem with the 8-point algorithm is that optimization occurs in function space and not image
space, i.e., it is not minimizing error in distance between 2-D image point and corresponding epipolar
line. Deriche et al. [Deriche et al., 1994] use a robust regression method called least-median-
of-squares to minimize distance error between expected (from the estimated fundamental matrix)
and given 2-D image points. We have found that extracting the essential matrix using the 8-point
algorithm is relatively stable as long as (1) the number of points is large (at least in the hundreds),
and (2) the points are well distributed over the field of view.
In this approach, we use the same set of data to recover Euclidean shape. In theory, the recovered
positions are only true up to a scale. Since the distances between camera locations are known and
measured, we are able to get the true scale of the recovered shape. Note, however, that this approach
does not critically depend upon knowing the camera distances, as indicated in Section 5.1.
Let u_ik be the ith point of image k, v̂_ik be the unit vector from the optical center to the panoramic
image point in 3-D space, λ_ik be the corresponding line passing through both the optical center and
the panoramic image point in space, and t_k be the camera translation associated with the kth panoramic
image (note that t_1 = 0). The equation of line λ_ik is then r = t_k + a v̂_ik, a ≥ 0. Thus, for each point
p_i = a_i v̂_i1 (that is constrained to lie on line λ_i1), we minimize the error function

    E_i = Σ_{k=1..N} d(p_i, λ_ik)²,

the sum of squared perpendicular distances from p_i to the lines λ_ik,
where N is the number of panoramic images. By taking the partial derivatives of E_i with respect to
a_i, equating them to zero, and solving, we get a closed-form estimate of a_i,
from which the reconstructed 3-D point is calculated using the relation p_i = a_i v̂_i1. Note
that a more optimal manner of estimating the 3-D point is to minimize the analogous expression in which
the point is not constrained to lie on line λ_i1 (expression (8)).
A detailed derivation involving (8) is given in Appendix A. However, due to the practical consideration
of texture-mapping the recovered 3-D mesh of the estimated point distribution, the projection
of the estimated 3-D point has to coincide with the 2-D image location in the reference image. This
can be justified by saying that since the feature tracks originate from the reference image, it is reasonable
to assume that there is no uncertainty in feature location in the reference image.
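As a concrete illustration of this constrained triangulation, the sketch below solves the stated least-squares problem in closed form (our own derivation under the criterion described above; the paper's exact closed-form expression is not reproduced here):

```python
import numpy as np

def depth_along_reference_ray(v1, vs, ts):
    """Depth a of the 3-D point p = a * v1 along the reference viewing ray v1
    (camera 1 at the origin), chosen to minimize the sum of squared
    perpendicular distances to the rays (ts[k] + b * vs[k]) of the other panoramas.

    v1: (3,) unit vector; vs: (K,3) unit vectors; ts: (K,3) camera translations."""
    num, den = 0.0, 0.0
    for vk, tk in zip(vs, ts):
        c = float(np.dot(v1, vk))            # cosine of the angle between the rays
        num += np.dot(tk, v1) - np.dot(tk, vk) * c
        den += 1.0 - c * c                   # approaches zero when rays are parallel
    return num / max(den, 1e-12)
```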
An immediate problem with the approach of feature tracking and data merging is its reliance on
tracking, which makes it relatively sensitive to tracking errors. It inherits the problems associated
with tracking, such as the aperture problem and sensitivity to changing amounts of object distortion
at different viewpoints. However, this problem is mitigated if the number of sampled points is
large. In addition, the advantage is that there is no need to specify minimum and maximum depths
and resolution associated with multibaseline stereo depth search (e.g., see [Okutomi and Kanade,
1993; Kang et al., 1995b]). This is because the points are extracted directly analytically once the
correspondence is established.
6.2 Reconstruction Method 2: Iterative panoramic structure from motion
The 8-point algorithm recovers the camera motion parameters directly from the panoramic tracks,
from which the corresponding 3-D points can be computed. However, the camera motion parameters
may not be optimally recovered, even though experiments by Hartley using narrow view images
indicate that the motion parameters are close to optimal [Hartley, 1995]. Using the output of
the 8-point algorithm and the recovered 3-D data, we can apply an iterative least-squares minimization
to refine both camera motion and 3-D positions simultaneously. This is similar to work done
by Szeliski and Kang on structure from motion using multiple narrow camera views [Szeliski and
Kang, 1994].
As input to our reconstruction method, we use 3-D normalized locations of cylindrical image
points. The equation linking a 3-D normalized cylindrical image position u_ij in frame j to its 3-D
position p_i, where i is the track index, is

    u_ij = F(p_i; q^(j), t^(j)) = P( R^(j) p_i + t^(j) ),    (9)

where P() is the projection transformation; R^(j) and t^(j) are the rotation matrix and translation
vector, respectively, associated with the relative pose of the jth camera. We represent each rotation
by a quaternion q^(j) with a corresponding rotation matrix R^(j)
(alternative representations for rotations are discussed in [Ayache, 1991]).
The projection equation is given simply by

    P(x, y, z) = (x, y, z)^T / sqrt(x² + y² + z²).

In other words, all the 3-D points are projected onto the surface of a 3-D unit sphere.
To solve for the structure and motion parameters simultaneously, we use the iterative Levenberg-Marquardt
algorithm. The Levenberg-Marquardt method is a standard non-linear least squares technique
[Press et al., 1992] that works well in a wide range of situations. It provides a way to vary
smoothly between the inverse-Hessian method and the steepest descent method.
The merit or objective function that we minimize is

    C(a) = Σ_i Σ_j c_ij | u_ij - F(p_i; q^(j), t^(j)) |²,    (12)

where F() is given in (9) and

    a_ij = ( p_i, q^(j), t^(j) )

is the vector of structure and motion parameters which determine the image of point i in frame
j. The weight c_ij in (12) describes our confidence in measurement u_ij, and is normally set to the
inverse variance σ_ij^{-2}.
The Levenberg-Marquardt algorithm first forms the approximate Hessian matrix

    A = Σ_i Σ_j c_ij (∂F/∂a_ij)^T (∂F/∂a_ij),    (14)

and the weighted gradient vector

    b = Σ_i Σ_j c_ij (∂F/∂a_ij)^T e_ij,    (15)

where e_ij = u_ij - F(a_ij) is the image plane error of point i in frame j. Given a current estimate
of a, it computes an increment δa towards the local minimum by solving

    (A + λ I) δa = b,

where λ is a stabilizing factor which varies over time [Press et al., 1992]. Note that the matrix A is
an approximation to the Hessian matrix, as the second-derivative terms are left out. As mentioned
in [Press et al., 1992], inclusion of these terms can be destabilizing if the model fits badly or is
contaminated by outlier points.
To compute the required derivatives for (14) and (15), we compute derivatives with respect
to each of the fundamental operations (perspective projection, rotation, translation) and apply the
chain rule. The equations for each of the basic derivatives are given in Appendix B. The derivation
is exactly the same as in [Szeliski and Kang, 1994], except for the projection equation.
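A schematic sketch of one such update step is shown below; it uses numeric derivatives purely for compactness, whereas the actual implementation applies the chain rule with the analytic derivatives of Appendix B:

```python
import numpy as np

def lm_step(params, residual_fn, lam=1e-3, eps=1e-6):
    """One Levenberg-Marquardt update (A + lam*I) da = b, cf. Equations (14)-(15).
    params: current parameter vector a; residual_fn(a) returns the stacked
    weighted residuals sqrt(c_ij) * (u_ij - F(a))."""
    r0 = residual_fn(params)
    J = np.empty((r0.size, params.size))
    for k in range(params.size):             # forward-difference Jacobian of the residual
        p = params.copy()
        p[k] += eps
        J[:, k] = (residual_fn(p) - r0) / eps
    A = J.T @ J                              # approximate Hessian, Eq. (14)
    b = -J.T @ r0                            # weighted gradient direction, Eq. (15)
    da = np.linalg.solve(A + lam * np.eye(params.size), b)
    return params + da
```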
6.3 Reconstruction Method 3: Constrained depth recovery using epipolar geometry
As a result of the first reconstruction method's reliance on tracking, it suffers from the aperture problem
and hence a limited number of reliable points. The approach of using the epipolar geometry to
limit the search is designed to reduce the severity of this problem. Given the epipolar geometry,
for each image point in the reference panoramic image, a constrained search is performed along
the line of sight through the image point. Subsequently, the position along this line which results in
minimum match error at projected image coordinates corresponding to other viewpoints is chosen.
Using this approach results in a denser depth map, due to the epipolar constraint. This constraint
reduces the aperture problem during search (which theoretically only occurs if the direction of ambiguity
is along the epipolar line of interest). The principle is the same as that described in [Kang
et al., 1995b].
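A minimal sketch of this constrained search is given below; the cylindrical coordinate convention, depth range, and matching cost are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

def to_cylindrical_pixel(p, f):
    """Project a 3-D point (in a panorama's coordinate frame) onto a cylindrical
    image of radius f: column ~ f * azimuth, row ~ f * (height / radius)."""
    x, y, z = p
    theta = np.arctan2(x, z)                  # azimuth around the vertical axis
    h = y / np.hypot(x, z)
    return f * theta, f * h

def best_depth(ref_ray, f, panoramas, poses, match_cost,
               d_min=0.5, d_max=20.0, steps=200):
    """Sweep candidate depths along the reference viewing ray and keep the depth
    whose reprojections into the other panoramas give the lowest total match error.
    poses[k] = (R_k, t_k) maps reference coordinates into panorama k;
    match_cost(img, u, v) is a user-supplied window comparison (e.g., SSD)."""
    best = (np.inf, None)
    for d in np.linspace(d_min, d_max, steps):
        p = d * ref_ray                        # candidate 3-D point
        cost = 0.0
        for img, (R, t) in zip(panoramas, poses):
            u, v = to_cylindrical_pixel(R @ p + t, f)
            cost += match_cost(img, u, v)
        if cost < best[0]:
            best = (cost, d)
    return best[1]
```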
While this approach mitigates the aperture problem, it suffers from a much higher
computational demand. In addition, the recovered epipolar geometry is still dependent on the output
quality of the 8-point algorithm (which in turn depends on the quality of tracking). The user also has
to specify minimum and maximum depths as well as the resolution of the depth search.
An alternative to working in cylindrical coordinates is to project sections of cylinder to a tangential
rectilinear image plane, rectify it, and use the rectified planes for multibaseline stereo. This
mitigates the computational demand as search is restricted to horizontal scanlines in the rectified
images. However, there is a major problem with this scheme: reprojecting to rectilinear coordinates
and rectifying is problematical due to the increasing distortion away from the new center of
projection. This creates a problem with matching using a window of a fixed size. As a result, this
scheme of reprojecting to rectilinear coordinates and rectifying is not used.
7 Stereo data segmentation and modeling
Once the 3-D stereo data has been extracted, we can then model them with a 3-D mesh and texture-map
each face with the associated part of the 2-D image panorama. We have done work to reduce
the complexity of the resulting 3-D mesh by planar patch fitting and boundary simplification. The
displayed models shown in this paper are rendered using our modeling system. A more detailed
description of model extraction from range data is given in [Kang et al., 1995a].
8 Experimental results
In this section, we present the results of applying our approach to recover 3-D data from multiple
panoramic images. We have used both synthetic and real images to test our approach. As mentioned
earlier, in the experiments described in this section, the camera positions are deliberately arranged
so that all of the positions are not collinear. In addition, all the images are taken at the same vertical
height to maximize overlap between panoramic images.

Figure 5: Panorama of synthetic room after compositing.
8.1 Synthetic scene
The synthetic scene is a room comprising objects such as tables, tori, cylinders, and vases. One half
of the room is textured with a mandrill image while the other is textured with a regular Brodatz pat-
tern. The synthetic objects and images are created using Rayshade, which is a program for creating
ray-traced color images [Kolb, 1994]. The synthetic images created are free from any radial distor-
tion, since Rayshade is currently unable to model this camera characteristic. The omnidirectional
synthetic depth map of the entire room is created by merging the depth maps associated with the
multiple views taken around inside the room.
The composite panoramic view of the synthetic room from its center is shown in Figure 5. From
left to right, we can observe the vases resting on a table, vertical cylinders, a torus resting on a table,
and a larger torus. The results of applying both reconstruction methods (i.e., unconstrained search
with 8-point and constrained search using epipolar geometry) can be seen in Figure 6. We get many
more points using constrained search (about 3 times more), but the quality of the 3-D reconstruction
appears more degraded (compare Figure 6(b) with (c)). This is in part due to matching occurring
at integral values of pixel positions, limiting its depth resolution. The dimensions of the synthetic
room are 10 (length) × 8 (width) × 6 (height), and the specified resolution is 0.01. The quality of the
recovered 3-D data appears to be enhanced by applying a 3-D median filter (Footnote 2). However, the median
filter also has the effect of rounding off corners.

(Footnote 2: The median filter works in the following manner: For each feature point in the cylindrical panoramic image, find
other feature points within a certain neighborhood radius (20 in our case). Then sort the 3-D depths associated with the
neighborhood feature points, find the median depth, and rescale the depth associated with the current feature point such
that the new depth is the median depth. As an illustration, suppose the original 3-D feature location is v_i = d_i v̂_i, where
d_i is the original depth and v̂_i is the 3-D unit vector from the camera center in the direction of the image point. If d_med
is the median depth within its neighborhood, then the filtered 3-D feature location is given by v'_i = d_med v̂_i.)

Figure 6: Comparison of 3-D points recovered of the synthetic room. (a) Correct distribution; (b) unconstrained 8-point;
(c) iterative; (d) constrained search; (e) median-filtered 8-point; (f) median-filtered iterative; (g) median-filtered
constrained; (h) top view of 3-D mesh of (e).

The mesh in Figure 6(f) and the three views in Figure 7 are generated by our 3-D modeling
system described in [Kang et al., 1995a]. As can be seen from these figures, the 3-D recovered
points and the subsequent model based on these points basically preserved the shape of the synthetic
room.

Figure 7: Three views ((a)-(c)) of the modeled synthetic room of Figure 6(h).

In addition, we performed a series of experiments to examine the effect of both "bad" track
removal and median filtering on the quality of recovered depth information of the synthetic room.
The feature tracks are sorted in increasing order according to the error in matching (Footnote 3). We continually
remove tracks that have the worst amount of match error, recovering the 3-D point distribution at
each instant.

(Footnote 3: Note that in general, a "worse" track in this sense need not necessarily translate to a worse 3-D estimate. A high
match error may be due to apparent object distortion at different viewpoints.)
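The 3-D median filter of Footnote 2 can be sketched as follows; the brute-force neighborhood search ignores the azimuthal wrap-around of the panorama, and the parameter names are our own:

```python
import numpy as np

def median_filter_depths(points_2d, depths, rays, radius=20.0):
    """3-D median filter: for each feature, replace its depth by the median depth
    of features within `radius` pixels in the panorama, then rebuild the 3-D
    point along its original viewing ray (v'_i = d_med * v̂_i).

    points_2d: (N,2) cylindrical image coordinates; depths: (N,) original depths;
    rays: (N,3) unit viewing rays."""
    points_2d = np.asarray(points_2d, dtype=float)
    depths = np.asarray(depths, dtype=float)
    filtered = np.empty_like(depths)
    for i, p in enumerate(points_2d):
        near = np.linalg.norm(points_2d - p, axis=1) <= radius   # includes the point itself
        filtered[i] = np.median(depths[near])
    return filtered[:, None] * np.asarray(rays, dtype=float)
```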
From the graph in Figure 8, we see an interesting result: as more tracks are taken out, retaining
the better ones, the quality of 3-D point recovery improves-up to a point. The improvement in the
accuracy is not surprising, since the worse tracks, which are more likely to result in worse 3-D esti-
mates, are removed. However, as more and more tracks are removed, the gap between the amount
of accuracy demanded of the tracks, given an increasingly smaller number of available tracks, and
the track accuracy available, grows. This results in generally worse estimates of the epipolar ge-
ometry, and hence 3-D data. Concomitant to the reduction of the number of points is the sensitivity
of the recovery of both epipolar geometry (in the form of the essential matrix) and 3-D data. This
is evidenced by the fluctuation of the curves at the lower end of the graph. Another interesting result
that can be observed is that the 3-D point distributions that have been median filtered have lower
errors, especially for higher numbers of recovered 3-D points.
As indicated by the graph in Figure 8, the accuracy of the point distribution derived from just
the 8-point algorithm is almost equivalent to that of using an iterative least-squares (Levenberg-Marquardt)
minimization, which is statistically optimal near the true solution. This result is in
agreement with Hartley's application of the 8-point algorithm to narrow-angle images [Hartley,
1995]. It is also worth noting that the accuracy of the iterative algorithm is best at smaller numbers
of input points, suggesting that it is more stable given a smaller number of input data.
Figure 8: 3-D RMS error vs. percentage of total points retained, for the 8-point method (known and unknown camera
distance), the iterative method, and median-filtered versions of each. The original number of points (corresponding to
100%) is 3057. The dimensions of the synthetic room are 10 (length) × 8 (width) × 6 (height).

Table 1: Comparison of 3-D RMS error between unconstrained and constrained stereo results (n is
the number of points).
    original         0.315039   0.393777   0.302287
    median-filtered  0.266600   0.364889   0.288079

Table 1 lists the 3-D errors of both constrained and unconstrained (8-point only) methods for the
synthetic scenes. It appears from this result that the constrained method yields better results (after
median filtering) and more points (a result of reducing the aperture problem). In practice, as we shall
see in the next section, problems due to misestimation of camera intrinsic parameters (specifically
focal length, aspect ratio, and radial distortion coefficient) cause 3-D reconstruction from real images
to be worse. This is a subject of on-going research.
8.2 Real scenes
The setup that we used to record our image sequences consists of a DEC Alpha workstation with
a J300 framegrabber, and a camcorder (Sony Handycam CCD-TR81) mounted on an X-Y position
stage affixed on a tripod stand. The camcorder settings are made such that its field of view is
maximized (at about 43°).
To reiterate, our method of generating the panoramic images is as follows:
- Calibrate the camcorder using an iterative Levenberg-Marquardt least-squares algorithm [Szeliski
and Kang, 1994].
- Adjust the X-Y position stage while panning the camera left and right to remove the effect of
motion parallax; this ensures that the camera is then rotated about its optical center.
- At each camera location, record onto tape an image sequence while rotating the camera, and
then digitize the image sequence using the framegrabber.
- Using the recovered camera intrinsic parameters (focal length, aspect ratio, radial distortion
coefficient), undistort each image.
- Project each image, which is in rectilinear image coordinates, into cylindrical coordinates
(whose cross-sectional radius is the camera focal length); a sketch of this warp follows this list.
- Composite the frames into a panoramic image. The number of frames used to extract a panoramic
image in our experiments is typically about 50.
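The rectilinear-to-cylindrical projection step can be sketched as follows; nearest-neighbor sampling and a principal point at the image center are simplifying assumptions of this illustration:

```python
import numpy as np

def to_cylindrical(image, f):
    """Warp a rectilinear image into cylindrical coordinates with cross-sectional
    radius f (the focal length in pixels), via inverse mapping."""
    H, W = image.shape[:2]
    cx, cy = W / 2.0, H / 2.0
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:H, 0:W]
    theta = (xs - cx) / f                  # angle along the cylinder
    h = (ys - cy) / f                      # height over cylinder radius
    X = f * np.tan(theta) + cx             # back-project onto the flat image plane
    Y = h * f / np.cos(theta) + cy
    valid = (X >= 0) & (X <= W - 1) & (Y >= 0) & (Y <= H - 1)
    out[ys[valid], xs[valid]] = image[Y[valid].round().astype(int),
                                      X[valid].round().astype(int)]
    return out
```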
We recorded image sequences of two scenes, namely an office scene and a lab scene. A panoramic
image of the office scene is shown in Figure 4. We extracted four panoramic images corresponding
to four different locations in the office. (The spacing between these locations is about 6 inches and
the locations are roughly at the corners of a square. The size of the office is about 10 feet by 15
feet.) The results of 3-D point recovery of the office scene are shown in Figure 9, with three sample
views of its model shown in Figure 10. As can be seen from Figure 9, the results due to the constrained
search approach look much worse. This may be directly attributed to the inaccuracy of the
extracted intrinsic camera parameters. As a consequence, the composited panoramas may actually
not be exactly physically correct. In fact, as the matching (with epipolar constraint) is in progress,
it has been observed that the actual correct matches are not exactly along the epipolar lines; there
are slight vertical drifts, generally of the order of about one or two pixels.
Another example of a real scene is shown in Figure 11. A total of eight panoramas at eight different
locations (about 3 inches apart, ordered roughly in a zig-zag fashion) in the lab are extracted.
The longest dimensions of the L-shaped lab are about 15 feet by 22.5 feet. The 3-D point distribution
is shown in Figure 12 while Figure 13 shows three views of the recovered model of the lab.
As can be seen, the shape of the lab has been reasonably well recovered; the "noise" points at the
bottom of Figure 12(a) correspond to positions outside the laboratory, since there are parts of
the transparent laboratory window that are not covered. This reveals one of the weaknesses of any
correlation-based algorithm (indeed, of all stereo algorithms): they do not work well with image reflections
and transparent material. Again, we observe that the points recovered using constrained
search are worse.
The errors that were observed with the real scene images, especially with constrained search,
are due to the following practical problems:
ffl The auto-iris feature of the camcorder used cannot be deactivated (even though the focal
length was kept constant). As a result, there may be in fact slight variations in focal length
as the camera was rotated.
ffl The camera may not be rotating exactly about its optical center, since the adjustment of the
X-Y position stage is done manually and there may be human error in judging the absence of
motion parallax.
ffl The camera may not be rotating about a unique axis all the way around (assumed to be ver-
tical) due to some play or unevenness of the tripod.
ffl There were digitization problems. The images digitized from tape (i.e., while the camcorder
is playing the tape) contain scan lines that are occasionally horizontally shifted; this is probably
caused by the degraded blanking signal not properly detected by the framegrabber. How-
ever, compositing many images averages out most of these artifacts.
8.2 Real scenes 21
(a) Unconstrained 8-point (b) Median-filtered version of (a)
(c) Iterative (d) Median-filtered version of (c)
Constrained search (f) Median-filtered version of (e)
(g) 3-D mesh of (b)
Figure
9: Extracted 3-D points and mesh of office scene. Notice that the recovered distributions
shown in (c) and (d) appear more rectangular than those shown in (a) and (b).
(a) View 1 (b) View 2 (b) View 3
Figure
10: Three views of modeled office scene of Figure 9(g)
Figure
11: Panorama of laboratory after compositing.
ffl The extracted camera intrinsic parameters may not be very precise.
As a result of the problems encountered, the resulting composited panorama may not be physically
correct. This especially causes problems with constrained search given the estimated epipolar
geometry (through the essential matrix). We actually widened the search a little by allowing search
as much as a couple of pixels away from the epipolar line; however, this further significantly increases
the computational demand and has the effect of loosening the constraints, making this approach
less attractive.
9 Discussion and conclusions
We have shown that omnidirectional depth data (whose denseness depends on the amount of local
texture) can be extracted using a set of simple techniques: camera calibration, image compositing,
feature tracking, the 8-point algorithm, and constrained search using the recovered epipolar geom-
etry. The advantage of our work is that we are able to extract depth data within a wide field of view
simultaneously, which removes many of the traditional problems associated with recovering camera
pose and narrow-baseline stereo. Despite the practical problems caused by using unsophisticated
equipment which result in slightly incorrect panoramas, we are still able to extract reasonable 3-D
data. Thus far, the best real data results come from using unconstrained tracking and the 8-point
algorithm (both direct and iterative structure from motion). Results also indicate that the application
of 3-D median filtering improves both the accuracy and appearance of the stereo-computed 3-D point
distribution.
Figure 12: Extracted 3-D points and mesh of laboratory scene. (a) Unconstrained 8-point, (b) median-filtered version of (a), (c) iterative, (d) median-filtered version of (c), (e) constrained search, (f) median-filtered version of (e), (g) 3-D mesh of (b).
Figure 13: Three views of modeled laboratory scene of Figure 12(g). (a) View 1, (b) View 2, (c) View 3.
To expedite the panorama image production in critical applications that require close to real-time
modeling, special camera equipment may be called for. One example of such specialized equipment
is Ahuja's camera system (as reported in [Freedman, 1995]), in which the lens can be rotated
relative to the imaging plane. However, we are currently putting our emphasis on the use of commercially
available equipment such as a cheap camcorder.
Even if all the practical problems associated with imperfect data acquisition were solved, we
still have the fundamental problem of stereo: the inability to match and extract 3-D data in
textureless regions. In scenes that involve mostly textureless components such as bare walls and
objects, special pattern projectors may need to be used in conjunction with the camera [Kang et al.,
1995b].
Currently, the omnidirectional data, while obtained through a 360° view, has limited vertical
view. We plan to extend this work by merging multiple omnidirectional data obtained at both different
heights and different locations. We will also look into the possibility of extracting panoramas
of larger height extents by incorporating tilted (i.e., rotated about a horizontal axis) camera views.
This would enable scene reconstruction of a building floor involving multiple rooms with good vertical
view. We are currently characterizing the effects of misestimated intrinsic camera parameters
(focal length, aspect ratio, and the radial distortion factor) on the accuracy of the recovered 3-D
data.
In summary, our set of methods for reconstructing 3-D scene points within a wide field of view
has been shown to be quite robust and accurate. Wide-angle reconstruction of 3-D scenes is conventionally
achieved by merging multiple range images; our methods have been demonstrated to
be a very attractive alternative in wide-angle 3-D scene model recovery. In addition, these methods
do not require specialized camera equipment, thus making commercialization of this technology
easier and more direct. We strongly feel that this development is a significant one toward attaining
the goal of creating photorealistic 3-D scenes with minimum human intervention.
Acknowledgments
We would like to thank Andrew Johnson for the use of his 3-D modeling and rendering program
and Richard Weiss for helpful discussions.
A Optimal point intersection
In order to find the point closest to all of the rays whose line equations are of the form
x k = t k + lambda k v k ,
we minimize the expression
E = sum_k || t k + lambda k v k - p ||^2 ,
where p is the optimal point of intersection to be determined. Taking the partials of E with respect
to lambda k and p and equating them to zero, we have
v k^T ( t k + lambda k v k - p ) = 0 , (18)
sum_k ( t k + lambda k v k - p ) = 0 . (19)
Solving for lambda k in (18), noting that v k^T v k = 1, and substituting lambda k in (19) yields
sum_k ( I - v k v k^T ) ( t k - p ) = 0 ,
from which
p = [ sum_k P k ]^(-1) sum_k P k t k ,
where
P k = I - v k v k^T
is the perpendicular projection operator for ray v k , and P k t k
is the point along the viewing ray closest to the origin.
Thus, the optimal intersection point for a bundle of rays can be computed as a weighted sum of
adjusted camera centers (indicated by t k 's), where the weighting is in the direction perpendicular
to the viewing ray.
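The closed form above can be evaluated directly. The following is a rough sketch of our own (not code from the paper), using NumPy; the function and variable names are our assumptions, and the ray directions are taken to be unit vectors.

import numpy as np

def intersect_rays(origins, directions):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for t, v in zip(origins, directions):
        v = v / np.linalg.norm(v)
        P = np.eye(3) - np.outer(v, v)      # perpendicular projection operator P_k
        A += P
        b += P @ t                          # P_k t_k : adjusted camera centre
    return np.linalg.solve(A, b)

origins = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
directions = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(intersect_rays(origins, directions))  # -> [1. 1. 0.], where the two rays meet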
A more "optimal" estimate can be found by minimizing the formula
with respect to p and - k 's. Here, by weighting each squared perpendicular distance by - \Gamma2
k , we
are downweighting points further away from the camera. The justification for this formula is that
the uncertainty in -
direction defines a conical region of uncertainty in space centered at the cam-
era, i.e., the uncertainty in point location (and hence the inverse weight) grows linearly with - k .
However, implementing this minimization requires an interative non-linear solver.
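As a rough illustration of such an iterative solution (our own sketch, not the authors' implementation), the weighted objective can be handed to a generic non-linear least-squares routine; the function names, starting values and bounds below are all our assumptions.

import numpy as np
from scipy.optimize import least_squares

def residuals(x, origins, directions):
    p, lams = x[:3], x[3:]
    res = [(t + lam * v - p) / lam          # 1/lambda_k weighting per ray
           for (t, v), lam in zip(zip(origins, directions), lams)]
    return np.concatenate(res)

origins = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
directions = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x0 = np.concatenate([np.zeros(3), np.ones(2)])          # initial p and lambda_k
lower = [-np.inf] * 3 + [1e-3] * 2                      # keep the lambda_k positive
sol = least_squares(residuals, x0, bounds=(lower, np.inf), args=(origins, directions))
print(sol.x[:3])                                        # approximately [1, 1, 0]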
Elemental transform derivatives
The derivative of the projection function (11) with respect to its 3-D arguments and internal parameters
is straightforward. The derivatives of an elemental rigid transformation are obtained from the derivative
with respect to the translation and from the skew-symmetric matrix
( 0 -z y )
( z 0 -x )
( -y x 0 )
formed from the point coordinates, which accounts for the rotational parameters
(see [Shabana, 1989]). The derivatives of a screen coordinate with respect to any motion or structure
parameter can be computed by applying the chain rule and the above set of equations.
--R
Artificial Vision for Mobile Robots: Stereo Vision and Multisensory Perception.
Computational stereo.
Robust recovery of the epipolar geometry for an uncalibrated stereo rig.
What can be seen in three dimensions with an uncalibrated stereo rig?
Integrating information from multiple views.
A camera for near
In defence of the 8-point algorithm
Building 3-D models from unregistered range images
Extraction of Concise and Realistic 3-D Models from Real Data
A multibaseline stereo system with active illumination and real-time image acquisition
Rayshade user's guide and reference manual.
The phase correlation image alignment method.
A computer algorithm for reconstructing a scene from two projections.
Plenoptic modeling: An image-based rendering system
Recovering range using virtual multicamera stereo.
A multiple baseline stereo.
Numerical Recipes in C: The Art of Scientific Computing.
Dynamics of Multibody Systems.
Projective structure from uncalibrated images: Structure from motion and recognition.
Good features to track.
Principal component analysis with missing data and its application to object modeling.
Accurate internal camera calibration using rotation
Image Mosaicing for Tele-Reality Applications
Hierarchical spline-based image registration
Recovering 3D shape and motion from image streams using nonlinear least squares.
A parallel feature tracker for extended image sequences.
--TR
--CTR
Yin Li , Heung-Yeung Shum , Chi-Keung Tang , Richard Szeliski, Stereo Reconstruction from Multiperspective Panoramas, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.1, p.45-62, January 2004
R. A. Hicks , D. Pettey , K. Daniilidis , R. Bajcsy, Closed Form Solutions for Reconstruction Via Complex Analysis, Journal of Mathematical Imaging and Vision, v.13 n.1, p.57-70, August 2000
Srikumar Ramalingam , Suresh K. Lodha , Peter Sturm, A generic structure-from-motion framework, Computer Vision and Image Understanding, v.103 n.3, p.218-228, September 2006
Nelson L. Chang , Avideh Zakhor, Constructing a Multivalued Representation for View Synthesis, International Journal of Computer Vision, v.45 n.2, p.157-190, November 2001 | 3-D median filtering;omnidirectional stereo;panoramic structure from motion;scene modeling;8-point algorithm;multibaseline stereo |
281209 | Multiple Experiment Environments for Testing. | Concurrent simulation (CS) has been used successfully as a replacement for serial simulation. Based on storing differences from experiments, CS saves storage, speeds up simulation time and allows excellent internal observation of events. In this paper, we introduce Multiple Domain Concurrent Simulation (MDCS) which, like concurrent simulation, maintains efficiency by only simulating differences. MDCS also allows experiments to interact with one another and create new experiments through the use of domains. These experiments can be traced and observed at any point, providing insight into the origin and causes of new experiments. While many experiment scenarios can be created, MDCS uses dynamic spawning and experiment compression rather than explicit enumeration to ensure that the number of experiment scenarios does not become exhaustive. MDCS does not require any pre-analysis or additions to the circuit under test. Providing this capability in digital logic simulators allows more test cases to be run in less time. MDCS gives the exact location and causes of every experiment behavior and can be used to track the signature paths of test patterns for coverage analysis. We will describe the algorithms for MDCS, discuss the rules for propagating experiments and describe the concepts of domains for making dynamic interactions possible. We will report on the effectiveness of MDCS for attacking an exhaustive simulation problem such as Multiple Stuck-at Fault simulations for digital logic. Finally, the applicability of MDCS for more general experimentation of digital logic systems will be discussed. | Introduction
Concurrent Simulation(CS)[1, 2] has been proven
to be powerful and efficient for simulating single
stuck-at faults but inadequate for exhaustive applications
like Multiple Stuck-at Fault (MSAF)
simulations. It was developed as a speedup mechanism
over serial simulation, thus experiments are
independent and incapable of interacting with one
another. Cumulative behaviors which are combinations
of experiments cannot be created without
approaching exhaustive testing. This usually
requires either modifying the circuit [10] or
performing some back-tracing [14]. The methods
presented here leverage on the efficiencies of concurrent
simulation, but do allow scenarios of experiments
to be dynamically spawned should independent
experiments interact. Any experiment
that does not create a state difference from a parent
experiment is not propagated. This effectively
"compresses" experiments in the simulation.
CS[1] has a primary experiment that exhibits
the fault free or good behavior of a circuit. This
reference experiment exists during the entire sim-
ulation. Faulty experiments that differ or diverge
from the reference require additional independent
bursts of simulation time. Experiment behaviors
can be observed and contrasted since each experiment
propagated leaves a signature (its identifier).
Unfortunately, CS cannot allow independent experiments
to interact at all. It is a cost-effective
solution for serial simulation, where each fault inserted
creates a single experiment that can only
see fault effects it creates and the reference exper-
iment. As we will show, to modify it to accommodate
such functionality would defeat the overall
efficiency and would produce inaccurate observation
results.
Our simulation environment allows multiple domains
of experiments to be defined so that combinations
of independent experiments may be efficiently
simulated. It is not necessary to generate
every possible combination of experiments.
Only those experiments that arrive at the same
node in the network will be tested as candidates
for spawning of new behaviors. (This work is sponsored by the National Science Foundation,
MIP-9528194, and the National Aeronautics and Space Administration Langley Research Center,
NASA-JOVE.) This eliminates
a huge number of potential scenarios that could
arise from exhaustive testing. In addition, every
experiment in the simulation can be compressed
via the same mechanism of concurrent simulation
(i.e., only differences are propagated). Unlike CS
where there is only a single reference experiment,
we allow any number of independent experiments
to serve as parents.
Details on performing Multiple Stuck-at Fault
simulations are presented here as evidence
of the framework's success for digital logic. We
were motivated to choose the MSAF application
because it demonstrates the large (even exhaus-
tive) number of experiments needed to be per-
formed. It is also a good test bed for verifying
the correctness of "compression" and interaction
features since digital logic outputs are limited to
a finite number of states (0,1,X,Z). Furthermore,
the availability of ISCAS benchmark circuits[3, 4]
made it possible for us to compare our performance
against other applications.
This paper is organized as follows. First, we will
provide necessary background information on CS.
We will then describe some of the inefficiencies of
trying to use CS for creating scenarios of experiment
combinations. Next, we introduce the concept
of domains, parent experiments as dynamic
references for compression, the use of identifiers
which are needed to observe experiments, and the
spawning of new experiments that display cumulative
behaviors. Once these fundamental methods
are understood, we will then present the Multiple
Domain Simulation algorithm.
2. Background
Since we leverage on many of the features of Concurrent
Simulation, we provide a brief discussion
of it here for background information. Consider
the following analogy: an engineer is assigned to
build an adder circuit. After developing the adder
design, the engineer is told she must now build a
calculator that does both addition and subtrac-
tion. If she took a "serial" approach, she would
start designing the adder circuitry all over again,
ignore the work she has already done, and then develop
the subtractor circuit independently of the
adder design. Likewise, in a concurrent approach,
the engineer would try to leverage on the adder
design as well as concentrate design efforts on the
new and different functionality required, such as
integrating a subtractor into the existing design.
Fig. 1. Conceptual copies indicate all the scenarios to be evaluated.
In concurrent simulation, the simulation of identical
behaviors(reference and faulty) is performed
only once. Additional simulation time is dedicated
to those faulty experiments that are different.
Since the goal of CS is to produce simulation
results equivalent to simulating each fault
separately, it needs to create simulation scenarios
for any inserted fault and the reference case.
The main idea is to evaluate these scenarios, and
then propagate only those that create state dif-
ferences. There are many methods to implement this scheme, but the most efficient single-CPU
algorithm is the Multiple List Traversal (MLT) [5].
Fig. 2. Current and Look-Ahead Lists for processing experiments in a Multiple List Traversal (MLT)
algorithm: (A) the current list for evaluating the fault scenario on Input 1 of the gate; (B) the gate
after evaluation of experiment C 1 .
Before describing the MLT implementation, let us
present a basic concurrent fault simulation algorithm.
Figure 1 shows a single NAND gate to be
simulated. The reference value is denoted by R i
for each input and output list, where i is the list
number. The conceptual copies in Figure 1, shown
as dashed NAND gates, indicate all the scenarios
that must be evaluated for the single stuck-at
faults inserted on the inputs of the NAND gate.
Each fault scenario is derived by replacing the reference
value with a faulty state value for the specified
input. For example, C 1 is a single stuck-at-1
fault on input I 1 of the NAND gate. From the
table of fault sources and their respective evaluations
in Figure 1, it can be seen that only one of
the four faults inserted will produce a state value
on the output different from that of the reference
output. Fault scenario C 1 produces a state of 0
on the output, while the reference produces a state
value of 1. Only C 1 is propagated. The absence of
the other fault experiments indicates that they are
behaving identically to the reference experiment.
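The behaviour described for Figure 1 can be reproduced with a few lines of code. The sketch below is our own illustration (not part of the original work); the reference input values I 1 = 0 and I 2 = 1 are assumptions, chosen because they are consistent with the stated reference output of 1 and with C 1 being the only detected fault.

def nand(a, b):
    return 0 if (a == 1 and b == 1) else 1

reference = {"I1": 0, "I2": 1}                 # assumed reference input states
faults = {"C1": ("I1", 1), "C2": ("I1", 0),    # single stuck-at fault sources
          "C3": ("I2", 1), "C4": ("I2", 0)}

ref_out = nand(reference["I1"], reference["I2"])   # reference output = 1
propagated = []
for cid, (line, value) in faults.items():
    inputs = dict(reference)
    inputs[line] = value                       # overlay the faulty value on one line
    out = nand(inputs["I1"], inputs["I2"])
    if out != ref_out:                         # only state differences are propagated
        propagated.append((cid, out))
print(propagated)                              # [('C1', 0)] -- only C1 differs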
The faults are physically stored on the input
or output list on which they are deposited, as
shown in Figure 2(A). Experiments are denoted
in the form identifier(statevalue). For exam-
ple, C 1 (1) is concurrent experiment 1 with a state
value of 1. The MLT algorithm is straightforward.
Each input along with the output list is traversed
and an experiment scenario is evaluated based on
the type of experiment encountered(reference or
fault). The output list must also be traversed
to properly insert any experiments propagated or
to update states for a specific experiment. The
MLT algorithm dynamically creates scenarios to
be evaluated by pointing to a single experiment
from each input and output list. These pointers
are stored in a current list denoted by L c . The reference
experiment is the first scenario to be eval-
uated. It is created by pointing at the reference
states on each list (see the first row of the table in
Figure
2). As the input and output lists are tra-
versed, the lowest experiment identifier from all
lists is chosen as the "next" experiment to be pro-
cessed. A lookahead list 1 maintains this information
so that when the MLT is ready to process another
experiment, it has already determined which
experiment is next. For example, in the table of
4 Lentz, Manolakos, Czeck and Heller
Figure
2, after the reference case is evaluated, the
look-ahead list indicates that input 1 contains the
next lowest faulty experiment identifier of all the
three lists. Therefore, the next experiment to be
processed will be the experiment for fault C 1 . The
experiment C 1 is created by pointing to any
states present on any list. If present on a
list L i , the fault state of C 1 is used to replace the
reference value on L i , while using the reference
value for any other list on which the fault does
not appear. Figure 2A demonstrates this by the
dashed lines. These lines show the experiments selected
to create L c for generating the experiment
for C 1 . C 1 is selected from list L 1 . Since C 1 does
not appear on any other list, the current list will
point to the reference experiments on these input
and output lists. This will create a scenario that
will evaluate NAND(1, 1). Thus the
evaluated state of C 1 equals zero. Notice that the
MLT also maintains a pointer for the output list
to quickly determine whether the faulty experiment
is already present or whether a comparison
against the output reference value should be per-
formed. This same information is also depicted in
the table row labeled C 1 in Figure 2. After evaluating
this case and comparing the state value of
to that of the Reference state R 3 (1), we see
that there is a difference and that the experiment
should be propagated to the output. Since
the current list indicates that the output list does
not contain storage for experiment C 1 already, the
MLT allocates space and updates the output list.
This is shown in Figure 2B.
To summarize, the MLT traverses all the lists,
all inputs and output lists simultaneously. It does
a look-ahead to determine which experiment is
coming up next on each list so it can quickly determine
if the same fault ID is present on more than
1 input or whether it is already present on the
output. The look-ahead information is stored in a
circular linked list. The experiment scenario to be
evaluated is derived from the look-ahead informa-
tion. The lowest identifier (the reference value on
each list is used to determine the reference state
on the output. The next higher id is then chosen
from the lookahead information and used to create
a scenario for that specific fault experiment.
Figure
3 describes the high level algorithm for the
simultaneous list traversal portion of the MLT algorithm
and the table of Figure 2 enumerates all
the scenarios.
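As an informal illustration of the list-traversal idea (our own sketch, not the authors' code), each list can be viewed as a mapping from experiment identifiers to state values; for every identifier found on any list, the scenario substitutes that experiment's state where it is stored and the reference state elsewhere. All names and the example states below are our own assumptions.

def mlt_scenarios(lists, reference):
    # lists: one {identifier: state} mapping per input/output list,
    # reference: the reference state of each list, in the same order
    ids = sorted(set().union(*(l.keys() for l in lists)))
    for cid in ids:
        yield cid, [l.get(cid, ref) for l, ref in zip(lists, reference)]

inputs = [{"C1": 1}, {}]                   # C1 is stored on list L1 only
reference_states = [0, 1]                  # assumed R1(0) and R2(1)
for cid, states in mlt_scenarios(inputs, reference_states):
    print(cid, states)                     # C1 [1, 1] -> the NAND evaluates to 0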
Events can be scheduled in the future due to
any triggered experiment (i.e., any experiment
that is active for the current simulation time). Let
C k (V 0 ) denote a concurrent experiment with identifier
k, with a current state value of V 0 .
3. Issues Associated with Exhaustive Scenario
Since efficient simulation algorithms [2, 6, 11] only
look at single stuck-at fault (SSF) scenarios versus
the reference experiment behavior, experiment
interaction for combinations of SSFs is not possi-
ble. For an MLT implementation of CS, modifications
would be necessary to accommodate building
scenarios of multiple stuck-at faults to simu-
late. Recall, the basic MLT algorithm only substitutes
the reference value with a single faulty value.
Fault scenarios do not interact, nor are they aware
of each other's existence. Despite this, to implement
MSAF with the CS MLT would require that,
for every fault inserted in the network, an N-way
combination of every fault with every other fault
be inserted. This would represent each 2-way,
3-way, ..., N-way combination of a fault with every
other fault.
In general, given a network with N inputs and
one single stuck-at fault deposited on every input
that can combine with single-stuck-at faults
on another input, the number of multiple stuck-at
fault sources which must be inserted can be determined
by equation 1, where N equals the number
of faults in the set; i is the multiplicity of faults.
sum_{i=2}^{N} C(N, i) (1)
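Assuming equation (1) is the sum of binomial coefficients over multiplicities 2 through N, as reconstructed above, a few lines of code (our own illustration) show how quickly this count explodes:

from math import comb

def msaf_sources(N, max_multiplicity=None):
    top = max_multiplicity or N
    return sum(comb(N, i) for i in range(2, top + 1))

print(msaf_sources(599, 2))    # 179101 two-way combinations for 599 single faults
print(msaf_sources(20))        # 1048555 combinations of all multiplicities (2**20 - 21)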
This shows that the amount of storage for
all these combinations of fault scenarios would
greatly reduce the efficiency of CS by increasing
list lengths and thus list traversal times. Further-
more, when these fault scenarios are inserted, they
must be assigned a state value to be used by that
experiment. Typically, the state value chosen is
that of the single stuck-at fault value on the same
line, when two or more faults have not yet in-
teracted. Figure 4 demonstrates this. There are
Initialize network and time wheel
While (an event exists on time wheel for T, where T = current time)
For each element E i
apply Reference or Concurrent fault state value changes
For each input and output list
if (an R experiment is triggered)
call evaluation code for E i using all reference (non-faulty) state values
schedule a new event for time T + propagation delay, if necessary
For each Ck present at E i
update current and look-ahead lists with the
experiment identifiers present
on all inputs and output lists of E i
For each input and output list L j of E i
If L j does not contain experiment Ck then
Insert a pointer to R j (statevalue) into the
current list L c for list L j
else
Insert a pointer to Ck (statevalue) into L c for list L j
Update the look-ahead list L o by inserting a pointer
to the next identifier present on L j
Using the state values from experiments pointed at by L c ,
begin evaluation of experiments
For scheduled Ck experiments for Time T
call evaluation code for Ck
if (Ck 's evaluated state differs from the reference state on the output)
if (Ck is not already present on the output list)
propagate fault Ck as a fault effect
schedule event for Ck at time T + propagation delay
else
converge Ck
End For each element E i
End While
Fig. 3. Multiple List Traversal Algorithm (MLT)
three single stuck-at faults inserted, one on each
input. All two-way and three-way
combinations must be stored when faults are
inserted. A by-product of creating the multiple
stuck-at experiments in this manner is the
degradation of observation. All experiments are
assigned unique identifiers, so that when propa-
gated, it can be traced and observed. The presence
of any specific identifier on an input or output
list should indicate that the experiment has
propagated there. Therefore if a MSAF identifier
is seen, one would expect it to indicate that the
MSAF has been propagated. For instance, in Figure
4, it appears that a three-way MSAF C 1 C 2 C 3
occurred at the output of gate E 1 . In fact, this
identifier was only carried forward because it is a
copy of the two-way stuck-at C 1 C2. The distinction
of whether an identifier's presence is due to
a single-stuck-at experiment or an MSAF cannot
be determined without further detailed analysis
or back-tracing. It should be clear from the figure
that it is not possible for a three-way stuck-at
to occur at the output of this gate since C 3 is a
Fig. 4. Storage problems associated with MSAF simulation using Concurrent Simulation.
stuck-at on the primary input of E 2 . These issues
are eliminated in a Multiple Domain Simula-
tion. MDCS creates a separation of experiments
into classes called domains and eliminates both
the storage problem and observation impairment.
4. Defining the Experiments Through Domain
In developing an algorithm that allowed independent
experiments to interact while still performing
the single set of fault experiments, it was important
that the method avoid modifying the circuit
by adding additional hardware[10] and avoid
back-tracing for observing events or performing
analysis[14]. To achieve this, the concept of Domains
is introduced. Domains separate the original
independent experiments into classes. Different
experiments contained within the same domain
are by definition not allowed to interact with
one another. They are analogous to the independent
single-stuck-at fault sources. These experiments
define the original parent experiments.
For fault simulation, a single domain may contain
a set of single-stuck-at fault experiments,
A = {C 1 , C 2 , ..., C n }. Each experiment C i within
A will be evaluated independently and will never
know of the presence of any other experiment C j
within A. In other words, single domain simulation
is the same as traditional concurrent fault
simulation. If another domain B is added, then
experiments contained in domain B are simulated
independently. However, interactions between experiments
contained in A are allowed to interact
and cause cumulative behaviors with those experiments
in domain B. This is an example of a two-
domain simulation, where two sets of experiments
in domains A and B, are simulated independently
and any interactions that may occur between experiments
in different domains are also simulated.
Interacting experiments that cause new behaviors
not displayed by any "parent" are propagated or
"spawned". Experiments fitting this description
are called "offspring" experiments.
By using domains, MDCS achieves efficiency
in experiment storage and, as will be discussed
later, also helps screen experiments before they
are simulated, thus saving processing time. Domains
minimize storage since only the original set
of single stuck-at faults is inserted and does not
require additional storage for defining potential
combinations of MSAFs. MSAFs experiments will
be created dynamically only if two or more fault
experiments meet at a node within the network.
As an example, if two sets of SSFs, each containing
N faults were to be simulated for 2-way
stuck-at scenarios, MDCS would define two domains
and insert N experiments in each domain
for a total of 2 × N parent experiments. Each
fault in the original sets of SSFs would be simulated
in addition to any two-way stuck-at that
may arise. We emphasize only two-way stuck-ats
that may arise because the input patterns may
never provide the stimulus to make these potential
interactions occur.
In addition to storing the fault experiments, it
is still necessary to store the reference experiment.
In general, the number of parent experiments necessary
for a simulation using Multiple Domain al-
is:
where m equals the number of defined domains
and n i equals the number of experiments contained
in domain i.
4.1. Creating Dynamic Scenarios of Experiments
It has already been mentioned that different experiments
within domains are not allowed to interact
with each other. It could be said that these
experiments do not "see" each other. However,
it is desired that experiments from different domains
should be allowed to interact and see one
another. This means experiments need to know
about the presence of other experiments from different
domains should they ever propagate to the
same node in a network. From this requirement,
a set of rules were derived to determine which experiments
should be checked for cumulative new
behaviors when this situation does occur. Experiments
that satisfy these rules will be called
combinable[9]. Other combinations of experiments
that may be present but are not combinable will
not be simulated.
Since MDCS is a discrete-event simulator algo-
rithm, it only processes those experiments that are
triggered, (active for the current simulation time).
Therefore, one requirement for a combinable scenario
of experiments is that it contain at least one
trigger.
In general, a scenario consisting of two or more
experiments is created and simulated if the following
are satisfied:
- There must be at least one trigger present in a scenario for it to be evaluated.
- Experiments do not share any common domains.
- Experiments in the scenario that do have common domains must be related:
either as a parent and offspring experiment,
or the scenario must contain the same experiments from common domains.
Figure
5 describes the relationship of domains
and experiments in a simulation. The reference
is always considered to be a parent experiment to
any other experiment and is therefore combinable
with all other experiments. The reference is the
basic experiment to which all other experiment
behavior is compared. In addition, offspring experiments
should be compared to their parents for
similar behavior. If a parent experiment is present
and its state value is identical to the evaluated sce-
nario, it will be used to suppress the propagation
of the offspring created. If experiments are not related
as parent and offspring, then they may be related
by an identical experiment in a common do-
main. For instance, in Figure 5, A 1 B 1 and A 2 B 1
are not related as a parent and offspring. They
are, however, related by the fact that experiment B 1
is common to both experiments. Namely, experiment
number 1 from Domain B is contained
within both A 1 B 1 and A 2 B 1 . Although experiments
such as A 1 B 1 and A 2 B 2 share a common domain
B, they are not related and therefore no combina-
Fig. 5. Experiments can be related, either as a parent-offspring relationship or by identical experiments
sharing common domains. Related experiments generate valid combinations.
Insert single stuck-at faults into the network and assign a domain for each multiplicity:
For an event, scheduled for current time T
Evaluate the R experiment and update the new state value, Routput
on the output list for element E.
Locate all triggers present on all input lists and store them on a list called a trigger list.
Begin generation of valid combinations of experiments.
(A valid scenario must contain a trigger and be combinable.)
Take an experiment from one input list and determine which other experiments from
different inputs of E may be combined by using the rules of combinability.
If the experiments are related as:
(parent AND offspring) OR
(related by a common experiment in the same domain) OR
(the experiments do not share any common domains)
then
If any experiment in the scenario is present on the trigger list
then the scenario is valid and can be evaluated.
Build the current list L C to simulate the scenario.
Evaluate the experiment by calling its evaluation function.
Begin Parent Checking to see if the scenario can be compressed
to a parent experiment present on the output.
If Ck (V 0 ) != Routput ,
where k is the experiment identifier for Ck
and no other parent experiments are present on the output
with a state value matching Ck (V 0 ).
then diverge Ck (V 0 ) as an offspring experiment.
End Parent Checking
Continue generation of combinations until no more valid combinations of experiments are found.
End generation of valid combinations of experiments.
Fig. 6. High Level Algorithm For MDCS
tion of these experiments will be simulated. Notice
that experiments such as A 2 and B 1 could
combine since they do not share any common do-
main. A high level algorithm is presented here to
summarize the major portions of the algorithm.
4.2. High Level Description of the Multiple Domain
Algorithm
The high level algorithm described in Figure 6 is
similar to the MLT algorithm in the manner that
a simulation scenario is built via a Current List,
evaluated and compared to a reference experi-
ment. In MLT, the simulation case was created by
selecting all identical experiments present on the
inputs and outputs of a gate. Experiments were
located through their unique concurrent identifiers
(CID) and state values were retrieved. On
inputs and outputs where no matching CID could
be found, the Reference experiment state value
was used. In MDCS, the simulation cases are created
by generating valid combinations of experi-
ments, as opposed to using only a single experiment
and the reference. This is the most complex aspect
of building a multiple domain environment.
There are many checks for compatibility of experiments
that utilize the domain information stored
within the C k experiment.
Recall that in Concurrent Fault Simulation, the
reference was the only parent experiment that
all other experiments were compared against. In
MDCS, many parent experiments have been de-
fined: the reference and many single stuck-at
experiments. Consequently, a parent experiment
will sometimes be referred to as a dynamic
reference experiment, since when present, it will
be used in lieu of the reference for comparisons
against any offspring being processed.
As an element evaluation begins, the Current
List L c is built by selecting experiments from the lists of element E i . In order
for a scenario to be evaluated, a triggered experiment must be present in L c , i.e., there must
exist an event for the experiment at the current
time. MDCS filters out any experiment not triggered
for the current event time T . An additional
list of "triggers" is maintained for this specific pur-
pose. Using this Trigger information, it is possible
to drastically reduce the number of combinations
that MDCS must explicitly simulate.
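A rough sketch of this trigger-based filtering (our own illustration, with made-up experiment names) is shown below; out of the nine conceivable pairings only the three containing the triggered experiment are generated.

from itertools import product

def scenarios(list1, list2, triggered):
    for e1, e2 in product(list1, list2):
        if e1 in triggered or e2 in triggered:   # at least one trigger required
            yield e1, e2

L1 = ["R", "A2", "A3"]
L2 = ["R", "B1", "B2"]
print(list(scenarios(L1, L2, triggered={"A2"})))
# only the 3 of the 9 conceivable pairs that involve A2 are generated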
In
Figure
7, two domains are defined, each containing
a reference value and two stuck-at values.
Faults in domains A and B are injected at I 1 and
I 2 respectively. There is a potential of nine possible
combinations of experiments, three experiments
on I 1 versus the three experiments on I 2 .
As will be shown, only three out of the nine cases
must be evaluated using the MDCS algorithm.
According to the algorithm, the reference experiment
is the first to be processed, but notice it
is not triggered. Therefore, the activity is due
to a triggered faulty experiment. When the triggered
experiment A 2 is encountered, it must be
processed independently (as a SSF) and then all
the combinable experiments present on the other
input must be processed against A 2 .
Fig. 7. Applying MDCS to Multiple Stuck-at Fault Simulation.
All the scenarios of A 2 (1) versus experiments from L 2 are
depicted in the table of Figure 7. Out of these
three cases, two of them must be evaluated because
they meet the rules of combinability. The
result from these evaluations will be propagated to
the output only if the experiment produces a different
state value from any parent present on the
output list L 3 . Since L 3 only contains one experiment
(the reference experiment), the results from
all evaluated scenarios are compared to it. Row
L 3 in the table of Figure 7 shows the results of
the evaluations. Case 1 is performing traditional
CS (i.e., simulating the SSF A 2 (stuck \Gamma at \Gamma 1) on
input I 1 . This case is compressed since it matches
the reference state on the output. Only case 3
produces a state value not equal to the reference
on the output, R(0), therefore this experiment is
propagated to the output of the element. Note
that the diverged experiment A 2 B 1 (1) displays a
new behavior that would not have been seen by
simulating each parent experiment A 2 (1) or B 1 (1)
independently. Therefore, an offspring experiment
has been spawned.
The previous example was shown to provide insight
into the method for generating scenarios.
The output contained only a reference experiment
and there were no other experiments to compare
the newly evaluated scenarios against. Let
us now demonstrate the MDCS algorithm when
other parent experiments besides the reference are
present at the output. Figure 8 shows the presence
of an experiment B 1 (1) (shown in gray)
that arrived on the output before the current simulation
time. If the gate were to be evaluated,
all the same scenarios as shown in the table of
Figure
7 would still be generated. The difference
however, would be that case 3 would see B 1 (1)
on L 3 as a parent and check its state value for
comparison. Upon verifying that the interaction
A 2 B 1 (1) matches B 1 (1) on the output, no propagation
would occur. This is an example of what
is called parent checking, where experiments are
compressed to dynamic parents opposed to being
propagated.
5. Circuit Example
Consider Figure 9 in order to demonstrate the
storage savings in the MDCS algorithm. This is
Fig. 8. Using parent checking on the output before propagating any offspring. Experiment B 1 arrived
previous to the current event.
the same circuit as given in Figure 4, only now
showing all the parent experiments that MDCS
would store. There are three domains, A; B; C in
Figure
9. Each domain is defined as containing
a single stuck-at-1 fault source for each primary
input of an element. In this case, domain A, B
and C contain values for input I 1 of gate E 1 , I 2 of
E 1 , and I 1 of E 2 , respectively. Domain definition is
very flexible. There are no restrictions on the assignment
of domains to signal lines. For instance,
all three domains could have contained faults to
be inserted on the same input but this would not
have been a very interesting case. Using a reference
state value of zero, the network is initialized
with four parent experiments at E 1 and E 2 . These
are the reference R and the three single stuck-at-1
faults A 1 (1),B 1 (1) and C 1 (1).
Table 1. Combinations generated using MDCS include the two SSFs and an MSAF experiment A 1 B 1 .
(Cases 1-4, with rows for I 1 , I 2 , and the evaluated output; Case 1 uses R(0) on both inputs.)
All experiments for E 1 are triggered. The resulting
combinations are shown in Table 1 along with the contributing parent experiments from
each input that produced them.
Fig. 9. Simulating a Digital Logic Network with three domains.
After the signals
have propagated to their respective
outputs, only the reference and those experiments
that have spawned a new behavior have
been propagated. The creation of all MSAFs in
MDCS is reported along with detection statistics.
6. Results
In order to demonstrate the potential of MDCS
for practical applications, a proof-of-concept prototype
was developed. We used the CREATOR
Concurrent Fault Simulator [6] as a reference and
compared it to our MDCS version for correctness
and for measuring overhead associated with our
algorithms. These test cases are based on the
ISCAS benchmark circuits [3, 4] widely used for
evaluating fault simulation techniques.
Experimental results are presented for fault
(single stuck-at and two-way stuck-at) simulations
that were performed using the MDCS scheme. Although
MDCS is portable to many platforms, the
results presented here were gathered using a VAX
8800 uni-processor machine. The waveforms (test
input patterns) were generated using CONTEST
[13].
Table
2 contains the information necessary to
describe the benchmark circuits used. The circuit
number, ISCAS name, number of gates, primary
inputs, primary outputs, number of flip-
flops, number of single stuck-at faults inserted,
and number of input patterns used are provided.
The circuits will be referenced by their number
from the first column of Table 2 for all graphs.
The first letter of the circuit indicates whether it
is sequential (S) or combinational (C). The combinational
circuits are taken from the ISCAS85
[3] benchmark set and the sequential circuits were
taken from the ISCAS89 [4] set.
We simulated all the original set of single stuck-at
faults plus the combination of potential two-way
stuck-at faults. We say "potential" two-way
faults because MDCS is not performing exhaustive
experimentation, but rather, investigating the
whole space for interactions, given a set of input
patterns. The results clearly demonstrate the experiment
compression feature of MDCS. Not only
Table
2. ISCAS benchmark circuit descriptions, ordered in ascending number of faults inserted.
Number Circuit # of Gates PInputs POutputs Flops Faults Patterns
9 S526 214 3 6 21 599 1496
14 C6288 2417
are the original single stuck-ats simulated as independent
experiments, but if two or more different
experiments arrive on different inputs to a gate,
then they are tested for interactions. An interaction
is counted any time two experiment scenarios
are tested for new behavior. This number can include
redundant counts for the same combinations
of faults due to feedback paths or the application
of a new waveform pattern. In contrast, an offspring
experiment is one where an interaction creates
a new behavior that must be propagated.
Given this information, we wanted to find out
the overhead in going from single to a multiplicity
of 2 faults (and from one domain to two). The
CPU times are plotted in Figure 10. Th times
show that adding the potential of experiment interaction
is fairly cost effective. We know from
the basic CS algorithm that many single stuck-at
faults are compressed using the reference. The
same phenomenon exhibited by MDCS indicates
that the other parent experiments must be playing
an important role in curtailing the total number
of experiments simulated. Otherwise, the complexity
in the algorithm would be overwhelming as
more experiments become explicit. Circuit C6288
had the largest overhead due to the fact that our
code was optimized for sequential circuits. Newer
revisions of the algorithm will address this problem.
The upper bound of all possible two-way experiment
scenarios that could arise in an exhaustive
simulation was computed by N(N - 1)/2.
This value is shown for each benchmark in the
column called "Total # of Possible MSAF" in Table
3. The column titled "Total # of SSF+MSAF
experiments possible" is the total number of scenarios
that could possibly arise. This includes the
original single stuck-at faults as well as all possible
double fault experiments. This was computed
by: N + N(N - 1)/2 , where N is the number of single
stuck-at fault sources.
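Assuming the two formulas reconstructed above, the entries of Table 3 for S526 and C6288 can be checked directly (our own illustration):

for name, N in [("S526", 599), ("C6288", 7744)]:
    msaf = N * (N - 1) // 2
    print(name, msaf, N + msaf)
# S526: 179101 179700        C6288: 29980896 29988640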
Fig. 10. CPU time for MDCS single stuck-at simulations and single plus double MSAF (CPU time
versus circuit number, for the single and single+MSAF runs).
Table
3. Two Domain Simulation (Single and Double) fault simulation performance.
Number Circuit # of SSF Total # Total # CPU time (sec) Storage
Name inserted of Possible SSF + MSAF SSF SSF+MSAF in Kbytes
MSAF experiments
possible
Ring 6 15 21 0.10
9 S526 599 179101 179700 414.63 695.93 103
14 C6288 7744 29980896 29988640 783.34 2145.32 728
Table
4. Interactions eliminated due to parent checking.
No.  Circuit Name  Total Poss. MSAF  # of Interactions that Occurred  Offspring Propagated  # of Interactions Eliminated
Ring
9 S526 179101 1202 53 1149
14 C6288 29980896 52351 972 51379
Unlike other techniques [10], MDCS is not
physically inserting all possible fault scenarios or
adding additional circuitry. Rather, MDCS allows
sets of single stuck-ats to be inserted and then uses
test patterns as the stimulus for MSAFs to manifest
themselves as interacting experiments. The
original purpose for MDCS was not to be used
exclusively as a single stuck-at fault simulator,
but rather as a test environment for efficiently
creating experiment scenarios and observing experiment
behavior. However, it is still interesting
to compare the overhead of the algorithm to
other simulators. Comparing the MDCS underlying
concurrent implementation (single stuck-at
faults) to the PROOFS simulator[11], the MDCS
CPU times for the benchmarks were much better
than one would have expected from a concurrent
simulation algorithm. This comparison is based
on the 32 bit word data as reported in[11]. Although
PROOFS demonstrates a massive speed-up
over a version of concurrent simulation, the
MDCS version of CS uses the MLT for element
processing thus closing the gap between the two
algorithms. Most commercial concurrent simulators
use a two-list traversal[15] mechanism for implementing
their concurrent fault simulation and
this certainly would impede the speed as reported
in PROOFS [11].
One way to measure the performance of MDCS
is to investigate the number of experiment interactions
that occurred and whether or not they generated
offspring.
Table
4 shows the statistics gathered on interactions
and their relation to the number of offspring
experiments. The occurrence of interactions of
two or more experiments have an overhead associated
with them. For every interaction, a test
must be done to determine whether a parent experiment
(the reference or other parent) already
exists on the output and exhibits the same be-
havior. behaving experiments are com-
pressed, while different behaviors must be propagated
and therefore become offspring. Offspring
experiments are a measure of how many new scenarios
were propagated. MDCS results show that
despite the fact that the number of interacting scenarios
can be large, MDCS can eliminate most of
them through parent checking. The column of table
4 titled, "# of Interactions that Occurred", is
a count of the number of times two or more experiments
were tested for cumulative new behaviors.
The column called "Offspring Propagated" indicates
how many of the interactions actually generated
new offspring experiments. Finally, the difference
between "# of Interactions that Occurred"
and and "Offspring Propagated", is shown in the
column called "# number of Interactions Elimi-
nated". This column reflects the number of those
interacting scenarios that were converged (com-
pressed) due to parent checking.
Figure
11 shows that parent checking is very ef-
fective. Each column of the graph represents the
percentage of interactions that were eliminated
for each circuit simulated. This not only proves
that MDCS curtails the number of experiments
spawned, but also that the experiment compression
factor is extremely high. This also has a
tremendous impact on storage as shown in the last
column of Table 3.
6.1. Extension to Larger Domains
Digital logic simulation is an excellent test-bed for
MDCS because there is a limited number of logic
state values an output can assume. As the number
of domains was increased to three and then
four, it was found that more interactions could be
represented by a parent experiment and therefore
compressed. CPU time for larger multiplicity of
faults grew slightly and remained relatively constant
for multiplicities after four.
6.2. Future Directions
Since it was demonstrated that MDCS can efficiently
create scenarios that can interact, an area
of further research interest involves utilizing the
function list[2, 7] for creating multiple instances of
a model in a single simulation. In fault simulators
such as MDCS and CREATOR[6], the function
list stores the various Activity Functions that allow
a model to assume different behaviors during
the simulation. For instance, an AND gate could
be forced to act like an OR gate at various times
to model some intermittent fault scenario. Including
this functionality in the Multiple Domain algorithm
will provide the simulator with the ability
to create scenarios from combinations of activity
functions dynamically and efficiently. In Figure
12, a two input AND gate is shown.
The Reference Experiment on the function
list[2] contains the activity function for the fault-free
experiment, denoted by R 14 . Faulty behaviors
are inserted as single stuck-ats when faults
are loaded into the simulator. The MDCS algorithm
will create and simulate all the single
stuck-at fault experiments by creating current
lists for simulating the
reference experiment, L
for simulating input I 1 stuck-at 1, and L
The
presence of an activity function on the function
list causes the model to be evaluated by substi-
Fig. 11. Demonstrating the efficiency of experiment compression: percentage of interactions that were
eliminated from the simulation.
tuting the stuck-at value on the appropriate input
or output list. For example, activity function F 1
will cause the AND gate to be evaluated with input
I 1 stuck at its fault value, while the other inputs use the reference
state values.
The Multiple Domain algorithm traverses the
function list to dynamically create fault scenarios
of any multiplicity. In the example of Figure 12,
the behaviors of activity functions F 1 and F 2 will
be tested for interaction against any fault experiments
present on the input lists. If a new behavior
is detected that is different from the contributing
activity functions, then a new activity function is
created and an offspring experiment is spawned.
Activity functions in the MDCS show much
promise for scenario experimentation. The activity
functions need not be limited to fault behaviors
and may possess more complex model functions
such as those used in scenario control in virtual
environments[8].
Fig. 12. The Function list containing activity functions
which allows a model to assume multiple behaviors.
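A minimal sketch of the function-list idea (our own illustration, not the CREATOR implementation) is shown below; an AND model carries an alternative activity function that makes it behave like an OR gate, as in the intermittent-fault example above. All names are hypothetical.

def and_gate(a, b):
    return a & b
def or_gate(a, b):
    return a | b                               # intermittent faulty behaviour

function_list = {"R": and_gate, "F1": or_gate}

def evaluate(experiment, a, b):
    return function_list.get(experiment, and_gate)(a, b)

print(evaluate("R", 1, 0))     # 0 -- reference AND behaviour
print(evaluate("F1", 1, 0))    # 1 -- activity function forces OR behaviour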
7. Limitations
By utilizing dynamic reference (parent) exper-
iments, scenarios are compressed into implicit
classes, thus reducing the number of explicit experiments
that must be stored and propagated.
All these features not currently available in other
discrete-event simulators can be implemented in
a CPU and storage efficient manner. However,
one area of concern is the difficulty in establishing
appropriate reference experiments beyond the
four state simulator, in an application beyond digital
logic simulation, such as software programs.
In fault simulation, since there are only 4 possible
outcomes (0,1,Z,X), offspring experiments are
curtailed because their state behaviors can only
assume one of these four values. In considering
more complex models, where perhaps the model
can be an equation, the number of valid output
states could be enormous. The algorithm will
have to develop a method of choosing appropriate
parent experiments to serve as the most appropriate
behavior for comparison. One method
under investigation is to use valid ranges for specific
variables. These are either provided by the
user or derived from the model being simulated.
Experiments that produce behavior states within
the range are compressed while those outside the
boundaries are propagated.
8. Conclusions
This paper introduced a framework intended to
attack large simulation problems where the number
of experiments can approach exhaustive test-
ing. By using a Multiple Domain Concurrent Simulation
algorithm, a methodology was presented
for experimentation without explicit representation
of all scenarios. In addition, dynamic interactions
are allowed to create new cumulative behaviors
(offspring), which can be observed through-out
the simulation. Any offspring experiments
that were spawned during a simulation were detected
using input patterns generated from CON-
TEST[13] and GENTEST.
An advantage of using MDCS is the flexibility
and ease of defining experiments. No modifications
need to be made to the network and no scenarios
(multiple stuck-at faults) need be inserted.
Instead, domains are used to insert sets of single
stuck-at faults and only those faults that create
new behaviors are ever propagated. The algorithms
developed for this framework where verified
by performing Multiple Stuck-at Fault Simulation
for digital logic circuits. This application
was chosen because it demonstrates all the
features in a MDCS, namely experiment interac-
tion, compression, and spawning of new behaviors.
Benchmark testing and evaluation using the ISCAS
benchmark circuits [3, 4] were performed for
multiple stuck-at fault simulations. Our results
indicate that MDCS does not create a combinatorial
explosion of experiment combinations that
need to be stored and simulated. This was evident
through the number of experiments converged to
a parent.
Another interesting feature that is worth noting
is the ability to compare the activity of one
pattern against another. For instance, because
MDCS generates interactions based on pattern
stimulus, the number of interactions a pattern created
seemed to indicate its robustness in exercising the
circuit. Signatures of specific patterns could easily
be determined and compared for redundancy
in test pattern development.
Future directions under investigation include
using behavioral models, VHDL or other types
of models such as HCSM[8] or even BDDs[16] in
the Multiple Domain Environment, such that dynamic
interactions will be possible within these
models. The current MDCS implementation can
use VHDL models, but the experiment interaction
feature inherent in our framework has not yet been
implemented for VHDL models. Using a function
list in our framework also shows much promise in
simulation applications where orchestrating scenarios
is important. It allows many different behaviors
to be captured in a single model entity
and through the MDCS list traversal mechanism,
we can compress and propagate behaviors only of
interest.
Finally, this methodology shows promise for
evaluating the effectiveness of patterns. Coverage
and detection of different pattern sets can be
performed and help derive new patterns that will
be more robust in detecting complex interacting
behaviors.
Acknowledgements
The authors would like to thank Dr. Pierluca
Montesorro for his continued support on the CREATOR
simulator, Dr. Fabio Somenzi and Dr.
Vishwani Agrawal for their help with benchmarking
and Dr. Chung Len Lee for his valuable suggestions.
Notes
1. Note: This list is also known as the obligation list.
--R
"The Concurrent Simulation of Nearly Identical Digital Networks"
"MOZART: A Concurrent Multi-Level Simulator"
"A Neutral Netlist of 10.Combinational Benchmark Circuits and a Target Translator in FORTRAN"
"Combina- tional Profiles of Sequential Benchmarks for Sequential Test Generation"
"Switch-Level Concurrent Fault Simulation based on a General Purpose List Traversal Mechanism"
"Creator: General and Efficient Multilevel Concurrent Fault Simulation"
"Creator: New Advanced Concepts in Concurrent Simulation"
"HCSM: A framework for Behavior and Scenario Control in Virtual Environments"
"Multiple Domain Concurrent Simulation of Interacting Experiments and Its application to Multiple Stuck-at Fault Simulation"
"Multiple-fault Simulation and Coverage of Deterministic Single-Fault Test
"PROOFS: A Fast, Memory Efficient Sequential Circuit Fault Sim- 16.Lentz, Manolakos, Czeck and Heller ulator"
"The Comparative and Concurrent Simulation of Discrete-Event Experiments"
"A Directed Search Method for Test Generation Using a Concurrent Simulator"
"Sequential circuit fault simulation by fault information tracing algorithm: fit"
"An Efficient Method of Fault Simulation for Digital Circuits Modeled from Boolean Gates and Memories"
"Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams"
--TR
--CTR
Karen Panetta Lentz , Jamie Heller , Pier Luca Montessoro, System Verification Using Multilevel Concurrent Simulation, IEEE Micro, v.19 n.1, p.60-67, January 1999
Zainalabedin Navabi , Shahrzad Mirkhani , Meisam Lavasani , Fabrizio Lombardi, Using RT Level Component Descriptions for Single Stuck-at Hierarchical Fault Simulation, Journal of Electronic Testing: Theory and Applications, v.20 n.6, p.575-589, December 2004
Maria Hybinette , Richard M. Fujimoto, Cloning parallel simulations, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.11 n.4, p.378-407, October 2001
Maria Hybinette, Just-in-time cloning, Proceedings of the eighteenth workshop on Parallel and distributed simulation, May 16-19, 2004, Kufstein, Austria | scenario;interactive experimentation;concurrent fault simulation;multiple stuck-at |
282692 | Gossiping on Meshes and Tori. | Abstract: Algorithms for performing gossiping on one- and higher-dimensional meshes are presented. As a routing model, the practically important wormhole routing is assumed. We especially focus on the trade-off between the start-up time and the transmission time. For one-dimensional arrays and rings, we give a novel lower bound and an asymptotically optimal gossiping algorithm for all choices of the parameters involved. For two-dimensional meshes and tori, a simple algorithm composed of one-dimensional phases is presented. For an important range of packet and mesh sizes, it gives clear improvements upon previously developed algorithms. The algorithm is analyzed theoretically and the achieved improvements are also convincingly demonstrated by simulations, as well as an implementation on the Paragon. On the Paragon, our algorithm even outperforms the gossiping routine provided in the NX message-passing library. For higher-dimensional meshes, we give algorithms which are based on an interesting generalization of the notion of a diagonal. These algorithms are analyzed theoretically, as well as by simulation. | Introduction
Meshes and Tori. One of the most thoroughly investigated interconnection schemes for parallel
computers is the n × n mesh, in which n^2 processing units (PUs) are connected by a two-dimensional grid of
communication links. A torus is a mesh with wrap-around connections. Their immediate generalizations
are d-dimensional n × ··· × n meshes and tori. Although these networks have a large diameter in comparison
to the various hypercubic networks, they are nevertheless of great importance due to their simple structure
and efficient layout. Numerous parallel machines with mesh and torus topologies have been built, and
various algorithmic problems have been analyzed on theoretical models of the mesh.
Wormhole Routing. Traditionally, algorithms for the mesh have been developed using a store-and-
forward routing model in which a packet is treated as an atomic unit that can be transferred between
two adjacent PUs in unit time. However, many modern parallel architectures employ wormhole routing
instead. Briefly, in this model a packet consists of a number of atomic data units called flits which are
routed through the network in a pipelined fashion. As long as there is no congestion in the network, the
time to send a packet consisting of l flits between two arbitrary PUs is well approximated by t_s + l · t_l, where
t_s is the start-up time (the time needed to initiate the message transmission) and t_l is the flit-transfer
time (the time required for actually transferring the data). Usually, t_s ≫ t_l, so that it is important to
minimize the number of start-ups when the packet size is small, whereas it is important to minimize the
transmission time when the packet size is large. These two goals may conflict, and then trade-offs must
be made.
Gossiping. Collective communication operations occur frequently in parallel computing, and their performance
often determines the overall running time of an application. One of the fundamental communication
problems is gossiping (also called total exchange or all-to-all non-personalized communication).
Gossiping is the problem in which every PU wants to send the same packet to every other PU. Said
differently, initially each of the N PUs contains an amount of data of size L, and finally all PUs know
the complete data of size N · L. This is a very communication-intensive operation. On a d-dimensional
store-and-forward mesh it can be performed trivially in N/d steps, but for wormhole-routed meshes it is
less obvious how to organize the routing so that the total cost is minimal.
Gossiping appears as a subroutine in many important problems in parallel computation. We just
mention two of them. If M keys need to be sorted on N PUs (M ≫ N), then a good approach is to
select a set of m splitters [14, 13, 7] which must be made available in all PUs. This means that we have to
perform a gossip in which every PU contributes m/N keys. In this case, the cost of gossiping (provided
it is performed efficiently) will not dominate the overall sorting time when the input size is large, because
the splitters constitute only a small fraction of the data. A second application of gossiping appears in
algorithms for solving ordinary differential equations using parallel block predictor-corrector methods
[15]. In each application of the method, block point computations corresponding to the prediction are
carried out by different PUs, and these values are needed by all PUs for the correction phase, requiring
a gossiping of the data.
Previous Work. A substantial amount of research has been performed on finding efficient algorithms
for collective communication operations on wormhole-routed systems (see, e.g., [1, 4, 12, 3, 17]). However,
most papers either deal with very small packets or with very large packets. Both these extreme cases
require algorithms optimizing only one parameter.
If the packets are small, then the number of start-ups should be minimized. Peters and Syska [12]
considered the broadcasting problem on two-dimensional tori and showed that it can be performed in
the optimal 2 · ⌈log_5 n⌉ steps. Their ideas have been generalized to three-dimensional tori in [3]. The
algorithms described in these papers can be adapted for the gossiping problem by first concentrating all
data into one PU and then performing a broadcast. However, such an approach leads to a prohibitively
large transmission time. Another drawback of both approaches is that it is assumed that the routing
paths may be selected by the algorithm. The algorithms presented in this paper can also be used if the
network only supports dimension-ordered routing.
If the packets are large, a store-and-forward approach yields the best results. As mentioned before,
on a d-dimensional n × ··· × n mesh it can be performed trivially in n^d/d packet steps. Gossiping in a
store-and-forward hypercube model was studied in [8].
There are many other papers on collective communication operations on wormhole-routed meshes and
tori. Although these papers do not deal with the same problem, there are some similarities. For exam-
ple, Sundar et al. [16] propose a hybrid algorithm for performing personalized all-to-all communication
(complete exchange) on wormhole-routed meshes. Briefly, they employ a logarithmic step algorithm until
the packet size becomes large, at which point they switch to a linear step algorithm.
Our Results. In this paper we focus on the trade-off between the start-up time and the transmission
time. This is useful, because there is a large range of mesh sizes, packet sizes and start-up costs, in which
neither of the two contributions is negligible. We would like to emphasize that we are not proposing a
hybrid algorithm that simply uses the fastest of the gather/broadcast approach and the store-and-forward
approach. In an intermediate range of packet sizes, our algorithm is asymptotically better than the best
of the two extreme approaches.
A non-trivial lower bound shows that our algorithms are close to optimal for all possible values of
the parameters involved. For the efficiency of the two-dimensional algorithm, it is essential that data
is concentrated in PUs that lie on diagonals. For higher dimensional meshes we give an interesting
generalization of the notion of a diagonal, which may be of independent interest. We remark that Tseng
et al. [17] also used diagonals in their complete exchange algorithm. However, the generalization of a
diagonal given there for three-dimensional tori is rather straightforward. Hyperspaces are used that when
projected give back a diagonal in two-dimensional space. We generalize the diagonal in a different way,
that gives better performance, and which allows to formulate a generic algorithm that works for arbitrary
dimensions (not only dimension three) without problem.
We also compare the value of several strategies by substituting parameters in the formulas for their
time consumptions. Furthermore, our theoretical results for two-dimensional meshes are completed with
measurements of an implementation on the Intel Paragon. The assumed and the real hardware model
do not completely coincide, but still we believe that these measurements support our claims in most
important points.
The remainder of this paper is organized as follows. In Section 2, the model of computation is
described. Thereupon, in Section 3, we present several lower bounds for the gossiping problem. The
algorithms for linear and circular arrays are given in Section 4. Then,
in Section 5 and Section 6, we extend the algorithm to two- and higher-dimensional meshes and tori.
Finally, in Section 7, experimental results gathered on the Intel Paragon are presented.
2 Model of Computation
A d-dimensional mesh consists of n^d processing units (PUs) laid out in a d-dimensional grid of side
length n. Every PU is connected to each of its (at most 2 · d) immediate neighbors by a bidirectional
communication link. A torus is a mesh with wrap-around connections. We concentrate on the communication
complexity, and assume that a PU can perform an unbounded amount of internal computation in a
step. It is also assumed that a PU can simultaneously send and receive data over all its connections. This
is sometimes called the full-port or all-port model. With minor modifications, the presented algorithms
can also be implemented on one-port architectures.
For the communication we assume the much considered wormhole routing model (see [6, 11, 5] for
some recent surveys). In this model a packet consists of a number of atomic data units called flits.
During routing the header flit governs the route of the packet and the other flits follow it in a pipelined
fashion. Initially all flits reside in the source PU and finally all flits should reside in the destination PU.
At intermediate stages, all flits of a packet reside in adjacent PUs. The packets should be 'expanded'
and 'contracted' only once. That is, two or more flits should reside in the same PU only at the source
and destination PU. Wormhole routing is likely to produce deadlock unless special care is taken.
The reasons to consider wormhole routing instead of the more traditional store-and-forward routing
are of a practical nature. On modern MIMD computers (such as the Intel Paragon and the Cray T3D), the
time to initiate a packet transmission is considerably larger than the time needed to traverse a connection.
Wormhole routing has been developed in response to this fact. The time for sending a packet consisting
of l flits over a distance of d connections is given by
t_s + d · t_d + l · t_l.    (1)
We refer to t_s as the start-up time, t_d as the hop time, and t_l as the flit-transfer time.
Equation (1) is only correct if there is no link contention (in other words, as long as the paths of the
packets do not overlap). If paths of various packets overlap, then the transfer time increases. All our
algorithms are overlap-free.
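For concreteness, the following small Python sketch evaluates the cost model of Equation (1); the numerical parameter values are illustrative placeholders only, not measured constants of any particular machine.

# Cost model of Equation (1); parameter values below are illustrative only.
def transfer_time(l, d, t_s=100.0, t_d=1.0, t_l=1.0):
    # time to send a packet of l flits over d links, assuming no contention
    return t_s + d * t_d + l * t_l

def startup_ratio(l, t_s=100.0, t_l=1.0):
    # the ratio r = t_s / (l * t_l) used later to normalise the analysis
    return t_s / (l * t_l)

if __name__ == "__main__":
    print(transfer_time(l=64, d=10))   # 100 + 10 + 64
    print(startup_ratio(l=64))         # r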
3 Lower Bounds
We start with a trivial but general lower bound. Thereupon, we give a more detailed analysis, proving a
stronger lower bound for special cases.
Lemma 1 In any network with N PUs, degree Δ and diameter D, the time T_con(N, Δ, D) needed to
concentrate all information in a single PU satisfies:
T_con(N, Δ, D) ≥ (N/Δ) · l · t_l + (D/2) · t_d + ⌈log_{Δ+1} N⌉ · t_s.
Proof: The terms are motivated as follows: N · l flits have to be transferred over at most Δ connections
to the PU in which the data is concentrated; one packet must travel over a distance of at least D/2 to
reach the concentration PU; after t start-ups a PU can hold at most (Δ + 1)^t · l data items by induction.
Of course, a lower bound for T_con immediately implies the same lower bound for the gossiping problem.
The degree of a d-dimensional n × ··· × n mesh is 2 · d, and the diameter equals d · (n − 1).
Usually, t_d is comparable to t_l while D < (N/Δ) · l. Thus we can omit the term (D/2) · t_d from the
lower bound without sacrificing too much accuracy. By dividing both remaining terms by l · t_l, and by
setting r = t_s/(l · t_l), we obtain the following simplified lower bound for concentrating
all data in one PU:
T′_con ≥ N/Δ + ⌈log_{Δ+1} N⌉ · r.    (2)
In Section 4, gossiping algorithms are presented that match this lower bound up to constant factors for all
well as for all n). For the intermediate range there is a considerable deviation
from (2). Therefore, these values of r are considered in more detail. Let T 0
gos (n) denote the number of
time units (of duration l \Delta t l each) required for gossiping on a circular array with n PUs.
Theorem 1 Let
Theorem 1 will be proven by two lemmas. Notice that it establishes a smooth transition from the range
of small r values (r ≤ n^{1−ε}, ε > 0) to the range of large r values (r = Ω(n)), for which (2) already gives
sharp results.
First we show that for proving lower bounds, one can concentrate on the dissemination problem: the
problem of broadcasting the information that is concentrated in one PU to all other PUs. The number
of time units required for this problem is denoted by T 0
dis .
Proof: Starting with all data concentrated in a single PU, the initial situation can be established in time
con by reversing a concentration. On the other hand, gossiping can be performed by concentrating and
subsequently disseminating.
As in our case we will prove a dissemination time that is of larger order than the concentration time (e.g.,
for
con
For the dissemination problem with certain r, it is easy to see that having full freedom of choosing
the size of the packets can be at most a factor two cheaper than when the data are bundled into fixed
messages of size r. That is, we may focus on the problem of disseminating n=r messages, residing in
PU 0, while sending a message takes 2 time. At most another constant factor difference is
introduced if we assume that dissemination has to be performed on a circular array with only rightward
connections. By the above argumentation the proof of Theorem 1 is completed by
Lemma 3 Consider a circular array with only rightward connections and with n PUs. Initially, PU 0
contains n=r messages of size r. In one step the messages may be sent rightwards arbitrarily far, but
the paths of the messages should be disjoint. If r - (n \Gamma 1)=e, then dissemination takes at least n=r \Delta
steps.
Proof: We speak of the original n=r messages as colors, and the task is to make all colors available in
all PUs. We define a cost function F (t) for the distribution of colors after t steps. Consider a PU i and
a color c, and let j be the rightmost PU to the left of i holding color c. The contribution of PU i to F (t)
by color c is does not contain color c and 0 otherwise. The initial cost is given by
We consider how much the cost function can be reduced after a step is performed. It is essential that the
paths must be disjoint. One large 'jump' by a message of some color c gives a strong reduction of the
contribution by color c, but the following claim shows that the total reduction is at most n= ln(n=r).
(n\Gamma1)=e, then after one step the cost function is reduced by at most n= ln(n=r). Moreover,
this occurs if we make a jump over distance r with one message from each color.
From this, the result of the lemma follows, because then the number of steps required for dissemination
is at least
r
In order to prove the claim, let d c be the maximum jump made by a message of color c, 0 - c - n=r \Gamma 1.
Obviously, we must have
c
d c - n, since the paths of the messages must be disjoint. The reduction of
the contribution to F (t) by color c is at most
This can be seen as follows. The initial contribution by color c is at most
After a step
over distance d c , the contribution of the PUs which are within distance d c remains unchanged. This
contribution is
Furthermore, the contribution of the other PUs becomes
The reduction due to the step made by color c is therefore at most
From (3), it follows that the total reduction (due to all steps made by all colors) is bounded by
\DeltaF -
It needs to be shown that this expression is at most n= ln(n=r). 'Powering,' we obtain
dc \Deltaln(n=d c
d
dc
d
The Lagrange multiplicator theorem (see, e.g., [10, Section 4.3]) gives that the product of factors with a
fixed sum is maximal if all factors are equal. Therefore
n=r
dc
It follows that
dc
d
dc
Let a
r \Deltak
We need to show that a k is maximal if k is fixed at its maximum
legal value, which is n. Consider the ratio a k+1 =a k . We have
a k+1
a k
It needs to be shown that a k+1 =a k ? 1. Hence, we must have
r
r
which holds because r - (n \Gamma 1)=e. It follows that the total reduction in cost is at most
4 Linear and Circular Arrays
In this section we analyze gossiping on one-dimensional processor arrays. It is assumed that the time for
routing a packet is given by (1), as long as the paths of the packets do not overlap. As in the previous
section, the distance term, which is of minor importance anyway, is neglected in the rest of this paper.
Furthermore, we write r = t_s/(l · t_l) and express the time needed for gossiping in units of duration l · t_l.
We only present algorithms for circular arrays. Due to their more regular structure, these are slightly
'cleaner', but with minor modifications all results carry over to linear arrays.
4.1 Basic Approaches
For gossiping on a circular array with n PUs, there are two trivial approaches. Each of them is good in
an extreme case.
1. Every PU sends a packet containing its data to the left and right. The packets are sent on for ⌊n/2⌋
steps.
2. Recursively concentrate the data packets into a selected PU. After that, disseminate the information
to all other PUs by reversing the process.
Let T_1 and T_2 denote the time taken by Approach 1 and Approach 2, respectively. A simple
analysis gives
Lemma 4  T_1 = ⌊n/2⌋ · (1 + r), and T_2 ≃ log_3 n · (n + 2 · r).
Proof: Approach 1 consists of ⌊n/2⌋ steps, and in each step every packet consists of l flits.
The time taken by Approach 2 is determined as follows. During the concentration phase, the packets
get three times as heavy in every step:
T_conc = Σ_{i=0}^{log_3 n − 1} (r + 3^i) = log_3 n · r + (n − 1)/2.
The expression for the dissemination phase is similar, but in this case the packets consist of n · l flits in
every step:
T_diss = log_3 n · (r + n).
Adding the two contributions and neglecting the lower-order term n/2 gives the stated result.
Approach 1 is good when r is small. Comparing it with the lower bound given in (2) shows that
it is exactly optimal when r = 0. When r goes to infinity, Approach 2 becomes optimal to within a
constant factor. It will outperform Approach 1 for many practical values of r. Still, in principle, the time
consumption of Approach 2 is not even linear in n.
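The following sketch evaluates the two estimates from Lemma 4 as reconstructed above, so that T_1 and T_2 can be compared for concrete n and r; it presupposes the cost expressions derived in the proof and the cost unit l · t_l.

import math

# Estimates from Lemma 4 (as reconstructed above), in units of l * t_l.
def t_approach1(n, r):
    return (n // 2) * (1 + r)

def t_approach2(n, r):
    return math.log(n, 3) * (n + 2 * r)

if __name__ == "__main__":
    n = 729
    for r in (1, 16, 256, 4096):
        print(r, t_approach1(n, r), round(t_approach2(n, r)))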
4.2 Intermixed Approach
For r ≃ log n, both approaches require Θ(n · log n) time units. This is a factor of log n more than given
by the lower bound. For this intermediate range of r-values, we present an algorithm that establishes a
better trade-off between the start-up and the transfer time. The algorithm consists of three phases and
works with parameters a and b:
Algorithm circgos(n, a, b)
1. Concentrate n/a packets in each of a evenly interspaced PUs, called bridgeheads.
2. For ⌊a/2⌋ steps, send the packets of size n/a among the bridgeheads in both directions, such that
afterwards every bridgehead contains the complete data.
3. In ⌈log_a n⌉ − 1 rounds, repeatedly increase the number of bridgeheads by a factor of a. This will be
done as follows. Let b ≥ ⌊a/2⌋ denote the number of steps allowed in one round, and let k = 2 · b − a + 2.
Every bridgehead partitions the data into k packets of size n/k each. Thereupon, the packets
are broadcast to the new bridgeheads in a pipelined fashion. The packets to the right are sent in order,
whereas the packets to the left are sent in reverse order.
In Phase 2, the packets are circulated around. The description is pleasant because of the circular structure.
In Phase 3, two oppositely directed packet streams are sustained between the bridgeheads. In order to
fully exploit the bidirectional communication links, a bridgehead should not send the same packets to
the left and right. Rightwards the packets should be sent in order, whereas leftwards they should be sent
in reverse order. Figure 1 shows two examples.
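As a sanity check of the packet schedule in Phase 3, the following sketch simulates one round on the segment between two old bridgeheads; it assumes the reading k = 2 · b − a + 2 used above, which is consistent with Figure 1, and asserts that every new bridgehead holds all k packets after b steps.

# One round of Phase 3 on the segment between two old bridgeheads at
# positions 0 and a.  Packet p from the left arrives at position x at step
# p + x; the reversed stream from the right arrives at step p + (a - x).
def packets_received(x, a, b, k):
    from_left = {p for p in range(k) if p + x <= b}
    from_right = {k - 1 - p for p in range(k) if p + (a - x) <= b}
    return from_left | from_right

def check_round(a, b):
    k = 2 * b - a + 2
    for x in range(1, a):                      # every new bridgehead
        assert packets_received(x, a, b, k) == set(range(k))
    return k

if __name__ == "__main__":
    print(check_round(9, 5))   # the situation of Figure 1 (top): 3 packets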
The total time consumption of algorithm circgos is given in the following lemma. We do not consider
all rounding details.
Lemma 5 Let T_cg,f denote the time taken by Phase f of circgos(n, a, b). Then
T_cg,1 ≃ log_3(n/a) · r + n/(2 · a),
T_cg,2 = ⌊a/2⌋ · (r + n/a),
T_cg,3 = (⌈log_a n⌉ − 1) · b · (r + n/(2 · b − a + 2)).
Proof: Phase 1 corresponds to a concentration step on linear arrays of size n/a instead of n. The time
needed for Phase 2 follows by multiplying the number of steps by the time taken by each of them. In order
to prove the time consumption of Phase 3, it needs to be shown that after b steps every new bridgehead
contains the complete data. Consider a new bridgehead and assume it is at distance d ≤ ⌊a/2⌋ to the
closest old bridgehead. This new bridgehead receives the first packet after d steps. After that, it receives
one more packet in Step d + 1 through Step a − d − 1. From that point onwards, it receives two packets in
every step. After Step a − d + x, it contains a + 2 · (x + 1 − d) packets. By setting x = b − a + d, it follows
that after b steps the new bridgehead contains 2 · b − a + 2 = k packets.
Figure 1: Behavior of one round of Phase 3 of algorithm circgos. Every arrow is labeled with the time
steps at which the corresponding packet reaches the PUs. The top figure illustrates the case a = 9 and
b = 5. In this case, 3 packets are routed from the bridgeheads. The bottom figure illustrates a second case,
in which the data is partitioned into 4 packets.
At first glance, it is not clear what the result of Lemma 5 means. In particular, it is not immediately
clear which a and b should be chosen. In order to obtain an impression, we have written a small program
which searches for the optimal values. Table 1 lists some typical results. From these results we conclude
that
• For realistic values of n and r, circgos may be several times faster than the best of Approach 1
and Approach 2. Furthermore, it never performs worse.
• The range of r values for which algorithm circgos is the fastest increases with n.
• The best choices for a and b increase with n and decrease with r. In this range of n and r, b is
approximately equal to a.
729 4397 4493 4973 7373
Table 1: Comparison between the time taken by Approach 1 (top), Approach 2 (middle) and circgos
(bottom). The values of the parameters a and b for which the result for circgos was obtained are given
behind its time consumption. The cost unit is l · t_l.
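A possible sketch of the small search program mentioned above is given below; it uses the phase costs of Lemma 5 as reconstructed here and heuristic search ranges for a and b, so its output should be read as an approximation of the tabulated optima rather than an exact reproduction.

import math

# Phase costs of circgos(n, a, b) in units of l * t_l (Lemma 5 as
# reconstructed above); the search ranges for a and b are heuristic.
def t_circgos(n, a, b, r):
    t1 = math.log(max(n / a, 1.0), 3) * r + n / (2 * a)   # concentration
    t2 = (a // 2) * (r + n / a)                            # circulation
    rounds = max(math.ceil(math.log(n, a) - 1e-9) - 1, 0)
    t3 = rounds * b * (r + n / (2 * b - a + 2))            # dissemination
    return t1 + t2 + t3

def best_parameters(n, r):
    best = None
    for a in range(3, n + 1):
        for b in range((a + 1) // 2, 2 * a + 1):
            cost = t_circgos(n, a, b, r)
            if best is None or cost < best[0]:
                best = (round(cost), a, b)
    return best

if __name__ == "__main__":
    print(best_parameters(729, 16))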
Notice that when a = n, circgos is identical to Approach 1. Furthermore, circgos(n, 3, 1) behaves
almost identically to Approach 2, but after log_3 n steps the three bridgeheads contain the complete data.
The dissemination phase therefore requires one routing step fewer than in Approach 2. This explains
why circgos(n, 3, 1) is always faster than Approach 2, which does not profit from the wrap-around
connections.
Although the exact choice for the parameters is essential for obtaining the best performance (as shown
in Table 1), the asymptotic analysis remains unchanged if we take b = a. The reason for this is that
using different parameters can reduce the amount of transferred data by at most a factor of two. For
proving asymptotic results this is fine, but for practical applications this is highly undesirable. On the
other hand, we might have used more parameters: the factor a by which the number of bridgeheads is
increased in every round of Phase 3 might have been chosen differently, together with its corresponding
optimal choice of b.
Theorem 2 Let r < n. The number of time units needed by circgos(n, n/r, n/r) is O(n · (1 + log n/log(n/r))).
Proof: From Lemma 5 it follows that
When the second term never dominates because
Replacing the factor of ln r in the third term by ln n gives the theorem.
Thus, algorithm circgos gives a continuous transition from gossiping times O(n) (as achieved by Approach
1 when r = O(1)) to gossiping times O(n · log n) (as achieved by Approach 2 when r ≃ n). For
intermediate r values, circgos may be substantially faster:
Corollary 1 Algorithm circgos is asymptotically optimal for all values of r. For all log n ≤ r ≤ n^{1−ε},
ε > 0, it is Ω(log n) times faster than Approach 1 and Approach 2.
Proof: The optimality claim follows by comparing the result of Theorem 2 with the lower bound given
in Theorem 1. For r ≥ n, optimality was already established before.
For r ≥ log n, Approach 1 and Approach 2 both take Ω(n · log n) time units. On the other hand, when
r ≤ n^{1−ε}, circgos has a time consumption of at most O(n · log n/log(n/r)) = O(n/ε).
4.3 Generalization
In the previous section we presented a gossiping algorithm for linear and circular arrays which is optimal
to within a constant factor for all values of r. In this section we show that this immediately implies an
asymptotically optimal algorithm for 2- and 3-dimensional meshes and tori. In fact, it immediately gives
an asymptotically optimal algorithm for d-dimensional meshes, as long as d is constant. The reason to
develop gossiping algorithms for 2- and higher-dimensional meshes (as will be done in subsequent sections)
is therefore to obtain algorithms with good practical behavior, paying attention to the constants.
The algorithm for gossiping on a d-dimensional mesh consists of d phases. In Phase f, 0 ≤ f ≤ d − 1,
the packets participate in a gossip along axis f. For each of these one-dimensional gossips, the most
efficient algorithm is taken. As the size of the packets increases in each phase (in Phase f, the packets
consist of l · n^f flits each), this is not necessarily the same algorithm in all phases. The described
algorithm will be denoted by high-dim-gos.
Theorem 3 For constant d, high-dim-gos has asymptotically optimal performance.
Proof: Denote the time taken by Phase f of high-dim-gos by T_hdg,f, and the time taken by the optimal
gossiping algorithm by T_opt. Clearly, T_opt exceeds the time required for making all information available
in all PUs, starting with the situation at the beginning of Phase d − 1. In Phase d − 1, only a fraction
of 1/d of the connections is used. Using all connections would make the algorithm faster by at most a
factor of d. Thus, T_hdg,d−1 ≤ d · T_opt. As T_hdg,f ≤ T_hdg,d−1 for all f, it follows that
Σ_f T_hdg,f ≤ d^2 · T_opt.
Thus, once again, we would like to emphasize that achieving asymptotically optimal performance is
not the real issue, but constructing algorithms with good practical behavior.
Algorithm high-dim-gos does not take advantage of the all-port communication capability. To do
better, the l flits in each PU are colored with d colors: Flit ⌊c · l/d⌋ to Flit ⌊(c + 1) · l/d − 1⌋ are given Color c,
0 ≤ c ≤ d − 1. After that we perform d independent gossiping operations with parameter r′ = d · r,
where l′ = l/d. In Phase f, the packets with Color c participate in an operation along axis
(f + c) mod d. This algorithm will be denoted by high-dim-gos′. It has the same start-up time as
high-dim-gos, but the transfer time is reduced by a factor of d.
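The flit colouring and the axis schedule of high-dim-gos′ can be written down directly; the following sketch assumes the ranges and the schedule (f + c) mod d as described above.

# Flit colouring and axis schedule of high-dim-gos' as described above.
def colour_ranges(l, d):
    # colour c owns flits floor(c*l/d) .. floor((c+1)*l/d) - 1
    return [(c * l // d, (c + 1) * l // d - 1) for c in range(d)]

def axis_schedule(d):
    # entry [f][c] is the axis used by colour c in phase f
    return [[(f + c) % d for c in range(d)] for f in range(d)]

if __name__ == "__main__":
    print(colour_ranges(l=64, d=3))
    print(axis_schedule(3))   # every phase uses all d axes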
5 Two-Dimensional Arrays
In this section we analyze the gossiping problem on two-dimensional n × n tori. First we investigate what
can be obtained by overlapping two one-dimensional gossiping algorithms, one along the rows and one
along the columns, as sketched in Section 4.3. After that, a truly two-dimensional algorithm is presented,
which for some values of n and r performs significantly better.
5.1 Basic Approaches
The simplest idea is to apply high-dim-gos 0 with a choice from the presented one-dimensional gossiping
algorithms in each phase. Let Approach i-j denote the algorithm in which first Approach i is applied,
and then Approach j. Approach 1-2 can be excluded, since it will never outperform the best of the other
approaches. Let T 0
i;j denote the number of time steps taken by Approach i-j. Using the results from
Section 4, we find
Lemma 6
The time consumption for applying the best version of circgos in both phases cannot be fitted in a
simple formula, but it is better than the best of the above algorithms by almost the same factors as those
found before. In Table 2 some numerical results are given. Because the packets have size n · l during
Phase 2 (which dominates the total time consumption), the transition between the various algorithms now
occurs for much larger r than in Table 1.
5.2 Intermixed Approach
In this section we present a two-dimensional analogue of algorithm circgos. The algorithm as described
below does not use the horizontal and vertical connections simultaneously. Such an algorithm is called
uni-axial. By applying the coloring technique of high-dim-gos 0 , the transfer time is halved.
The algorithm first creates a situation comparable to the one we find after Phase 2 of circgos. For
this, three routing phases are required:
Algorithm torgos(n, a, b)
1. Each PU_{i,j} where (j − i) mod (n/a) = 0 is designated as a bridgehead. Note that there are a
bridgeheads in every row. In each bridgehead concentrate n/a packets from its row.
2. For ⌊a/2⌋ steps, send packets of size n/a along the rows among the bridgeheads in both directions,
such that afterwards, every bridgehead contains the complete data of its row.
3. For ⌊a/2⌋ steps, send packets of size n along the columns among the bridgeheads in both directions,
such that afterwards, every bridgehead contains a · n data.
Now, each bridgehead in Row i holds the data from every Row i′ where (i′ − i) mod (n/a) = 0.
This is the result of the diagonal way in which the bridgeheads were chosen. An example is given in
Figure 2. Thus, all data are available on the diagonals of every n/a × n/a submesh. The algorithm
proceeds in ⌈log_a n⌉ − 1 further rounds. In each round, the number of bridgeheads is increased by a factor of a.
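The bridgehead pattern of torgos can be checked mechanically; the sketch below assumes the condition (j − i) mod (n/a) = 0 as read above and verifies that every row contains a bridgeheads which form the diagonals of the n/a × n/a submeshes.

# Bridgehead pattern of torgos: PU (i, j) is a bridgehead iff
# (j - i) mod (n/a) == 0.
def bridgeheads(n, a):
    s = n // a
    return {(i, j) for i in range(n) for j in range(n) if (j - i) % s == 0}

def check(n, a):
    s = n // a
    bh = bridgeheads(n, a)
    # a bridgeheads in every row
    assert all(sum((i, j) in bh for j in range(n)) == a for i in range(n))
    # inside every s x s submesh the bridgeheads lie on its main diagonal
    for (i, j) in bh:
        assert i % s == j % s
    return len(bh)

if __name__ == "__main__":
    print(check(n=27, a=3))   # 27 * 3 = 81 bridgeheads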
Figure 2: Left: The bridgeheads in a two-dimensional torus for the case a = 3. After Phase 3,
each bridgehead in Row i knows all data from every Row i′, where (i′ − i) mod (n/a) = 0. The
bridgeheads from nine consecutive rows therefore know the complete data. Right: The bridgeheads (new
ones are drawn smaller) during the first round of Phase 4. Hereafter, each bridgehead in Row i knows
all data from every Row i′, where (i′ − i) mod (n/a^2) = 0.
Invariant 1 At the beginning of Round t, 1 ≤ t ≤ log_a n, each PU holds n · a^t data, and all data are
available on the diagonals of all n/a^t × n/a^t submeshes.
This implies that the gossiping has been completed when t = log_a n. A more formal description of the
last phase is given below:
Algorithm torgos(n, a, b) (continued)
4. For t = 1, ..., ⌈log_a n⌉ − 1, repeatedly increase the number of bridgeheads by a factor of a by
inserting a − 1 new bridgeheads between any pair of two consecutive bridgeheads in every row. Let m
denote the amount of data held by each bridgehead at the beginning of the round.
a. The information from the old bridgeheads in a row is passed to the a − 1 new bridgeheads in
b steps with packets of size m/(2 · b − a + 2).
b. For ⌊a/2⌋ steps, send packets of size m along the columns among the bridgeheads (old as well as
new) in both directions, so that afterwards, every bridgehead contains a · m data.
The three phases that operate along the rows are identical to those of circgos. Only Phase 3 and
Phase 4.b, which add the information of a row to other rows, are new. The following analogue of
Lemma 5 is straightforward:
Lemma 7 Let T_tg,f denote the number of time units needed for Phase f of torgos(n, a, b).
Proof: The time taken by Phase 1 is given in the proof of Lemma 5. In Phases 2, 3 and 4.b, there are
⌈log_a n⌉ + 1 rounds in total, and each round consists of ⌊a/2⌋ routing steps. The size of the packets increases
from n/a up to n^2/a over the rounds. The transfer time is therefore bounded by ⌊a/2⌋ · n^2/(a − 1).
The time taken by Phase 4.a is determined analogously.
Just as circgos, torgos constitutes a compromise between simplicity and performance. The routing
time would be somewhat smaller if the a-values and the corresponding b-values were chosen in dependence
on the growing size of the packets. But even the presented basic version of torgos performs fairly well.
In
Table
2 we compare the performance of all two-dimensional algorithms. It can be seen that torgos is
442 1482 5382 13182
Table 2: Comparison of the results obtained for gossiping on an n × n torus. For every pair of values (n, r),
the number of time steps is given (from top to bottom) for Approach 1-1, Approach 2-1, Approach 2-2,
the variant of high-dim-gos′ that uses circgos, and torgos. For the latter two, the values of (the second) a and b for
which the result was obtained are indicated.
always the most efficient algorithm, but for small r-values the difference with Approach 2-1 is marginal.
The performance of the variation of high-dim-gos 0 that utilizes circgos in both phases is better than
that of the simple approaches but nevertheless slightly disappointing, particularly if one considers that
this algorithm can choose its a and b values in each phase independently. Generally, we can conclude
that if one aims for simplicity, one should utilize Approach 2-1. If a slightly more involved algorithm is
acceptable, one should use torgos, which may be more than twice as fast.
6 Higher Dimensional Arrays
For the success of torgos it was essential that the packets were concentrated on diagonals at all times,
as formulated in Invariant 1. Starting in such a situation, the invariant could be efficiently reestablished
by copying horizontally (Phase 4.a), and adding together vertically (Phase 4.b).
The main problem in the construction of a gossiping algorithm for d-dimensional meshes is that it is
unclear how the concept of a diagonal can be generalized. Once we have such a 'diagonal', we can perform
an analogue of torgos. In the following section we describe the appropriate notion of d-dimensional
diagonals. After that, we specify and analyze the gossiping algorithm for d ≥ 3.
6.1 Generalized Diagonals
The property of a two-dimensional diagonal that must be generalized is the possibility of 'seeing' a full
and non-overlapping hyperplane, when looking along any of the coordinate axes. We will try to explain
what this means.
Let the unit-cube I^d ⊂ R^d be defined as I^d = [0, 1) × ··· × [0, 1). When projecting the diagonal of
I^2 orthogonally on the x_0-axis, we obtain the set [0, 1) × 0; when projecting on the x_1-axis, we obtain
0 × [0, 1). These projections are bijections (one-to-one mappings) from the diagonal to the sides of I^2.
For algorithm torgos, this means that the information from diagonals can be replicated efficiently. A
diagonal behaves like a magical mirror: data received along one axis can be reflected along the other
axis. Not only in one direction, but in both directions. This requirement of problem-free copying between
diagonals in adjacent submeshes along all coordinate axes leads to the following
Definition 1 A subset D of I^d is called a d-dimensional diagonal if the orthogonal projections of D onto
any of the bounding hyperplanes of I^d are bijective.
We will prove that the union of the following d sets satisfies the property of a d-dimensional diagonal:
D_i = { (x_0, ..., x_{d−1}) ∈ I^d : x_0 + ··· + x_{d−1} = i },  0 ≤ i ≤ d − 1.
Notice that D_0 = {(0, ..., 0)}. On I^2, the diagonal consists of {(0, 0)} as well as the points in
{(x_0, x_1) : 0 ≤ x_0, x_1 < 1, x_0 + x_1 = 1}. The diagonals of I^2 and I^3 are illustrated in Figure 3. On a torus the d partial
hyperplanes are connected in a topologically interesting way.^1
Figure 3: Diagonals of I^2 and I^3. The bounding lines that belong (do not belong) to the considered sets
are drawn solid (dashed). The corner points of D_1 are not elements of it. Projecting I^3 downwards maps
the diagonal bijectively onto the ground plane.
Lemma 8 D = D_0 ∪ ··· ∪ D_{d−1} is a diagonal of I^d in the sense of Definition 1.
Proof: As the D_i are completely symmetric, we can concentrate on the projection Π_0 along the x_0-axis.
It is easy to check that for all i, 0 ≤ i ≤ d − 1,
Π_0(D_i) = { (x_1, ..., x_{d−1}) ∈ I^{d−1} : i − 1 < x_1 + ··· + x_{d−1} ≤ i }.
Clearly, these sets are all disjoint, so Π_0 is injective. On the other hand, the union of these sets is all of
I^{d−1}, which implies the surjectivity of Π_0.
In order to extend the definitions to d-dimensional n × ··· × n cubes, one has to multiply all bounds
on every x_i by n. Thus, the diagonal D_n can be defined concisely as
D_n = { (x_0, ..., x_{d−1}) ∈ [0, n)^d : (x_0 + ··· + x_{d−1}) mod n = 0 }.    (4)
On grids, only the points with integral coordinates should be taken. An example is given in Figure 4.
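The defining property of D_n is easy to verify computationally for small instances; the following sketch enumerates the grid diagonal of (4) and checks that dropping any one coordinate maps it bijectively onto the remaining (d − 1)-dimensional grid.

from itertools import product

# Grid diagonal of (4) and its projection property.
def diagonal(n, d):
    return [x for x in product(range(n), repeat=d) if sum(x) % n == 0]

def projections_bijective(n, d):
    D = diagonal(n, d)
    if len(D) != n ** (d - 1):
        return False
    for axis in range(d):
        proj = {x[:axis] + x[axis + 1:] for x in D}
        if len(proj) != len(D):          # not injective, hence not bijective
            return False
    return True

if __name__ == "__main__":
    print(len(diagonal(4, 3)), projections_bijective(4, 3))   # 16 True (cf. Figure 4)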
So, we successfully defined d-dimensional diagonals. The reader is advised to obtain a full understanding
of the case d = 3. For us it was helpful to construct a model of paper (cardboard would have
1 It is easy to see that the d subsets together form a closed (d − 1)-dimensional subspace of the d-dimensional torus. On
two-dimensional tori they constitute a circle and on three-dimensional tori a two-dimensional torus. Generally the diagonal
of a d-dimensional torus is a (d − 1)-dimensional torus, but a proof of this is beyond the scope of the paper.
Figure 4: The diagonal of a 4 × 4 × 4 grid: 16 points, such that if they were occupied by towers in a
three-dimensional chess game, none of them could capture another.
been even better). Such a model makes it easy to convince oneself that the required property, that looking
along a coordinate axis indeed gives a full but non-overlapping view of the hyperplanes, is satisfied.
Though we are not aware of any result in this direction, we are not sure that we are the first to define
this concept. Still we are very pleased with the utmost simplicity of Equation (4) and the elegance of the
proof of Lemma 8.
6.2 Details of the Algorithm
With the defined diagonals, we can now generalize torgos for gossiping on tori of arbitrary dimensions.
The algorithm is almost the same as before. With a few extra routing steps, the algorithm can also be
applied for meshes. Again, the presented algorithm is uni-axial. By applying the coloring technique of
high-dim-gos 0 , the transfer time can be reduced by a factor of d. By rows we mean one-dimensional
subspaces parallel to the x 0 -axis.
Algorithm cubgos(n, d, a, b)
1. In each row, a PUs are designated as bridgeheads, namely the PUs which lie on the diagonal of
their n/a × ··· × n/a submesh. Concentrate in each bridgehead the n/a packets from its row.
2. For ⌊a/2⌋ steps, send packets of size n/a along the rows among the bridgeheads in both directions,
such that afterwards, every bridgehead contains the complete data of its row.
3. Perform d − 1 rounds, each consisting of ⌊a/2⌋ routing steps. In Round i, 1 ≤ i ≤ d − 1, packets of size
a^{i−1} · n are routed along the x_i-axis among the bridgeheads in both directions, such that afterwards,
every bridgehead contains a^i · n data.
Now each bridgehead in Row i knows the data of a^{d−1} rows, and all data are available on the diagonal
of every n/a × ··· × n/a submesh. Hereafter, ⌈log_a n⌉ − 1 further rounds are performed. In each round,
the number of bridgeheads is increased by a factor of a.
Invariant 2 At the beginning of Round t, 1 ≤ t ≤ log_a n, each PU holds n · a^{(d−1)·t} data, and all data
are available on the diagonal of every n/a^t × ··· × n/a^t submesh.
When t = log_a n, this implies that the gossiping has been completed. A more formal description of the
last phase is given below.
Algorithm cubgos(n, d, a, b) (continued)
4. For t = 1, ..., ⌈log_a n⌉ − 1, repeatedly increase the number of bridgeheads by a factor of a by
inserting a − 1 new bridgeheads between any pair of consecutive bridgeheads in every row. Let m denote
the amount of data held by each bridgehead at the beginning of the round.
a. The information from the old bridgeheads in a row is passed to the a − 1 new bridgeheads in
b steps with packets of size m/(2 · b − a + 2).
b. Perform d − 1 subphases, each consisting of ⌊a/2⌋ routing steps. In Subphase i, 1 ≤ i ≤ d − 1, packets
of size a^{i−1} · m are routed along the x_i-axis among the bridgeheads (old as well as new) in both
directions. Afterwards, every bridgehead contains a^{d−1} · m data.
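For the bridgehead selection of cubgos we use the reformulation that a PU lies on the diagonal of its n/a × ··· × n/a submesh exactly when the sum of its coordinates is 0 modulo n/a; this follows from (4) but is our reading of the construction, so the sketch below only illustrates it.

from itertools import product

# A PU is a bridgehead of cubgos iff the sum of its coordinates is 0 mod n/a
# (our reformulation of "lies on the diagonal of its n/a x ... x n/a submesh").
def bridgeheads_per_row(n, a, d):
    s = n // a
    counts = {sum((x0 + sum(rest)) % s == 0 for x0 in range(n))
              for rest in product(range(n), repeat=d - 1)}
    return counts

if __name__ == "__main__":
    print(bridgeheads_per_row(n=8, a=4, d=3))   # {4}: a bridgeheads in every row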
The following analogue of Lemma 7 is straightforward:
Lemma 9 Let T 0
cg,f denote the number of time units needed for Phase f of cubgos(n; d; a; b). Then
(a
log a
Denote the version of the algorithm that utilizes coloring of the packets in order to fully exploit the
all-port communication capability by cubgos 0 . Then we get
Theorem 4 Let T 0
cg 0 denote the number of time units taken by cubgos 0 (n; d; a; b). Then
log a n \Delta r:
Proof: In Lemma 9 the third expression dominates the other two by far.
9 280 ( 9,
90824
Table
3: Comparison of the results obtained for gossiping on a three-dimensional n \Theta n \Theta n torus. For
every pair of values (n; r), the number of time steps are given (from top to bottom) for application of
high-dim-gos 0 with the best choice from Approach 1 and Approach 2 (indicated), for high-dim-gos 0
that utilizes circgos in every phase, and for cubgos 0 . For the latter two, the values of (the last) a and
b are also indicated.
Thus, the transfer time is within a factor of a/(a − 1) from optimality for all d, and the start-up time
is within a factor of
(d · ⌈a/2⌉ · log_a n) / log_{2·d+1} n^d ≃ a · log(2 · d + 1) / (2 · log a)
from optimality. This appears to be a really strong result.
From
Table
3 it can be seen that for some n and r, cubgos 0 is substantially faster than high-dim-gos 0 ,
even though the latter has much more freedom of choosing its parameters. Actually, if one is going to
apply high-dim-gos 0 , then one can just as well take the best of Approach 1 and Approach 2 in each of
the phases.
7 Experiments
To validate the efficiency of the developed algorithms, we implemented them on the Intel Paragon [2]. In
this section, the experimental results are presented.
System Description. The Paragon system used for the experimentation consists of 140 PUs, each
consisting of two 50MHz i860 XP microprocessors. One processor, called the message processor, is
dedicated to communication, so that the compute processor is released from message-passing operations.
Every PU is connected to a Mesh Routing Chip (MRC), and the MRCs are arranged in a 2-dimensional
mesh which is 14 nodes high and 10 nodes wide. The links can transfer data at a rate up to 175 MB/s
in both directions simultaneously.
The algorithms were implemented using the NX message-passing library. NX is the programming
interface supplied by Intel. Other communication layers for the Paragon, such as SUNMOS [9], achieve
higher bandwidth and lower latency than NX, but were not available.
Some features of the Paragon are particularly important in order to understand the performance of
the implemented algorithms, namely
• The MRCs implement dimension order wormhole routing, i.e., packets are first routed along the
rows to their destination columns and from there along the columns to their destinations. We
employed this fact to embed a circular array into the mesh topology of the Paragon.
• When a message enters its destination before the receive is posted, the OSF/1 operating system
buffers the message in a system buffer. When the corresponding receive is issued, the message is
copied from the system buffer to the application buffer. This buffering is very expensive and can be
avoided if the recipient first sends a zero-length synchronization message to the sender indicating
that it has posted the receive. All implementations make use of this mechanism (a sketch is given after this list).
• In previous experiments on the Paragon, we determined that the startup cost of a message transmission
under NX is about 150 µs. Short messages incur a somewhat lower startup cost than long
messages, because they are sent immediately whereas long messages wait until sufficient space is
available at the destination processor. The experiments also showed that the uni-directional transfer
rate from PU to MRC under NX is about 87 MB/s (11.5 ns per byte), whereas the bi-directional
transfer rate is approximately 44 MB/s. Furthermore, the bi-directional transfer rate between two
MRCs is 175 MB/s. Because of this, the topology of the Paragon can be viewed as a torus.
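The zero-length synchronization mechanism mentioned in the second item above can be sketched as follows; send_msg, post_recv and wait are hypothetical stand-ins for message-passing calls, not the actual NX API.

# Receiver-initiated rendezvous to avoid system buffering; send_msg,
# post_recv and wait are hypothetical stand-ins, not the actual NX calls.
READY_TAG, DATA_TAG = 1, 2

def synchronized_send(dest, data, send_msg, recv_msg):
    recv_msg(tag=READY_TAG, source=dest)   # block until the receiver is ready
    send_msg(dest, data, tag=DATA_TAG)     # the receive is already posted

def synchronized_recv(source, buf, post_recv, send_msg, wait):
    req = post_recv(tag=DATA_TAG, source=source, buf=buf)  # post receive first
    send_msg(source, b"", tag=READY_TAG)   # zero-length "go ahead" message
    return wait(req)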
Modifications to the Algorithms. The implemented algorithms deviate slightly from the algorithms
described in the previous sections. This was done for two reasons. First, because every PU of the Paragon
is connected to an MRC and not directly to its (up to) 4 neighbors, we cannot assume the full-port
model in which a PU can send and receive a message in all 4 wind directions simultaneously. Second, as
mentioned above, the uni-directional transfer rate of the Paragon using NX is about 87 MB/s, whereas the
bi-directional transfer rate is approximately 44 MB/s. This shows that it is more accurate to assume that
a PU cannot send and receive simultaneously, although this is not a feature of the Paragon architecture
but a feature of NX.
We give two examples of how (the analyses of) the algorithms need to be modified in order to reflect
these communication characteristics of the Paragon. First, Approach 1 for gossiping on a circular array
of n PUs now consists of n − 1 steps, and in each step every PU must send a message to one of its
neighbors and receive a message from its other neighbor. Since this cannot happen simultaneously, the
time consumption of Approach 1 is given by T_1 ≃ 2 · (n − 1) · (1 + r).
Similarly, under the modified model it takes ⌈log_2 n⌉ instead of ⌈log_3 n⌉ steps to concentrate all data into
one PU and another ⌈log_2 n⌉ steps to broadcast the data to every other PU. The time taken by Approach
2 is therefore approximately given by T_2 ≃ ⌈log_2 n⌉ · (n + 2 · r).
A detailed analysis of all algorithms under this model is omitted, because the modifications are rather
straightforward. Furthermore, the main purpose of this section is to show that the developed techniques
actually work in practice, and not that the performance model is accurate. A detailed performance model
should incorporate that short messages incur a lower startup cost than long messages, that the send and
receive overheads can differ significantly, etc. This is beyond the scope of this paper.
Figure 5: Performance of the gossiping algorithms on a circular array with 64 PUs (time per byte versus
message length in bytes, for Approach 1, Approach 2, and circgos).
An additional remark concerns the implementations of algorithms circgos and torgos. In the
implementations we used 3 parameters: a_1, a_2 and k, where a_1 is the number of bridgeheads, a_2 is the
factor by which the number of bridgeheads is increased in every round, and n/k is the packet size during
Phase 3 (4.a) of circgos (torgos). In the descriptions of circgos and torgos, we have set a_1 = a_2 = a
and k = 2 · b − a + 2. For asymptotic analysis this is fine, but on such a moderate size platform rounding
errors may be introduced which can affect the execution times significantly.
Experimental Results. The circular array algorithms were implemented on the Paragon by embedding
a circular array into the mesh. Figure 5 compares the performance achieved by algorithm
circgos on a circular array with 64 PUs with the performance achieved by Approach 1 and the
performance attained by Approach 2. Every data point measured for the implementation of algorithm
circgos is labeled with the values of a_1, a_2 and k for which the result was obtained. In order to place
the data on a common scale, we divided the time taken by each algorithm by the message length m. The
total execution time is obtained by multiplying the time per byte by the message length.
It can be seen that algorithm circgos is always faster than Approach 2. For messages up to 256
bytes, the best results are obtained with a_1 = 2, a_2 = 2 and k = 1. With this set of parameters, the
behavior of circgos is almost the same as the behavior of Approach 2, except that it saves 1 startup and
the transmission of a packet of size l · n/2 at the end of the concentration phase. When the message size
increases, the fastest results are obtained when the number of bridgeheads a_1 also increases, but a_2 and k
remain fixed. With these parameters, algorithm circgos first concentrates data in a few selected nodes
as in Approach 2, after that it circulates the packets around as in Approach 1, and finally it broadcasts
the data to all non-bridgeheads, again as in Approach 2. Other values for a_2 and k always performed
worse than a_2 = 2 and k = 1. When the message size increases beyond 16 KB, the best results are
obtained when a_1 = 64. For this value of a_1, algorithm circgos and Approach 1 behave identically,
as can be seen since the data points coincide.
Figure 6 shows the performance of 6 gossiping routines on an 8 × 8 configuration of the Paragon: (1)
Approach 1-1, (2) Approach 2-2, (3) Approach 2-1, (4) algorithm torgos with parameters a_1 = 2, a_2 = 2
and k = 1, (5) Approach 1-1 using black/white packets, and (6) the gossiping routine gcolx provided by
the NX message-passing library. The fifth algorithm implementation does not partition the packet in every
PU into a white and a black packet, but first performs a gossip in every 2 × 2 submesh, after which each
PU_{i,j} where i + j is odd colors its packet white, and each PU_{i,j} where i + j is even colors its packet black.
This was done because of the one-port restriction. Furthermore, the first four algorithm implementations
do not employ the technique of interleaving horizontal and vertical messages. This was done because on
such a moderate size network one does not save many startups by using a concentrate/broadcast approach
instead of a store-and-forward approach. Moreover, if the PUs were divided into black and white PUs,
Figure 6: Performance of the gossiping algorithms on a 2-dimensional mesh with 64 PUs (time per byte
versus message length in bytes, for Approach 1-1, Approach 2-2, Approach 2-1, Approach 1-1 with
black/white nodes, and gcolx). (a) Messages smaller than 1KB. (b) Messages larger than 1KB.
the differences would almost vanish.
Because the differences between the various gossiping algorithms are rather small on this moderate
size machine, we divided the experimental data into results for messages smaller than 1KB and messages
larger than 1KB. Comparing Approach 1-1, 2-2, 2-1 and torgos(2; 2; 1), we find that torgos is the
fastest algorithm for messages up to 3KB. For larger messages, Approach 1-1 yields the best results. For
a message of 3KB, the ratio between the startup cost of the message transmission and the transmission
time of the message is about 4.2, and for such a small ratio Approach 1-1 turns out to be the fastest
gossiping algorithm. Furthermore, as was indicated in Section 6, Approach 2-2 and 2-1 have become
superfluous: they never outperform the fastest of Approach 1-1 and torgos(2, 2, 1).
Comparing Approach 1-1 and torgos(2; 2; 1) with the gossiping routine gcolx supplied by NX,
one can see that gcolx only yields the best results when the message size is very small. For messages
larger than about 200 bytes, the fastest of our algorithm implementations always outperforms the vendor
supplied routine. The largest relative difference was measured for messages of 1.5KB. For this message
length, gcolx requires 3.96 -s/byte, whereas torgos(2; 2; 1) needs 3.04 -s/byte, which corresponds to
a performance improvement by a factor of about 1.3. We believe that this supports our claim that the
developed algorithms have practical relevance.
Also included in Figure 6 are the results obtained for an implementation of Approach 1-1 in which the
nodes are divided into white and black nodes, and in which the white nodes route their packets at all times
orthogonally to the black ones. It can be seen that except for very small packets, this implementation
always produces the best results. As stated before, this is due to the fact that on this moderate size
machine one does not save many startups by using a concentrate/broadcast approach instead of a store-
and-forward approach, especially when the nodes are split into white and black nodes. The results for this
algorithm are mainly included here to show that the idea of interleaving horizontal and vertical packets
can be used advantageously.
8 Conclusion
We presented gossiping algorithms for meshes of arbitrary dimensions. We optimized the trade-off between
contributions due to start-ups and those due to the bounded capacity of the connections. This
enabled us to reduce the time for gossiping in theory as well as practice for an important range of the
involved parameters. Furthermore, we presented an interesting generalization of a diagonal, which can
be applied to arbitrary dimensions. This seems to have wider applicability.
Acknowledgments
Computational support was provided by KFA Jülich, Germany.
--R
'On the Efficiency of Global Combine Algorithms for 2-D Meshes with Wormhole Routing,' Journal of Parallel and Distributed Computing
'Intel Paragon XP/S - Architecture, Software Environment, and Performance,' Technical Report KFA-ZAM-IB-9409
Advanced Computer Architecture
'Randomized Multipacket Routing and Sorting on Meshes,' Algorith- mica
'Fast Gossiping for the Hypercube,' SIAM J.
'SUNMOS for the Intel Paragon: A Brief User's Guide
'A Survey of Wormhole Routing Techniques in Direct Networks,' IEEE Computer
'Circuit-Switched Broadcasting in Torus Networks,' IEEE Transactions on Parallel and Distributed Systems
'k-k Routing, k-k Sorting, and Cut-Through Routing on the Mesh,' Journal of Algorithms
'A Logarithmic Time Sort for Linear Size Networks,' Journal of the ACM
'Data Communications in Parallel Block Predictor-Corrector Methods for solving ODEs,' Techn.
'Bandwidth-Optimal Complete Exchange on Wormhole-Routed 2D/3D Torus Networks: A Diagonal-Propagation Approach,' IEEE Transactions on Parallel and Distributed Systems
--TR
--CTR
Michal Soch , Paval Tvrdk, Time-Optimal Gossip of Large Packets in Noncombining 2D Tori and Meshes, IEEE Transactions on Parallel and Distributed Systems, v.10 n.12, p.1252-1261, December 1999
Jop F. Sibeyn, Solving Fundamental Problems on Sparse-Meshes, IEEE Transactions on Parallel and Distributed Systems, v.11 n.12, p.1324-1332, December 2000
Ulrich Meyer , Jop F. Sibeyn, Oblivious gossiping on tori, Journal of Algorithms, v.42 n.1, p.1-19, January 2002
Francis C.M. Lau , Shi-Heng Zhang, Fast Gossiping in Square Meshes/Tori with Bounded-Size Packets, IEEE Transactions on Parallel and Distributed Systems, v.13 n.4, p.349-358, April 2002
Yuanyuan Yang , Jianchao Wang, Near-Optimal All-to-All Broadcast in Multidimensional All-Port Meshes and Tori, IEEE Transactions on Parallel and Distributed Systems, v.13 n.2, p.128-141, February 2002
Yuanyuan Yang , Jianchao Wang, Pipelined All-to-All Broadcast in All-Port Meshes and Tori, IEEE Transactions on Computers, v.50 n.10, p.1020-1032, October 2001 | wormhole routing;mesh networks;torus networks;gossip;global communication |
282725 | An Efficient Algorithm for Row Minima Computations on Basic Reconfigurable Meshes. | Abstract: A matrix A of size $m \times n$ containing items from a totally ordered universe is termed monotone if, for every i, j, $1 \le i < j \le m$, the minimum value in row j lies below or to the right of the minimum in row i. Monotone matrices, and variations thereof, are known to have many important applications. In particular, the problem of computing the row minima of a monotone matrix is of import in image processing, pattern recognition, text editing, facility location, optimization, and VLSI. Our first main contribution is to exhibit a number of nontrivial lower bounds for matrix search problems. These lower bound results hold for arbitrary, infinite, two-dimensional reconfigurable meshes as long as the input is pretiled onto a contiguous $n \times n$ submesh thereof. Specifically, in this context, we show that every algorithm that solves the problem of computing the minimum of an $n \times n$ matrix must take $\Omega(\log\log n)$ time. The same lower bound is shown to hold for the problem of computing the minimum in each row of an arbitrary $n \times n$ matrix. As a byproduct, we obtain an $\Omega(\log\log n)$ time lower bound for the problem of selecting the kth smallest item in a monotone matrix, thus extending the best previously known lower bound for selection on the reconfigurable mesh. Finally, we show an $\Omega \left( {\sqrt {\log\log n}} \right)$ time lower bound for the task of computing the row minima of a monotone $n \times n$ matrix. Our second main contribution is to provide a nearly optimal algorithm for the row-minima problem: With a monotone matrix of size $m \times n$ with $m \le n$ pretiled, one item per processor, onto a basic reconfigurable mesh of the same size, our row-minima algorithm runs in O(log n) time if $1 \le m \le 2$ and in $O\!\left( {{{{\log n} \over {\log m}}}\log\log m} \right)$ time if m > 2. In case $m = n^\epsilon$ for some constant $\epsilon,$$(0 < \epsilon \le 1),$ our algorithm runs in O(log log n) time. | Introduction
Recently, in an attempt to reduce its large computational diameter, the mesh-connected architecture
has been enhanced with various broadcasting capabilities. Some of these involve endowing the
mesh with static buses, that is buses whose configuration is fixed and cannot change; more recently,
researches have proposed augmenting the mesh architecture with reconfigurable broadcasting buses:
these are high-speed buses whose configuration can be dynamically changed in response to specific
processing needs. Examples include the bus automaton [25, 26], the reconfigurable mesh [21], the
mesh with bypass capability [12], the content addressable array processor [31], the reconfigurable
network [7], the polymorphic processor array [16,20], the reconfigurable bus with shift switching [15],
the gated-connection network [27, 28], the PARBS [30], and the polymorphic torus network [13, 17]
- see the comprehensive survey paper of Nakano [22].
Among these, the reconfigurable mesh and its variants have turned out to be valuable theoretical
models that allowed researchers to fathom the power of reconfiguration and its relationship with
the PRAM. From a practical standpoint, however, the reconfigurable mesh and its variants [21,30]
omit important properties of physical architectures and, consequently, do not provide a complete
and precise characterization of real systems. Moreover, these models are so flexible and powerful
that it has turned out to be impossible to derive from them high-level programming models that
reflect their flexibility and intrinsic power [16, 20]. Worse yet, it has been recently shown that
the reconfigurable mesh and the PARBS do not scale and, as a consequence, do not immediately
support virtual parallelism [18, 19].
Motivated by the goal of developing algorithms in a scalable model of computation, we adopt
a restricted version of the reconfigurable mesh, that we call the basic reconfigurable mesh, (BRM,
for short). Our model is derived from the Polymorphic Processor Array (PPA) proposed in [16,20]:
the BRM shares with the PPA all the restrictions on the reconfigurability and the directionality
of the bus system. The BRM differs from the PPA in that we do not allow torus connections.
As a result, the BRM is potentially weaker than the PPA. It is very important to stress that the
programming model developed in [16, 20] for the PPA also applies to the BRM. In particular, all
the broadcast primitives developed in [16, 20], with the exception of those using torus connections,
can be inherited by the BRM. In fact, all the algorithms developed in this paper could have been,
just as easily, written using the extended C language primitives of [16,20]. We opted for specifying
our algorithm in a more conventional fashion only to make the presentation easier to follow.
Consider a two-dimensional array (i.e., a matrix) A of size m × n with items from a totally
ordered universe. Matrix A is termed monotone if, for every i and j with 1 ≤ i < j ≤ m, the smallest
value in row j lies below or to the right of the smallest value in row i. A matrix A is said to be totally monotone if every
submatrix of A is monotone. The concepts of monotone and totally monotone matrices may seem
artificial and contrived at first. Rather surprisingly, however, these concepts have found dozens of
applications to problems in optimization, VLSI design, facility location problems, string editing,
pattern recognition, and computational morphology, among many others. The reader is referred
to [1-6] where many of these applications are discussed in detail.
One of the recurring problems involving matrix searching is referred to as row-minima computation
[6]. In particular, Aggarwal et al. [2] have shown that the task of computing the row-minima
of an m × n monotone matrix has a sequential lower bound of Ω(n log m). They also showed that
this lower bound is tight by exhibiting a sequential algorithm for the row-minima problem running
in O(n log m) time. In case the matrix is totally monotone, the sequential complexity is reduced to O(n).
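To make the flavor of this divide-and-conquer bound concrete, here is a minimal Python sketch of the classical recursion for monotone (not necessarily totally monotone) matrices; the function name and the list-of-rows interface are our own choices, and this is not claimed to be the exact algorithm of [2].

def monotone_row_minima(A):
    # Row minima of a monotone m x n matrix (list of rows).
    # Assumes, for simplicity, that each row's minimum position is unique.
    m, n = len(A), len(A[0])
    argmin = [0] * m

    def solve(top, bottom, left, right):
        # Rows top..bottom have their minima in columns left..right.
        if top > bottom:
            return
        mid = (top + bottom) // 2
        best = left
        for j in range(left, right + 1):      # linear scan of the middle row
            if A[mid][j] < A[mid][best]:
                best = j
        argmin[mid] = best
        solve(top, mid - 1, left, best)        # rows above: columns left..best
        solve(mid + 1, bottom, best, right)    # rows below: columns best..right

    solve(0, m - 1, 0, n - 1)
    return [A[i][argmin[i]] for i in range(m)]

Each recursion level scans O(n + m) entries in total and there are O(log m) levels, which matches the O(n log m) bound quoted above.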
To the best of our knowledge, no time lower bound for the row-minima problem has been
obtained in parallel models of computation, in spite of the importance of this problem. The first
main contribution of this paper is to propose a number of non-trivial time lower bounds for matrix
search problems. These lower bounds hold for general two-dimensional reconfigurable meshes of
infinite size, as long as the input is pretiled onto a contiguous submesh of size n × n. Specifically,
in this context we show that every algorithm that solves the problem of computing the smallest
item of an n × n matrix must take Ω(log log n) time. The same lower bound is shown to hold for
the problem of computing the minima in each row of an arbitrary n × n matrix. As a byproduct,
we obtain an Ω(log log n) time lower bound for the problem of selecting the k-th smallest item
in a monotone matrix. Previously, Hao et al. [10] have obtained an Ω(log log n) lower bound for
selection in arbitrary matrices on finite reconfigurable meshes. Thus, our lower bound extends
the result of [10] in two directions: we show that the same lower bound applies to selection on
monotone matrices and on a reconfigurable mesh of an infinite size. Finally, we show an almost
tight Ω(√(log log n)) time lower bound for the task of computing the row minima of a monotone
n × n matrix. Our second main contribution is to provide an efficient algorithm for the row-minima
problem: with a monotone matrix of size m × n with m ≤ n pretiled, one item per processor, onto
a BRM of the same size, our row-minima algorithm runs in O((log n / log m) log log m) time if m > 2. In case
m = n^ε for some constant ε, (0 < ε ≤ 1), our algorithm runs in O(log log n) time.
The remainder of this work is organized as follows: Section 2 introduces the model of computations
adopted in this paper; Section 3 discusses a number of relevant lower-bound results; Section 4
presents basic algorithms that will be key in our subsequent row-minima algorithm; Section 5 gives
the details of our row-minima algorithm; finally, Section 6 offers concluding remarks and poses
open problems.
2 The Basic Reconfigurable Mesh
A Basic Reconfigurable Mesh (BRM, for short) of size m × n consists of mn identical SIMD
processors positioned on a rectangular array with m rows and n columns. As usual, it is assumed
that every processor knows its own coordinates within the mesh: we let P (i; j) denote the processor
placed in row i and column j, with P (1; 1) in the north-west corner of the mesh.
Figure 1: A basic reconfigurable mesh of size 4 × 4
Each processor P(i, j) is connected to its four neighbors P(i − 1, j), P(i + 1, j), P(i, j − 1), and P(i, j + 1), provided they
exist, and has four ports N, S, E, and W, as illustrated in Figure 1. Local
connections between these ports can be established, subject to the following restrictions:
1. In each time unit at most one of the pairs of ports (N, S) or (E,W) can be set; moreover,
2. All the processors that connect a pair of ports must connect the same pair of ports;
3. broadcasting on the resulting subbuses is unidirectional. For example, if the processors set
the (E,W) connection, then on the resulting horizontal buses all broadcasting is done either
"eastbound" or else "westbound", but not both.
Figure 2: Examples of unidirectional horizontal subbuses
We refer the reader to Figure 2(a)-(b) for an illustration of several possible unidirectional sub-
buses. The BRM is very much like the recently proposed PPA multiprocessor array, except that
the BRM does not have the torus connections present in the PPA. In a series of papers [16, 18-20]
Maresca and his co-workers demonstrated that the PPA architecture and the corresponding programming
environment is not only feasible and cost-effective to implement, it also enjoys additional
features that set it apart from the standard reconfigurable mesh and the PARBS. Specifically, these
researchers have argued convincingly that the reconfigurable mesh is too powerful and unrestricted
to support virtual parallelism under present-day technology. By contrast, the PPA architecture has
been shown to scale and, thus, to support virtual parallelism [16, 18].
The BRM is easily shown to inherit all these attractive characteristics of the PPA, including
the support of virtual parallelism and the C-based programming environment, making it eminently
practical. As in [16], we assume ideal communications along buses (no delay). Although inexact, a
series of recent experiments with the PPA [16] and the GCN [27, 28] seem to indicate that this is
a reasonable working hypothesis.
3 Lower Bounds
The main goal of this section is to demonstrate non-trivial lower bounds for several matrix search
problems. Our lower bound arguments do not use the restrictions of the BRM, holding for more
powerful reconfigurable meshes that allow any local connections. In fact, our arguments hold for
arbitrary two-dimensional reconfigurable meshes of an infinite size, provided that the input is placed
into a contiguous n \Theta n submesh thereof.
Formally, this section deals with the following problems:
Problem 1. Given an n × n matrix pretiled one item per processor onto an n × n submesh of an
∞ × ∞ reconfigurable mesh, find the minimum item in the matrix.
Problem 2. Given an n × n matrix pretiled one item per processor onto an n × n submesh of an
∞ × ∞ reconfigurable mesh, find the minimum item of each row.
Problem 3. Given an n × n monotone matrix pretiled one item per processor onto an n × n
submesh of an ∞ × ∞ reconfigurable mesh, find the minimum item of each row.
Problem 4. Given an n × n totally monotone matrix pretiled one item per processor onto an n × n
submesh of an ∞ × ∞ reconfigurable mesh, find the minimum item of each row.
We will show that Problems 1 and 2 have an Ω(log log n)-time lower bound, and that Problem 3
has an Ω(√(log log n))-time lower bound. The lower bound for Problem 4 is still open.
The proofs are based on a technique detailed in [11, 29] that uses the following graph-theoretic
result of Tur'an [8]. (Recall that an independent set in a graph is a set of pairwise non-adjacent
vertices.)
Lemma 3.1 Let G = (V, E) be an arbitrary graph. G has an independent set U such that |U| ≥ |V|² / (|V| + 2|E|).
This lemma is used, in an implicit adversary argument, to bound from below the number of items
in the matrix that are possible choices for the minimum. Let V be the set of candidates for the
minimum at the beginning of the current iteration and let E stand for the set of pairs of candidates
that are compared within the current iteration. The situation benefits from being represented by
a graph G = (V, E), with V and E representing, respectively, the vertices and the edges of the
graph. It is intuitively obvious that an adversary can choose the outcome of the comparisons in
such a way that the next set of candidates is no larger than the size of an independent set U in G.
In other words, for a set V of candidates and for a set E of pairs that are compared by a minimum
finding algorithm, items in the independent set U have the potential of becoming the minimum.
Consequently, all items in U are still candidates for the minimum after comparing all pairs in E.
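The following Python sketch illustrates Lemma 3.1 as the adversary uses it: given the current candidates and the pairs compared in one round, it exhibits a set of candidates that can all still be the minimum (the helper names are our own). With c_{t−1} candidates and at most 17n²t² comparisons in a round, the bound leaves at least c_{t−1}²/(c_{t−1} + 34n²t²) survivors, which is exactly the recurrence used in the proofs below.

def turan_bound(num_candidates, num_comparisons):
    # Lemma 3.1: a graph with |V| vertices and |E| edges has an independent
    # set of size at least |V|^2 / (|V| + 2|E|).
    return num_candidates ** 2 / (num_candidates + 2 * num_comparisons)

def surviving_candidates(candidates, compared_pairs):
    # Greedy construction behind Lemma 3.1: repeatedly keep a candidate of
    # minimum remaining degree and discard everything compared with it.
    adj = {v: set() for v in candidates}
    for u, v in compared_pairs:
        adj[u].add(v)
        adj[v].add(u)
    alive, kept = set(candidates), []
    while alive:
        v = min(alive, key=lambda x: len(adj[x] & alive))
        kept.append(v)
        alive -= adj[v] | {v}
    return kept   # len(kept) >= turan_bound(len(candidates), len(compared_pairs))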
To make the presentation easier to follow, we assume that each time unit is partitioned into the
following three stages:
Phase 1 bus reconfiguration: i.e. the processors set local connections;
Phase 2 broadcasting: i.e. the processors send at most a data item to each port, and receive a
piece of data from each port;
Phase 3 local computation: i.e. every processor selects two elements stored in its local memory,
compares them and changes its internal status.
We begin by proving the following lemma.
Lemma 3.2 Every algorithm that solves Problem 1 requires Ω(log log n) time.
Proof. We will evaluate the number of pairs that can be compared by an algorithm in Phase 3 of
time unit t. Notice that in Phase 2 of a time unit, at most 4n items can be sent to the outside of
the submesh. Hence, altogether, at most 4nt items can be sent before the execution of Phase 3 of
time unit t. Therefore, the outside of the submesh can compare at most (4nt choose 2) ≤ 16n²t² pairs
of items. The inside of the submesh can compare at most n² pairs in each Phase 3. Consequently,
in Phase 3 of time unit t, at most 16n²t² + n² ≤ 17n²t² pairs can be compared by the ∞ × ∞
reconfigurable mesh. Let c_t be the number of candidates that can be the minimum after Phase 3
of time unit t. Then, by virtue of Lemma 3.1 we have c_t ≥ c_{t−1}² / (c_{t−1} + 34n²t²).
By applying the logarithm, we obtain
log c_t ≥ 2 log c_{t−1} − log(c_{t−1} + 34n²t²) ≥ 2 log c_{t−1} − log(35n²t²).
To complete the algorithm at the end of T time units, c_T must be less than or equal to 1. Therefore,
since c_0 = n², unfolding this recurrence shows that 2^T log(35T²) ≥ 2 log n must hold. In turn, this implies that T
∈ Ω(log log n), as claimed. 2
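For completeness, here is our reconstruction of how the recurrence unfolds into the claimed bound; the constants are the crude ones from the counting above and are not essential.

e_t := 2\log n - \log c_t,\qquad e_0 = 0,\qquad
e_t \;\le\; 2\,e_{t-1} + \log(35\,t^2),

e_T \;\le\; \sum_{t=1}^{T} 2^{\,T-t}\log(35\,t^2) \;\le\; 2^{T}\log(35\,T^2).

\text{Termination requires } c_T \le 1,\ \text{i.e., } e_T \ge 2\log n,
\ \text{hence } 2^{T}\log(35\,T^2) \ge 2\log n \ \text{and } T \in \Omega(\log\log n).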
It is worth mentioning that Lemma 3.2 implies a similar lower bound for the task of selection
in monotone matrices. To see this, note that given an arbitrary matrix A of size n × n we can
construct a monotone matrix A′ of size n × (n + 1) by simply adjoining to A a column vector all
of whose entries are −∞. It is now clear that the minimum item in A is precisely the (n + 1)-st
smallest item in A′. Thus, we have the following result.
Lemma 3.3 Every algorithm that selects the k-th smallest item in a monotone matrix of size n × n
requires Ω(log log n) time.
Previously, Hao et al. [10] have obtained an Ω(log log n) lower bound for selection in arbitrary matrices
on finite reconfigurable meshes. Thus, Lemma 3.3 extends the result of [10] in two directions:
first it shows that Ω(log log n) remains the lower bound for selection on monotone matrices and
second, it shows that the lower bound must hold even for infinite reconfigurable meshes.
Lemma 3.4 Every algorithm that solves Problem 2 requires Ω(log log n) time.
Proof. Suppose to the contrary that Problem 2 can be solved in o(log log n) time. However, by using the
algorithm of Proposition 4.1 in Section 4, the minimum in the matrix can be computed in O(1)
further time. This contradicts Lemma 3.2. 2
Lemma 3.5 Every algorithm that solves Problem 3 requires Ω(√(log log n)) time.
Proof. Since there is an algorithm that solves Problem 3 in O(log log n) time (see Section 5), we
can assume that the upper bound for Problem 3 is O(log log n). Assume that a row-minima
algorithm has spent t − 1 time units and has found no row-minima so far, and now it is about to execute
Phase 3 of time unit t, where t < ε log log n for some small fixed ε > 0.
Proceeding as in the proof of Lemma 3.2, we see that at most 17n²t² pairs can be compared in
Phase 3 of time unit t. Now a simple counting argument guarantees that at most n^{1−1/4^t} rows have
been assigned at least 17n^{1+1/4^t} t² comparisons each in time unit i, (1 ≤ i ≤ t).
Hence, at time i, at least n − i·n^{1−1/4^t} rows have been assigned at most 17n^{1+1/4^t} t² comparisons each.
Assume that the topmost row was assigned at most 17n^{1+1/4^t} t² comparisons in each time unit i,
(1 ≤ i ≤ t), and let c_i be the number of candidates in the top row at the end of Phase 3 of time
unit i. Then, by Lemma 3.1, c_i ≥ c_{i−1}² / (c_{i−1} + 34n^{1+1/4^t} t²).
By applying the logarithm, we have log c_i ≥ 2 log c_{i−1} − log(c_{i−1} + 34n^{1+1/4^t} t²).
Hence, for some small fixed ε > 0, c_{ε log log n} > 1 for large n. Therefore, at least n − t·n^{1−1/4^t} rows,
including the topmost row, cannot find the row-minima in Phase 3 of time unit t. Consequently, at
most t·n^{1−1/4^t} rows can find the row-minima in Phase 3 of time unit t. In turn, this implies that
there exist n/(t·n^{1−1/4^t}) = n^{1/4^t}/t consecutive rows that cannot find the row-minima in Phase 3
of time t. Therefore, we can find a submatrix of size n^{1/4^t}/t × n^{1/4^t}/t such that all of its n^{1/4^t}/t
row-minima are in it but no row-minimum is found. Let d_t × d_t be the size of a submatrix such that all
d_t row-minima are in it but no row-minimum is found at time t. Then, d_t ≥ d_{t−1}^{1/4^t}/t. In addition,
as long as the algorithm has not terminated, d_{t−1} is large enough that d_{t−1}^{1/4^t}/t ≥ d_{t−1}^{1/4^{t+1}} holds.
Thus, for large t we have: d_t ≥ d_{t−1}^{1/4^{t+1}}. By applying the
logarithm twice, we can write
log log d_t ≥ log log d_{t−1} − 2(t + 1).
Hence, in order to have d_T ≤ 1 it must be the case that T ∈ Ω(√(log log n)), and the proof is
complete. 2
4 Preliminaries
Data movement operations are central to many efficient algorithms for parallel machines constructed
as interconnection networks of processors. The purpose of this section is to review a number of
basic data movement techniques for basic reconfigurable meshes.
Consider a sequence of n items a_1, a_2, ..., a_n. We are interested in computing the prefix maxima
defined for every j, (1 ≤ j ≤ n), by setting z_j = max{a_1, a_2, ..., a_j}. Recently Olariu
et al. [23] showed that the task of computing the prefix maxima of a sequence of n numbers stored
in the first row of a reconfigurable mesh of size m × n can be solved in O(log n) time if m = 2 and
in O(log n / log m) time if m > 2. Since this algorithm is crucial for understanding our algorithm for
computing the row minima of a monotone matrix, we now present an adaptation of the algorithm
in [23] for the BRM.
To begin, we exhibit an O(1) time algorithm for computing the prefix maxima of n items on
a BRM of size n × n. The idea of this first algorithm involves checking, for all j, (1 ≤ j ≤ n),
whether a_j is the maximum of a_1, a_2, ..., a_j. The details are spelled out by the following sequence
of steps. The reader is referred to Figure 3(a)-(f) where the algorithm is illustrated on the input
sequence 7, 3, 8, 6.
Algorithm Prefix-Maxima-1;
Step 1. Establish a vertical bus in every column j, (1 ≤ j ≤ n);
every processor P(1, j), (1 ≤ j ≤ n), broadcasts the item a_j southbound along the vertical
bus in column j;
Step 2. Establish a horizontal bus in every row i, (1 ≤ i ≤ n);
every processor P(i, n + 1 − i), (1 ≤ i ≤ n), broadcasts the item a_{n+1−i} westbound along
the horizontal bus in row i;
Step 3. At the end of Step 2, every processor P(i, j), (i + j ≤ n + 1), stores the items a_{n+1−i} and
a_j, and sets a local variable b_{i,j} as follows: b_{i,j} := 1 if a_{n+1−i} ≥ a_j, and b_{i,j} := 0 otherwise;
Step 4. Every processor P(i, j), (i + j ≤ n), connects its ports E and W; every
processor P(i, j), (i + j ≤ n), for which b_{i,j} = 0 broadcasts a 0 eastbound; every processor
that receives a 0 from its W port sets b_{i,n+1−i} to 0;
Step 5. Every processor P(i, j), (1 ≤ i, j ≤ n), connects its ports N and S; every processor P(n + 1 − i, i)
broadcasts b_{n+1−i,i} northbound on the bus in column i; every processor P(1, i)
copies the value received into b_{1,i};
Step 6. Every processor P(1, i) with b_{1,i} = 1 sets z_i to a_i; every processor P(1, i)
with b_{1,i} = 0 connects its ports E and W; every processor P(1, i), (1 ≤ i ≤ n),
with b_{1,i} = 1 broadcasts a_i eastbound, and every processor with b_{1,i} = 0 sets z_i
to the value received from its port W.
The correctness of the algorithm above is easily seen. Thus, we have the following result.
Proposition 4.1 The prefix maxima of n items from a totally ordered universe stored one item
per processor in the first row of a basic reconfigurable mesh of size n × n can be computed in O(1)
time.
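The essence of Prefix-Maxima-1 is that an n × n mesh affords one processor per pair of items, so every a_j can be compared against every earlier item in a single parallel step, and a constant number of bus sweeps then combine the outcomes. The following Python sketch emulates that logic sequentially; it reproduces what the mesh computes, not the bus protocol itself, and all names are our own.

def prefix_maxima_emulation(a):
    # Emulate the comparison pattern of Prefix-Maxima-1 on an n x n grid.
    n = len(a)
    # One "processor" per pair (i, j): does a[j] dominate a[i]?
    b = [[a[j] >= a[i] for j in range(n)] for i in range(n)]
    # Row/column sweeps reduce this to: is a[j] a prefix maximum, i.e.
    # is a[j] >= a[i] for every i <= j?
    is_record = [all(b[i][j] for i in range(j + 1)) for j in range(n)]
    # A final horizontal sweep carries the most recent prefix maximum eastward.
    z, current = [], None
    for j in range(n):
        if is_record[j]:
            current = a[j]
        z.append(current)
    return z

assert prefix_maxima_emulation([7, 3, 8, 6]) == [7, 7, 8, 8]   # the input of Figure 3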
Next, following [23], we briefly sketch the idea involved in computing the prefix maxima of n
items a_1, a_2, ..., a_n on a BRM of size m × n with m ≤ n: we begin by partitioning the original
mesh into submeshes of size m × m, and apply Prefix-Maxima-1 to each such submesh of size m × m.
We further combine groups of m consecutive submeshes of size m × m into a submesh of size m × m², then
combine groups of m consecutive submeshes of size m × m² into a submesh of size
m × m³, and so on. Note that if the prefix maxima of a group of m consecutive submeshes are known,
then the prefix maxima of their combination can be computed essentially as in Prefix-Maxima-1.
For details, we refer the reader to [23].
To summarize the above discussion we state the following result.
Proposition 4.2 The prefix maxima of n items from a totally ordered universe stored in one row
of a basic reconfigurable mesh of size m × n with 2 ≤ m ≤ n can be computed in O(log n / log m) time.
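As a quick, back-of-the-envelope account of the round count implicit in this scheme (our own sketch, not a statement from [23]): after r merging rounds the combined submeshes span m^{r+1} columns, so the process ends as soon as

m^{\,r+1} \;\ge\; n \quad\Longleftrightarrow\quad r \;\ge\; \frac{\log n}{\log m} - 1,

and since every round is an O(1)-time application of the Prefix-Maxima-1 idea, the total running time is O(log n / log m).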
Proposition 4.2 has the following important consequence.
Proposition 4.3 Let ε be an arbitrary constant in the range 0 < ε ≤ 1. The prefix maxima of
n items from a totally ordered universe stored one item per processor in the first row of a basic
reconfigurable mesh of size n^ε × n can be computed in O(1) time.
For later reference we now solve a particular instance of the row-minima problem, that we call
the selective row minima problem. Consider an arbitrary matrix A of size K × N stored, one item
per processor, in K consecutive rows of a BRM of size M × N. For simplicity of exposition we
assume that A is stored in the first K rows of the platform, but this is not essential. The goal is to
compute the minima in rows 1, √K + 1, 2√K + 1, ..., K − √K + 1 of
A. We proceed as follows.
Figure 3: Illustrating algorithm Prefix-Maxima-1
Figure 4: Illustrating algorithm Selective-Row-Minima
Algorithm Selective-Row-Minima;
Step 1. Partition the BRM into N/K submeshes R_1, R_2, ..., R_{N/K}, each of size K × K, as illustrated
in Figure 4; further partition each submesh R_i, (1 ≤ i ≤ N/K), into submeshes
R_{i,1}, R_{i,2}, ..., R_{i,√K}, each of size
√K × K;
Step 2. Compute the minimum in the first row of each submesh R_{i,j} in O(1) time using Proposition
4.3; let a_{i,1}, a_{i,2}, ..., a_{i,√K} be the minima in the first rows of R_{i,1}, R_{i,2}, ..., R_{i,√K}, respectively;
by using appropriately established horizontal buses we arrange for every a_{i,j}, (1 ≤ j ≤ √K),
to be moved to the processor in the first row and (j√K)-th column of R_{i,j};
Step 3. We now perceive the original BRM of size M × N as consisting of √K submeshes
T_1, T_2, ..., T_{√K}, each of size M × N/√K; the goal now becomes to compute, for every i, (1 ≤ i ≤ √K),
the minimum of row (i − 1)√K + 1 of A in T_i; it is easy to see that after having established
vertical buses in all columns of the BRM, all the partial minima in row (i − 1)√K + 1
of A can be broadcast southbound to the first row of T_i;
Step 4. Using the algorithm of Proposition 4.2 compute the minimum in the first row of each T_i,
in O((log N − log K) / (log M − log K)) = O(log N / log M)
time.
Thus, we have proved the following result.
Lemma 4.4 The task of computing the minima in rows 1, √K + 1, 2√K + 1, ..., K − √K + 1
of an arbitrary matrix of size K × N stored one item per processor in K rows of a BRM of size
M × N can be performed in O(log N / log M) time.
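Functionally, the lemma asserts that the minima of every √K-th row can be delivered within the stated time; the short Python sketch below (names ours) merely pins down which values the mesh algorithm is required to produce.

import math

def selective_row_minima(A):
    # Minima of rows 1, sqrt(K)+1, 2*sqrt(K)+1, ..., K-sqrt(K)+1 of a K x N
    # matrix A (1-based row numbers as in the paper; A itself is 0-based).
    K = len(A)
    step = math.isqrt(K)
    return {r + 1: min(A[r]) for r in range(0, K, step)}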
5 The algorithm
The goal of this section is to present the details of an efficient algorithm for computing the row-
minima of an m × n monotone matrix A. The matrix is assumed pretiled one item per processor
onto a BRM R of the same size, such that for every i and j, (1 ≤ i ≤ m, 1 ≤ j ≤ n), processor P(i, j)
stores A(i, j).
We begin by stating a few technical results that will come in handy later on. To begin, consider
a subset i_1 < i_2 < ... < i_p of the rows of A and let j(i_1), j(i_2), ..., j(i_p) be such that for all k, (1 ≤ k ≤ p), A(i_k, j(i_k))
is the minimum in row i_k. Since the matrix A is monotone, we must have j(i_1) ≤ j(i_2) ≤ ... ≤ j(i_p). Let A_1, A_2, ..., A_p
be the submatrices of A defined as follows:
• A_1 consists of the intersection of the first i_1 rows with the first j(i_1) columns of A;
• for every k, (2 ≤ k ≤ p − 1), A_k consists of the intersection of rows i_{k−1} through i_k
with the columns j(i_{k−1}) through j(i_k);
• A_p consists of the intersection of rows i_{p−1} through m with the columns j(i_{p−1}) through n.
The following result will be used again and again in the remainder of this section.
Lemma 5.1 Every matrix A_k, (1 ≤ k ≤ p), is monotone.
Proof. First, let k be an arbitrary subscript with 2 ≤ k ≤ p − 1, and refer to Figure 5. Let B_k consist
of the submatrix of A consisting of the intersection of rows i_{k−1} through i_k with columns 1 through j(i_{k−1}) − 1.
Similarly, let C_k be the submatrix of A consisting of the intersection of rows i_{k−1} through i_k with columns j(i_k) + 1 through n.
Figure 5: Illustrating the proof of Lemma 5.1
Since the matrix A is monotone and since A(i_{k−1}, j(i_{k−1})) is the minimum in row i_{k−1}, it follows
that none of the minima in rows i_{k−1} through i_k occur in the submatrix B_k. Similarly,
since A(i_k, j(i_k)) is the minimum in row i_k, no minima in rows i_{k−1} through i_k occur
in the submatrix C_k. It follows that the minima in rows i_{k−1} through i_k must occur in the
submatrix A_k. Consequently, if A_k is not monotone, then we violate the monotonicity of A.
A perfectly similar argument shows that A_1 and A_p are also monotone, completing the proof of
the lemma. 2
The matrices A k , (1 - k - p), defined above pairwise share a column. The following technical
result shows that one can always transform these matrices such that they involve distinct columns.
For this purpose, consider the matrix A 0
k obtained from A k by replacing for every i, (i
by dropping
column j(i k ). In other words, A 0
k is obtained from A k by retaining the minimum values in its first
and next column and then removing the last column. The last matrix A 0
p is taken to be A p . The
following result, whose proof is omitted will be used implicitly in our algorithm.
Lemma 5.2 Every matrix A′_k, (1 ≤ k ≤ p), is monotone.
In outline, our algorithm for computing the row-minima of a monotone matrix proceeds as
follows. First, we solve an instance of the selective row minima, whose result is used to partition
the original matrix into a number of monotone matrices as described in Lemmas 5.1 and 5.2. This
process is continued until the row minima in each of the resulting matrices can be solved directly.
If 1 ≤ m ≤ 2, then the problem has a trivial solution running in Θ(log n) time, which is also best
possible even on the more powerful reconfigurable mesh [23].
We shall, therefore, assume that m > 2. For simplicity of exposition we shall assume that m is a perfect square.
Figure 6: Illustrating the partition into submeshes T_i and R_i
Algorithm Row-Minima(A);
Step 1. Partition R into √m submeshes T_1, T_2, ..., T_{√m}, each of size
√m × n, such that for every i, (1 ≤ i ≤ √m),
T_i consists of rows (i − 1)√m + 1 through i√m of R, as illustrated in Figure 6;
Step 2. Using the algorithm of Lemma 4.4 compute the minima of the items in the first row of
every submesh T_i, (1 ≤ i ≤ √m), in O(log n / log m) time;
Step 3. Let c_1, c_2, ..., c_{√m} be the columns of R containing the minima in T_1, T_2, ..., T_{√m}, respec-
tively, computed in Step 2. The monotonicity of A guarantees that c_1 ≤ c_2 ≤ ... ≤ c_{√m}.
Let R_i, (1 ≤ i ≤ √m), be the submesh of R consisting of all the processors P(r, c) such that
(i − 1)√m + 1 ≤ r ≤ i√m and c_i ≤ c ≤ c_{i+1}. In other words, R_i consists of the intersection
of rows (i − 1)√m + 1 through i√m with columns c_i through c_{i+1}, as
illustrated in Figure 6;
Figure 7: Illustrating the submeshes S_i
Step 4. Partition the mesh R into submeshes S_1, S_2, ..., S_{√m} as
illustrated in Figure 7; for log log m iterations, repeat Steps 1-3 above in each submesh S_i.
The correctness of the algorithm being easy to see, we now turn to the complexity. Steps 1-3 have
a combined complexity of O(log n / log m). In Step 4, by Lemma 4.4, each iteration
of Step 4 also runs in O(log n / log m)
time. Since there are, essentially, log log m such iterations, the
overall complexity of the algorithm is O((log n / log m) log log m). To summarize our findings we state the
following result.
Theorem 5.3 The task of computing the row-minima of a monotone matrix of size m × n with m ≤ n,
pretiled one item per processor in a BRM of the same size, can be solved in O(log n) time if 1 ≤ m ≤ 2
and in O((log n / log m) log log m) time if m > 2.
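To see how Lemma 5.1 drives the recursion, here is a sequential Python emulation of the partitioning performed by algorithm Row-Minima; it mirrors the structure (sample roughly √m evenly spaced rows, locate their minima, then recurse in the column bands those minima delimit) rather than the mesh implementation, and the identifiers are our own.

import math

def row_minima_monotone(A):
    # Sequential emulation of the sqrt-sampling recursion of Row-Minima.
    # Assumes, for simplicity, that each row's minimum position is unique.
    m, n = len(A), len(A[0])
    col = [None] * m                       # col[i] = column of the minimum of row i

    def argmin(i, lo, hi):                 # linear scan of row i over columns lo..hi
        return min(range(lo, hi + 1), key=lambda j: A[i][j])

    def solve(r_lo, r_hi, c_lo, c_hi):
        # Rows r_lo..r_hi have their minima in columns c_lo..c_hi.
        rows = r_hi - r_lo + 1
        if rows <= 0:
            return
        if rows <= 2:
            for i in range(r_lo, r_hi + 1):
                col[i] = argmin(i, c_lo, c_hi)
            return
        step = math.isqrt(rows)
        sampled = list(range(r_lo, r_hi + 1, step))   # ~sqrt(rows) evenly spaced rows
        for i in sampled:                  # Step 2: minima of the sampled rows
            col[i] = argmin(i, c_lo, c_hi)
        # Lemma 5.1: rows between consecutive sampled rows have their minima in
        # the column band delimited by the sampled rows' minima.
        anchors = sampled + [r_hi + 1]
        for k in range(len(sampled)):
            band_right = col[anchors[k + 1]] if anchors[k + 1] <= r_hi else c_hi
            solve(anchors[k] + 1, anchors[k + 1] - 1, col[anchors[k]], band_right)

    solve(0, m - 1, 0, n - 1)
    return [A[i][col[i]] for i in range(m)]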
Theorem 5.3 has the following consequence.
Corollary 5.4 The task of computing the row-minima of a monotone matrix of size m × n with m = n^ε
for some constant ε, (0 < ε ≤ 1), pretiled one item per processor in a BRM of the same size can be solved in
O(log log n) time.
6 Conclusions and open problems
We have shown that the problem of computing the row-minima of a monotone matrix can be solved
efficiently on the basic reconfigurable mesh (BRM) - a weaker variant of the recently proposed
Polymorphic Processor Array [16].
Specifically, we have exhibited an algorithm that, with a monotone matrix A of size m × n,
stored in a BRM of the same size, as input solves the row-minima problem in
O(log n) time in case m ∈ O(1), and in O((log n / log m) log log m)
time otherwise. In particular, if m = n^ε
for some fixed ε, (0 < ε ≤ 1), our algorithm runs in O(log log n) time.
Our second main contribution was to propose a number of non-trivial time lower bounds for matrix
search problems. These lower bounds hold for general two-dimensional reconfigurable meshes
of infinite size, as long as the input is pretiled onto an n × n submesh thereof. Specifically, in this
context we show that every algorithm that solves the problem of computing the smallest item of
an n × n matrix, or the smallest item in each row of an n × n matrix, must
take Ω(log log n) time.
This result implies an Ω(log log n) time lower bound for the problem of selecting the k-th smallest
item in a monotone matrix, extending the result of [10] in two directions: we show that the same
lower bound applies to selection on monotone matrices and on a reconfigurable mesh of an infinite
size. Finally, we showed an almost tight Ω(√(log log n)) time lower bound for the task of computing
the row minima of a monotone n × n matrix. These are the first non-trivial lower bounds of this
kind known to the authors.
A number of problems remain open. First, as noted, there is a discrepancy between the time
lower bound we obtained for the task of computing the row-minima of a monotone matrix and
the upper bound provided by our algorithm. Narrowing this gap will be a hard problem that we
leave for future research. Second, no non-trivial lower bounds for the problem of computing the
row-minima of a totally monotone matrix are known to us. This promises to be an exciting area
for future research. Yet another problem of interest would be to solve the row-minima problem
for the special case of totally monotone matrices. Trivially, our algorithm for monotone matrices
also works for totally monotone ones. Unfortunately, to this date we have not been able to find a
non-trivial lower bound for this problem.
Acknowledgement
: The authors wish to thank Mike Atallah for many useful comments
and for pointing out a number of relevant references.
--R
Applications of generalized matrix searching to geometric problems
Geometric applications of a matrix-searching algorithm
Notes on searching in multidimensional monotone arrays
Efficient parallel algorithms for string editing and related problems
A faster parallel algorithm for a matrix searching problem
An efficient parallel algorithm for the row minima of a totally monotone matrix
The power of reconfiguration
Graphs and Hypergraphs
Pattern Classification and Scene Analysis
Selection on the reconfigurable mesh
An Introduction to Parallel Algorithms
IEEE Transactions on Computers
Reconfigurable buses with shift switching - concepts and applications
IEEE Transactions on Parallel and Distributed Systems
Connection autonomy in SIMD computers: a VLSI implementation
Virtual parallelism support in reconfigurable processor arrays
Hierarchical node clustering in polymorphic processor arrays
Hardware support for fast reconfigurability in processor arrays
Parallel computations on reconfigurable meshes
A bibliography of published papers on dynamically reconfigurable architectures
Fundamental data movement on reconfigurable meshes
Fundamental algorithms on reconfigurable meshes
On the ultimate limitations of parallel processing
Bus automata
bit serial associate processor
The gated interconnection network for dynamic programming
Parallelism in comparison problems
Constant time algorithms for the transitive closure problem and its applications IEEE Transactions on Parallel and Distributed Systems
The image understanding architecture
--TR
--CTR
Schwing , Larry Wilson, Optimal Algorithms for the Multiple Query Problem on Reconfigurable Meshes, with Applications, IEEE Transactions on Parallel and Distributed Systems, v.12 n.9, p.875-887, September 2001
Tatsuya Hayashi , Koji Nakano , Stephen Olariu, An O((log log n)2) Time Algorithm to Compute the Convex Hull of Sorted Points on Reconfigurable Meshes, IEEE Transactions on Parallel and Distributed Systems, v.9 n.12, p.1167-1179, December 1998
R. Lin , K. Nakano , S. Olariu , M. C. Pinotti , J. L. Schwing , A. Y. Zomaya, Scalable Hardware-Algorithms for Binary Prefix Sums, IEEE Transactions on Parallel and Distributed Systems, v.11 n.8, p.838-850, August 2000
Alan A. Bertossi , Alessandro Mei, Time and work optimal simulation of basic reconfigurable meshes on hypercubes, Journal of Parallel and Distributed Computing, v.64 n.1, p.173-180, January 2004 | basic reconfigurable meshes;monotone matrices;cellular system design;row minima;search problems;facility location problems;VLSI design;reconfigurable meshes;totally monotone matrices |
282729 | Designing Masking Fault-Tolerance via Nonmasking Fault-Tolerance. | AbstractMasking fault-tolerance guarantees that programs continually satisfy their specification in the presence of faults. By way of contrast, nonmasking fault-tolerance does not guarantee as much; it merely guarantees that when faults stop occurring, program executions converge to states from where programs continually (re)satisfy their specification. We present in this paper a component based method for the design of masking fault-tolerant programs. In this method, components are added to a fault-intolerant program in a stepwise manner, first, to transform the fault-intolerant program into a nonmasking fault-tolerant one and, then, to enhance the fault-tolerance from nonmasking to masking. We illustrate the method by designing programs for agreement in the presence of Byzantine faults, data transfer in the presence of message loss, triple modular redundancy in the presence of input corruption, and mutual exclusion in the presence of process fail-stops. These examples also serve to demonstrate that the method accommodates a variety of fault-classes. It provides alternative designs for programs usually designed with extant design methods, and it offers the potential for improved masking fault-tolerant programs. | Introduction
In this paper, we present a new method for the design of "masking" fault-tolerant systems [1-4].
We focus our attention on masking fault-tolerance because it is often a desirable -if not an ideal-
property for system design: masking the effects of faults ensures that a system always satisfies its
problem specification and, hence, users of the system always observe expected behavior. By the
same token, when the users of the system are other systems, the design of these other systems
becomes simpler.
To motivate the design method, we note that designers of masking fault-tolerant systems often
face the potentially conflicting constraints of maximizing reliability while minimizing overhead.
As a result, designers avoid methods that yield complex designs, since the complexity itself may
result in reduced reliability. Moreover, they avoid methods that yield inefficient implementations,
since system users are generally unwilling to pay a significant cost in price or performance for the
sake of masking fault-tolerance. Therefore, a key goal for our method is to yield well-structured
-and hence more reliable- systems, while still offering the potential for efficient implementation.
Other goals of the method include the ability to deal with a variety of fault-classes and the ability
to provide designs -albeit alternative ones- for masking tolerant systems which are typically
designed by using classical methods such as replication, exception handling, and recovery blocks.
With these goals in mind, our method is based on the use of components that add tolerance
properties to a fault-intolerant system. It divides the complexity of designing fault-tolerant system
into that of designing relatively simpler components and that of adding the components to the fault-
intolerant system. And, by focusing attention on the efficient implementation of the components
themselves, it offers the potential for efficient implementation of the resulting system. We call
the components added in the first stage correctors and those added in the second stage detectors.
Efficient implementation of correctors and detectors is important, as noted above, for offering the
potential for efficient masking fault-tolerant implementations.
To manage the complexity of adding components to a system, the method proceeds in a stepwise
fashion. Informally speaking, instead of adding the components which will ensure that the problem
specification is satisfied in the presence of faults all at once, the method adds components in two
stages. In the first stage, the method merely adds components for nonmasking fault-tolerance. By
nonmasking fault-tolerance we intuitively mean that, when faults stop occurring, the system execution
eventually reaches a "good" state from where the system continually "satisfies" its problem
specification. In the second stage, the method adds components that additionally ensure that the
problem specification is not "violated" until the program reaches these good states. It follows that
the fault-tolerance of the system is enhanced from nonmasking to masking.
As in any component based design, to prove the correctness of the resulting composite system,
we need to ensure that the components do not interfere with each other, i.e., they continue to
accomplish their task even if they are executed concurrently with the other components. To this
end, in the first stage, we ensure that fault-intolerant system and the correctors added to it do
not interfere with each other. And, in the second stage, we ensure that the resulting nonmasking
fault-tolerant system and the detectors added to it do not interfere with each other.
We demonstrate that our method accommodates a variety of fault-classes, by using it to design
programs that are masking fault-tolerant to Byzantine faults, input corruption, message loss, and
fail-stop failures. More specifically, we design: (1) a Byzantine agreement program whose processes
are subject to Byzantine faults; (2) an alternating-bit data transfer program whose channel
messages may be lost; (3) a triple modular redundancy (TMR) program whose inputs may be
corrupted; and (4) a new token-based mutual exclusion program whose processes may fail-stop in
a detectable manner. The TMR and Byzantine agreement examples also serve to provide alternative
designs for programs usually associated with the method of replication. The alternating-bit
protocol example serves to provide an alternative design for a program usually associated with the
method of exception handling or that of rollback recovery. The mutual exclusion case study serves
to demonstrate that, by focusing on the addition of efficient components, the method enables the
design of improved programs.
We proceed as follows. First, in Section 2, we recall a formal definition of programs, faults, and
what it means for programs to be masking or nonmasking fault-tolerant. Then, in Section 3, we
present our two-stage method for design of masking fault-tolerance. Next, in Section 4, we illustrate
the method by designing standard masking fault-tolerant programs for Byzantine agreement, data
transfer, and TMR. In Section 5, we present our case study in the design of masking fault-tolerant
token-based mutual exclusion. Finally, we compare our method with extant methods for designing
masking fault-tolerant programs and make concluding remarks in Section 6.
2 Programs, Faults, and Masking and Nonmasking Tolerances
In this section, we recall formal definitions of masking and nonmasking fault-tolerance of programs
[5] in order to characterize a relationship between these two tolerance types, and to motivate our
design method which is presented in Section 3.
Programs. A program p is defined recursively to consist of a (possibly empty) program q, a set
of "superposition variables", and a set of "superposition actions". The superposition variables of p
are disjoint from the remaining variables of p, namely the variables of q. Each superposition action
of p has one of two forms:
⟨name⟩ :: ⟨guard⟩ −→ ⟨statement⟩ , or
⟨name⟩ :: ⟨action of q⟩ ∥ ⟨statement⟩
A guard is a boolean expression over the variables of p. Thus, evaluating a guard may involve
accessing the variables of q. Note that there is a guard in each action of p: in particular, the guard
of the actions of the second (i.e., ∥) form is the same as that of the corresponding action of q. A
statement is an atomic, terminating update of zero or more superposition variables of p. Thus, the
superposition actions of the first form do not update the variables of q, whereas those of the second
may since they are based on an action of q. Note that, since statements of p do not update the
variables of q, the only actions of p that update the variables of q are the actions of q.
Thus, programs are designed by superposition of variables and actions on underlying programs
[6]. Superposition actions may access, but not update, the underlying variables, whereas the
underlying actions may not access or update the superposition variables. Operationally speaking,
the superposition actions of the first form execute independently (asynchronously) of other actions
and those of the second form execute in parallel (synchronously) with the underlying action they
are based upon.
State. A state of a program p is defined by a value for each variable of p, chosen from the predefined
domain of the variable. A "state predicate" of p is a boolean expression over the variables of p.
An action of p is enabled in a state iff its guard is true at that state. We use the term "S state" to
denote a state that satisfies the state predicate S.
Closure. An action "preserves" a state predicate S iff in any state where S holds and the action
is enabled, executing all of the statements in the action instantaneously in parallel yields a state
where S holds. S is "closed" in a set of actions iff each action in that set preserves S.
It follows from this definition that if S is closed in (the actions of) p then executing any sequence
of actions of p starting from a state where S holds yields a state where S holds.
Computation. A computation of p is a fair, maximal sequence of steps; in every step, an action of p
that is enabled in the current state is chosen and all of its statements are instantaneously executed
in parallel. (Recall that actions of the second form consist of multiple statements composed in
parallel.) Fairness of the sequence means that each action in p that is continuously enabled along
the states in the sequence is eventually chosen for execution. Maximality of the sequence means
that if the sequence is finite then the guard of each action in p is false in the final state.
Problem Specification. The problem specification that p satisfies consists of a "safety" specification
and a "liveness" specification[7]. A safety specification identifies a set of "bad" finite computation
prefixes that should not appear in any program computation. Dually, a liveness specification
identifies a set of "good" computation suffixes such that every computation has a suffix that is in
this set. We assume that the problem specification is suffix closed, i.e., if a computation satisfies
the problem specification, so do its suffixes.
(Remark: Our definition of liveness is stronger than Alpern and Schneider's definition [7]: the
two definitions become identical if the liveness specification is fusion closed; i.e., if computations
hff; x; fli and hfi; x; ffii satisfy the liveness specification then computations hff; x; ffii and hfi; x; fli also
satisfy the liveness specification, where ff; fi are finite computation prefixes, fl; ffi are computation
suffixes, and x is a program state.)
Invariant. An invariant of p is a state predicate S such that S 6=false, S is closed in p, and every
computation of p starting from a state in S satisfies the problem specification of p.
Informally, an invariant of p includes the states reached in fault-free executions of p. Note that
p may have multiple invariants. Techniques for the design of invariants have been articulated by
Dijkstra [8], using the notion of auxiliary variables, and by Gries [9], using the heuristics of state
predicate ballooning and shrinking. Techniques for the mechanical calculation of invariants have
been discussed by Alpern and Schneider [10].
Convergence. A state predicate Q "converges to" R in p iff Q and R are closed in p and, starting
from any state where Q holds, every computation of p has a state where R holds. Note that the
converges-to relation is transitive.
Lemma 2.1. If Q converges to R in p and every computation of p starting from states where
R holds satisfies a liveness specification, then every computation of p starting from states where Q
holds satisfies that liveness specification.
Proof. Consider a computation c of p starting from a Q state. Since Q converges to R in p, c
has a suffix c_1 starting from an R state. Since every computation of p starting from an R state
satisfies the liveness specification, c_1 has a suffix c_2 that is identified by the liveness specification.
Since c_2 is also a suffix of c, it follows that c also satisfies the liveness specification. Thus, every
computation of p starting from a Q state satisfies that liveness specification.
Faults. The faults that a program is subject to are systematically represented by actions whose
execution perturbs the program state. We emphasize that such representation is possible notwithstanding
the type of the faults (be they stuck-at, crash, fail-stop, omission, timing, performance,
or Byzantine), their nature (be they permanent, transient, or intermittent), their observability (be
they detectable or not), or their repairability (be they correctable or not).
In some cases, such representation of faults introduces auxiliary variables. For example, to represent
a fail-stop fault as a state perturbation, we introduce an auxiliary variable up. Each action is
restricted to execute only when up is true. The fail-stop fault is represented by the action that
changes up from true to false, thereby disabling all the actions in a detectable manner. Moreover,
the repair of a fail-stopped program can be represented by the fault action that changes up from
false to true, and initializes the state of the process. (This initialization may retain the state before the
process fail-stopped provided that information is on a non-volatile storage, or it may initialize it
to some predetermined value. We ignore these details as they depend on the problem at hand.) In
other words, fail-stop and repair faults are respectively represented by the following fault actions:
Fail-stop :: up −→ up := false
Repair :: ¬up −→ up := true;
{ initialize the state of the process }
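To make the guarded-command representation concrete, the following Python sketch models a toy program and its fail-stop/repair faults exactly as above: both are just sets of guarded actions over one state, and faults are simulated by occasionally interleaving their actions with the program's. The state variables and the toy increment action are illustrative, not taken from the paper.

import random

# A state is a dict of variables; an action is (name, guard, statement).
program_actions = [
    ("step", lambda s: s["up"] and s["x"] < 5, lambda s: s.update(x=s["x"] + 1)),
]

fault_actions = [
    ("fail-stop", lambda s: s["up"],     lambda s: s.update(up=False)),
    ("repair",    lambda s: not s["up"], lambda s: s.update(up=True, x=0)),
]

def run(state, steps, fault_probability=0.1, seed=0):
    # Interleave program and fault actions; faults fire with small probability.
    rng = random.Random(seed)
    for _ in range(steps):
        pool = program_actions + (fault_actions if rng.random() < fault_probability else [])
        enabled = [(n, g, st) for (n, g, st) in pool if g(state)]
        if not enabled:
            break                          # maximal computation: no action is enabled
        name, _, statement = rng.choice(enabled)
        statement(state)
        print(name, dict(state))

run({"up": True, "x": 0}, steps=12)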
To represent a Byzantine fault as a state perturbation, we introduce an auxiliary variable b. The
specified actions of the program are restricted to execute only when b is false, i.e., when the program
is non-Byzantine. If b is true, i.e., if the program is Byzantine, the program is allowed to execute
actions that can change its state arbitrarily. Thus, the Byzantine fault is represented by the action
that changes b from false to true, thereby enabling the program to enter a mode where it executes
actions that change its state arbitrarily. In other words, the Byzantine fault is represented by the
following fault action:
Byzantine :: ¬b −→ b := true
Fault-span. A fault-span of program p for a fault-class F is a predicate T such that T is closed in
p and F . Informally, the fault-span includes the set of states that p reaches when executed in the
presence of actions in F . Note that p may have multiple fault-spans for F .
If program p with invariant S is subject to a fault-class F , the resulting states of p may no longer
satisfy S. However, these states satisfy fault-span of p, say T . Moreover, every state in S also
satisfies T .
Fault-Tolerance: masking and Nonmasking. We are now ready to give a formal definition
of fault-tolerance [5]. Instantiations of this definition yield definitions of masking and nonmasking
fault-tolerance.
Let p be a program, F be a set of fault actions, and S be an invariant of p. We say that "p is
F -tolerant for S" iff there exists a state predicate T of p such that the following three conditions
hold:
Inclusion: S ⇒ T
Closure: T is closed in p and F
Convergence: T converges to S in p
This definition may be understood as follows. At any state where the invariant, S, holds, executing
an action in p yields a state where S continues to hold, but executing an action in F may yield
a state where S does not hold. Nonetheless, the following three facts are true about this last
state: (i) T, the fault-span, holds, (ii) subsequent execution of actions in p and F yields states
where T holds, and (iii) when actions in F stop executing, subsequent execution of actions in p
alone eventually yields a state where S holds, from which point the program resumes its intended
execution.
When the definition is instantiated so that the fault-span T is identical to the invariant S, we get
that p is masking F -tolerant for S. And when the definition is instantiated so that T differs from
S, we get that p is nonmasking F -fault-tolerant for S.
In the rest of this paper, the predicate S p denotes an invariant of program p. Moreover, the
predicate T p denotes a fault-span predicate for a program p that is F -tolerant for S p . Finally,
when the fault-class F is clear from the context, we omit mentioning F ; thus, "masking tolerant"
abbreviates "masking F -tolerant".
3 A Method for Designing Masking Tolerance
From the definitions in the previous section, we observe that masking and nonmasking fault-tolerance
are related as follows.
Theorem 3.1. For any program p,
If there exists S p and T p such that p is nonmasking F -tolerant for S p and
every computation of p starting from a state where T p holds satisfies the
safety specification of p
Then there exists S p such that p is masking F -tolerant for S p .
Proof. Let S np and T np be state predicates satisfying the antecedent. Then every computation of
p starting from a state where S np holds satisfies its problem specification, and starting from a state
where T np holds satisfies its safety specification. From Lemma 2.1, it follows that every computation
of p starting from a T np state satisfies its problem specification. Thus, choosing S p =T np satisfies
the consequent.
The Method. Theorem 3.1 suggests that an intolerant program can be made masking tolerant
in two stages: In the first stage, the intolerant program is transformed into one that is nonmasking
tolerant for, say, the invariant S np and the fault-span T np . In the second stage, the tolerance of
resulting program is enhanced from nonmasking to masking, as follows. The nonmasking tolerant
program is transformed so that every computation upon starting from a state where T np holds, in
addition to eventually reaching a state where S np holds, also satisfies the safety specification of the
problem at hand.
We address the details of both stages, next.
Stage 1. For a fault-intolerant program, say p, the problem specification is satisfied by computations
of p that start at a state where its invariant holds but not necessarily by those that start at
a state where its fault-span holds. Hence, to add nonmasking tolerance to p, a program component
is added to p that restores it from fault-span states to invariant states.
We call the program component added to p for nonmasking tolerance a corrector. Well-known
examples of correctors include reset procedures, rollback-recovery, forward recovery, error correction
codes, constraint (re)satisfaction, voters, exception handlers, and alternate procedures in recovery
blocks. The design of correctors has been studied extensively in the literature. We only note
that correctors can be designed in a stepwise and hierarchical fashion; in other words, a large
corrector can be designed by parallel and/or sequential composition of small correctors. One
simple parallel composition strategy is to superpose small correctors on others. An example of
a sequential composition strategy, due to Arora, Gouda, and Varghese [11], is to order the small
correctors in a linear manner (or, more generally, a well-founded manner) such that each corrector
does not interfere with the recovery task of the correctors lower than it in the chosen ordering. For
a detailed discussion of corrector compositions, we refer the reader to [12].
Stage 2. For a nonmasking program, say np, even though the problem specification is satisfied
after computations of np converge to invariant states, the safety specification need not be satisfied
in all computations of np that start at fault-span states. Therefore, in the second stage, we restrict
the actions of np so that the safety specification is preserved during the convergence of computations
of np to invariant states. By Theorem 3.1, it follows that the resulting program is masking tolerant.
To see that restriction of actions of np is sufficient for preserving safety during convergence, recall
that the safety specification essentially rules out certain finite prefixes of computation of np. Now
consider any prefix of a computation of np that is not ruled out by the safety specification: Execution
of an action following this prefix increases the length of the computation prefix by one. As long
as the elongated prefix is not one of the prefixes ruled out by the safety specification, safety is not
violated. In other words, it suffices that whenever an action is executed, the resulting prefix be one
that is not ruled out by the safety specification.
It follows that there exists, for each action of np, a set of computation prefixes for which execution
of that action preserves the safety specification. Assuming the existence of auxiliary state (which in
the worst case would record the history of the computation steps), for each action of np, there exists
a state predicate that is true in exactly those states where the execution of that action preserves
safety. We call this state predicate the safe predicate of that action. It follows that if an action is
executed in a state where its safe predicate is satisfied, safety is preserved.
The restriction of the actions of np so as to enhance the tolerance of np to masking can now be
stated precisely. Each action of np is restricted to execute only when its safe predicate holds.
Moreover, for each action of np, the detection of its safe predicate may require the addition of a
program component to np.
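As a sketch of what Stage 2 does mechanically (with made-up names, not an API from the paper): each action of the nonmasking program is paired with a detector for its safe predicate, and the composed action fires only when the detector witnesses that predicate.

def restrict(action, safe_predicate):
    # Stage-2 version of an action: same statement, strengthened guard.
    name, guard, statement = action
    return (name + "/masked",
            lambda s: guard(s) and safe_predicate(s),   # execute only when safe
            statement)

# Example: a critical 'decide' action is allowed only when a (hypothetical)
# detector has confirmed that deciding now cannot violate the safety specification.
decide = ("decide", lambda s: s["ready"], lambda s: s.update(decided=s["value"]))
safe_to_decide = lambda s: s["votes_checked"]           # illustrative detector output
masked_decide = restrict(decide, safe_to_decide)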
We call a program component added to np for detecting that the safe predicate of an action holds
a detector. Well-known examples of detectors include snapshot procedures, acceptance tests, error
detection codes, comparators, consistency checkers, watchdog programs, snooper programs, and
exception conditions. Analogous to the compositional design of large correctors, large detectors
can be designed in a stepwise and hierarchical fashion, by parallel and/or sequential composition
of small detectors.
Thus, in sum, the second stage adds at most one detector per action of np and restricts each action
of np to execute only when the detector of that action witnesses that its safe predicate holds. Before
concluding our discussion of this stage, we make three observations about its application:
1. The safe predicate of several program actions is trivially true;
2. The safe predicate of most other actions requires only simple detector components, which
introduce only little history state to check the safe predicate, and
3. If the problem specification is fusion closed and suffix closed, then no history state is required
to check the safe predicate.
Observation (1) follows from the fact that the actions of masking tolerant programs can be conceptually
characterized as either "critical" or "noncritical", with respect to the safety specification.
Critical actions are those actions whose execution in the presence of faults can violate the safety
specification; hence, only they require non-trivial safe predicates. In other words, the safe predicate
of all non-critical actions is merely true.
For example, in terminating programs, e.g. feed-forward circuits or database transactions, only
the actions that produce an output or commit a result are critical. In reactive programs, e.g.
operating systems or plant controllers, only the actions that control progress while maintaining
safety are critical. In the rich class of "total" programs for distributed systems [13], e.g. distributed
consensus, infima finding, garbage collection, global function computation, reset, routing, snapshot,
and termination detection, only the "decider" actions that declare the outcome of the computation
are critical.
Observation (2) follows from the fact that conventional specification languages typically yield safety
specifications that are tested on the current state only or on the current computation step only;
i.e., the set of finite prefixes that their safety specifications rule out can be deduced from the last
or their last two states of computation prefixes. Thus, most safety specifications in practice do not
require maintenance of unbounded "history" variables for detection of the safe predicates of each
action.
Observation (3) follows from the fact that if the problem specification is fusion closed and suffix
closed then the required history information already exists in the current state. A proof of this
observation is presented in [12].
Verification obligations. The addition of corrector and detector components as described
above may add variables and actions to an intolerant program and, hence, the invariant and the
fault-span of the resulting program may be different from those of the original program. The
addition of corrector and detector components thus creates some verification obligations for the
designer.
Specifically, when a corrector is added to an intolerant program, the designer has to ensure that the
corrector actions and the intolerant program actions do not interfere with each other. That is, even
if the corrector and the fault-intolerant program execute concurrently, both accomplish their tasks:
The corrector restores the intolerant program to a state from where the problem specification of the
intolerant program is (re)satisfied. And starting from such a state, the intolerant program satisfies
its problem specification.
Similar obligations are created when detectors are added to a nonmasking program. Even if the
detectors and the nonmasking program are executed concurrently, the designer has to ensure that
the detector components and the components of the nonmasking program all accomplish their
respective tasks.
Another set of verification obligations is due to the fact that the corrector and detector components
are themselves subject to the faults that the intolerant program is subject to. Hence, the designer
is obliged to show that these components accomplish their task in spite of faults. More precisely,
the corrector tolerates the faults by ensuring that when fault actions stop executing it eventually
restores the program state as desired. In other words, the corrector is itself nonmasking tolerant to
the faults. And each detector tolerates the faults by never falsely witnessing its detection predicate,
even in the presence of the faults. In other words, each detector is itself masking tolerant to the
faults. As can be expected, our two-stage design method can itself be used to design masking
tolerance in the detectors, if their original design did not yield masking tolerant detectors.
Adding detector components by superposition. One way of simplifying the verification
obligations is to add components to a program by superposing them on the program: if a program
p is designed by a superposition on the program q, then it is trivially true that p does not interfere
with q (although the converse need not be true, i.e., q may interfere with p).
In particular, superposition is well-suited for the addition of detector components to a nonmasking
tolerant program, np, in Stage 2, since detectors need only to read (but not update) the state of
np. (It is for this reason that we have stated the definition of programs in Section 2 in terms of
superposition.) Thus, the detectors do not interfere with the tasks of the corrector components in
np.
When superposition is used, the verification of the converse obligation, i.e. that np does not
interfere with the detectors, may be handled as follows. Ensure that the corrector in np terminates
after it restores np to an invariant state and that as long as it has not terminated it prevents
the detectors from witnessing their safe predicate. Aborting the detectors during the execution of
the corrector guarantees that the detectors never witness their safe predicate incorrectly, and the
eventual termination of the corrector guarantees that eventually detectors are not prevented from
witnessing their safe predicate.
More specifically, the simplified verification obligations resulting from superposition are explained
by Theorems 3.2 and 3.3. Let program p be designed by superposition on q such that T p ⇒ T q .
Theorem 3.2. If q is nonmasking F -tolerant for S q , then T p converges to S q in p.
Theorem 3.3. If q is nonmasking F -tolerant for S q , then
(T p ∧ S q converges to S p in p) ⇒ (T p converges to S p in p).
Proof: Since q is nonmasking fault-tolerant, T q converges to S q in q. Since p is designed by a
superposition on q, it follows that (T p ∧ T q converges to T p ∧ S q ). Since the converges-to relation
is transitive and (T p ∧ S q converges to S p ∧ S q ), it follows that (T p ∧ T q converges to S p ∧ S q ), i.e.,
T p converges to S p in p.
Theorems 3.2 and 3.3 imply that if p is designed by superposition on a nonmasking tolerant program
q, then to reason about p, it suffices to assume that q always satisfies its invariant S q , even in the
presence of faults. For a discussion of alternative strategies for verifying interference freedom, we
refer the reader to [12].
4 Examples
In this section, we demonstrate that our method is well suited for the design of classical examples
of masking tolerance, which span a variety of fault-classes. Specifically, our examples of masking
tolerance achieve Byzantine agreement in the presence of Byzantine failure, data transfer in the
presence of message loss in network channels, and triple modular redundancy (TMR) in the presence
of input corruption.
Notation. For convenience in presenting these designs, we will partition the actions of a program
into "processes".
4.1 Example 1 : Byzantine agreement
Recall the Byzantine agreement problem: A unique process, the general, g, asserts a binary value
d:g. Every process j in the system is required to eventually finalize its decision such that the
following two conditions hold: (1) if g is non-Byzantine, the final decision reached by every non-
Byzantine process is identical to d:g; and (2) even if g is Byzantine, the final decisions reached by
all non-Byzantine processes are identical.
Faults corrupt processes permanently and undetectably such that the corrupted processes are
Byzantine. It is well known that masking tolerant Byzantine agreement is possible iff there are at
least 3f+1 processes, where f is the number of Byzantine processes [14]. For ease of exposition, we
will restrict our attention to the case where the total number of processes (including g) is 4 and,
hence, f is 1. A generalization for multiple Byzantine faults is presented elsewhere [15].
As prescribed by our method, we will design the masking tolerant solution to the Byzantine agreement
problem in two stages. Starting with an intolerant program for Byzantine agreement, we will
first transform that program to add nonmasking tolerance, and subsequently enhance the tolerance
to masking.
Intolerant Byzantine agreement. The following simple program suffices for agreement but
not for tolerance to faults: Process g is assumed to have a priori finalized its decision d:g. Each
process j other than g receives the value d:g from process g and then finalizes its decision to that
value. To this end, the program maintains two variables for each process j: a boolean f:j that is
true iff j has finalized its decision, and d:j whose value denotes the decision of j.
The program has two actions for each process j. The first action, IB1, copies d:g into the decision
variable d:j: to denote that j has not yet copied d:g, we add a special value ? to the domain of d:j;
thus, j copies d:g only if d:j is ?. The second action, IB2, finalizes the decision of j: if j has copied
its decision, i.e., d:j differs from ?, then j finalizes that decision by truthifying f:j. Formally, the actions of the intolerant program, IB,
are as follows:
Invariant. In program IB, g has a priori finalized its decision. Moreover, when a process finalizes
its decision, d:j is different from ?, and the final decision of each non-Byzantine process is identical
to d:g. Hence, the invariant of program IB is S IB , where
Fault Actions. The faults in this example make one process Byzantine, provided that no process
is Byzantine. As discussed in Section 2, these faults would be represented by the following fault
action at each j :
Nonmasking tolerant Byzantine agreement. Program IB is intolerant because if g becomes
Byzantine before all processes have finalized their decisions, g may keep changing d:g arbitrarily
and, hence, the final decisions reached by the non-Byzantine processes may differ. We now add
nonmasking tolerance to IB so that eventually the decisions reached by all non-Byzantine processes
are identical.
Since IB eventually reaches a state where the decisions of all processes differ from ? (i.e., are 0 or
1), it follows that eventually the decisions of at least two of the three processes other than g will
be identical. Hence, if all of these processes ensure that their decision is the same as that of the
majority, the resulting program will be nonmasking tolerant.
Our nonmasking tolerant program consists of four actions for each process j: the first two are
identical to the actions of IB. The third action, NB3, is executed by j only if it is Byzantine: this
action nondeterministically changes d:j to either 0 or 1 and f:j to either true or false. The fourth
action, NB4, changes the decision of j to the majority of the three processes. Formally, the actions
of the nonmasking program, NB, are as follows:
NB3 :: j is Byzantine → d:j, f:j := 0|1, true|false
NB4 :: majdefined ∧ d:j ≠ maj
→ d:j := maj
where majdefined holds iff at least two of the three processes other than g have the same decision
value (other than ?), and maj denotes that common value.
Remark. A formula (op j : R:j : X:j) may be read as the value obtained by performing the
(commutative and associative) operation op on the X:j values for all j (in this case j is a process)
that satisfy R:j. As a special case, when op is a conjunction, (∧ j : R:j : X:j) may be read as
"if R:j is true then so is X:j", and when op is a disjunction, (∨ j : R:j : X:j) may be read as
"there exists a process j for which both R:j and X:j are true". Moreover, if R:j is true for every j,
i.e., X:j is computed for all processes, we omit R:j.
This notation is generalized from [16].
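Read operationally, the notation folds an operation over the X:j values of the processes that satisfy R:j. The following Python sketch is illustrative only (the helper names and the sample decision values are ours); it computes maj and majdefined for three hypothetical decisions in exactly this way.

from functools import reduce

def quantify(op, R, X, processes):
    # (op j : R:j : X:j) -- fold op over X(j) for all j satisfying R(j).
    vals = [X(j) for j in processes if R(j)]
    return reduce(op, vals) if vals else None

d = {1: 0, 2: 1, 3: 1}        # decisions of the three processes other than g; '?' would be None

def majdefined(d):
    vals = [v for v in d.values() if v is not None]
    return any(vals.count(v) >= 2 for v in set(vals))

def maj(d):
    vals = [v for v in d.values() if v is not None]
    return max(set(vals), key=vals.count) if majdefined(d) else None

# Conjunction over all processes with a non-'?' decision: does each agree with maj?
print(maj(d), quantify(lambda a, b: a and b,
                       lambda j: d[j] is not None,
                       lambda j: d[j] == maj(d), d))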
Invariant and fault-span. As in program IB, in program NB, when any non-Byzantine process
finalizes its decision, d:j 6= ?. Also, if g remains non-Byzantine, all other non-Byzantine processes
reach the same decision value as process g. Hence, the fault-span of NB, TNB is:
Observe that if g is non-Byzantine, starting from any state in TNB , the nonmasking program works
correctly. Also, if g is Byzantine, the nonmasking program works correctly if it starts in a state
where all processes have correctly finalized their decisions. Hence, the invariant of program NB is
Enhancing the tolerance to masking. Program NB is not yet masking tolerant as a non-
Byzantine process j may first finalize its decision incorrectly, and only later correct its decision to
that of the majority of the other processes. Hence, to enhance the tolerance of NB to masking, it
suffices that j finalize its decision only when d:j is the same as the majority.
The masking program thus consists of four actions at each process j: three of these actions are identical
to actions NB1, NB3, and NB4, and the fourth action, MB2, is restricted so that j finalizes its
decision only when d:j is the same as the majority. Formally, the actions of the masking program,
MB, are as follows:
MB2 :: majdefined ∧ d:j = maj
→ f:j := true
Invariant. The fault-span of the nonmasking program, TNB , is implied by the invariant, SMB ,
of the masking program. Also, in SMB , j finalizes its decision only when d:j is the same as that of
the majority. Thus, SMB is:
Theorem 4.1 The Byzantine agreement program MB is masking fault-tolerant for invariant SMB .
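The theorem can be probed with a toy simulation. The Python sketch below is illustrative only (the scheduling loop, the random seed, and the modelling of a Byzantine process are ours); it runs MB with at most one arbitrary Byzantine process and checks that all finalized decisions of non-Byzantine processes agree.

import random
random.seed(0)

def run(byzantine):
    d = {"g": random.choice([0, 1]), 1: None, 2: None, 3: None}
    f = {1: False, 2: False, 3: False}
    def maj():
        vals = [d[k] for k in (1, 2, 3)]
        return next((v for v in (0, 1) if vals.count(v) >= 2), None)
    for _ in range(300):
        j = random.choice([1, 2, 3])
        if j == byzantine:                          # NB3: arbitrary updates
            d[j], f[j] = random.choice([0, 1]), random.choice([True, False])
        elif d[j] is None:                          # MB1: copy the general's value
            d[j] = d["g"] if byzantine != "g" else random.choice([0, 1])
        elif maj() is not None and d[j] != maj():   # NB4: correct towards the majority
            d[j] = maj()
        elif maj() is not None and d[j] == maj():   # MB2: finalize only when d:j = maj
            f[j] = True
    finals = [d[k] for k in (1, 2, 3) if k != byzantine and f[k]]
    return len(set(finals)) <= 1

print(all(run(b) for b in ("g", 1, 2, 3, None)))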
4.2 Example 2 : Data transfer
Recall the data transfer problem: An infinite input array at a sender process is to be copied, one
array item at a time, into an infinite output array at a receiver process. The sender and receiver
communicate via a bidirectional channel that can hold at most one message in each direction at a
time. It is required that each input array item be copied into the output array exactly once and
in the same order as sent. Moreover, eventually the number of items copied by the receiver should
grow unboundedly.
Data transfer is subject to the faults that lose channel messages.
As before, we will design the masking tolerance for data transfer in two stages. The resulting
program is the well known alternating-bit protocol.
Intolerant program. Iteratively, a simple loop is followed: sender s sends a copy of one array
item to receiver r. Upon receiving this item, r sends an acknowledgment to s, which enables the
next array item to be sent by s and so on. To this end, the program maintains binary variables rs
in s and rr in r: rs is 1 if s has received an acknowledgment for the last item it sent, and rr is 1 if
r has received an item but has not yet sent an acknowledgment.
The 0 or 1 items in transit from s to r are denoted by the sequence cs, and the 0 or 1 acknowledgments
in transit from r to s are denoted by the sequence cr. Finally, the index in the input array
corresponding to the item that s will send next is denoted by ns, and the index in the output array
corresponding to the item that r last received is denoted by nr.
The intolerant program contains four actions, the first two in s and the last two in r. By ID1,
s sends an item to r, and by ID2, s receives an acknowledgment from r. By ID3, r receives an
item from s, and by ID4, r sends an acknowledgment to s. Formally, the actions of the intolerant
program, ID, are as follows (where c1 ∘ c2 denotes concatenation of sequences c1 and c2):
ID2 :: cr ≠ ⟨⟩ → rs, cr, ns := 1, tail(cr), ns + 1
Remark. For brevity, we have ignored the actual data transferred between the sender and the
receiver: we only use the array index of that data.
Invariant. When r receives an item, nr = ns holds, and this equation continues to hold until s
receives an acknowledgment. When s receives an acknowledgment, ns is exactly one larger than
nr and this equation continues to hold until r receives the next item. Also, if cs is nonempty,
cs contains only one item, ⟨ns⟩. Finally, in any state, exactly one of the four actions is enabled.
Hence, the invariant of program ID is S ID , where
rs
Fault Actions. The faults in this example lose either an item sent from s to r or an acknowledgment
sent from r to s. The corresponding fault actions are as follows:
cs ≠ ⟨⟩ → cs := tail(cs)
cr ≠ ⟨⟩ → cr := tail(cr)
Nonmasking tolerant program. Program ID is intolerant as it deadlocks when a fault
loses an item or an acknowledgment. Hence, we add nonmasking tolerance to this fault by adding
an action by which s detects that an item or acknowledgment has been lost and recovers ID by
retransmitting the item.
Thus, the nonmasking program consists of five actions; four actions are identical to the actions of
program ID, and the fifth action retransmits the last item that was sent. This action is executed
when both channels, cs and cr, are empty, and rs and rr are both zero. In practice, this action
can be implemented by waiting for some predetermined timeout so that the sender can be sure
that either the item or the acknowledgment is lost, but we present only the abstract version of the
action. Formally, the actions of the nonmasking program, ND, are as follows:
ND5 :: cs = ⟨⟩ ∧ cr = ⟨⟩ ∧ rs = 0 ∧ rr = 0 → cs := cs ∘ ⟨ns⟩
Fault-span and invariant. If an item or an acknowledgment is lost, the program reaches a state
where cs and cr are empty and rs and rr are both equal to zero. Also, even in the presence of
faults, if cs is nonempty, it contains exactly the item whose index in the input array is hnsi. Thus,
the fault-span of the nonmasking program is
rs
and the invariant is the same as the invariant of ID, i.e., S ND = S ID .
Enhancing the tolerance to masking. Program ND is not yet masking tolerant, since r may
receive duplicate items if an acknowledgment from r to s is lost. Hence, to enhance the tolerance
to masking, we need to restrict the action ID3 so that r copies an item into the output array iff it
is not a duplicate.
Upon receiving an item, if r checks that nr is exactly one less than the index number received with
the item, r will receive every item exactly once. Thus, we can enhance its tolerance to masking by
adding such a check to program ND. This check, however, forces the size of the message sent from
s to r to grow unboundedly. Fortunately, we can exploit the fact that in ND, ns and nr differ by
at most 1, in order to simulate this check by sending only a single bit with the item, as follows.
Process s adds one bit, bs, to every item it sends such that the bit values added to two consecutive
items are different and the bit values added to an item and its duplicates are the same. Thus, to
detect that a message is duplicate, r maintains a bit, br, that denotes the sequence number of the
last message it received. It follows that an item received by r is a duplicate iff br is the same as
the sequence number in that message.
The masking program consists of five actions. These actions are as follows:
→ rs, cs := 0, cs ∘ ⟨ns, bs⟩
MD2 :: cr ≠ ⟨⟩
→ rs, cr, ns, bs := ns
→ cs := cs ∘ ⟨ns, bs⟩
→ if ((head(cs)).2 ≠ br) then
cs, rr := tail(cs), 1
→
Remark. Observe that in the masking program, the array indices ns and nr need not be sent on
the channel as it suffices to send the bits bs and br. With this modification, the resulting program
is the alternating bit protocol.
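To see why the single bit suffices, the protocol can be simulated with lossy one-slot channels. The Python sketch below is illustrative only: the loss model, the scheduler, and the helper names are ours, and the actions are simplified renderings of the five actions described above.

import random
random.seed(1)

ns, nr, bs, br = 1, 0, 1, 0           # next index to send, last index copied, and their bits
rs, rr, cs, cr = 1, 0, [], []         # rs = 1: ready to send; rr = 1: ack pending
output = []

for _ in range(600):
    if cs and random.random() < 0.2: cs.pop()       # fault: lose the item in transit
    if cr and random.random() < 0.2: cr.pop()       # fault: lose the acknowledgment
    step = random.choice(["send", "recv_ack", "recv_item", "send_ack", "timeout"])
    if step == "send" and rs == 1:                  # MD1-like: send the item with its bit
        rs, cs = 0, cs + [(ns, bs)]
    elif step == "recv_ack" and cr:                 # MD2-like: advance to the next item
        cr.pop(0); rs, ns = 1, ns + 1; bs = ns % 2
    elif step == "recv_item" and cs:                # MD3-like: copy only non-duplicates
        item, bit = cs.pop(0)
        if bit != br:
            output.append(item); nr, br = item, bit
        rr = 1
    elif step == "send_ack" and rr == 1:            # MD4-like: acknowledge
        rr, cr = 0, cr + [("ack",)]
    elif step == "timeout" and not cs and not cr and rs == 0 and rr == 0:
        cs = cs + [(ns, bs)]                        # MD5-like: retransmit after a timeout

print(output == list(range(1, len(output) + 1)))    # each item copied exactly once, in order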
Invariant. In any state reached in the presence of program and fault actions, if cs is nonempty,
cs has exactly one item, ⟨ns, bs⟩. Also, when r receives an item, nr = ns holds, and this equation
continues to hold until s receives an acknowledgment. Moreover, bs is the same as ns mod 2, br
is the same as nr mod 2, and exactly one of the five actions is enabled. Finally, nr is the same as
ns or nr is one less than ns. Thus, the invariant of the masking program is SMD , where
rs
bs=(ns mod ns
Theorem 4.2. The alternating-bit program MD is masking tolerant for invariant SMD .
4.3 Example 3 : TMR
Recall the TMR problem: Three processes share an output, out. A binary value is input to in:j,
for each process j. It is required that the output be set to this binary value.
Faults corrupt the input value of any one of the three processes.
Intolerant TMR. In the absence of faults, it suffices that out be set to in:j, for any process j.
Hence, the action of program IR in each process j is as follows (where out = ? denotes that the
output has not yet been set):
IR1 :: out = ? → out := in:j
Fault actions. In this example, the faults corrupt the input value in:j of at most one process.
They are represented by the following fault actions, one for each j (where k also ranges over the
Nonmasking TMR. Program IR is intolerant since out may be set incorrectly from a corrupted
in:j. Therefore, to add nonmasking tolerance to IR, we add a corrector that eventually corrects
out. Since at most one in:j is corrupted, the correct output can differ from at most one in:j. Hence,
if out differs from the in:j of two processes, the corrector resets out to the in:j value of those two.
Thus, the nonmasking program, NR, consists of two actions in each process j: action NR1 is the
same as IR1 and action NR2 is the corrector. Formally, these two actions are as follows (where ⊕
denotes modulo-3 addition):
NR1 :: out = ? → out := in:j
NR2 :: out ≠ ? ∧ out ≠ in:j ∧ out ≠ in:(j ⊕ 1) → out := in:j
Enhancing the tolerance to masking. Program NR is not yet masking tolerant since out
may be set incorrectly before being corrected. Therefore, to enhance the tolerance to masking, we
restrict the action NR1 so that the output is always set to an uncorrupted in:j. A safe predicate
for this restriction of action NR1 is that in:j agrees with the input of at least one other process,
i.e., (in:j = in:(j ⊕ 1) ∨ in:j = in:(j ⊕ 2)). Restricting action NR1 with this safe predicate yields
a stronger version of action NR2, thus the resulting masking tolerant program MR consists of only
one action for each j:
MR1 :: out = ? ∧ (in:j = in:(j ⊕ 1) ∨ in:j = in:(j ⊕ 2)) → out := in:j
Invariant. In program MR, if out is equal to in:j for some j, then there exists another process
whose input value is the same as in:j. Hence, the invariant of program MR is S MR , where
S MR ≡ (out = ? ∨ (∃ j, k : j ≠ k : out = in:j ∧ in:j = in:k))
Theorem 4.3 The triple modular redundancy program MR is masking tolerant for invariant
SMR .
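The masking TMR action amounts to a majority vote. The Python sketch below is illustrative only (the function and variable names are ours); it sets the output only from an input that at least one other process shares, and checks exhaustively that a single corrupted input never reaches the output.

import itertools

def mr_step(inputs, out):
    # MR1-like: set out only from an input that agrees with at least one other input.
    if out is not None:
        return out
    for j in range(3):
        if any(inputs[j] == inputs[k] for k in range(3) if k != j):
            return inputs[j]
    return out

ok = True
for correct in (0, 1):
    for faulty, bad in itertools.product(range(3), (0, 1)):
        inputs = [correct, correct, correct]
        inputs[faulty] = bad                   # fault: corrupt at most one input
        ok = ok and mr_step(inputs, None) == correct
print(ok)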
5 Case Study : Mutual Exclusion
In this section, we design a new and improved masking tolerant solution for the mutual exclusion
problem using our two-stage method. Recall the mutual exclusion problem: Multiple processes
may each access their critical sections provided that at any time at most one process is accessing
its critical section. Moreover, no process should wait forever to access its critical section, assuming
that each process leaves its critical section in finite time.
We assume that the processes have unique integer ids. At any instant, each process is either "up"
or "down". Only up processes can execute program actions. Actions executed by an up process j
may involve communication only with the up processes connected to j via channels. Channels are
bidirectional.
A fault fail-stops one of the processes, i.e., renders an up process down. Fail-stops may occur in
any (finite) number, in any order, at any time, and at any process as long as the set of up processes
remains connected.
One class of solutions for mutual exclusions is based on tokens. In token-based solutions, a unique
token is circulated between processes, and a process enters its critical section only if (but not
necessarily if) it has the token. To ensure that no process waits forever for the token, a fair
strategy is chosen by which if any process requests access to its critical section then it eventually
receives the token. An elegant token-based program is independently due to Raymond [17] and
Snepscheut [18]; this program uses a fixed tree to circulate the token.
The case study is organized as follows. In Section 5.1, we recall (an abstract version of) the
intolerant mutual exclusion program of Raymond and Snepscheut. In Section 5.2, we transform
this fault-intolerant program into a nonmasking tolerant one by adding correctors. Finally, in
Section 5.3, we enhance the tolerance to masking by adding detectors. The resulting solution is
compared with other masking tolerant token-based mutual exclusion solutions in the next section.
5.1 The Fault-Intolerant Program
The processes are organized in a tree. Each process j maintains a variable P:j, to denote the parent
of j in this tree; a variable h:j, to denote the holder process of j which is a neighbor of j in the
direction of the process with the token; and a variable Request:j, to denote the set of requests that
were received from the neighbors of j in the tree and that are pending at j.
The program consists of three actions for each process, the first for making or propagating to the
holder process a request for getting the token; the second for transmitting the token to satisfy a
pending request from a neighbor; and the third for accessing the critical section when holding the
token. The actions are as follows:
(j needs to request critical section ∨ Request:j ≠ ∅)
→
→ h:k, h:j := j, j;
Request:k
→ access critical section
These actions maintain the holder relation so that it forms a directed tree rooted at the process that
has the token. The holder relation, moreover, conforms to the parent tree; i.e., if k is the holder of
are adjacent in the tree. Thus, the invariant of the fault-intolerant program is S IM ,
where
(j
where P N
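The essence of the token tree can be captured in a few lines. The Python sketch below is a simplification of our own (requests are not queued in Request sets; the token is simply moved one edge at a time along the holder chain towards the requesting process), and it checks that exactly one process holds the token at all times.

parent = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2}     # a hypothetical tree rooted at process 1
h = dict(parent)                            # holder relation; h[j] == j means j has the token

def holder(j):
    return j if h[j] == j else holder(h[j])

def request(j):
    while h[j] != j:
        path = [j]
        while h[path[-1]] != path[-1]:
            path.append(h[path[-1]])        # follow the holder chain to the token
        k, nxt = path[-1], path[-2]         # k has the token; nxt is its neighbor towards j
        h[k], h[nxt] = nxt, nxt             # transmit the token one edge, as in the second action
        assert sum(h[p] == p for p in h) == 1   # a unique token at all times
    return j                                # j may now access its critical section

for j in (5, 3, 1, 4):
    assert holder(request(j)) == j
print("unique token maintained")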
5.2 A Nonmasking Tolerant Version
In the presence of faults, the parent tree used by IM may become partitioned. As a result, the
holder relation may also become inconsistent. Moreover, the token circulated by IM may be lost,
e.g., when the process that has the token (i.e., whose holder equals itself) fail-stops. Hence, to add
nonmasking tolerance to fail-stops, we need to add a corrector that restores the parent tree and
the holder tree. We build this corrector by superposing two correctors: NT which corrects the
parent tree and NH which corrects the holder tree. In particular, we ensure that in the presence
of fail-stops eventually the parent tree is constructed, the holder relation is identical to the parent
relation and, hence, the root process has the token.
5.2.1 Designing a Corrector NT for the parent Tree
For a corrector that reconstructs the parent tree, we reuse Arora's program [19] for tree
maintenance. This program allows faults to yield program states where there are multiple trees and
unrooted trees. Continued execution of the program ensures convergence to a fix-point state where
there is exactly one rooted spanning tree.
To deal with multiple trees, the program has actions that merge trees. The merge actions use an
integer variable root:j, denoting the id of the process that j believes to be its tree root, as follows.
A process j merges into the tree of a neighboring process k when root:k ?root:j. Upon merging, j
sets root:j to be equal to root:k and P:j to be k. Also, j aligns its holder relation along the parent
relation by setting h:j to k. Observe that, by merging thus, no cycles are formed and the root value
of each process remains at most the root value of its parent. When no merge actions are enabled,
it follows that all rooted processes have the same root value.
To deal with unrooted trees, the program has actions that inform all processes in unrooted trees
that they have no root process. These actions use a variable col:j, denoting the color of j, as
follows. When a process detects that its parent has failed or the color of its parent is red, the
process sets its color to red. When a leaf process obtains the color red, it separates from its tree
and resets its color to green, thus forming a tree consisting only of itself. When a leaf separates
from its tree, it aligns its holder relation along the parent relation by setting its holder to itself.
Formally, the actions of the corrector NT for process j are as follows (Adj:j denotes the set of up
neighbors of process j):
→ col:j := red
→ P:j, root:j, h:j := k, root:k, k
Fault Actions. Formally, the fail-stop action for process j is as follows:
failstop :: up:j → up:j := false
Fault-span and Invariant. In the presence of faults, the actions of NT preserve the acyclicity
of the graph of the parent relation as well as the fact that the root value of each process is at most
the root value of its parent. They also preserve the fact that if a process is colored red then its
parent is also colored red. Thus, the fault-span of corrector NT is the predicate T NT , where
T NT ≡ (the graph of the parent relation is a forest) ∧ (∀ j : root:j ≤ root:(P:j)) ∧ (∀ j : col:j = red ⇒ col:(P:j) = red)
After faults stop occurring, eventually the program ensures that if a process is colored red then all
its children are colored red, i.e., all processes in any unrooted tree are colored red. Furthermore,
the program reaches a state where all processes are colored green, i.e., no process is in an unrooted
tree. Finally, the graph of parent relation forms a rooted spanning tree. In particular, the root
values of all processes are identical.
Remark. Henceforth, for brevity, we use the term ch:j to denote the children of j; the term j is a
root to denote that the parent of j is j, col:j is green, and j is up; and the term nbrs(X) to denote
the set of processes adjacent to processes in the set of processes X (including X). Formally,
j is a root ≡ (P:j = j ∧ col:j = green ∧ up:j).
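The merge and recoloring rules can be exercised on a small example. The Python sketch below is illustrative only (the topology, the sweep-based scheduler, and the tie-breaking are ours); it applies the rules described above until a single rooted spanning tree of the up processes remains.

up = {1: False, 2: True, 3: True, 4: True, 5: True}    # process 1 has fail-stopped
adj = {2: {3, 4}, 3: {2, 5}, 4: {2}, 5: {3}}            # surviving topology
P = {2: 1, 3: 1, 4: 2, 5: 3}                            # stale parent pointers
root = {j: 1 for j in P}
col = {j: "green" for j in P}

def children(j):
    return [k for k in P if P[k] == j and k != j]

for _ in range(20):                                     # repeated sweeps until a fixpoint
    for j in sorted(P):
        dead_or_red = not up.get(P[j], False) or col.get(P[j]) == "red"
        if col[j] == "green" and P[j] != j and dead_or_red:
            col[j] = "red"                              # parent has failed or is red
        elif col[j] == "red" and not children(j):
            P[j], root[j], col[j] = j, j, "green"       # a red leaf separates into its own tree
        elif col[j] == "green":
            bigger = [k for k in adj[j] if col[k] == "green" and root[k] > root[j]]
            if bigger:                                  # merge into a tree with a larger root id
                k = max(bigger, key=lambda k: root[k])
                P[j], root[j] = k, root[k]

roots = [j for j in P if P[j] == j and col[j] == "green"]
print(len(roots) == 1 and all(root[j] == roots[0] for j in P))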
5.2.2 Designing a Corrector NH for the holder Tree
After the parent tree is reconstructed, the holder relation may still be inconsistent, in two ways. (1)
The holder of j need not be adjacent to j in the parent tree, or (2) the holder of j may be adjacent
to j in the tree but the holder relation forms a cycle. Hence, the corrector NH that restores the
holder relation consists of two actions: Action NH1 corrects the holder of j when (1) holds, by
setting h:j to P:j. Action NH2 corrects the holder of j when (2) holds: if the parent of k is j,
holder of j is k and the holder of k is j, j breaks this cycle by setting h:j to P:j. The net effect
of executing these actions is that eventually the holder relation is identical to the parent relation
and, hence, the root process has the token.
NH1 :: h:j is not adjacent to j in the parent tree → h:j := P:j
NH2 :: P:k = j ∧ h:j = k ∧ h:k = j → h:j := P:j
Fault-Span and Invariant. The corrector NH ensures that the holder of j is adjacent to
j in the parent tree and for every edge (j, P:j) in the parent tree, either h:j is the same as P:j,
or h:(P:j) is the same as j, but not both. Thus, NH corrects the program to a state where the
conjunction of these two conditions, S NH , holds.
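The two NH rules are easy to check mechanically. The Python sketch below is illustrative only (the tree and the fixpoint loop are ours); it applies NH1 and NH2 to every possible initial holder assignment over a fixed parent tree and verifies that, at the fixpoint, every holder is adjacent in the tree and no tree edge carries a mutual holder cycle.

import itertools

P = {1: 1, 2: 1, 3: 1, 4: 2}                      # a hypothetical parent tree rooted at 1

def tree_neighbors(j):
    return {P[j], j} | {k for k in P if P[k] == j}

def nh_fixpoint(h):
    h, changed = dict(h), True
    while changed:
        changed = False
        for j in P:
            if h[j] not in tree_neighbors(j):                     # NH1
                h[j], changed = P[j], True
            elif h[j] != j and P[h[j]] == j and h[h[j]] == j:     # NH2: break the 2-cycle
                h[j], changed = P[j], True
    return h

ok = True
for hs in itertools.product(P, repeat=len(P)):
    h = nh_fixpoint(dict(zip(P, hs)))
    ok = ok and all(h[j] in tree_neighbors(j) for j in P)
    ok = ok and all(not (h[P[k]] == k and h[k] == P[k]) for k in P if P[k] != k)
print(ok)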
5.2.3 Adding the Corrector : Verifying Interference Freedom
As described earlier, the corrector we add to IM is built by superposing two correctors NT and
NH. NH updates only the holder relation and NT does not read the holder relation. Therefore,
NH does not interfere with NT . Also, after NT reconstructs the tree and satisfies SNT , none of
its actions are enabled. Therefore, NT does not interfere with NH.
IM updates variables that are not read by NT . Therefore, IM does not interfere with NT .
Also, NH reconstructs the holder relation by satisfying the predicates SNH1 :j and SNH2 :j for each
process j. Both SNH1 :j and SNH2 :j are respectively preserved by IM . Therefore, IM does not
interfere with NH. Finally, after the tree and the holder relation is reconstructed and (S NT - SNH )
is satisfied, actions of NT and NH are disabled. Therefore, NT and NH do not interfere with
IM .
It follows that the corrector consisting of both NT and NH ensures that a state satisfying (S NT ∧
S NH ) is reached, even when executed concurrently with IM . Since (S NT ∧ S NH ) continues to hold
thereafter, we may add the corrector to IM to obtain the nonmasking tolerant program NM , whose actions at
process j are as follows:
Fault-Span and Invariant. The invariant of program NM is the conjunction of S NT and
S NH . Thus, the invariant of NM is S NM ≡ (S NT ∧ S NH ).
The fault-span of program NM is equal to T NT , i.e., T NM ≡ T NT .
Theorem 5.1. The mutual exclusion program NM is nonmasking fault-tolerant for SNM .
5.3 Enhancing the Tolerance to masking
Actions NM5, NM7, and NM8 can affect the safety of program execution only when the process
executing them sets the holder to itself, thereby generating a new token. The safe predicate that
should hold before generation of a token is therefore the condition "no process has a token". Towards
detection of this safe predicate, we exploit the fact that NM is nonmasking tolerant: if the token
is lost, NM eventually converges to a state where the graph of the parent relation is a rooted tree
and the holder of each processes is its parent. Hence, it suffices to check whether the program is at
such a state. To perform this check, we let j initiate a diffusing computation whenever j executes
action NM5, NM7, or NM8. Only when j completes the diffusing computation successfully, does
it safely generate a token.
Actions NM2 and NM3, which respectively let process k transmit a token to process j and let j
access its critical section, can affect the safety of program execution only if they involve a
spurious token generated in the presence of fail-stops. The safe predicate that should hold before
these actions execute would certify that the token is not spurious. Towards detection of this safe
predicate, we exploit the fact that fail-stops are detectable faults and, hence, we can let the fail-stop
of a process force its neighboring processes to participate in a diffusing computation. Recalling from
above that a new token is safely generated only after a diffusing computation completes, we can
define the safe predicate for NM2 to be "k is not participating in a diffusing computation" and for
action NM3 to be "j is not participating in a diffusing computation".
Observe that the safe predicate detection to be performed for the first set of actions (NM5, NM7,
and NM8) is global, in that it involves the state of all processes, whereas the safe predicate detection
to be performed for the second set of actions (NM2 and NM3) is local. We will design a separate
detector for each set of actions, such that superposition of these detectors on NM yields a masking
fault-tolerant program.
5.3.1 Designing the Global Detector, GD
As discussed above, the global detector, GD, uses a diffusing computation to check if some process
has a token. Only a root process can initiate a diffusing computation. Upon initiation, the
root propagates the diffusing computation to all of its children. Each child likewise propagates
the computation to its children, and so on. It is convenient to think of these propagations as a
propagation wave. When a leaf process receives the propagation wave, it completes and responds to
its parent. Upon receiving responses from all its children, the parent of the leaf likewise completes
and responds to its parent, and so on. It is convenient to think of these completions as a completion
wave. In the completion wave, a process responds to its parent with a result denoting whether the
subtree rooted at that process has a token. Thus, when the root receives a completion wave, it can
decide whether some process has a token by inspecting the result.
The diffusing computation is complicated by the following situations: multiple (root) processes may
initiate a diffusing computation concurrently, processes may fail-stop while the diffusing computation
is in progress, and a process may receive a token after responding to its parent in a diffusing
computation that it does not have the token.
To deal with concurrent initiators, we let only the diffusing computation of the highest id process
complete successfully; those of the others are aborted, by forcing them to complete with the result
false. Specifically, if a process propagating a diffusing computation observes another diffusing
computation initiated by a higher id process, it starts propagating the latter and aborts the former
diffusing computation by setting the result of its former parent (the process from which it received
the former diffusing computation) to false. This ensures that the former parent completes the diffusing
computation of the lower id process with the result false.
To deal with the fail-stop of a process, we abort any diffusing computations that the
neighboring processes may be propagating: Specifically, if j is waiting for a reply from k to complete
in a diffusing computation and k fail-stops then j cannot decide if some descendent of k has a token.
Hence, upon detecting the fail-stop of k, j aborts its diffusing computation by setting its result to
false.
Finally, to deal with the potential race condition where a diffusing computation "misses" a token
because the token is sent to some process that has already completed in the diffusing computation
with the result true, we ensure that even if this occurs the diffusing computation completes at the
initiator only with the result false. Towards this end, we modify the global detector as follows:
A process completes in a diffusing computation with the result true only if all its neighbors have
propagated that diffusing computation. And, the variable result is maintained to be false if the
process ever had a token since the last diffusing computation was propagated. To see why this
modification works, consider the first process, say j, that receives a token after it has completed in
a diffusing computation with the result true. Let l denote the process that sent the token to j. It
follows that l has at least propagated the diffusing computation and its result is false. Moreover,
since j is the first process to receive a token after completing the diffusing computation with the
result true, l can only complete that diffusing computation with the result false. Since the result
of l is propagated towards the initiator of the diffusing computation in the completion wave, the
initiator is guaranteed to complete the diffusing computation with the result false.
In sum, the diffusing computation deals with each of these complications via an abort mechanism
that, by setting the result of the appropriate processes to false, fails the appropriate diffusing
computations.
When the initiator of a diffusing computation completes with the result false, it starts yet another
diffusing computation. Towards this end, the diffusing computation provides an initiation mechanism
that lets a root process initiate a new diffusing computation. To distinguish between the
different computations initiated by some process, we let each process maintain a sequence number
that is incremented in every diffusing computation. Furthermore, when a process propagates a new
diffusing computation, it resets its result to true provided that it does not have the token.
From the above discussion, process j needs to maintain a phase, phase:j, a sequence number, sn:j,
and a result, res:j. The phase of j is either prop or comp and denotes whether j is propagating
a diffusing computation or it has completed its diffusing computation. The sequence number of j
distinguishes between successive diffusing computations initiated by a root process. Finally, the
result of j denotes whether j completed its diffusing computation correctly or it aborted its diffusing
computation.
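The two waves are easy to prototype with the three per-process fields just introduced. The Python sketch below is illustrative only (it is synchronous, fault-free, and ignores sequence-number management across concurrent initiators); it shows how the root's completion result is the conjunction of the "no token here" checks over its subtree.

P = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2}          # a hypothetical parent tree rooted at 1
h = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2}          # holder relation; h[j] == j means j has a token
phase, sn, res = {}, {}, {}

def children(j):
    return [k for k in P if P[k] == j and k != j]

def propagate(j, seq):                       # the propagation wave, root to leaves
    phase[j], sn[j] = "prop", seq
    res[j] = (h[j] != j) if j != 1 else True # the root is the process about to generate a token
    for k in children(j):
        propagate(k, seq)

def complete(j):                             # the completion wave, leaves to root
    res[j] = res[j] and all([complete(k) for k in children(j)])
    phase[j] = "comp"
    return res[j]

propagate(1, seq=1); print("safe to generate a token:", complete(1))
h[4] = 4                                     # pretend a spurious token exists at process 4
propagate(1, seq=2); print("safe to generate a token:", complete(1))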
Actions for the global detector. The global detector consists of four actions, viz INIT , PROP ,
COMP , and ABORT .
INIT lets process j initiate a diffusing computation by incrementing its sequence number. We
specify here only the statement of INIT ; the conditions under which j executes INIT are specified
later.
PROP lets j propagate a diffusing computation when j and P:j are in the same tree and sn:j is
different from sn:(P:j). If the holder relation of j is aligned along the parent relation and P:j is in
the propagate phase, j propagates that diffusing computation and sets its result to true. Otherwise,
j completes that diffusing computation with the result false.
COMP lets j complete a diffusing computation if all children have completed the diffusing computation
and all neighbors have propagated or completed that diffusing computation. The result
computed by j is set to true iff the result returned by all its children is true, all neighbors of j have
propagated that diffusing computation, and the result of j is true. If the root completes a diffusing
computation with the result true, the safe predicate has been detected and the root process can
proceed to safely generate a new token and, consequently, change its result to false.
ABORT lets j complete a diffusing computation prematurely with the result false. When j aborts
a diffusing computation, j also sets the result of its parent to false to ensure that the parent of j
completes its diffusing computation with the result false. We specify here only the statement of
ABORT ; the conditions under which j executes ABORT are specified later. Formally, the actions
of detector GD for process j are as follows:
INIT (j) :: if (P:j = j) then
phase:j, sn:j := prop, newseq();
res:j := true
→ sn:j := sn:(P:j);
phase:j, res:j := prop, true
else res:j := false
COMP (j) :: phase:j = prop ∧
→ res:j
if (P:j
else if (P:j = j ∧ res:j) then
res:j := false
if (P:j ∈ Adj:j) then res:(P:j) := false
Remark. In the ABORT action, j synchronously updates the state of its parent in addition to
its own. This action can be refined, since the parent of j completes its diffusing computation only
after j completes its diffusing computation, so that j only updates its own state and P:j reads the
state of j later.
Fault Actions. When a process fail-stops, all of its neighbors abort any diffusing computation
that they are propagating. Moreover, if the initiator aborts its diffusing computation it initiates a
new one. Hence, the fault action is as follows (where (|| l : R:l : X:l) denotes that the statement X:l is
executed at all processes l that satisfy R:l):
→ up:j := false;
Invariant. We relegate the invariant SGD of the global detector to Appendix A1.
5.3.2 Designing the Local Detector LD
The safe predicate for action NM2 is "k is not participating in a diffusing computation"; that
is, phase:k = comp. The safe predicate for action NM3 is "j is not participating in a diffusing
computation"; that is, phase:j = comp. Therefore, these actions are modified as follows:
→ h:k, h:j := j, j;
Request:k
→ access critical section
5.3.3 Adding the Detectors : Verifying Interference Freedom
Actions NM5, NM7, and NM8 are restricted to execute INIT , to initiate a diffusing computation
whose successful completion, i.e. execution of COMP with result true will generate a new token.
And, as described above, actions NM2 and NM3 are restricted with the local detectors, to obtain
LD1 and LD2, respectively. We still need to verify that the composition is free from interference.
Note that the global detector, GD, is a superposition on NM and, hence, GD does not interfere
with NM . To ensure that NM does not interfere with GD, we restrict all actions of NM , other than
NM5, NM7, and NM8, to execute ABORT . (The alert reader will note that this last restriction
is overkill: some actions of NM need not be thus restricted, but we leave that optimization as an
exercise for the reader.) As long as the correctors of NM are executing, GD is safely aborted.
Once the correctors of NM terminate, GD makes progress. Hence, NM does not interfere with
GD. Also, execution of GD eventually reaches a state where the phase of all processes is comp.
Thus, LD does not interfere with NM , and since LD detects the safe predicate atomically, it is
not interfered by NM and GD.
Formally, the actions of the resulting masking tolerant program MM are as follows:
Fault Actions. The fault action is identical to the fault action described in Section 5.3.1.
Invariant. The invariant of program MM is the conjunction of T NM and S GD . Thus, the
invariant of MM is S MM ≡ (T NM ∧ S GD ).
Theorem 5.2. The mutual exclusion program MM is masking tolerant for SMM .
Remark. A leader election program can be easily extracted from our mutual exclusion case study.
To this end, we drop the variables h and Request from program MM . Thus, the resulting program
consists of the corrector NT (actions MM4-MM6) and the detector GD (actions MM9 and MM10).
In this program, a process is a leader iff it is a root and its phase is comp. This program is derived
by adding detector GD to the nonmasking tolerant program NT . The detailed design of such a
leader election program is presented in [20].
6 Discussion and Concluding Remarks
In this paper, we presented a compositional method for designing masking fault-tolerant programs.
First, by corrector composition, a nonmasking fault-tolerant program was designed to ensure that,
once faults stopped occurring, the program eventually reached a state from where the problem
specification was satisfied. Then, by detector composition, the program was augmented to ensure
that, even in the presence of the faults, the program always satisfied its safety specification.
We demonstrated the method by designing classical examples of masking fault-tolerant programs.
Notably, the examples covered a variety of fault-classes including Byzantine faults, message faults,
input faults and processor fail-stops and repairs. Also, they illustrated the generality of the method,
in terms of its ability to provide alternative designs for programs usually associated with other well-known
design methods for masking fault-tolerance: Specifically, the TMR and Byzantine examples
are usually associated with the method of replication or, more generally, the state-machine-approach
for designing client-server programs [21]. The alternating-bit protocol example is usually associated
with the method of exception handling or that of rollback-recovery, with the "timeout" action,
MD5, being the exception-handler or the recovery-procedure.
We found that judicious use of this method offers the potential for the design of improved masking
tolerant solutions, measured in terms of the scope of fault-classes that are masked and/or the
performance of the resulting programs. This is because, in contrast to some of the well-known
design methods, the method is not committed to the overhead of replication; instead, it encourages
the design of minimal components for achieving the required tolerance. And, in contrast to the
sometimes ad hoc treatment of exception-handling and recovery procedures, it focuses attention
on the systematic resolution of the interference between the underlying program and the added
tolerance components.
One example of an improved masking tolerant solution designed using the method is our token-based
mutual exclusion program. In terms of performance, in the absence of faults, our program
performs exactly as its fault-intolerant version (due to Raymond [17] and Snepscheut [18]) and
thus incurs no extra overhead in this case. By way of contrast, the acyclic-graph-based programs of
Dhamdhere and Kulkarni [22] and Chang, Singhal, and Liu [23] incur time overhead for providing
fault-tolerance, even in the absence of faults. Also, in the tree based program of Agrawal and
Abbadi [24], the amount of work performed for each critical section may increase when processes
fail (especially when the failed processes are close to the tree root); in our program, failure of a
process causes an overhead only during the convergence phase, but not after the program converges.
Moreover, in terms of tolerance, our program is more tolerant than that of [24] (which in the worst
case is intolerant to more than log n process fail-stops).
We note in passing that our mutual exclusion program can be systematically extended to tolerate
process repairs as well as channel failures and repairs. Also, it can be systematically transformed so
that processes cannot access the state of their neighbors atomically but only via asynchronous message
passing. For other examples of improved solutions designed using the method, the interested
reader is referred to our designs for leader election [20], termination detection [20], and distributed
reset [25].
We also note that although superposition was used for detector composition in our example designs,
superposition is only one of the possible strategies for detector composition. The advantage of superposing
the detectors on the underlying nonmasking tolerant program is the immediate guarantee
that the detectors did not interfere with the closure and convergence properties of the underlying
program.
One useful extension of the method would be to design programs that are nonmasking tolerant to
one fault-class and masking tolerant to another or, more generally, that possess multiple tolerance
properties (see [12, 25, 26]). The design of such multitolerant programs is motivated by the insight
that the fault-span of a program need not be unique [5]. Hence, multiple fault-spans may be
associated with a program, for instance, if the program is subject to multiple fault-classes. It
follows that the program can be nonmasking tolerant to one of these fault-classes and masking
tolerant to another. More generally, we find that multitolerance has several practical applications
[12].
Another useful extension would be to augment the method to allow "tolerance refinement", i.e., to
allow refinement of a tolerant program from an abstract level to a concrete level while preserving
its tolerance property. Tolerance refinement is orthogonal to the "tolerance addition" considered
in the paper, which adds the desired masking tolerance directly at any desired (but fixed) level of
implementation. With this extension we could, for instance, refine our mutual exclusion program so
that neighboring processes communicate only via asynchronous message passing within the scope
of the method itself.
Finally, alternative design methods based on detector and corrector compositions would be worth
studying. An alternative stepwise method would be to first perform detector composition and
then perform corrector composition, which we view as designing masking tolerance via fail-safe
tolerance [12]. Another alternative (but not stepwise) method would be to compose detectors and
correctors simultaneously. It would be especially interesting to compare these methods with respect
to design-complexity versus performance-complexity tradeoffs.
Acknowledgments. We are grateful to Ted Herman for helpful comments on a preliminary
version of this paper and thank the anonymous referees for their detailed, constructive suggestions.
--R
A compositional framework for fault tolerance by specification transformation.
System structure for software fault tolerance.
Dependable computing and fault tolerance: Concepts and terminology.
Closure and convergence: A foundation of fault-tolerant computing.
Parallel Program Design: A Foundation.
Defining liveness.
A Discipline of Programming.
The Science of Programming.
Proving boolean combinations of deterministic properties.
Constraint satisfaction as a basis for designing nonmasking fault-tolerance
Component based design of multitolerance.
Structure of Distributed Algorithms.
The Byzantine generals problem.
Compositional design of multitolerant repetitive Byzantine agreement.
Predicate calculus and program semantics.
A tree based algorithm for mutual exclusion.
Fair mutual exclusion on a graph of processes.
Efficient reconfiguration of trees: A case study in the methodical design of nonmasking fault-tolerance.
Designing masking fault-tolerance via nonmasking fault-tolerance
Implementing fault-tolerant services using the state machine approach: A tutorial
A token based k resilient mutual exclusion algorithm for distributed systems.
A fault tolerant algorithm for distributed mutual exclusion.
An efficient fault-tolerant solution for distributed mutual exclusion
Multitolerance in distributed reset.
Multitolerant barrier synchronization.
Felix C. Grtner, Fundamentals of fault-tolerant distributed computing in asynchronous environments, ACM Computing Surveys (CSUR), v.31 n.1, p.1-26, March 1999 | masking and nonmasking fault-tolerance;correctors;stepwise design formal methods;distributed systems;component based design;detectors |
284742 | Local Convergence of the Symmetric Rank-One Iteration. | We consider conditions under which the SR1 iteration is locally convergent. We apply the result to a pointwise structured SR1 method that has been used in optimal control. | Introduction
. The symmetric rank-one (SR1) update [1] is a quasi-Newton
method that preserves symmetry of an approximate Hessian (optimization problems)
or Jacobian (nonlinear equations). The analysis in this paper is from the nonlinear
equations point of view. Our purpose is to prove a local convergence result using
the concept of uniform linear independence from [5], extend that result to structured
updates where part of the Jacobian can be computed exactly, and then apply those
results to the pointwise SR1 update considered in [14] in the context of optimal control.
We begin with a nonlinear equation
F(x) = 0
in R^N. We make the standard assumptions in nonlinear equations.
Assumption 1.1. F has a root x_*. There is δ > 0 such that the Jacobian F'(x)
exists and is Lipschitz continuous in the set {x : ‖x − x_*‖ ≤ δ}
with Lipschitz constant γ. F'(x_*) is nonsingular.
Later in this paper we will assume that F'(x) is symmetric near x_* and use the
SR1 update to maintain symmetric approximations to F'(x_*).
From current approximations x_c and B_c (to x_* and F'(x_*)) the SR1 iteration computes a
new point x_+ by computing a search direction,
s = −B_c^{-1} F(x_c),
and updating x_c,
x_+ = x_c + s.
Version of May 17, 1995.
† North Carolina State University, Department of Mathematics and Center for Research in Scientific
Computation, Box 8205, Raleigh, N. C. 27695-8205 (Tim Kelley@ncsu.edu). The research of this
author was supported by National Science Foundation grant #DMS-9321938 and North Atlantic Treaty
Organization grant #CRG 920067. Computing was partially supported by an allocation of time from
the North Carolina Supercomputing Center.
‡ Universität Trier, FB IV - Mathematik and Graduiertenkolleg Mathematische Optimierung, 54296
Trier, Germany (sachs@uni-trier.de). The research of this author was supported by North Atlantic
Treaty Organization grant #CRG 920067.
B_c is updated to form B_+ by setting
y = F(x_+) − F(x_c)
and, if (y − B_c s)^T s ≠ 0, updating B_c to obtain
B_+ = B_c + (y − B_c s)(y − B_c s)^T / ((y − B_c s)^T s).
If (y − B_c s)^T s = 0, the update for B is skipped, so B_+ = B_c.
Observations have been reported [15], [4], [5], [19], [10], [18], [11], [20], [14], that
indicate that SR1 can outperform BFGS in the context of optimization, where either
the approximate Hessians can be expected to be positive definite or a trust-region
framework is used [3], [4], [5].
One modification of SR1 that allows for a convergence analysis has been proposed
in [17].
The SR1 update was considered in [2] where it was shown that the local superlinear
convergence theory in that paper did not apply, because the updates could become
undefined.
Preservation of symmetry is an attractive feature, as is the possibility of storing one
vector per iterate in a matrix-free limited memory implementation. However, implementations
that store a single vector for each iteration are known for Broyden's method
[8], [13], and the BFGS method [21], which have much better understood convergence
properties. The advantage of the SR1 method over others is in a reduction in the number
of iterations. Such a reduction has been observed by many authors as we indicated
above.
As is standard, we update the approximate Jacobian only if
|(y − B_c s)^T s| ≥ σ ‖y − B_c s‖ ‖s‖    (1.5)
for some σ ∈ (0, 1) fixed. In many recent papers [5], [15], [3], on SR1, (1.5), with an
arbitrary choice of σ, is one of the assumptions used to prove convergence of B_n to
F'(x_*). The numerical results presented in [5], [15], and [3] use very small values of σ.
In this paper, as was done in [14], we use a larger value of σ than in other treatments
of the SR1 iteration as a way to improve stability. The estimates in §2 can be used as
a rough guide in selection of an appropriate value of σ. The numerical results in §5
illustrate the benefits of varying σ.
In this paper we show that if the initial approximations to the solution and Jacobian
are sufficiently good and if the sequence of steps that satisfy (1.5) also satisfy a uniform
linear independence condition [5], then the iteration will be locally linearly convergent
and the sequence {B_k} will remain near to F'(x_*). A stronger uniform linear independence
condition implies k-step superlinear convergence for some integer k. Thus, our
goal is different from that in [5], [15], and [3], where convergence of the iteration to the
solution was an assumption and conditions were given that guaranteed convergence of
the approximate Jacobians to F 0 (x ).
While the uniform linear independence condition may seem strong, it is a reasonable
condition for very small problems. Such problems arise as part of the pointwise SR1
method proposed in [14] for certain optimal control problems and the results in this
paper give insight that we use to improve the performance of the pointwise SR1 method.
In x 2 we state and prove our basic convergence results. We consider structured updates
in x 3 and the application to pointwise updates in x 4. Finally we report numerical
results in x 5.
2. Basic Lemmas and Convergence Results. As we deal with local convergence
only in this paper, we take full steps (x_+ = x_c + s) and use the fact that
y − B_c s = F(x_+),
which is a direct consequence of the definition y = F(x_+) − F(x_c) and the
equation for the quasi-Newton step B_c s = −F(x_c). Hence the SR1 update can be
written as
B_+ = B_c + F(x_+) F(x_+)^T / (F(x_+)^T s).
We use the notation
E = B − F'(x_*),   e = x − x_*
for errors in Jacobian and solution approximations.
It may happen that many iterations take place between updates of B. We must
introduce notation to keep track of those iterations that result in a change in B. We
say that s_n is a major step, x_{n+1} is a major iteration, and B_{n+1} a major update
if (1.5) holds (with c replaced by n and + by n + 1). In this case B_{n+1} ≠ B_n.
A step is a minor step if it is not a major step (and hence no update of B takes place,
B_{n+1} = B_n). A local convergence theory must show that the new approximation
B_{n+1} to the Jacobian at the solution is nonsingular. We do this by proving an analog of
the "bounded deterioration" results used in the study of other quasi-Newton methods.
However, an inherent instability must be kept in check and this is where the uniform
linear independence becomes important.
2.1. Stability. We base our main result on several lemmas. The first simply summarizes
some well known results in nonlinear equations [6], [13], [16], and has nothing to
do with an assumption of symmetry of F'(x_*) or any particular quasi-Newton method.
Lemma 2.1. Let Assumption 1.1 hold and let ρ ∈ (0, 1) be given. Then there are
ε_0 and δ_0 such that if x_c and B_c satisfy
then B_c is nonsingular and
Moreover
Lemma 2.2. Let the hypotheses of Lemma 2.1 hold. Then there is C_1 such that if
s_c is a major step, then
‖s_c‖
and
‖s_c
Proof. By (2.2)
and hence, using (2.6),
‖s_c‖ ≤
‖s_c‖
This is (2.7) with C_1
We apply (2.9) again to obtain
and hence (2.8) follows from (2.6) and the fact that C_1 > γ.
At this point we need to consider a sequence of major steps. Several minor steps in
which (1.5) fails could lie between any two major steps, but they have no effect on the
estimates of the approximate Jacobians. This is also the first place where symmetry
of E (and hence of F'(x_*)) plays an important role. We remark that (2.11) and (2.12)
differ from estimates of ‖E_k‖, used in other recent papers [5],
[15] in that the assumption of good approximations to x_* and F'(x_*) is used in a crucial
way. The next lemma uses the general approach of [5] and the observation from [5] and
[15] that only the major steps need be considered in the estimates.
Lemma 2.3. Let Assumption 1.1 hold, let ρ ∈ (0, 1), and let ε₀ and δ₀ be such that
the conclusions of Lemma 2.1 hold. Let k̄ ≥ 0, x₀, and B₀ be such that E₀ = B₀ − F′(x*) is
symmetric and
Then at least k̄ major steps can be taken with B_{k+1} nonsingular for all
Moreover, there is
are the
sequence of the first k - k major steps, iterations, and updates, then for any 1 - k - k,
Also, for any 0 -
ks
Proof. We set
We note that implies that
ks
We prove the lemma by induction on k. For
and (2.14). We obtain (2.12) from (2.8) and (2.13).
If (2.11) holds for all k ! K -
k, then from (2.7) and (2.14),
We use the induction hypothesis and C to obtain
which proves the first inequality in (2.11). The second inequality
from (2.10).
apply Lemma 2.1 to conclude that at least - k major steps
can be taken and
k. Hence, for
ks
ks l k:
Assuming that (2.12) holds for we note that (2.12) is a consequence of
Lemma 2.2 if
oeks K
Every E k is symmetric because E 0 is. Hence, if we write
then
Combining (2.16) and (2.6) yields
Hence, by (2.15),
ks
ks
As in the proof of (2.11) we have
and
We apply (2.18), (2.19), and the induction hypothesis to (2.17) to get
ks
verifying (2.12).
The estimates in Lemma 2.3 are analogs to the bounded deterioration results common
in the quasi-Newton method literature. However, in this case the deviation of
B k from F 0 (x ) is exponentially increasing and, at least according to the bounds in
Lemma 2.3, can eventually become so large that convergence may be lost. The SR1
update has, however, a self-correcting property in the linear case [9] that has been exploited
in much of the recent work on the method [15], [3], [5]. This self-correction
property overcomes the instability indicated in the estimates (2.11) and (2.12).
2.2. Uniform Linear Independence. Our uniform linear independence assumption
differs slightly from that in [5]. We only consider major steps and are not concerned
at this point with the total number (major steps required to form the sequence
of linearly independent major steps.
Assumption 2.1. There are c min ? 0 and - k - n such that the hypotheses of
Lemma 2.3 hold. Moreover from each of the sets of columns of normalized major steps
a subsequence fv p
can be extracted such that the matrix S p with columns fv p
has minimum singular value at least c min .
2.3. Convergence Results. Using the assumptions above we can prove q-linear
convergence directly using Lemma 2.3 and the methods for exploitation of uniform
linear independence from [5].
Theorem 2.4. Let Assumption 2.1 and the hypotheses of Lemma 2.3 hold. Let
ρ ∈ (0, 1) be given. Then if ‖e₀‖ and ‖E₀‖ are sufficiently small, the SR1 iterates
converge q-linearly to x* with q-factor at most ρ.
Proof. The proof is based on the simple observation that Lemma 2.3 states that
the iteration may proceed through -
major steps that then Assumption 2.1
implies that the iteration may continue.
2.3 implies that
k and that by (2.14)
ks ks k k
Note that
min
Now enough so that
then are replaced by E- k and e- k . Hence, we may continue the
iteration by Lemmas 2.1 and 2.3, obtaining q-linear convergence with q-factor at most
ae.
3. Structured Updates. In order to apply the convergence results to optimal
control problems in the context of pointwise updates, as a next step, we extend the
statements from the previous sections to the case of structured updates. We use the
notational conventions of [7] in the formulation.
Suppose that the Jacobian F′ of F can be split into a Lipschitz continuous computable
part C(x) and a part A(x) which will be approximated by an SR1 update: F′(x) = C(x) + A(x).
We define the SR1 update by
where the step is computed by solving
If we choose
the secant condition (B+ holds and we obtain from (3.2)
and (3.3)
Hence the SR1 update can be written with a perturbation ~
F+ of F (x+ )
~
c
~
We use the notation
for errors in the Jacobian.
Next we apply Lemma 2.1 to obtain a similar estimate
Lemma 3.1. Let Assumption 1.1 hold and let ae 2 (0; 1) be given. Then there are
ffl 0 and ~ ffi 0 such that if x c and B c for a structured SR1 update satisfy
then defined and satisfies
Moreover
ks c
Proof. Note that
with a Lipschitz constant fl C for C. Then holds and since Lemma 2.1 holds
for arbitrary approximations of the Jacobian, x+ exists and (3.8) holds. Observing that
ks c k 2
proves (3.9) and completes the proof.
The next lemma is an extension of Lemma 2.2. Before we can state the lemma we
define when an update is skipped. We update B only if
for some fixed σ ∈ (0, 1). The definition of minor and major steps is the same as that
in § 2, with (3.10) playing the role of (1.5).
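The following few lines illustrate the kind of skip test that (1.5) and (3.10) describe; here r denotes the relevant update residual (y − B_c s in § 2 and its structured analog in § 3). Since the exact form of (3.10) is not reproduced above, this predicate should be read as an assumption about its shape rather than a quotation of it.

```python
import numpy as np

def is_major_step(r, s, sigma):
    """Return True if the SR1 denominator is safely bounded away from zero.
    Skipping the update when this fails is what makes a step 'minor'."""
    return abs(float(np.dot(r, s))) >= sigma * np.linalg.norm(r) * np.linalg.norm(s)

# A nearly orthogonal residual/step pair fails the test and the update is skipped.
print(is_major_step(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 1e-4))   # False
print(is_major_step(np.array([1.0, 0.2]), np.array([1.0, 0.0]), 1e-4))   # True
```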
Lemma 3.2. Let the hypotheses of Lemma 3.1 hold. Then there is C₁ such that if
s_c is a major step, then
ks c k
and
ks c
Proof. For the structured updates we have
~
If we note that
ks c k
and (3.10) has been changed accordingly then the proof is the same as for Lemma 2.2
with
and F (x) replaced by ~
F .
In the next lemma symmetry is assumed which causes some additional changes for
the proof.
Lemma 3.3. Let Assumption 1.1 and the hypotheses of Lemma 3.1 hold. Let -
be such that
Then at least - k major steps can be taken with B k+1 nonsingular for all
Moreover, there is
are the
sequence of the first k - k major steps, iterations, and updates, then for any 1 - k - k,
Also, for any 0 -
ks
Proof. The first part of the induction proof for (3.15) is identical to the one for
Lemma 2.3 and therefore omitted.
Assuming that (3.16) holds for we note that (3.16) is a consequence of
Lemma 3.2 if we have from (3.13)
oeks K
If we write
~
then by (3.9)
and
~
Combining (3.18) and (3.9) yields
Note that by assumption hence by definition (3.6)
Hence, by (3.17),
ks
where, as in x 2
As in the proof of (2.11) we have
and
We apply (3.20), (3.21), and the induction hypothesis to (3.19) to get
ks
verifying (3.16).
Using the definition of uniform linear independence from x 2.2 we may state the
structured analog of Theorem 2.4. The proof is essentially identical to that of Theorem
2.4.
Theorem 3.4. Let Assumption 2.1 and the hypotheses of Lemma 3.3 hold. Let
ρ ∈ (0, 1) be given. Then if ‖e₀‖ and ‖E₀‖ are sufficiently small, the SR1 iterates
converge q-linearly to x* with q-factor at most ρ.
4. Pointwise Structured Updates for Optimal Control. In order to apply
the convergence results to optimal control problems in the context of pointwise updates,
as a next step, we extend the statements from the previous sections to the case of
pointwise structured updates.
Our nonlinear equations represent the necessary conditions for the optimal control
problem
minimize
over
We set
solves the adjoint equation
We seek to solve the nonlinear system
@
H u (x; u; t)C
A =B
A
for which satisfy the boundary conditions
We use the following assumption
Assumption 4.1. f, L and their first and second partial derivatives with respect
to x and u are continuous on ℝ^N × ℝ^M × [0, T].
Under the above assumption, F is Fréchet-differentiable and the Fréchet derivative
is given by
@
A =B
@
@
A
with
dt and all other components as multiplication operators.
into two parts F 0 containing all
information from first derivatives
@
A
and A(z) consisting of second order derivatives
@
All entries depend on time t so that A(z) is typically approximated by a family of quasi
Newton updates depending also on time t
@
A
with
We use a pointwise analog of (3.10)
We update B at each t 2 [0; T ] by a structured SR1 update
In order to justify the use of pointwise updates, we state the next lemma.
Lemma 4.1. If B 0 is given of the form (4.3), then all B k defined by (4.6) are also
multiplication operators with
Proof. The proof is via induction and we show the step from B c to B+ . We write
Note that by (4.4)
Since the differentiation operator D appears linearly in F and in C it cancels in
H
This implies with the differentiability assumption on the data that pointwise
holds for some OE 2 L 1 [0; T ]. Since B c is also in L 1 we have
The decision in the update formula (4.6) shows that the components of B+ are measurable
functions. They are also essentially bounded because either
or using the choice of oe in (4.6) and (4.7)
This completes the proof.
The next Lemma gives pointwise estimates on the error in the Jacobian in the
context of pointwise updates which will be used later for uniform estimates.
Lemma 4.2. Assume that for some t ∈ [0, T] the error E_c(t) is a multiplication operator
and that s_c(t) is a major step. Then E_+(t) is also a
multiplication operator and the following estimates hold:
ks c (t)k
and
ks c (t)k)ks c (t)k:
Proof. Observe that we can rewrite (4.4) pointwise
By assumption, the last term is a multiplication operator and therefore does not contain
the differentiation operator D. In first and the second term in parentheses the D cancels,
so that we can estimate pointwise under the given smoothness assumptions on the data:
ks c
Hence we obtain from (3.13) using (4.12) that pointwise
ks c (t)k ks c (t)k
To estimate kE+ s c k recall that from the secant condition
and therefore (4.9) holds. Furthermore, from (4.11)
and
ks c k 2
The next Lemma describes a linear rate estimate in a uniform norm.
Lemma 4.3. For ae 2 (0; 1) there are ffl 0 and ffi 0 such that if s c is a major step for
all t and B c for the pointwise structured SR1 update satisfies
defined and satisfies
Moreover for some fl; C
ks c k1
ks c k 2(4.15)
and
Proof. The assumptions imply that
is small so that Lemma 2.1 can be invoked to yield (4.13). From this we deduce
ks c k1 . (4.8) gives
ks c k1
which is (4.14). In the same way we use (4.9) and (4.10) to obtain (4.15) and (4.16),
resp.
5. Numerical Results. We present some numerical results which illustrate the
observations from the previous sections. Let us consider the following class of examples:
First we set
Furthermore, we set x . The initial data are given by
@
We did update if
is true. In [14] we used
The discretization parameter comes from the discretization of the two-point
boundary value problem by the trapezoid finite difference scheme used with a
Richardson extrapolation to achieve 4th order accurate results, see e.g. [12]. This
indicates that the termination criterion should be
We approximate the norm in the discrete case also with accuracy of order 4.
The numbers in column [No Upd %] give the percentage of the 121 3×3 matrices
which have not been updated at iteration k. The difference in the matrices is computed
as follows:
where ‖B(t)‖_F denotes the Frobenius norm on ℝ^{3×3}.
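A small sketch of how the [No Upd %] column and the matrix differences could be computed for the 121 pointwise 3×3 blocks follows; the normalization of the Frobenius-norm difference is an assumption, since the exact formula did not survive extraction.

```python
import numpy as np

def no_update_percentage(B_prev, B_curr, tol=0.0):
    """Percentage of pointwise 3x3 blocks left unchanged between iterations.

    B_prev, B_curr: arrays of shape (n_points, 3, 3) holding B(t_j) on the grid.
    A block counts as 'not updated' if its relative Frobenius-norm change is
    <= tol; this normalization is an illustrative assumption."""
    diff = np.linalg.norm(B_curr - B_prev, ord="fro", axis=(1, 2))
    ref = np.maximum(np.linalg.norm(B_prev, ord="fro", axis=(1, 2)), 1e-300)
    unchanged = np.sum(diff / ref <= tol)
    return 100.0 * unchanged / B_prev.shape[0]

# 121 grid points, as in the experiments reported here.
B0 = np.tile(np.eye(3), (121, 1, 1))
B1 = B0.copy()
B1[::10] += 1e-3 * np.ones((3, 3))       # pretend every 10th block was updated
print(no_update_percentage(B0, B1))      # roughly 89%
```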
In Table 5.1 we report on the results for the SR1 update where the choice of σ is
based on the truncation error of the discretization scheme (h⁴ ≈ 0.6 × 10⁻⁵). Tables 5.2
and 5.3 show the effects of more conservative updating strategies. The analysis in the
preceding sections indicates that a larger value of σ will keep ‖E_k‖ smaller and might
thereby allow for a more monotone iteration. While an increase by a factor of 100 did
reduce the size of ‖E_k‖, it did not lead to any improvement in overall performance
(see Table 5.2). Increasing σ by a factor of 1000 (see Table 5.3) led to a performance
improvement of about 20%.
In Table 5.2 we used a less stringent requirement for not updating the Hessians. In
this table σ was increased by a factor of 10².
Note the reduced error in the Hessian updates in the early phase of the algorithm.
[Tables 5.1-5.3: iteration histories for the three choices of σ; the tabulated data did not survive extraction except for one fragment (iteration 9: 0.72865D-04, 0.002, 1.7, 332.568).]
--R
Analysis of a symmetric rank-one trust region method
Testing a class of methods for solving minimization problems with simple bounds on the variables
Numerical Methods for Nonlinear Equations and Unconstrained Optimization
Convergence theorems for least change secant update methods
Fast secant methods for the iterative solution of large nonsymmetric linear systems
John Wiley and Sons
An Algorithm for Optimizing Functions with Multiple Minima
Numerical Solution of Two Point Boundary Value Problems
Iterative Methods for Linear and Nonlinear Equations
A pointwise quasi-Newton method for unconstrained optimal control problems
A theoretical and experimental study of the symmetric rank one update
Iterative Solution of Nonlinear Equations in Several Variables
A new approach to the symmetric rank-one updating algorithm
Yield optimization using a GaAs process simulator coupled to a physical device model
On large scale nonlinear least squares calculations
Compact storage of Broyden-class quasi-Newton matrices
--TR
--CTR
P. Spellucci, A Modified Rank One Update Which Converges Q-Superlinearly, Computational Optimization and Applications, v.19 n.3, p.273-296, September 2001 | optimal control;SR1 update;pointwise quasi-Newton method |
284954 | Approximation Algorithms for the Feedback Vertex Set Problem with Applications to Constraint Satisfaction and Bayesian Inference. | A feedback vertex set of an undirected graph is a subset of vertices that intersects with the vertex set of each cycle in the graph. Given an undirected graph G with n vertices and weights on its vertices, polynomial-time algorithms are provided for approximating the problem of finding a feedback vertex set of G with smallest weight. When the weights of all vertices in G are equal, the performance ratio attained by these algorithms is 4-(2/n). This improves a previous algorithm which achieved an approximation factor of $O(\sqrt{\log n})$ for this case. For general vertex weights, the performance ratio becomes $\min\{2\Delta^2, 4 \log_2 n\}$ where $\Delta$ denotes the maximum degree in G. For the special case of planar graphs this ratio is reduced to 10. An interesting special case of weighted graphs where a performance ratio of 4-(2/n) is achieved is the one where a prescribed subset of the vertices, so-called blackout vertices, is not allowed to participate in any feedback vertex set.It is shown how these algorithms can improve the search performance for constraint satisfaction problems. An application in the area of Bayesian inference of graphs with blackout vertices is also presented. | Introduction
Let G = (V, E) be an undirected graph and let w be a weight function on
the vertices of G. A cycle in G is a path whose two terminal vertices coincide. A feedback
vertex set of G is a subset of vertices F ' V (G) such that each cycle in G passes through
at least one vertex in F . In other words, a feedback vertex set F is a set of vertices of
G such that by removing F from G, along with all the edges incident with F , a forest is
obtained. A minimum feedback vertex set of a weighted graph (G; w) is a feedback vertex
set of G of minimum weight. The weight of a minimum feedback vertex set will be denoted
by -(G; w).
The weighted feedback vertex set (WFVS) problem is defined as finding a minimum
feedback vertex set of a given weighted graph (G; w). The special case where w is the
constant function 1 is called the unweighted feedback vertex set (UFVS) problem. Given a
graph G and an integer k, the problem of deciding whether -(G; 1) - k is known to be
NP-Complete [GJ79, pp. 191-192]. Hence, it is natural to look for efficient approximation
algorithms for the feedback vertex set problem, particularly in view of the recent applications
of such algorithms in artificial intelligence, as we show in the sequel.
Suppose A is an algorithm that finds a feedback vertex set FA for any given undirected
weighted graph (G; w). We denote the sum of weights of the vertices in FA by w(FA ).
The performance ratio of A for (G; w) is defined by RA (G;
-(G;
performance ratio r A (n; w) of A for w is the supremum of RA (G; w) over all graphs G with
vertices and for the same weight function w. When w is the constant function 1, we call
r A (n; 1) the unweighted performance ratio of A. Finally, the performance ratio r A (n) of A
is the supremum of r A (n; w) over all weight functions w defined over graphs with n vertices.
An approximation algorithm for the UFVS problem that achieves an unweighted performance
ratio of 2 log₂ n is essentially contained in a lemma due to Erdős and Pósa [EP62].
This result was improved by Monien and Schulz [MS81], where they achieved a performance
ratio of O(√(log n)).
In Section 2, we provide an approximation algorithm for the UFVS problem that achieves
an unweighted performance ratio of at most 4 \Gamma (2=n). Our algorithm draws upon a theorem
by Simonovits [Si67] and our analysis uses a result by Voss [Vo68]. Actually, we consider
a generalization of the UFVS problem, where a prescribed subset of the vertices, called
blackout vertices, is not allowed to participate in any feedback vertex set. This problem
is a subcase of the WFVS problem wherein each allowed vertex has unit weight and each
blackout vertex has infinite weight. Our interest in graphs with blackout vertices is motivated
by the loop cutset problem and its application to the updating problem in Bayesian
inference which is explored in Section 4.
In Section 3, we present two algorithms for the WFVS problem. We first devise a primal-dual
algorithm which is based on formulating the WFVS problem as an instance of the set
cover problem. The algorithm has a performance ratio of 10 for weighted planar graphs
and 4 log₂ n for general weighted graphs. This ratio is achieved by extending the Erdős-
Pósa Lemma to weighted graphs. The second algorithm presented in Section 3 achieves
a performance ratio of 2Δ²(G) for weighted graphs, where Δ(G) is the maximum
degree of G. This result is interesting for low degree graphs.
A notable application of approximation algorithms for the UFVS problem in artificial
intelligence due to Dechter and Pearl is as follows [DP87, De90]. We are given a set of
variables x₁, …, x_n, where each variable x_i takes its values from a finite domain D_i. Also, for
every pair of indices i < j we are given a constraint subset R_{i,j} ⊆ D_i × D_j which defines the allowable pairs
of values that can be taken by the pair of variables (x_i, x_j). Our task is to find an assignment
for all variables such that all the constraints R i;j are satisfied. With each instance of the
problem we can associate an undirected graph G whose vertex set is the set of variables,
and for each constraint R_{i,j} which is strictly contained in D_i × D_j (i.e., R_{i,j} ≠ D_i × D_j)
there is an edge in G connecting x_i and x_j. The resulting graph G is called a constraint
network and it is said to represent a constraint satisfaction problem.
A common method for solving a constraint satisfaction problem is by backtracking,
that is, by repeatedly assigning values to the variables in a predetermined order and then
backtracking whenever reaching a dead end. This approach can be improved as follows.
First, find a feedback vertex set of the constraint network. Then, arrange the variables
so that variables in the feedback vertex set precede all other variables, and apply the
backtracking procedure. Once the values of the variables in the feedback vertex set are
determined by the backtracking procedure, the algorithm switches to a polynomial-time
procedure solve-tree that solves the constraint satisfaction problem in the remaining
forest. If solve-tree succeeds, a solution is found; otherwise, another backtracking phase
occurs.
The complexity of the above modified backtracking algorithm grows exponentially with
the size of the feedback vertex set: If a feedback vertex set contains k variables, each
having a domain of size 2, then the procedure solve-tree might be invoked up to 2^k
times. A procedure solve-tree that runs in polynomial-time was developed by Dechter
and Pearl, who also proved the optimality of their tree algorithm [DP88]. Consequently,
our approximation algorithm for finding a small feedback vertex set reduces the complexity
of solving constraint satisfaction problems through the modified backtracking algorithm.
Furthermore, if the domain size of the variables varies, then solve-tree is called a number
of times which is bounded from above by the product of the domain-sizes of the variables
whose corresponding vertices participate in the feedback vertex set. If we take the logarithm
of the domain size as the weight of a vertex, then solving the WFVS problem with these
weights optimizes the complexity of the modified backtracking algorithm in the case where
the domain size is allowed to vary.
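To make the cutset-conditioning scheme concrete, here is an illustrative sketch: it enumerates assignments to the feedback vertex set variables and solves the remaining forest by a simple bottom-up/top-down pass standing in for solve-tree. It is not the Dechter-Pearl implementation, and the data-structure conventions are assumptions.

```python
from itertools import product

def cutset_solve(domains, constraints, cutset):
    """Illustrative cutset-conditioning sketch (not the Dechter-Pearl code).

    domains: {var: list of values}; constraints: {(x, y): set of allowed (a, b)
    pairs} (a missing pair of variables is unconstrained); cutset: variables
    whose removal leaves the constraint graph a forest."""
    def ok(x, a, y, b):
        if (x, y) in constraints:
            return (a, b) in constraints[(x, y)]
        if (y, x) in constraints:
            return (b, a) in constraints[(y, x)]
        return True

    rest = [v for v in domains if v not in cutset]
    nbrs = {v: [u for u in rest if u != v and
                ((v, u) in constraints or (u, v) in constraints)] for v in rest}

    def solve_forest(fixed):
        def feasible(v, a, parent):            # independent DP over each subtree
            if not all(ok(v, a, u, fixed[u]) for u in fixed):
                return None
            choice = {v: a}
            for u in nbrs[v]:
                if u == parent:
                    continue
                for b in domains[u]:
                    if ok(v, a, u, b):
                        sub = feasible(u, b, v)
                        if sub is not None:
                            choice.update(sub)
                            break
                else:
                    return None
            return choice

        assignment = {}
        for root in rest:
            if root in assignment:
                continue
            for a in domains[root]:
                sub = feasible(root, a, None)
                if sub is not None:
                    assignment.update(sub)
                    break
            else:
                return None
        return assignment

    for values in product(*(domains[v] for v in cutset)):
        fixed = dict(zip(cutset, values))
        if any(not ok(x, fixed[x], y, fixed[y])
               for x in cutset for y in cutset if x < y):
            continue
        tail = solve_forest(fixed)
        if tail is not None:
            return {**fixed, **tail}
    return None

doms = {v: [0, 1] for v in "abcd"}
cons = {("a", "b"): {(0, 1), (1, 0)}, ("b", "c"): {(0, 1), (1, 0)},
        ("c", "a"): {(0, 0), (1, 1)}, ("c", "d"): {(0, 0), (1, 1)}}
print(cutset_solve(doms, cons, cutset=["a"]))   # e.g. {'a': 0, 'b': 1, 'c': 0, 'd': 0}
```

The worst-case number of forest solves is the product of the cutset domain sizes, which is exactly the quantity the weighted feedback vertex set formulation minimizes.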
2 The Unweighted Feedback Vertex Set Problem
The best approximation algorithm prior to this work for the UFVS problem attained a
performance ratio of O(√(log n)) [MS81]. We now use some results of [Si67] and [Vo68]
in order to obtain an approximation algorithm for the UFVS problem which attains a
performance ratio of 4 − (2/n). In fact, we actually consider a slight generalization of the UFVS
problem where we mark each vertex of a graph as either an allowed vertex or a blackout
vertex. In such graphs, feedback vertex sets cannot contain any blackout vertices. We
denote the set of allowed vertices in G by A(G) and the set of blackout vertices by B(G).
Note that when problem reduces to the UFVS problem. A feedback vertex
set can be found in a graph G with blackout vertices if and only if every cycle in G contains
at least one allowed vertex. A graph G with this property will be called a valid graph. The
motivation for dealing with this modified problem is clarified in Section 4 where we use the
algorithm developed herein to reduce the computational complexity of Bayesian inference.
Throughout this section, G denotes a valid graph with a nonempty set of vertices V (G)
which is partitioned into a nonempty set A(G) of allowed vertices and a possibly empty set
B(G) of blackout vertices, and with a set of edges E(G), possibly containing parallel edges and self-
loops. We use - a (G) as a short-hand notation for -(G; w) where w assigns unit weight to
each allowed vertex and an infinite weight to each blackout vertex. A neighbor of v is a
vertex which is connected to v by an edge in E(G). The degree \Delta G (v) of v in G
is the number of edges that are incident with v in G. A self-loop at a vertex v contributes 2
to the degree of v. The degree of G, denoted \Delta(G), is the largest among all degrees of
vertices in G. A vertex in G of degree 1 is called an endpoint . A vertex of degree 2 is called
a linkpoint and a vertex of any higher degree is called a branchpoint. A graph G is called
rich if every vertex in V (G) is a branchpoint. The notation \Delta a (G) will stand for the largest
among all degrees of vertices in A(G) (a degree of a vertex in A(G) takes into account all
incident edges, including those that lead to neighbors in B(G)). In a rich valid graph we
have Δ_a(G) ≥ 3.
Two cycles in a valid graph G are independent if their vertex sets share only blackout
vertices. Note that the size of any feedback vertex set of G is bounded from below by the
largest number of pairwise independent cycles that can be found in G. A cycle \Gamma in G is
called simple if it visits every vertex in V (G) at most once. Clearly, a set F is a feedback
vertex set of G if and only if it intersects with every simple cycle in G. A graph is called
a singleton if it contains only one vertex. A singleton is called self-looped if it contains at
least one self loop; for a singleton G we have -(G; 1) = 1 if it is self-looped and -(G; 1) = 0
otherwise.
A graph G is connected if for every two vertices there is a connecting path in G. Every
graph G can be uniquely decomposed into isolated connected components G_1, …, G_m.
Similarly, every feedback vertex set F of G can be partitioned into feedback vertex sets F_1, …, F_m
such that F_i is a feedback vertex set of G_i. Hence, - a (G) is the sum of the - a (G_i).
A 2-3-subgraph of a valid graph G is a subgraph H of G such that the degree in H of
every vertex in A(G) is either 2 or 3. The degree of a vertex belonging to B(G) in H is
not restricted. A 2-3-subgraph exists in any valid graph which is not a forest. A maximal
2-3-subgraph of G is a 2-3-subgraph of G which is not a subgraph of any other 2-3-subgraph
of G. A maximal 2-3-subgraph can be easily found by applying depth-first-search (DFS)
on G.
A linkpoint v in a 2-3-subgraph H is called a critical linkpoint if v is an allowed vertex,
and there is a cycle Γ in G such that V(Γ) ∩ V(H) = {v}. We refer to such a cycle Γ
in G as a witness cycle of v. Note that we can assume a witness cycle to be simple and, so,
verifying whether a linkpoint v in H is a critical linkpoint is easy: Remove the set of vertices
V(H) \ {v} from G, with all incident edges, and apply a breadth-first-search (BFS)
to check whether there is a cycle through v in the remaining graph.
A cycle in a valid graph G is branchpoint-free if it does not pass through any allowed
branchpoints; that is, a branchpoint-free cycle passes only through allowed linkpoints and
blackout vertices of G.
The rest of this section is devoted to showing that the following algorithm correctly
outputs a vertex feedback set and achieves an unweighted performance ratio less than 4.
Algorithm SubG-2-3 (Input: valid graph G;
Output: feedback vertex set F of G);
if G is a forest then
F ← ∅;
else begin:
Using DFS, find a maximal 2-3-subgraph H of G;
Using BFS, find the set X of critical linkpoints in H;
Let Y be the set of allowed branchpoints in H;
Find a set W that covers all branchpoint-free cycles of H which
are not covered by X;
F ← X ∪ Y ∪ W;
end.
Note that if B(G) = ∅, then the branchpoint-free cycles are isolated cycles in H and so
W consists of one vertex of each such cycle.
We elaborate on how the set W is computed when B(G) ≠ ∅. Let H 0 be a graph obtained
from H by removing the set X along with its incident edges. Let H b be the subgraph of
induced by the allowed linkpoints and blackout vertices of H 0 . For every isolated cycle
in H b , we arbitrarily choose an allowed linkpoint from that cycle to W . Next, we replace
each maximal (with respect to containment) chain of allowed linkpoints in H b by an edge,
resulting in a graph H
b . We assign unit cost to all edges corresponding to a chain of allowed
linkpoints, and a zero cost to all other edges, and compute a minimum-cost spanning forest
T of H
b . We now add to W one linkpoint from each chain of allowed linkpoints in H b that
corresponds to an edge in H
It is now straightforward to verify that the complexity
of SubG-2-3 is linear in jE(G)j.
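The BFS test for critical linkpoints described above can be sketched as follows. The adjacency representation is a simplifying assumption (parallel edges and self-loops are not modelled), and the routine is meant only to illustrate the check, not to reproduce the implementation behind the linear-time bound.

```python
from collections import deque

def is_critical_linkpoint(adj, h_vertices, v):
    """BFS test from the text: a linkpoint v of H is critical iff some cycle of
    G meets H only in v, i.e. two neighbours of v outside H are connected in
    G - V(H).  `adj` is an adjacency dict of a simple undirected graph."""
    blocked = set(h_vertices)                 # the cycle may touch H only at v
    nbrs = [u for u in adj[v] if u not in blocked]
    for i, s in enumerate(nbrs):
        seen, queue = {s}, deque([s])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y in blocked or y in seen:
                    continue
                seen.add(y)
                queue.append(y)
        if any(t in seen for t in nbrs[i + 1:]):
            return True
    return False

# G contains the triangle {1, 2, 3} (taken as H) and the extra cycle 1-4-5-1.
G = {1: {2, 3, 4, 5}, 2: {1, 3}, 3: {1, 2}, 4: {1, 5}, 5: {1, 4}}
print(is_critical_linkpoint(G, {1, 2, 3}, 1))   # True: 1-4-5-1 is a witness cycle
print(is_critical_linkpoint(G, {1, 2, 3}, 2))   # False
```

Running this test once per linkpoint of H keeps the overall work polynomial, although the linear-time claim above relies on a more careful single-pass implementation.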
The following two lemmas, which generalize some claims used in the proof of Theorem 1
in [Si67], are used to prove that SubG-2-3 outputs a feedback vertex set of a valid graph
G.
Lemma 1. Let H be a maximal 2-3-subgraph of a valid graph G and let Γ be a simple cycle
in G. Then, one of the following holds:
(a) Γ is a witness cycle of some critical linkpoint of H, or
(b) V(Γ) contains an allowed branchpoint of H, or
(c) \Gamma is a cycle in H that consists only of blackout vertices or allowed linkpoints of H.
Proof. Let \Gamma be a simple cycle in G and assume to the contrary that neither of (a)-(c)
holds. This implies in particular that \Gamma cannot be entirely contained in H. We distinguish
between two cases: (1) \Gamma does not intersect with H; and (2) \Gamma intersects with H only in
blackout vertices and allowed linkpoints of H.
Case 1: In this case we could join \Gamma and H to obtain a 2-3-subgraph H of G that
contains H as a proper subgraph. This however contradicts the maximality of H.
Case 2: If \Gamma intersects with H only in blackout vertices, then as in case 1, we can join \Gamma
and H and contradict the maximality of H. Suppose now that \Gamma intersects with H in some
allowed linkpoints of H. Note that in such a case \Gamma must intersect with H in at least two
distinct allowed linkpoints of H, or else \Gamma would be a witness cycle of the only intersecting
(critical) linkpoint. Since \Gamma is not contained in H by assumption, we can find two allowed
linkpoints v 1 and v 2 in V (\Gamma) " V (H) that are connected by a path P along \Gamma such that
and P is not entirely contained in H. Joining P and H,
we obtain a 2-3-subgraph of G that contains H as a proper subgraph, thus contradicting
the maximality of H.
Lemma 2. Let H be a maximal 2-3-subgraph of G and let Γ₁ and Γ₂ be witness cycles in
G of two distinct critical linkpoints in H. Then Γ₁ and Γ₂ are independent cycles, namely,
V(Γ₁) ∩ V(Γ₂) ⊆ B(G).
Proof. Let v₁ and v₂ be the critical linkpoints associated with Γ₁ and Γ₂, respectively,
and assume to the contrary that V(Γ₁) ∩ V(Γ₂) contains an allowed vertex u.
Then, there is a path P in G that runs along parts of the cycles Γ₁ and Γ₂, starting from v₁,
passing through u, and ending at v₂. Since Γ₁ and Γ₂ are witness cycles, we have
V(P) ∩ V(H) ⊆ {v₁, v₂}. And, since v₁ and v₂ are distinct critical linkpoints, the
vertex u cannot possibly coincide with either of them. Therefore, the path P is not entirely
contained in H. Joining P and H we obtain a 2-3-subgraph of G that contains H as a
proper subgraph, thus reaching a contradiction.
Theorem 3 For every valid graph G, the set F computed by SubG-2-3 is a feedback vertex
set of G.
Proof. Let \Gamma be a simple cycle in G. We follow the three cases of Lemma 1 to show that
(a) \Gamma is a witness cycle of some critical linkpoint of H. By construction, all critical
linkpoints of H are in F .
(b) V(Γ) contains an allowed branchpoint of H. By construction, all allowed
branchpoints of H are in F .
(c) Γ is a cycle in H that consists only of blackout vertices or allowed linkpoints of H.
If Γ contains a critical linkpoint, then SubG-2-3 selects that linkpoint into
the feedback vertex set F . Otherwise, the cycle \Gamma must be entirely contained in the graph
H b that was used to create W . We now show that W covers all cycles in H b . Assume the
contrary and let \Gamma be a cycle in H b that is not covered by W . Recall that in the construction
of H
b , each chain of allowed linkpoints in \Gamma was replaced by an edge with a unit cost. Let
be the resulting cycle in H
b . Since W does not cover \Gamma, all unit-cost edges in \Gamma were
necessarily chosen to the minimum-cost spanning forest T . On the other hand, since T
does not contain any cycles, there must be at least one zero-cost edge of \Gamma which is not
contained in T . Hence, by deleting one of the unit-cost edges of \Gamma from T and inserting
instead a particular zero-cost edge of \Gamma into T , we can obtain a new spanning forest T 0 for
b . However, the cost of T 0 is smaller than that of T , which contradicts our assumption
that T is a minimum-cost spanning forest.
A reduction graph G 0 of an undirected graph G is a graph obtained from G by a sequence
of the following transformations:
• Delete an endpoint and its incident edge.
• Connect two neighbors of a linkpoint v (other than a self-looped singleton) by a new
edge and remove v from the graph with its two incident edges.
A reduction graph of a valid graph G is not necessarily valid, since the reduction process
may generate a cycle consisting of blackout vertices only. We will be interested in reduction
sequences in which each transformation yields a valid graph.
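A compact sketch of the two reduction transformations follows, using adjacency lists as multisets so that the parallel edges and self-loops created by splicing out linkpoints are preserved; the blackout bookkeeping needed to stop at a minimal valid reduction graph is deliberately omitted.

```python
def reduce_graph(adj):
    """Apply the two transformations until neither applies: delete endpoints,
    and splice out linkpoints by joining their two neighbours.  `adj` maps
    vertices to lists (multisets) of neighbours; a self-loop at v appears as
    two copies of v, consistent with the degree convention above."""
    adj = {v: list(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            deg = len(adj[v])
            if deg == 1:                               # endpoint
                (u,) = adj[v]
                adj[u].remove(v)
                del adj[v]
                changed = True
            elif deg == 2 and v not in adj[v]:         # linkpoint, not a self-looped singleton
                a, b = adj[v]
                adj[a].remove(v); adj[b].remove(v)
                adj[a].append(b); adj[b].append(a)     # may create a parallel edge or self-loop
                del adj[v]
                changed = True
    return adj

# A triangle with a pendant path collapses to a single self-looped vertex.
g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
print(reduce_graph(g))   # {3: [3, 3]}
```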
Lemma 4 Let G 0 be a reduction graph of G. If G 0 is valid, then - a (G 0 ) = - a (G).
Proof. Let G; be a sequence of reduction graphs where each
H i is obtained by a removal of one linkpoint and possibly some endpoints from H i\Gamma1 . Since
G 0 is valid, each H i is a valid graph as well. Let v i be the linkpoint that is removed from
H i to obtain H i+1 .
First we show that - a (G 0 ) - a (G). Suppose F is a feedback vertex set of H i+1 for some
be a cycle in H i that passes through v i . A reduction of \Gamma obtained
by replacing the linkpoint v i on \Gamma by an edge connecting the neighbors of v i yields a cycle
in H i+1 . The vertex set of - \Gamma intersects the set F . Hence, F is also a feedback vertex set
of H i which implies that - a (H
Now we show that - a (G 0 ) - a (G). Suppose F is a minimal feedback vertex set of H i .
If F does not contain v i , then F is also a feedback vertex set of H i+1 . Otherwise, write
We claim that the set F 0 cannot fail to cover more than one cycle in H i+1 .
If it failed, then there would be two distinct cycles \Gamma 1 and \Gamma 2 in H i that contain v i , in
which case the cycle in H i induced by (V would not be covered by F ,
thus contradicting the fact that F is a feedback vertex set of H i . It follows by this and the
minimality of F that the set F 0 fails to cover exactly one cycle in H i+1 . This cycle contains
at least one allowed vertex u because H i+1 is a valid graph. Therefore, the set F 0 [ fug is
a feedback vertex set of H i+1 . Hence, - a (H
A reduction graph G of a graph G is minimal if and only if G is a valid graph and
any proper reduction graph G 0 of G is not valid.
Lemma 5 If G is a minimal reduction graph of G, then G does not contain blackout
linkpoints, and every feedback vertex set of G contains all allowed linkpoints of G .
Proof. Recall that G is a valid graph. If G contains a blackout linkpoint, then its removal
creates a valid reduction graph which contradicts the minimality of G . Now assume F is
a feedback vertex set of G and v is an allowed linkpoint which is not in F . If the removal
of v yields a graph that is not valid, then v must have been included in F . If the removal
of v yields a valid graph, then G is not minimal.
The next lemma is needed in order to establish the performance ratio of SubG-2-3. It
is a variant of Lemma 4 in [Vo68].
Lemma 6 Let G be a valid graph with no blackout linkpoints and such that no vertex has
degree less than 2. Then, for every feedback vertex set F of G which contains all linkpoints
of G,
Proof.
Suppose (G). In this case we have jV (G)j - 3jV
and, therefore, the lemma holds trivially. So we assume from now on that jF j ! jV (G)j.
denote the set of edges in E(G) whose terminal vertices are all vertices in F .
denote the set of edges in E(G) whose terminal vertices are
all vertices in X. Also, let E F;X denote the set of those edges in G that connect vertices in
F with vertices in X. Clearly, E F , EX , and E F;X form a partition on E(G). Now, the graph
obtained by deleting F from G is a nonempty forest on X and, therefore, jE
However, each vertex in X is a branchpoint in G because all linkpoints are assumed to be
in F and there are no vertices of degree less than 2. Therefore,
i.e.,
On the other hand,
Combining the last two inequalities we obtain
The main claim of this section now follows.
Theorem 7 The unweighted performance ratio of SubG-2-3 is at most 4 − (2/|V(G)|).
Proof. Let F be the feedback vertex set computed by SubG-2-3 for a valid graph G which
is not a forest. We show that |F| ≤ 4 - a (G) − 2. The theorem follows immediately from
this inequality.
Let H, X, Y, and W be as in SubG-2-3. Suppose first that - a (G) = 1. Then all cycles in G
pass through some allowed vertex v in G and, so, no vertex other than v can be a critical
linkpoint in H. Now, if v is a linkpoint in H, then H is a cycle. Otherwise, one can readily
verify that H must contain exactly two branchpoints. In either case we have jF j - 2. We
assume from now on that - a (G) ≥ 2.
For every v i 2 X, let \Gamma i be some witness cycle of v i in G. By Lemma 2, the cycles \Gamma i
are pairwise independent. Therefore, the minimum number of vertices needed to cover such
cycles is jXj.
Let f\Gamma
j g be the set of branchpoint-free cycles in H that do not contain any critical
linkpoints of H. Note that each cycle \Gamma
j is independent with any witness cycle \Gamma i . We
now claim that any smallest set W 0 of vertices of V (H) that intersects with the vertex set
of each \Gamma
j must be of size jW j. To see this, note that W 0 contains only allowed linkpoints
of H. If we remove from H
b all the edges that correspond to linkpoints belonging to W 0 ,
then we clearly end up with a forest. By construction, the minimum number of edges (or
allowed linkpoints), needed to be removed from H
b so as to make it into a forest, is jW j.
Recalling that every cycle \Gamma
j is independent with any witness cycle \Gamma i , the set W 0 cannot
possibly intersect with any of the cycles \Gamma i . Hence, in order to cover the cycles f\Gamma
in G, we will need at least jXj vertices. Therefore,
On the other hand, we recall that jF
We distinguish between the following two cases.
Case 1: Here we have,
Case 2: be a feedback vertex set of G of size - a (G) and let
W 0 be a smallest subset of F that intersects with the vertex set of each \Gamma
. Clearly, W 0
consists of allowed linkpoints of H, and, as we showed earlier in this proof, jW
H 1 be the subgraph of H obtained by removing all critical linkpoints of H and all linkpoints
in W 0 . With each deleted linkpoint, we also remove recursively all resulting endpoints from
H while obtaining H 1 . Thus, a deletion of a linkpoint from H can decrease the number
of branchpoints by 2 at most. Hence, the number of branchpoints left in H 1 is at least
Furthermore, the graph H 1 does not contain any endpoints.
1 be a minimal reduction graph of H 1 and let H 2 be a valid graph obtained by
removing all singleton components from H
1 . Since H 1 does not contain any endpoints,
the number of branchpoints of H 1 is preserved in H
1 and in H 2 . Therefore, the graph H 2
contains at least jY branchpoints. On the other hand, since H
1 is a minimal
reduction and due to Lemma 5, there are no blackout linkpoints in H
1 and every feedback
vertex set of H
1 contains all allowed linkpoints of H
1 . Furthermore, the graphs H
do not contain any endpoints.
It follows that we can apply Lemma 6 to H 2 and any feedback vertex set of H 2 , thus
obtaining
where the equality is due to Lemma 4. Therefore,
Recall that W 0 was chosen as a subset of a smallest feedback vertex set F of G. Let
X 0 be a smallest subset of F that covers the witness cycles f\Gamma i g and let Z 0 be a smallest
subset of F that covers the cycles of H 1 . Since H 1 does not contain any of the critical
linkpoints of H, each witness cycle \Gamma i is independent with any cycle in H 1 and, so, we have
It also follows from our previous discussion that In addition, by
construction of H 1 we have It thus follows that
Combining with (1), we obtain the desired result.
3 Weighted Feedback Vertex Set
In this section, we consider the approximation of the WFVS problem described in Section 1.
That is, given an undirected graph G and a weight function w on its vertices, find a feedback
vertex set of (G; w) with minimum weight. As in the previous section, we assume that G
may contain parallel edges and self-loops.
A weighted reduction graph G 0 of an undirected graph G is a graph obtained from G by
a sequence of the following transformations:
• Delete an endpoint and its incident edge.
• Let u and v be two adjacent vertices such that w(u) ≤ w(v) and v is a linkpoint.
Connect u to the other neighbor of v, and remove v from the graph with its two
incident edges.
The following lemma can be easily verified. (See, e.g., the proof of Lemma 4).
Lemma 8 Let (G 0 ; w 0 ) be a weighted reduction graph of (G; w). Then, -(G 0 ; w 0 ) = -(G; w).
A weighted reduction graph G of a graph G is minimal if and only if any weighted
reduction graph G 0 of G is equal to G . A graph is called branchy if it has no endpoints
and, in addition, its set of linkpoints induces an independent set, i.e., each linkpoint is either
an isolated self-looped singleton or connected to two branchpoints. Clearly, any minimal
weighted reduction graph must be branchy. We note that the complexity of transforming a
graph into a branchy graph is linear in jE(G)j.
We are now ready to present our algorithms for finding an approximation for a minimum-weight
feedback vertex set of a given weighted graph. In Section 3.1 we give an algorithm
that achieves a performance ratio of 4 log 2 jV (G)j. In Section 3.2 we present an algorithm
that achieves a performance ratio of 2Δ²(G).
3.1 The primal-dual algorithm
The basis of the first approximation algorithm is the next lemma which generalizes a lemma
due to Erdős and Pósa [EP62, Lemma 3]. That lemma was obtained by Erdős and Pósa
while estimating the smallest number of edges in a graph which contains a given number
of pairwise independent cycles. Later on, in [EP64], they provided bounds on the value of
-(G; 1) in terms of the largest number of pairwise independent cycles in G. Tighter bounds
on -(G; 1) were obtained by Simonovits [Si67] and Voss [Vo68].
Lemma 9 The shortest cycle in any branchy graph G with at least two vertices is of length at most 4 log₂ |V(G)|.
Proof. Let t be the smallest even integer such that 2 · 2^{t/2} > |V(G)|. Apply BFS on G
of depth t starting at some vertex v. We now claim that the search hits some vertex twice
and so there exists a cycle of length ≤ 2t in G. Indeed, if it were not so, then the induced
BFS tree would contain at least 2 · 2^{t/2} distinct vertices of G, which is a contradiction.
In each iteration of the proposed algorithm, we first find a minimal weighted reduction
graph, and then find a cycle \Gamma with the smallest number of vertices in the minimal weighted
reduction graph. The algorithm sets ffi to be the minimum among the weights of the vertices
in V (\Gamma). This value of ffi is subtracted, in turn, from the weight of each vertex in V (\Gamma).
Vertices whose weight becomes zero are added to the feedback vertex set and deleted from
the graph. Each such iteration is repeated until the graph is exhausted.
Algorithm MiniWCycle (Input: (G; w); Output: feedback vertex set F of (G; w));
F ← ∅; H ← G; w_H ← w;
While H is not a forest do begin:
Find a minimal weighted reduction graph (H′, w_{H′}) of (H, w_H);
Find a cycle Γ′ in H′ with the smallest number of vertices;
δ ← min{w_{H′}(v) : v ∈ V(Γ′)};
Subtract δ from the weight of every vertex in V(Γ′);
Let X be the set of vertices of V(Γ′) whose weight has become zero;
F ← F ∪ X;
Remove X (with all incident edges) from H′; H ← H′; w_H ← w_{H′};
end.
Finding a shortest cycle can be done by running BFS from each vertex until a cycle is
found and then selecting the smallest. A more efficient approach for finding the shortest
cycle is described in [IR78].
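The following sketch captures the local-ratio core of MiniWCycle for simple graphs: it repeatedly finds a cycle by BFS, subtracts the minimum remaining weight along it, and moves zero-weight vertices into F. The minimal weighted reduction and the insistence on a shortest cycle are omitted for brevity; under the analysis below they affect only the 4 log₂ |V(G)| guarantee, not the validity of the returned set.

```python
from collections import deque

def _bfs_cycle(adj, alive, s):
    """Return the vertex set of some cycle in the subgraph induced by `alive`
    and reachable from s, or None (simple undirected graphs only)."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in alive:
                continue
            if y not in parent:
                parent[y] = x
                queue.append(y)
            elif parent[x] != y and parent.get(y) != x:   # non-tree edge: cycle
                up, z = [], x
                while z is not None:
                    up.append(z)
                    z = parent[z]
                index = {v: i for i, v in enumerate(up)}
                branch, z = [], y
                while z not in index:                     # climb to the common ancestor
                    branch.append(z)
                    z = parent[z]
                return set(branch) | set(up[:index[z] + 1])
    return None

def feedback_vertex_set(adj, weight):
    """Local-ratio sketch in the spirit of MiniWCycle (positive weights assumed):
    pick a cycle, subtract its minimum remaining weight from every vertex on it,
    and move the vertices that drop to zero into F."""
    w = dict(weight)
    alive, F = set(adj), set()
    while True:
        cycle = None
        for s in alive:
            cycle = _bfs_cycle(adj, alive, s)
            if cycle:
                break
        if not cycle:
            return F
        delta = min(w[v] for v in cycle)
        for v in cycle:
            w[v] -= delta
            if w[v] == 0:
                F.add(v)
                alive.discard(v)

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(feedback_vertex_set(adj, {1: 5, 2: 1, 3: 5, 4: 5, 5: 1, 6: 5}))   # -> {2, 5}
```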
It is not hard to see that MiniWCycle computes a feedback vertex set of G. We
now analyze the algorithm employing techniques similar to those used in [Ho82], [Ho83],
and [KhVY94]. We note that the algorithm can also be analyzed using the Local Ratio
Theorem of Bar-Yehuda and Even [BaEv85].
Theorem 10 The performance ratio of algorithm MiniWCycle is at most 4 log 2 jV (G)j.
Proof. We assume that jV (G)j ? 1. Given a feedback vertex set F of (G; w), let
be the indicator vector of F , namely, x
We denote by C the set of cycles in G. The problem of finding a minimum-weight feedback
vertex set of (G; w) can be formulated in terms of x by an integer programming problem
as follows:
minimize
ranging over all nonnegative integer vectors
(2)
Let C v denote the set of cycles passing through vertex v in G and consider the following
integer programming packing problem:
maximize
ranging over all nonnegative integer vectors \Gamma2C such that
Clearly, the linear relaxation of (3) is the dual of the linear relaxation of (2), with
being the dual variables.
weighted reduction graph computed at some iteration of
algorithm MiniWCycle. Then, for each cycle
as follows: If all vertices in V (\Gamma 0 ) belong to G, then Otherwise, we "unfold" the
transformation steps performed in obtaining H 0 from H in backward order, i.e., from H 0
back to H: In each such step we add to \Gamma 0 chains of linkpoints (connecting vertices in
that were deleted. When this process finishes, the cycle \Gamma 0 of H 0 transforms into a cycle \Gamma
of G.
We now show that MiniWCycle can be interpreted as a primal-dual algorithm. We
first show that it computes a dual feasible solution for (3) with a certain maximality prop-
erty. The initial dual feasible solution is the one in which all the dual variables y \Gamma are
zero.
i be a cycle chosen at iteration i of MiniWCycle and let \Gamma i be the associated
cycle in G. We may view the computation of iteration i of MiniWCycle as setting the
value of the dual variable y \Gamma i to the weight ffi of a lightest vertex in V (\Gamma 0
). The updated
weight wH 0 (v) of every
precisely the slack of the dual constraint
that corresponds to v.
It is clear that by the choice of ffi, the values of the dual variables y \Gamma at the end of
iteration i of MiniWCycle satisfy the dual constraints (4) corresponding to vertices
i ). It thus follows that the dual constraints hold for all vertices
Let v be a vertex that was removed from H to obtain H 0 in iteration i of MiniWCycle.
It remains to show that the dual constraint (4) corresponding to such a vertex holds in each
iteration j of the algorithm for every j - i.
We show this by backward induction on j. By the previous discussion it follows that
the constraints corresponding to vertices that exist in the last iteration all hold. Suppose
now that the dual constraints corresponding to vertices in V (H 0 ) in iteration j are not
violated. We show that the dual constraints corresponding to vertices in V
in that iteration are also not violated. Let c be a chain of linkpoints in H in iteration j,
and let v 1 and v 2 be the two branchpoints adjacent to c. Let u be a vertex of minimum
weight among v 1 , v 2 , and the vertices in c. We note that the weighted reduction procedure
deletes all vertices in c except possibly for one representative, depending on whether u is
in c or is one of its adjacent branchpoints. We now observe that the set of cycles that pass
through a linkpoint in c is the same for all linkpoints in c, and is contained in the set of
cycles that pass through v 1 , and is also contained in the set of cycles that pass through v 2 .
This implies that if the dual constraint corresponding to u is not violated, then the dual
constraints corresponding to any vertex in c is also not violated.
The algorithm essentially constructs a primal solution x from the dual solution y: It selects
into the feedback vertex set all vertices for which: (i) the corresponding dual constraints
are tight; and (ii) in the iteration the constraint first became tight, the corresponding vertex
belonged to the graph. As stated earlier, this construction yields a feasible solution.
Let x
denote the optimal primal and dual fractional
solutions, respectively. It follows from the duality Theorem that
w(v)
w(v)
\Gamma2C
y
\Gamma2C
Hence, to prove the theorem, it suffices to bound the ratio between the LHS and the RHS
of (5). First note that y \Gamma 6= 0 only for cycles \Gamma in G that are associated with cycles \Gamma 0
that were chosen at some iteration of MiniWCycle. By the above construction of x, it is
clear that the dual variable y \Gamma of each such cycle \Gamma contributes its value to at most V
vertices. Hence,
\Gamma2C
Now, in each iteration, the graph H 0 is a branchy graph. Therefore, by Lemma 9, we have
that jV (\Gamma 0 )j - 4 log 2 jV (G)j. Hence the theorem is proved.
Proposition 11 For planar graphs, the weighted performance ratio of MiniWCycle is at
most 10.
Proof. We first notice that the weighted reduction process preserves planarity and, there-
fore, at each iteration of algorithm MiniWCycle we remain with a planar graph.
We claim that every rich planar graph G must contain a face of length at most 5. Assume
the contrary. By summing up the lengths of all the faces, we get that 2|E| ≥ 6|Z|, where Z
denotes the set of faces of G. By Euler's formula, |V| − |E| + |Z| = 2.
Hence, |E| ≤ (3/2)|V| − 3. However, since the degree of each vertex is at least 3, we get that 2|E| ≥ 3|V|,
which is a contradiction. Furthermore, this implies that a branchy planar
graph must contain a cycle of length at most 10.
3.2 Low-degree graphs
The algorithm presented in this section is based on the following variant of Lemma 6.
Lemma 12 Let G be a branchy graph. Then, for every feedback vertex set F of G,
Proof. Let F be a feedback vertex set of G. We can assume without loss of generality that
F contains only branchpoints, since this assumption can only decrease jF j. Let G 0 be the
minimal (unweighted) reduction graph of G, i.e., G 0 contains only branchpoints or isolated
self-looped singletons. Clearly, F is also a feedback vertex set of G 0 . Thus, G 0 and F satisfy
the conditions of Lemma 6 (\Delta a = \Delta), yielding that,
Since G 0 is a branchy graph, the number of linkpoints in G can be at most \Delta(G 0 )
Hence,
We now present a weighted greedy algorithm for finding a feedback vertex set in a graph
G.
Algorithm WGreedy (Input: (G; w); Output: feedback vertex set F of (G; w));
while H is not a forest do begin:
Find a minimal weighted reduction graph (H 0
of (H; wH );
F
remove U i from H 0
i with its incident edges;
end.
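Parts of the WGreedy pseudocode above were lost in extraction; the step below is one plausible reading of a single iteration, chosen to be consistent with the proof of Theorem 13 (where the weight removed in iteration i is uniform over V(H'_i)). The selection rule for U_i is therefore an assumption, not a quotation of the algorithm.

```python
def wgreedy_step(reduced_vertices, weights):
    """One plausible reading of a WGreedy iteration: subtract the minimum
    remaining weight from every vertex of the branchy reduced graph H'_i and
    return the vertices U_i whose weight drops to zero, to be added to F and
    removed from the graph.  This choice is an assumption."""
    delta = min(weights[v] for v in reduced_vertices)
    U = []
    for v in reduced_vertices:
        weights[v] -= delta
        if weights[v] == 0:
            U.append(v)
    return U, delta

w = {"a": 3.0, "b": 1.0, "c": 2.0}
print(wgreedy_step({"a", "b", "c"}, w), w)   # (['b'], 1.0) {'a': 2.0, 'b': 0.0, 'c': 1.0}
```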
For a subset S ' V , let w(S) denote the sum of weights of the vertices in S. We now
prove the following theorem.
Theorem 13 Let G be a branchy graph. Denote by F the feedback vertex set computed
by algorithm WGreedy, and by F* a minimum-weight feedback vertex set in G. Then, w(F) ≤ 2Δ²(G) · w(F*).
Proof. Assume that the number of iterations the while loop is executed in algorithm
WGreedy is p. We define the following weight functions w (G). The weight
function w i is defined, for 1 - i - p, as follows:
For all
For a subset S, let w i (S) denote the sum of weights of the vertices in S, where the
weight function is w i . Clearly,
Suppose that at one of the weighted reduction steps of algorithm WGreedy, a chain c of
equal weight linkpoints was reduced to a single vertex, say, v, which either belongs to c or
is one of the two branchpoints adjacent to c. Suppose further that v was added to F . If F
also contains a vertex from the chain c, then without loss of generality, we can assume that
this vertex can be replaced by v.
Let Obviously,
1 . We claim that if
p. Assume this is not the case. Then, with respect to the order in which vertices
entered F in algorithm WGreedy, let u be the first vertex such that u 2 F ,
was removed from the graph in a weighted reduction step. This means that u was at the
time of its removal a linkpoint that had an adjacent vertex u 0 with smaller weight. But then,
by exchanging u for u 0 in F , we obtain a feedback vertex set which has smaller weight,
contradicting the optimality of F . Hence, for a vertex
Therefore,
Notice that in the graph H 0
i , the weight function w i assigns the same weight to all vertices.
Hence, by Lemma 12, we have that w i
the theorem follows.
It follows from Lemma 8 that the performance ratio of algorithm WGreedy for (G; w)
is at most 2Δ²(G) for any graph G.
4 The Loop Cutset Problem and its Application
In section 4.1 we consider a variant of the WFVS problem for directed graphs and in
section 4.2 we describe its application to Bayesian inference.
4.1 The loop cutset problem
The underlying graph of a directed graph D is the undirected graph formed by ignoring
the directions of the edges in D. A loop in D is a subgraph of D whose underlying graph
is a cycle. A vertex v is a sink with respect to a loop Γ if the two edges adjacent to v in Γ
are directed into v. Every loop must contain at least one vertex that is not a sink with
respect to that loop. Each vertex that is not a sink with respect to a loop \Gamma is called an
allowed vertex with respect to \Gamma. A loop cutset of a directed graph D is a set of vertices
that contains at least one allowed vertex with respect to each loop in D. Our problem is
to find a minimum-weight loop cutset of a given directed graph D and a weight function w on its vertices.
We denote by -(D; w) the sum of weights of the vertices in such a loop cutset. Greedy
approaches to the loop cutset problem have been suggested by [SuC90] and [St90]. Both
methods can be shown to have a performance ratio as bad as Ω(n/4) in certain planar
graphs [St90]. An application of our approximation algorithms to the loop cutset problem
in the area of Bayesian inference is described later in this section.
The approach we take is to reduce the weighted loop cutset problem to the weighted
feedback vertex set problem solved in the previous section. Given a weighted directed graph
(D; w), we define the splitting weighted undirected graph (D s ; w s ) as follows. Split each
vertex v in D into two vertices v in
and v out
in D s such that all incoming edges to v become
undirected incident edges with v in
, and all outgoing edges from v become undirected incident
edges with v out
. In addition, we connect v in
and v out
by an undirected edge. Set w s (v in ) = ∞ and w s (v out ) =
w(v). For a set of vertices X in D s , we define ψ(X) as the set obtained
by replacing each vertex v in
or v out
in X by the respective vertex v in D from which these
vertices originated.
Our algorithm can now be easily stated.
Algorithm LoopCutset (Input: (D; w); Output: loop cutset F of (D; w));
Construct (D s ; w s );
Apply MiniWCycle on (D s ; w s ) to obtain a feedback vertex set X;
F ← ψ(X).
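A sketch of the splitting construction and of the mapping ψ used by LoopCutset follows; vertex names of the form (v, "in") and (v, "out") are illustrative conventions, and any weighted feedback vertex set routine can be plugged in on top of it.

```python
import math

def split_graph(digraph, weight):
    """Build the splitting weighted undirected graph (D^s, w^s) described
    above: v_in collects v's incoming edges, v_out its outgoing edges, the two
    are joined by an edge, w^s(v_in) is infinite and w^s(v_out) = w(v).
    `digraph` maps each vertex to the set of heads of its outgoing edges."""
    adj, ws = {}, {}
    for v in digraph:
        adj[(v, "in")] = {(v, "out")}
        adj[(v, "out")] = {(v, "in")}
        ws[(v, "in")] = math.inf
        ws[(v, "out")] = weight[v]
    for v, heads in digraph.items():
        for u in heads:                      # directed edge v -> u
            adj[(v, "out")].add((u, "in"))
            adj[(u, "in")].add((v, "out"))
    return adj, ws

def psi(X):
    """Map a set of split vertices back to the original vertices."""
    return {v for (v, _) in X}

# Example: loops 1->2->3->1 and 3->4->3 share vertex 3.
D = {1: {2}, 2: {3}, 3: {1, 4}, 4: {3}}
adj, ws = split_graph(D, {v: 1 for v in D})
# X = {(3, "out")} is a finite-weight feedback vertex set of (adj, ws),
# and psi(X) = {3} is a loop cutset of D.
print(psi({(3, "out")}))
```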
Note that each loop in D is associated with a unique cycle in D s , and vice-versa, in a
straightforward manner. Let I (\Gamma) denote the loop image of a cycle \Gamma in D s , and I -1 (K)
denote the cycle image of a loop K in D. It is clear that the mapping I is
The next lemma shows that algorithm LoopCutset outputs a loop cutset of (D; w).
Lemma 14 Let (D; w) be a directed weighted graph and (D s ; w s ) be its splitting graph.
Then: (i) If F is a feedback vertex set of (D s ; w s ) having finite weight, then ψ(F) is a loop
cutset of (D; w), and w s (F) = w(ψ(F)). (ii) If U is a loop cutset of D, then the set U s
obtained from U by replacing each vertex v ∈ U by the vertex v out is a feedback vertex set
of D s , and w s (U s ) = w(U).
Proof. We prove (i). The proof of (ii) is similar. Let \Gamma be a loop in D. To prove the lemma
we show that an allowed vertex with respect to \Gamma belongs to /(F ). Let I -1 (\Gamma) be the unique
cycle image of \Gamma in D s . Since F is a cycle cover of D s having finite weight, there must be
a vertex v out
2 F in I -1 (\Gamma). Now, it is clear that vertex v 2 \Gamma from which v out
originated is
an allowed vertex with respect to \Gamma as needed. To complete the proof, by the finiteness of
must have w s for each vertex in F .
It follows from Lemma 14 that -(D; w) = -(D s ; w s ). In addition, due to Theorem 10 applied
to the graph D s , and since the number of vertices in D s is twice the number of vertices
in D, we get the following bound on the performance ratio of algorithm LoopCutset.
Theorem 15 The performance ratio of LoopCutset is at most 4 log₂ (2|V(D)|).
We now show that in the unweighted loop cutset problem, we can achieve a performance
ratio better than 4. In this case, for each vertex v ∈ D, the weight of v out ∈ D s is one unit, and
the weight of v in ∈ D s is infinite. This falls within the framework considered in Section 2, since
vertices with infinite weight in D s can be treated as blackout vertices. We can therefore
apply SubG-2-3 in the LoopCutset algorithm instead of applying MiniWCycle and
obtain the following improved performance ratio.
Theorem 16 When using SubG-2-3, the unweighted performance ratio of LoopCutset
is at most 4 − (2/|V(D)|).
Proof. We have,
where the equality is due to Lemma 14, and the inequality is due to Theorem 7. Since
(D)j, the claim is proved.
4.2 An application
We conclude this section with an application of approximation algorithms for the loop cutset
problem.
Let P(u_1, …, u_n) be a probability distribution where each u_i draws values from a finite
set called the domain of u_i. A directed graph D with no directed cycles is called a Bayesian
network of P if there is a 1-1 mapping between {u_1, …, u_n} and the vertices in D, such that
u_i is associated with vertex i and P can be written as follows:
P(u_1, …, u_n) = ∏_{i=1}^{n} P(u_i | u_{i_1}, …, u_{i_{j(i)}}),      (6)
where i_1, …, i_{j(i)} are the source vertices of the incoming edges to vertex i in D.
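A minimal sketch of Eq. (6): the joint distribution is evaluated as a product of conditional probability tables, one per vertex given its parents. The table layout and the toy numbers below are illustrative assumptions.

```python
from itertools import product

def joint_probability(assignment, parents, cpt):
    """Evaluate Eq. (6): one conditional probability table (CPT) entry per
    vertex, conditioned on its parents.  parents[i] lists the source vertices
    of edges into i; cpt[i][(value_of_i, parent_values)] stores P(u_i | parents)."""
    p = 1.0
    for i in parents:
        pa_vals = tuple(assignment[j] for j in parents[i])
        p *= cpt[i][(assignment[i], pa_vals)]
    return p

# Toy network 1 -> 2 -> 3 over binary variables (all numbers illustrative).
parents = {1: [], 2: [1], 3: [2]}
cpt = {
    1: {(0, ()): 0.6, (1, ()): 0.4},
    2: {(0, (0,)): 0.9, (1, (0,)): 0.1, (0, (1,)): 0.3, (1, (1,)): 0.7},
    3: {(0, (0,)): 0.8, (1, (0,)): 0.2, (0, (1,)): 0.5, (1, (1,)): 0.5},
}
total = sum(joint_probability(dict(zip((1, 2, 3), vals)), parents, cpt)
            for vals in product((0, 1), repeat=3))
print(total)   # approximately 1, since each CPT row is normalised
```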
It is worth noting that Bayesian networks are useful knowledge representation schemes
for many artificial intelligence tasks. Bayesian networks allow a wide spectrum of independence
assumptions to be considered by a model builder so that a practical balance can
be established between computational needs and adequacy of conclusions. For a complete
exploration of this subject see [Pe88].
Suppose now that some variables {v_1, …, v_l} among {u_1, …, u_n} are assigned specific
values v̂_1, …, v̂_l, respectively. The updating problem is to compute the probability
P(u_i | v_1 = v̂_1, …, v_l = v̂_l) for each variable u_i. In principle, such computations are
straightforward because each Bayesian network defines the joint probability distribution
via Eq. (6), and conditional probabilities can be computed by dividing the appropriate
sums. However, such computations are inefficient both in time and space unless
they use conditional independence assumptions defined by Eq. (6). We shall see next how
our approximation algorithms for the loop cutset problem reduce the computations needed
for solving the updating problem.
A trail in a Bayesian network is a subgraph whose underlying graph is a simple path. A
vertex b is called a sink with respect to a trail t if there exist two consecutive edges a ! b
and b / c on t. A trail t is active by a set of vertices Z if (1) every sink with respect to
t either is in Z or has a descendant in Z and (2) every other vertex along t is outside Z.
Otherwise, the trail is said to be blocked by Z.
Verma and Pearl [VePe88] have proved that if D is a Bayesian network of P(u_1, …, u_n)
and all trails between a vertex in {r_1, …, r_k} and a vertex in {s_1, …, s_m} are blocked by
{t_1, …, t_l}, then the corresponding sets of variables {u_{r_1}, …, u_{r_k}} and {u_{s_1}, …, u_{s_m}}
are independent conditioned on {u_{t_1}, …, u_{t_l}}. Furthermore, Geiger and Pearl [GP90] proved
a converse to this theorem. Both results are presented and extended in [GVP90].
Using the close relationship between blocked trails and conditional independence, Kim
and Pearl [KiP83] developed an algorithm update-tree that solves the updating problem
on Bayesian networks in which every two vertices are connected with at most one trail.
update-tree views each vertex as a processor that repeatedly sends messages to each of
its neighboring vertices. When equilibrium is reached, each vertex i contains the conditional probability distribution P(u_i | v_1, ..., v_l), where v_1, ..., v_l are the assigned variables. These computations reach equilibrium, regardless of the order of execution, in time proportional to the length of the longest trail in the network.
Pearl [Pe86] solved the updating problem on any Bayesian network as follows. First, a
set of vertices S is selected, such that any two vertices in the network are connected by at
most one active trail in S [ Z, where Z is any subset of vertices. Then, update-tree is
applied once for each combination of value assignments to the variables corresponding to S,
and, finally, the results are combined. This algorithm is called the method of conditioning
and its complexity grows exponentially with the size of S. Note that according to the
definition of active trails, the set S in Pearl's algorithm is a loop cutset of the Bayesian
network. In this paper we have developed approximation algorithms for finding S.
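The following schematic Python sketch shows only the overall bookkeeping of the method of conditioning; update_tree and weight are hypothetical stand-ins for Pearl's propagation routine on the resulting singly connected network and for the probability of each cutset assignment given the evidence, so this is not a definitive implementation of Pearl's algorithm:

from itertools import product

def conditioning(network, S, domains, evidence, update_tree, weight):
    # Run update-tree once per value assignment to the loop cutset S and
    # combine the per-vertex belief distributions weighted by the assignment.
    combined = {}
    for values in product(*(domains[v] for v in S)):
        assignment = dict(zip(S, values))
        beliefs = update_tree(network, {**evidence, **assignment})
        w = weight(assignment, evidence)
        for node, dist in beliefs.items():
            acc = combined.setdefault(node, {})
            for val, p in dist.items():
                acc[val] = acc.get(val, 0.0) + w * p
    return combined

The number of iterations of the outer loop is exactly the product of the domain sizes of the cutset variables, which is why the complexity of the method grows exponentially with |S|.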
When the domain size of the variables varies, then update-tree is called a number
of times which is bounded from above by the product of the domain sizes of the variables
whose corresponding vertices participate in the loop cutset. If we take the logarithm of the
domain size as the weight of a vertex, then solving the weighted loop cutset problem with
these weights optimizes Pearl's updating algorithm in the case where the domain sizes are
allowed to vary.
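As a small illustration of this weighting (function names and the base-2 logarithm are our choices), the vertex weights and the resulting number of update-tree invocations can be computed as follows:

import math

def cutset_weights(domain_size):
    # Weight each vertex by the logarithm of its domain size; minimizing the
    # total weight of the loop cutset then minimizes the product of the domain
    # sizes, i.e. the number of update-tree invocations.
    return {v: math.log2(d) for v, d in domain_size.items()}

def conditioning_cost(cutset, domain_size):
    cost = 1
    for v in cutset:
        cost *= domain_size[v]   # number of value combinations conditioned on
    return cost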
It is useful to relate the feedback vertex set problem with the vertex cover problem in order
to establish lower bounds on the performance ratios attainable for the feedback vertex set
problem. A vertex cover of an undirected graph is a subset of the vertex set that intersects
with each edge in the graph. The vertex cover problem is to find a minimum weight vertex
cover of a given graph. There is a simple polynomial reduction from the vertex cover
problem to the feedback vertex set problem: Given a graph G, we extend G to a graph
H by adding a vertex v_e for each edge e ∈ E(G), and connecting v_e with the two vertices of G with which e is incident. It is easy to verify that there always exists a minimum
feedback vertex set in H whose vertices are all in V (G) and this feedback vertex set is also
a minimum vertex cover of G. In essence, this reduction replaces each edge in G with a
cycle in H, thus transforming any vertex cover of G to a feedback vertex set of H.
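A sketch of this construction in Python, assuming integer vertex labels and an adjacency-set representation (both assumptions made only for illustration):

# Graphs are dicts mapping a vertex to the set of its neighbours; vertex labels
# are assumed comparable (e.g. integers) so each undirected edge is handled once.
def vc_to_fvs_instance(G):
    H = {v: set(nbrs) for v, nbrs in G.items()}
    for a in G:
        for b in G[a]:
            if a < b:
                ve = ('edge', a, b)        # new vertex v_e for edge e = {a, b}
                H[ve] = {a, b}             # the triangle a - b - v_e replaces e
                H[a].add(ve)
                H[b].add(ve)
    return H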
Due to this reduction, it follows that the performance ratio obtainable for the feedback
vertex set problem cannot be better than the one obtainable for the vertex cover problem.
The latter problem has attracted a lot of attention over the years but has so far resisted any
approximation algorithm that achieves in general graphs a constant performance ratio less
than 2. We note that the above reduction retains planarity. However, for planar graphs,
Baker [Bak94] provided a Polynomial Approximation Scheme (PAS) for the vertex cover
problem. For the UFVS problem, there are examples showing that 4 is the tightest constant
performance ratio of algorithm SubG-2-3.
Another consequence of the above reduction is a lower bound on the unweighted performance
ratio of the following greedy algorithm, GreedyCyc, for the feedback vertex
set problem. In each iteration, GreedyCyc removes a vertex of maximal degree from
the graph, adds it to the feedback vertex set, and then removes all endpoints (vertices of degree at most one) from the graph. A
similar greedy algorithm for the vertex cover problem is presented in [Jo74] and in [Lo75].
The latter algorithm was shown to have an unweighted performance ratio no better than Ω(log |V(G)|) [Jo74]. Due to the above reduction, the same lower bound holds also for GreedyCyc, as demonstrated by the graphs of [Jo74]. A tight upper
bound on the worst-case performance ratio of GreedyCyc is unknown.
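For concreteness, a minimal Python sketch of GreedyCyc as described above, for simple undirected graphs given as adjacency sets (this representation is assumed only for illustration):

def greedy_cyc(G):
    # Repeatedly take a vertex of maximal degree into the feedback vertex set,
    # then strip endpoints (degree <= 1 vertices), which lie on no cycle.
    G = {v: set(nbrs) for v, nbrs in G.items()}
    fvs = []

    def remove(v):
        for u in G.pop(v):
            G[u].discard(v)

    def strip_endpoints():
        changed = True
        while changed:
            changed = False
            for v in [v for v in G if len(G[v]) <= 1]:
                remove(v)
                changed = True

    strip_endpoints()
    while G:                                   # min degree >= 2, so a cycle remains
        v = max(G, key=lambda u: len(G[u]))    # vertex of maximal degree
        remove(v)
        fvs.append(v)
        strip_endpoints()
    return fvs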
Finally, one should notice that the following heuristics may improve the performance
ratios of our algorithms. For example, in each iteration MiniWCycle chooses to place
in the cover all zero-weight vertices found on the smallest cycle. This choice might be
rather poor especially if many weights are equal. It may be useful in this case to perturb
the weights of the vertices before running the algorithm. Similarly, in algorithm SubG-
2-3, there is no point in taking blindly all branchpoints of H. An appropriate heuristic
here may be to pick the branchpoints one by one in decreasing order of residual degrees.
Furthermore, the subgraph H itself should be constructed such that it contains as many
high degree vertices as possible.
Remark
In a preliminary version of this paper, presented in [BaGNR94], we conjectured that a
constant performance ratio is attainable by a polynomial time algorithm for the WFVS
problem. This has been recently verified in [BeG94, BaBF94] where a performance ratio of
2 has been obtained.
Acknowledgment
We would like to thank David Johnson for bringing [EP62] to our attention, and Samir
Khuller for helpful discussions.
--R
Constant ratio approximations of the weighted feedback vertex set problem for undirected graphs
Approximation algorithms for NP-complete problems on planar graphs
A local-ratio theorem for approximating the weighted vertex cover problem
Approximation Algorithms for the Feedback Vertex Set Problem with Applications to Constraint Satisfaction and Bayesian Inference
Approximation algorithms for the loop cutset problem
The cycle cutset method for improving search performance in AI
Enhancement schemes for constraint processing: backjumping
On the maximal number of disjoint circuits of a graph
On the independent circuits contained in a graph
On the logic of causal models
independence in Bayesian networks
Approximation algorithms for set covering and vertex covering problems
Efficient bounds for the stable set
Finding a minimum circuit in a graph
Approximation algorithms for combinatorial problems
A primal-dual parallel approximation technique applied to weighted set and vertex cover
A computational model for combined causal and diagnostic reasoning in inference systems
On the ratio of optimal integral and fractional covers
Four approximation algorithms for the feedback vertex set problem
Probabilistic reasoning in intelligent systems: Networks of plausible inference
A new proof and generalizations of a theorem by Erdős and Pósa on graphs without k
On heuristics for finding loop cutsets in multiply connected belief networks
Cooper G.
Semantics and expressiveness
Some properties of graphs containing k independent circuits
--TR
--CTR
Paola Festa , Panos M. Pardalos , Mauricio G. C. Resende, Algorithm 815: FORTRAN subroutines for computing approximate solutions of feedback set problems using GRASP, ACM Transactions on Mathematical Software (TOMS), v.27 n.4, p.456-464, December 2001
Rudolf Berghammer , Alexander Fronk, Exact computation of minimum feedback vertex sets with relational algebra, Fundamenta Informaticae, v.70 n.4, p.301-316, April 2006
Rudolf Berghammer , Alexander Fronk, Exact Computation of Minimum Feedback Vertex Sets with Relational Algebra, Fundamenta Informaticae, v.70 n.4, p.301-316, December 2006
Ioannis Caragiannis , Christos Kaklamanis , Panagiotis Kanellopoulos, New bounds on the size of the minimum feedback vertex set in meshes and butterflies, Information Processing Letters, v.83 n.5, p.275-280, 15 September 2002
Maw-Shang Chang , Chin-Hua Lin , Chuan-Min Lee, New upper bounds on feedback vertex numbers in butterflies, Information Processing Letters, v.90 n.6, p.279-285,
Camil Demetrescu , Irene Finocchi, Combinatorial algorithms for feedback problems in directed graphs, Information Processing Letters, v.86 n.3, p.129-136, 16 May
Rastislav Krlovi , Peter Ruika, Minimum feedback vertex sets in shuffle-based interconnection networks, Information Processing Letters, v.86 n.4, p.191-196, 31 May
Jiong Guo , Jens Gramm , Falk Hffner , Rolf Niedermeier , Sebastian Wernicke, Compression-based fixed-parameter algorithms for feedback vertex set and edge bipartization, Journal of Computer and System Sciences, v.72 n.8, p.1386-1396, December, 2006
Venkatesh Raman , Saket Saurabh , C. R. Subramanian, Faster fixed parameter tractable algorithms for finding feedback vertex sets, ACM Transactions on Algorithms (TALG), v.2 n.3, p.403-415, July 2006
Reuven Bar-Yehuda , Keren Bendel , Ari Freund , Dror Rawitz, Local ratio: A unified framework for approximation algorithms. In Memoriam: Shimon Even 1935-2004, ACM Computing Surveys (CSUR), v.36 n.4, p.422-463, December 2004 | bayesian networks;combinatorial optimization;approximation algorithms;constraint satisfaction;vertex feedback set |
284962 | Surface Approximation and Geometric Partitions. | Motivated by applications in computer graphics, visualization, and scientific computation, we study the computational complexity of the following problem: given a set S of n points sampled from a bivariate function f(x,y) and an input parameter $\eps > 0$, compute a piecewise-linear function $\Sigma(x,y)$ of minimum complexity (that is, an xy-monotone polyhedral surface, with a minimum number of vertices, edges, or faces) such that $| \Sigma(x_p, y_p) \; - \; z_p | \:\:\leq\:\: \eps$ for all $(x_p, y_p, z_p) \in S$. We give hardness evidence for this problem, by showing that a closely related problem is NP-hard. The main result of our paper is a polynomial-time approximation algorithm that computes a piecewise-linear surface of size O(Ko log Ko), where Ko is the complexity of an optimal surface satisfying the constraints of the problem.The technique developed in our paper is more general and applies to several other problems that deal with partitioning of points (or other objects) subject to certain geometric constraints. For instance, we get the same approximation bound for the following problem arising in machine learning: given n "red" and m "blue" points in the plane, find a minimum number of pairwise disjoint triangles such that each blue point is covered by some triangle and no red point lies in any of the triangles. | Introduction
In scientific computation, visualization, and computer graphics, the modeling and construction
of surfaces is an important area. A small sample of some recent papers [2, 3, 5, 7, 10,
13, 20, 21] on this topic gives an indication of the scope and importance of this problem.
The first author has been supported by National Science Foundation Grant CCR-93-01259 and an NYI
award.
Rather than delve into any specific problem studied in these papers, we focus on a general,
abstract problem that seems to underlie them all.
In many scientific and computer graphics applications, computation takes place over
a surface in three dimensions. The surface is generally modeled by piecewise linear (or,
sometimes piecewise cubic) patches, whose vertices lie either on or in the close vicinity of
the actual surface. In order to ensure that all local features of the surface are captured,
algorithms for an automatic generation of these models generally sample at a dense set of
regularly spaced points. Demands for real-time speed and reasonable performance, however,
require the models to have as small a combinatorial complexity as possible. A common
technique employed to reduce the complexity of the model is to somehow "thin" the surface
by deleting vertices with relatively "flat" neighborhoods. Only ad hoc and heuristic methods
are known for this key step. Most of the thinning methods follow a set of local rules (such
as deleting edges or vertices whose incident faces are almost coplanar), which are applied
in an arbitrary order until they are no longer applicable. Not surprisingly, these methods
come with no performance guarantee, and generally no quantitative statement can be made
about the surface approximation computed by them.
In this paper, we address the complexity issues of the surface approximation problem
for surfaces that are xy-monotone. These surfaces represent the graphs of bivariate functions
f(x; y), and they arise quite naturally in many scientific applications. One possible
approach for handling arbitrary surfaces is to break them into monotone pieces, and apply
our algorithm individually on each piece. Let us formally define the main problem studied
in our paper.
Let f be a function of two variables x and y, and let S be a set of n points sampled
from f. A piecewise linear function Σ is called an ε-approximation of f, for ε > 0, if |Σ(x_p, y_p) - z_p| ≤ ε for every point (x_p, y_p, z_p) ∈ S. Given S and ε, the surface approximation problem is to compute a piecewise linear function Σ that ε-approximates f with a minimum number of breakpoints. The breakpoints of Σ can be arbitrary points of R^3, and they are not necessarily points of S. In many applications, f is generally a function hypothesized to fit the observed data - the function Σ is a computationally efficient substitute for f. The parameter ε is used to achieve a complexity-quality tradeoff: the smaller the ε, the higher the
fidelity of the approximation. (The graph of a piecewise linear function of two variables is
also called a polyhedral terrain in computational geometry literature; the breakpoints of the
function are the vertices of the terrain.)
The state of theoretical knowledge on the surface approximation problem appears to be
rather slim. The provable performance bounds are known only for convex surfaces. For this
special case, an O(n^3)-time algorithm was presented by Mitchell and Suri [16] for computing an ε-approximation of a convex polytope of n vertices in R^3. Their algorithm produces an approximation of size O(K_o log n), where K_o is the size of an optimal ε-approximation. Extending their work, Clarkson [6] has proposed a randomized algorithm that computes an approximation of size O(K_o log K_o) in O(K_o n^{1+δ}) expected time, where δ > 0 can be an arbitrarily small positive number.
In this paper, we study the approximation problem for surfaces that correspond to
graphs of bivariate functions. We show that it is NP-Hard to decide if a surface can be ε-approximated using k vertices (or facets). The main result of our paper, however, is a polynomial-time algorithm for computing an ε-approximating surface of a guaranteed quality. If an optimal ε-approximating surface of f has K_o vertices, our algorithm produces a surface with O(K_o log K_o) vertices. Observe that we are dealing with two approximation measures here: ε, which measures the absolute z difference between f and the "simplified" surface Σ, and the ratio between the sizes of the optimal surface and the output of our algorithm. For the lack of better terminology, we use the term "approximation" for both measures. Notice though that ε is an input (user-specified) parameter, while log K_o is the approximation guarantee provided by the analysis of our algorithm.
The key to our approximation method is an algorithm for partitioning a set of points
in the plane by pairwise disjoint triangles. This is an instance of the geometric set cover
problem, with an additional disjointness constraint on the covering objects (triangles).
Observe that the disjointness condition on covering objects precludes the well-known greedy
method for set covering [11, 14]; in fact, we can show that a greedy solution has size
in the worst-case. Let us now reformulate our surface approximation problem as a
constrained geometric partitioning problem.
Let -p denote the orthogonal projection of a point p ∈ R^3 onto the xy-plane z = 0. In general, for any set A ⊂ R^3, we use -A to denote the orthogonal projection of A onto the xy-plane. Then, in order to get an ε-approximation of f, it suffices to find a set of triangles in 3-space such that (i) the projections of these triangles on the plane z = 0 are pairwise disjoint and they cover the projected set of points -S, and (ii) the vertical distance between a triangle and any point of S directly above/below it is at most ε. Our polynomial-time algorithm produces a family of O(K_o log K_o) such triangles. We stitch together these triangles to produce the desired surface Σ. The "stitching" process introduces at most a constant factor more triangles.
The geometric partition framework also includes several extensions and generalizations
of the basic surface approximation problem. For instance, we can formulate a stronger
version of the problem by replacing each sample point by a horizontal triangle (or, any
polygon). Specifically, we are given a family of horizontal triangles (or polygons) in 3-
space, whose projections on the xy-plane are pairwise disjoint. We want a piecewise linear,
"-approximating surface whose maximum vertical distance from any point on the triangles
is ". Our approximation algorithm works equally well for this variant of the problem-this
variant addresses the case when some local features of the surface are known in detail;
unfortunately, our method works only for horizontal features.
Finally, let us mention the planar bichromatic partition problem that is of independent
interest in the machine learning literature: Given a set R of 'red' points and another set
B of 'blue' points in the plane, find a minimum number of pairwise disjoint triangles such
that each blue point lies in a triangle and no red point lies in any of the triangles. Our
algorithm gives a solution with O(K_o log K_o) triangles.
The running time of our algorithms, though polynomial, is quite high, and at the moment
has only theoretical value. These being some of the first results in this area, however, we
expect that the theoretical time complexity of these problems would improve with further
work. Perhaps, some of the ideas in our paper may also shed light on the theoretical
performance of some of the "practical" algorithms that are used in the trade.
2 A Proof of NP-Hardness
We show that both the planar bichromatic partition problem and the surface approximation
problem are NP-Hard , by a reduction from the planar 3-SAT. We do not know whether
they are in NP since the coordinates of the triangles in the solution may be quite large. We
recall that the 3-SAT problem consists of n variables x_1, ..., x_n and m clauses C_1, ..., C_m, each with three literals; each literal is either a variable x_k or its negation ¬x_k. The problem is to decide whether the boolean formula

F = C_1 ∧ C_2 ∧ ... ∧ C_m

has a truth assignment. An instance of 3-SAT is called planar if its variable-clause graph is planar. In other words, F is an instance of the planar 3-SAT if the graph G(F) = (V, E) is planar (see [12]), where V and E are defined as follows:

V = {x_1, ..., x_n} ∪ {C_1, ..., C_m},  E = {(x_i, C_j) : x_i or ¬x_i appears in C_j}.
Theorem 2.1 The planar bichromatic partition problem is NP-Hard.
Proof: Our construction is similar to the one used by Fowler et al. [9], who prove the
intractability of certain planar geometric covering problems (without the disjointness con-
dition); see also [4, 8] for similar constructions. We first describe our construction for the
bichromatic partition problem. To simplify the proof, our construction allows three or more
points to lie on a line-the construction can be modified easily to remove these degeneracies.
Let F be a boolean formula, and let E) be a straight-line planar embedding of
the graph G(F ). We construct an instance of the red-blue point partition problem whose
solution determines whether F is satisfiable.
Figure 1: (i) An instance of planar 3-SAT, (ii) the corresponding instance of bichromatic partition, (iii) details of P_1 and C_1 (only some of the red points lying near P_1 and C_1 are shown), (iv) two possible coverings of blue points on P_2.
We start by placing a 'blue' point at each clause node C_j, 1 ≤ j ≤ m. Let x_i be a variable node, and let e_{i1}, ..., e_{il} be the edges incident to it. In the plane embedding of G, the edges e_{ij} form a "star" (see Figure 1 (i)). We replace this star by its Minkowski sum with a disk of radius δ, for a sufficiently small δ > 0. Before performing the Minkowski sum, however, we shrink all the edges of the star at x_i by 2δ, so that the "star-shaped polygons" meeting at a clause node do not overlap (see Figure 1 (ii)). Let P_i denote the star-shaped polygon corresponding to x_i. In the polygon P_i, corresponding to each edge e_{ij}, there is a tube, consisting of two copies of e_{ij}, each translated by distance δ, plus a circular arc s_{ij} near the clause node C_j.
We place an even number of (say 2k i ) 'blue' points on the boundary of P i , as follows.
We put two points a_{ij} and b_{ij} on the circular arc s_{ij} near its tip. If C_j contains the literal x_i, we put six points on the straight-line portion of P_i's boundary, three each on the translated copies of the edge e_{ij}. On each copy, we move the middle point slightly inwards so as to replace the original edge of P_i by a path of length two. On the other hand, if C_j contains the literal ¬x_i, we put four points on the straight-line portion of P_i's boundary, two each on the translated copies of the edge e_{ij}. Thus, the number of blue points added for edge e_{ij} is either six or eight. (2k_i is the total number of points put along P_i.) Let B denote the set of all blue points placed in this way, and let k = k_1 + k_2 + ... + k_n.
Finally, we scatter a large (but polynomially bounded) number of 'red' points so that (i) any segment connecting two blue points that are not adjacent along the boundary of some P_i contains a red point, and (ii) any triangle with three blue points as its vertices contains at least one red point unless the triangle is defined by a_{ij}, b_{ij}, and the clause point C_j. (See Figure 1 (iii).) Let R be a set of red points satisfying the above two properties.
We claim that the set of blue points B can be covered by k disjoint triangles, none of
which contains any red point, if and only if the formula F has a truth assignment. Our
proof is similar to the one in Fowler et al. [9]; we only sketch the main ideas. The red points
act as enforcers, ensuring that only those blue points that are adjacent on the boundary of
a P i can be covered by a single triangle. Thus, the minimum number of triangles needed
to cover all the points on P i is k i . Further, there are precisely two ways to cover these
points using k_i triangles: in one covering, a_{ij} and b_{ij} are covered by a single triangle for those clauses only in which x_i appears, and, in the other covering, a_{ij} and b_{ij} are covered by a single triangle for those clauses only in which ¬x_i appears; see Figure 1 (iv). We regard the first covering as setting x_i = 1 and the second covering as setting x_i = 0.
Suppose x_i = 1. For any clause C_j that contains x_i, the points a_{ij} and b_{ij} are covered by a single triangle, and we can cover the clause point corresponding to C_j by the same triangle. The same holds if x_i = 0 and the clause C_j contains ¬x_i. In other words, the clause
point of C j can be covered for free if C j is satisfied. Thus, the set of blue points B can be
covered by k triangles if and only if the clause point for each clause C j is covered for free,
that is, the formula F has a truth assignment. This completes our proof of NP-Hardness
of the planar bichromatic partition problem. 2
Remark: The preceding construction is degenerate in that most of the red points lie on
segments connecting two blue points. There are several ways to remove these collinearities;
we briefly describe one of them. For each polygon P_i, replace every blue point b on P_i by two blue points b' and b'' placed very close to b. (We do not make copies of the 'clause points' C_j, 1 ≤ j ≤ m.) For every pair of blue points b_k, b_l that we did not want to cover by a single triangle in the original construction, we place a red point in the convex hull of b'_k, b''_k, b'_l, b''_l. If there are 4k_i blue points on the boundary of P_i, they can be covered by k_i triangles, and there are exactly two ways to cover these blue points by k_i triangles, as earlier. Following a similar, but more involved, argument, we can prove that the set of all blue points can be covered by k triangles if and only if F is satisfiable.
Theorem 2.2 The 3-dimensional surface approximation problem is NP-Hard.
Proof: Our construction is similar in spirit to the one for the bichromatic partition problem,
albeit slightly more complex in detail. We use points of three colors: red, white and black.
The 'white' points lie on the plane z = 0, the 'black' points lie on the plane z = 2A, and the 'red' points lie between these two planes, where A is a sufficiently large constant.
To maintain a connection with the previous construction, the black and white points play
the role of blue points, while the red points play the role of enforcers as before, restricting
the choice of "legal" triangles that can cover the black or white points. We will describe
the construction in the xy-plane, which represents the orthogonal projection of the actual
construction. The actual construction is obtained simply by vertically translating each
point to its correct plane, depending on its color.
We start out again by putting a 'black' point at each clause node C j . Then, for each
variable x i , we construct the "star-shaped" polygon P i ; this part is identical to the previous
construction. We replace each of the two straight-line edges of P i by "concave chains," bent
inward, and also make a small "dent" at the tip of the circular arc s ij , as shown in Figure 2.
We place 12 points on each arm of P i , alternating in color black and white, as follows. At
the tip of the circular arc s ij , we put a white point c ij at the outer endpoint of the dent
and a black point d ij at the inner endpoint of the dent (Figure 2 (ii)). The rest of the
construction is shown in Figure 2 (i) - we put two more points a ij on the circular
arc and 4 points ff l
on each of the two concave chains. The two points
surrounding namely, a ij and b ij , are such that any segment connecting them to any
point on the two concave chains lies inside P i . Next, corresponding to each edge e ij of the
graph G(F ), we put a 'white' point c 0
ij on the segment joining c ij and the clause point C j ,
very close to c ij such that
(See
Figure
2 (iii).) This condition says that, in the final construction when the black
and white points have been translated to their correct z-plane, the vertical distance between
ij and the segment C no more than "-recall that " is the input measure of
approximation.
This completes the placement of white and black points. The only remaining part of
the construction is the placement of 'red' points, which we now describe.
Figure 2: Placing points on the polygon P_2 corresponding to Figure 1: (i) modified P_2 and points on P_2, (ii) points and triangles in the neighborhood of c_{ij}, (iii) points and triangles near C_1.
We add a set of triangles, each containing a large (but polynomially bounded) number of
red points-the role of these triangles is to restrict the choice of legal triangles that can cover
black/white points. The set of triangles associated with P i is labeled T i . The construction
of T_i is detailed in Figure 2 (i). Specifically, for an edge e_{ij}, if x_i appears in C_j, then we put a small triangle that intersects the segment b_{ij} c'_{ij} but not b_{ij} c_{ij}. On the other hand, if ¬x_i appears in C_j, then we put a small triangle that intersects a_{ij} c'_{ij} but not a_{ij} c_{ij}. Next, we put a small
points along P i may be covered by one triangle without intersecting any triangle of T i . We
ensure that one of these triangles intersects the triangle 4ff 1
so that fff 1
cannot be covered by a single triangle. We also place three triangles near each clause C j ,
each containing a large number of red points; see Figure 2 (iii). Finally, we translate black
and white points in the z-direction, as described earlier. Let f- be the set of all
'red' triangles. We move all points in - i vertically to the plane z
There are
two ways to cover the points of P i with 2k i legal (non-intersecting) triangles-one in which
are covered by a single triangle, and the one in which b ij are covered by a
triangle. These coverings are associated with the true and false settings of the variable x i .
Let P denote the set of all points constructed, let t denote the total number of 'red'
triangles, and let
We claim that there exist a polyhedral terrain with
3(k+m+t) vertices that "-approximates P if and only if F has a truth assignment, provided
that " is sufficiently small-recall that m is the number of clauses in F . The claim follows
from the observations that it is always better to cover all red points lying in a horizontal
triangle - i by - i itself, and that a clause C j requires one triangle to cover its points if and
only if one of the literals in C j is set true; otherwise it requires two triangles. (For instance,
if C j contains the literal x i and x i is set true, then the triangle a ij can be enlarged
slightly to cover c 0
ij . The remaining three points for the clause C j can be covered by one
additional triangle.) The rest of the argument is the same as for the bichromatic partition
problem. Finally, we can perturb the points slightly so that no four of them are coplanar.In the remainder of the paper, we develop our approximation algorithms.
3 A Canonical Trapezoidal Partition
We introduce an abstract geometric partitioning problem in the plane, which captures
the essence of both the surface approximation problem as well as the bichromatic points
partition problem. The abstract problem deals with trapezoidal partitions under a boolean
constraint function C satisfying the "subset restriction" property. More precisely, let C be
a boolean function from compact, connected subsets of the plane to {0, 1} satisfying the following property:

    C(Δ) = 1 and Δ' ⊆ Δ  implies  C(Δ') = 1.        (3.1)
For technical reasons, we choose to work with "trapezoids" instead of triangles, where
the top and bottom edges of each trapezoid are parallel to the X-axis. The trapezoids and
triangles are equivalent for the purpose of approximation-each triangle can be decomposed
into two trapezoids, and each trapezoid can be decomposed into two triangles.
Given a set of n points P in the plane, a family of trapezoids Δ = {Δ_1, ..., Δ_k} is called a valid trapezoidal partition (a trapezoidal partition for brevity) of P with respect to a boolean constraint function C if the following conditions hold:
(i) C(Δ_i) = 1 for each trapezoid Δ_i ∈ Δ;
(ii) Δ covers all the points: P ⊂ Δ_1 ∪ ... ∪ Δ_k;
(iii) the trapezoids in Δ have pairwise disjoint interiors.
We can cast our bichromatic partition problem in this abstract framework by setting P = B (the set of 'blue' points) and, for a trapezoid τ ⊂ R^2, defining C(τ) = 1 if and only if τ is empty of red points, that is, τ ∩ R = ∅. In the surface approximation problem, we set P = -S (the orthogonal projection of S on the plane z = 0), and a trapezoid τ ⊂ R^2 has C(τ) = 1 if and only if τ can be vertically lifted to a planar trapezoid in R^3 whose vertical distance from any point of S directly above/below it is at most ε.
The space of optimal solutions for our abstract problem is potentially infinite-the
vertices of the triangles in our problem can be anywhere in the plane. For our approximation
results, however, we show that a restricted choice of trapezoids suffices.
Given a set of n points P in the plane, let L(P ) denote the set consisting of the following
lines: the horizontal lines passing through a point of P , and the lines passing through two
points of P . Thus, |L(P)| = O(n^2). We will call the lines of L(P) the canonical lines
determined by P . We say that a trapezoid \Delta ae R 2 is canonical if all of its edges belong to
lines in L(P ). A trapezoidal partition \Delta is canonical if all of its trapezoids are canonical.
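For illustration, the canonical lines can be enumerated directly; the following Python sketch (the coefficient representation is our own choice) lists the O(n^2) lines of L(P):

from itertools import combinations

# Enumerate L(P): horizontal lines through each point and lines through each
# pair of points, as coefficient triples (a, b, c) representing a*x + b*y = c.
def canonical_lines(P):
    lines = []
    for (x, y) in P:
        lines.append((0.0, 1.0, y))                 # horizontal line through the point
    for (x1, y1), (x2, y2) in combinations(P, 2):
        a, b = y2 - y1, x1 - x2                     # normal of the line through the pair
        lines.append((a, b, a * x1 + b * y1))
    return lines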
The following lemma shows that by limiting ourselves to canonical trapezoidal partitions
only, we sacrifice at most a constant (multiplicative) factor in our approximation.
Figure 3: A canonical trapezoidal partition.
Lemma 3.1 Any trapezoidal partition of P with k trapezoids can be transformed into a
canonical trapezoidal partition of P with at most 4k trapezoids.
Proof: We give a construction for transforming each trapezoid \Delta 2 \Delta into four trapezoids
4, with pairwise disjoint interiors, so that \Delta i together cover all the
points in P " \Delta. By (3.1), the new set of trapezoids is a valid trapezoidal partition of P .
Our construction works as follows.
Consider the convex hull of the points P ∩ Δ. If the convex hull itself is a trapezoid, we return that trapezoid. Otherwise, let ℓ, r, u, and b denote the left, right, top, and bottom edges of Δ, as shown in Figure 4 (i). We perform the following four steps, which constitute our
transformation algorithm.
Figure 4: Illustration of the canonicalization.
(i) We shrink the trapezoid \Delta by translating each of its four bounding edges towards the
interior, until it meets a point of P_Δ = P ∩ Δ. Let Δ' ⊆ Δ denote the smaller trapezoid thus obtained, and let p_ℓ, p_r, p_u, and p_b, respectively, denote a point of P_Δ lying on the left, right, top, and bottom edge of Δ'; we break ties arbitrarily if more than one point lies on an edge.
(ii) We partition Δ' into two trapezoids, Δ_L and Δ_R, by drawing the line segment p_u p_b, as shown in Figure 4 (ii).
(iii) We next partition \Delta L into two trapezoids \Delta LU and \Delta LB , by drawing the maximal
horizontal segment through p ' . Let p 0
' denote right endpoint of this segment. Similarly,
we partition \Delta R into \Delta RU and \Delta RB , lying respectively above and below the horizontal
line segment p r p 0
r .
(iv) We rotate the line supporting the left boundary of \Delta LU around the point p ' in clock-wise
direction until it meets a point of the set denote the intersection
of this line and the top edge of \Delta LU . We set
Figure
4 (iv)). (If
a triangle, which we regard as a degenerate trapezoid; e.g. \Delta 4 in
Figure
4 (iii).) The top and bottom edges of \Delta 1 contain p u and p ' , respectively, the
left edge contains p ' and q ' , and the right edge is determined by the pair of points
p u and p b . Thus, the trapezoid \Delta 1 is in canonical position. The three remaining
trapezoids are constructed similarly.
In the above construction, if any of the four trapezoids \Delta i does not cover any point
of P \Delta , then we can discard it. Thus, an arbitrary trapezoid of the partition \Delta can be
transformed into at most four canonical trapezoids. This completes the proof of the lemma.4 Greedy Algorithms
At this point, we can obtain a weak approximation result using the canonical trapezoidal
partition. Roughly speaking, we can use the greedy set covering heuristic [6, 14], ignoring the
disjointness constraint, and then refine the heuristic output to produce disjoint trapezoids.
Unfortunately, the last step can increase the complexity of the solution quite significantly.
Theorem 4.1 Given a set P of n points in the plane and a boolean constraint function C
satisfying (3.1) that can be evaluated in polynomial time, we can compute an O((K_o log K_o)^2)-size trapezoidal partition of P respecting C in polynomial time, where K_o is the size of an
optimal trapezoidal partition.
Proof: Consider the set \Xi of all valid, canonical trapezoids in the plane-the set \Xi has
O(n^6) trapezoids. We form the geometric set-system X = (P, {P ∩ Δ : Δ ∈ Ξ}); X can be computed by testing each Δ ∈ Ξ whether it is valid. We compute a set cover of X using the greedy algorithm [11, 14] in polynomial time. The greedy algorithm returns a set R consisting of O(K_o log K_o) trapezoids that are not necessarily disjoint. In order to produce a
disjoint cover, we first compute the arrangement A(R) of the plane induced by R. Then,
we decompose each face of A(R) into trapezoids by drawing a horizontal segment through
each vertex until the segment hits an edge of the arrangement. The resulting partition is
a trapezoidal partition of P . The number of trapezoids in the partition is O((K_o log K_o)^2), since the arrangement A(R) has this size. The total running time of the algorithm is
polynomial. 2
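For illustration, the greedy covering step of the above proof can be sketched as follows; trapezoids are abstracted here as the sets of points they cover, and the arrangement-based refinement into disjoint trapezoids is omitted (both simplifications are ours):

def greedy_cover(points, trapezoid_point_sets):
    # Repeatedly pick the valid canonical trapezoid covering the most
    # yet-uncovered points; trapezoid_point_sets is a list of frozensets.
    uncovered = set(points)
    cover = []
    while uncovered:
        best = max(trapezoid_point_sets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("points cannot be covered by the given trapezoids")
        cover.append(best)
        uncovered -= best
    return cover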
Remark: (i) The canonical form of trapezoids is used only to construct a finite family of
trapezoids to search for an approximate solution. A direct application of the definition in
the previous subsection gives a family of O(n 6 ) canonical trapezoids. By using a slightly
different canonical form, we can reduce the size of canonical triangles to O(n 4 ). In another
paper [1], we present a near-quadratic time algorithm for finding such an approximation.
(ii) One can show that the number of trapezoids produced by the above algorithm is Ω(K_o^2) in the worst case.
5 A Recursively Separable Partition
We now develop our main approximation algorithm. The algorithm is based on dynamic
programming, and it depends on two key ideas-a recursively separable partition and a
compliant partition. These partitions are further specializations of the canonical trapezoidal
partition introduced in the previous section, and they are central to our algorithm's
performance.
A trapezoidal partition Δ is called recursively separable if the following holds:
• Δ consists of a single trapezoid, or
• there exists a line ℓ not intersecting the interior of any trapezoid in Δ such that (i) Δ_1 = {τ ∈ Δ : τ ⊂ ℓ^+} and Δ_2 = {τ ∈ Δ : τ ⊂ ℓ^-} are both nonempty, where ℓ^+ and ℓ^- are the two half-planes defined by ℓ, and (ii) each of Δ_1 and Δ_2 is recursively separable.
The following key lemma gives an upper bound on the penalty incurred by our approximation
algorithm if only recursively separable trapezoidal partitions are used.
Lemma 5.1 Let P be a finite set of points in the plane and let \Delta be a trapezoidal partition
of P with k trapezoids. There exists a recursively separable partition Δ* of P with O(k log k) trapezoids. In addition, each separating line is either a horizontal line passing through a
vertex of \Delta or a line supporting an edge of a trapezoid in \Delta.
Proof: We present a recursive algorithm for computing \Delta ? . Our algorithm is similar to
the binary space partition algorithm proposed by Paterson and Yao [17]. We assume that
the boundaries of the trapezoids in \Delta are also pairwise disjoint-this assumption is only to
simplify our proof.
At each recursive step of the algorithm, the subproblem under consideration lies in a
trapezoid T . (This containing trapezoid may degenerate to a triangle, or it may even be
unbounded.) The top and bottom edges of T (if they exist) pass through the vertices of \Delta,
while the left and right edges (if they exist) are portions of edges of \Delta. Initially T is a set to
an appropriately large trapezoid containing the family \Delta. Let \Delta T denote the trapezoidal
partition of P " T obtained by intersecting \Delta with T , and let V T be the set of vertices of
lying in the interior of T . An edge of \Delta T cannot intersect the left or right edge of T ,
because they are portions of the edge of T . Therefore, each edge of \Delta T either lies in the
interior of T , or intersects only the top and bottom edges of T .
If Δ_T consists of a single trapezoid, we stop. Otherwise, we proceed as follows. If there is a trapezoid in Δ_T that completely crosses T (that is, its vertices lie on the top and bottom edges of T), then we do the following. If the crossing trapezoid is the leftmost trapezoid of Δ_T, then we partition T into two trapezoids T_1 and T_2 by drawing a line through its right edge, so that T_1 contains the crossing trapezoid and T_2 contains the remaining trapezoids of Δ_T. If the crossing trapezoid is not the leftmost trapezoid of Δ_T, then we partition T into T_1 and T_2 by drawing a line through its left edge.
previous condition is not met, then we choose a point with a median y-coordinate.
We partition T into trapezoids T by drawing a horizontal line ' v passing through v.
Each trapezoid partitioned into two trapezoids by adding the
segment . At the end of this dividing step, let \Delta 1 and \Delta 2 be the set of trapezoids of
that lie in T 1 and T 2 , respectively. We recursively refine \Delta 1 and \Delta 2 into separable partitions
respectively, and return \Delta ?
2 . This completes the description of the
algorithm.
We now prove that \Delta satisfies the properties claimed in the lemma. It is clear that \Delta
is recursively separable and that each separating line of \Delta either supports an edge of \Delta or
it is horizontal. To bound the size of \Delta , we charge each trapezoid of \Delta to its bottom-left
vertex. Each such vertex is either a bottom-left vertex of a trapezoid of \Delta, or it is an
intersection point of a left edge of a trapezoid of \Delta with the extension of a horizontal edge
of another trapezoid of \Delta. There are only k vertices of the first type, so it suffices to bound
the number of vertices of the second type. Since the algorithm extends a horizontal edge
of a trapezoid of \Delta T only if every trapezoid of \Delta T has at least one vertex in the interior
of T , and we always extend a horizontal edge with a median y-coordinate, it is easily seen
that the number of vertices of the second type is O(k log k). This completes the proof. 2
Remark: Given a family \Delta of k disjoint orthogonal rectangles partitioning P , we can find
a set of O(k) recursively separable rectangles that forms a rectangular partition of P -this
uses the orthogonal binary space partition algorithm of Paterson and Yao [18].
6 An Approximation Algorithm
Lemma 5.1 applies to any trapezoidal partition of P . In particular, if we start with a
canonical trapezoidal partition \Delta, then the output partition \Delta ? is both canonical and
recursively separable, and each separating line in \Delta ? belongs to the family of canonical
lines L(P ). For the lack of a better term, we call a trapezoidal partition of P that satisfies
these conditions a compliant partition. Lemmas 3.1 and 5.1 together imply the following
useful theorem.
Theorem 6.1 Let P be a set of n points in the plane and let C be a boolean constraint
function satisfying the condition (3.1). If there is a trapezoidal partition of P respecting C
with k trapezoids, then there is a compliant partition of P also respecting C with O(k log k) trapezoids.
In the remainder of this section, we give a polynomial-time algorithm, using dynamic
programming, for constructing an optimal compliant partition. By Theorem 6.1, this partition
has O(K_o log K_o) trapezoids. Recall that the set L = L(P) consists of all canonical
lines determined by P .
Consider a subset of points R ' P , and a canonical trapezoid \Delta containing R. Let
σ(R, Δ) denote the size of an optimal compliant partition of R in Δ; the size of a partition is the number of trapezoids in the partition. Theorem 6.1 gives the following recursive definition of σ:

σ(R, Δ) = 1 if C(Δ) = 1, and otherwise
σ(R, Δ) = min_ℓ [ σ(R ∩ ℓ^+, Δ ∩ ℓ^+) + σ(R ∩ ℓ^-, Δ ∩ ℓ^-) ],

where the minimum is over all those lines ℓ ∈ L that are either horizontal and intersect Δ, or intersect both the top and bottom edges of Δ; ℓ^+ and ℓ^- denote the positive and negative half-planes induced by ℓ. The goal of our dynamic programming algorithm is to compute σ(P, T), for some canonical trapezoid T enclosing
all the points P . We now describe how the dynamic programming systematically computes
the required partial answers.
Every canonical trapezoid Δ in the plane can be described (uniquely) by a 6-tuple (i, j, k_1, k_2, l_1, l_2) consisting of integers between 1 and n. The first two numbers fix two points p_i and p_j through which the lines containing the top and bottom edges of Δ pass; the second pair fixes the points p_{k_1} and p_{k_2} through which the line containing the left edge of Δ passes; and the third pair fixes the points p_{l_1} and p_{l_2} through which the line containing the right edge of Δ passes. (In case of ties, we may choose the points closest to the corners of Δ.) We use the notation Δ(i, j, k_1, k_2, l_1, l_2) for the trapezoid associated with the 6-tuple (i, j, k_1, k_2, l_1, l_2). If the 6-tuple does not produce a trapezoid, then Δ(i, j, k_1, k_2, l_1, l_2) is undefined. We use the abbreviated notation σ(i, j, k_1, k_2, l_1, l_2) to denote the size of an optimal compliant partition for the points contained in Δ(i, j, k_1, k_2, l_1, l_2); this quantity is undefined if the trapezoid Δ(i, j, k_1, k_2, l_1, l_2) is undefined. If the points in P are sorted in increasing order of their y-coordinates, then Δ(i, j, k_1, k_2, l_1, l_2) is defined only for i ≤ j. Our dynamic programming algorithm computes the σ values as follows.
If C(Δ(i, j, k_1, k_2, l_1, l_2)) = 1, then σ(i, j, k_1, k_2, l_1, l_2) = 1. Otherwise, σ(i, j, k_1, k_2, l_1, l_2) is the minimum, over all splits of Δ(i, j, k_1, k_2, l_1, l_2) by a horizontal canonical line or by a line through a pair of points of P, of the sum of the σ values of the two resulting canonical sub-trapezoids, where the last minimum varies over all pairs of points such that the line passing through them intersects both the top and the bottom edge of Δ(i, j, k_1, k_2, l_1, l_2).
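The recurrence can also be written as a memoized recursion; the following Python sketch is schematic only: constraint_holds, split_lines, and clip are hypothetical primitives standing for C(Δ), the candidate canonical splitting lines of Δ, and the canonical sub-trapezoid of Δ on one side of a line. We assume split_lines returns only lines that properly split Δ (so the recursion terminates); in the paper the σ values are instead computed for all O(n^6) canonical trapezoids.

def optimal_compliant_size(root, constraint_holds, split_lines, clip):
    # root and every sub-trapezoid are assumed hashable (e.g. the 6-tuple).
    memo = {}

    def sigma(D):
        if D in memo:
            return memo[D]
        if constraint_holds(D):
            memo[D] = 1
            return 1
        best = float('inf')
        for line in split_lines(D):
            best = min(best, sigma(clip(D, line, '+')) + sigma(clip(D, line, '-')))
        memo[D] = best
        return best

    return sigma(root)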
If Ξ denotes the set of all canonical trapezoids, then |Ξ| = O(n^6); each 6-tuple is
associated with at most one unique trapezoid. If Q(n) denotes the time to decide whether C(Δ) = 1 for an arbitrary trapezoid Δ, then we can initially compute, in O(n^6 Q(n)) time, all trapezoids for which C(Δ) = 1; for these trapezoids, we initially set σ = 1. For all the remaining trapezoids in Ξ, we use the recursive formula presented above to compute their σ. Computing σ for a trapezoid requires computing the minimum of O(n^2) quantities. Thus the total running time of the algorithm is O(n^8). The following theorem
states the main result of our paper.
Theorem 6.2 Given a set P of n points in the plane and a boolean constraint C satisfying condition (3.1), we can compute a geometric partition of P with respect to C using O(K_o log K_o) trapezoids, where K_o is the number of trapezoids in an optimal partition. Our algorithm runs in worst-case time O(n^8 + n^6 Q(n)), where Q(n) is the time to decide whether C(Δ) = 1 for a canonical trapezoid Δ containing any subset R ⊆ P.
Remark: By computing oe's in a more clever order and exploiting certain geometric properties
of a geometric partition, the time complexity of the above algorithm can be improved
by one order of magnitude. This minor improvement, however, doesn't seem worth the
effort needed to explain it.
Theorem 6.2 immediately implies polynomial-time approximation algorithms for the
surface approximation and the planar bichromatic partition problem. In the case of the
surface approximation problem, deciding C(Δ) for a trapezoid Δ requires checking whether there is a plane π in R^3 such that the vertical distance between π and the points covered by Δ is at most ε. This problem can be solved in linear time using the fixed-dimensional linear programming algorithm of Megiddo [15]. (A more practical algorithm, running in time O(n log n), is the following. Let A ⊆ P denote the set of points covered by the trapezoid Δ. For a point p ∈ A, let p^+ and p^-, respectively, denote the point p translated vertically up and down by ε. Let A^+ = {p^+ : p ∈ A} and A^- = {p^- : p ∈ A}. Then C(Δ) = 1 if and only if the sets A^+ and A^- can be separated by a plane. The two sets are separable if their convex hulls are disjoint. This can be tested in O(n log n) time - for instance, see the book by Preparata and Shamos [19].)
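The linear-programming formulation can be written down directly; the sketch below uses a generic LP solver (scipy) rather than Megiddo's algorithm, so it only illustrates the feasibility test, not the linear-time bound:

import numpy as np
from scipy.optimize import linprog

# Decide C(Delta): is there a plane z = a*x + b*y + c whose vertical distance
# to every sample point covered by the trapezoid is at most eps?
def plane_fits(points, eps):
    pts = np.asarray(points, dtype=float)           # rows (x, y, z)
    n = len(pts)
    #  a*x + b*y + c - z <= eps   and   z - (a*x + b*y + c) <= eps
    A = np.vstack([np.column_stack([pts[:, 0],  pts[:, 1],  np.ones(n)]),
                   np.column_stack([-pts[:, 0], -pts[:, 1], -np.ones(n)])])
    b = np.concatenate([eps + pts[:, 2], eps - pts[:, 2]])
    res = linprog(c=[0.0, 0.0, 0.0], A_ub=A, b_ub=b,
                  bounds=[(None, None)] * 3)        # pure feasibility problem
    return res.status == 0                          # feasible => C(Delta) = 1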
Theorem 6.3 Given a set S of n points in R^3 and a parameter ε > 0, in polynomial time we can compute an ε-approximate polyhedral terrain Σ with O(K_o log K_o) vertices, where K_o is the number of vertices in an optimal terrain. Our algorithm runs in O(n^8) worst-case time.
In the planar bichromatic partition problem, deciding whether C(Δ) = 1 requires checking whether Δ contains any point from the red set R. This can clearly be done in O(n)
time. Actually, with a preprocessing requiring O(n 2 log O(1) n) time, this test can be made in
O(log n) time for any trapezoid \Delta. Our main purpose, however, is to show the polynomial
time for the approximation algorithm.
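A brute-force check of this constraint, with the trapezoid given by its corners in counter-clockwise order (a representation we assume only for illustration):

def contains(corners, p):
    # Standard half-plane test for a convex polygon given in CCW order.
    px, py = p
    for (x1, y1), (x2, y2) in zip(corners, corners[1:] + corners[:1]):
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False          # p lies strictly to the right of an edge
    return True

def trapezoid_is_valid(corners, red_points):
    # C(Delta) = 1 iff the trapezoid contains no red point.
    return not any(contains(corners, r) for r in red_points)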
Theorem 6.4 Given a set R of 'red' points and another set B of 'blue' points in the plane, we can find in polynomial time a set of O(K_o log K_o) disjoint triangles that cover B but do not contain any red point; K_o is the number of triangles in an optimal solution.
Remark: In view of the remark following Theorem 6.1, given a set R of 'red' points and
another set B of 'blue' points in the plane, we can find in polynomial time a set of O(K_o) disjoint orthogonal rectangles that cover B but do not contain any red point. In this case, the time complexity improves by a few orders of magnitude, because there are only O(n^4) canonical rectangles and each rectangle is subdivided into two rectangles by drawing
a horizontal or vertical line passing through one of the points. Omitting all the details, the
running time in this case is O(n 5 ).
7 Extensions
We can extend our algorithm to a slightly stronger form of surface approximation. In the
basic problem, the implicit function (surface) is represented by a set of sample points S.
What if the sample consists of two-dimensional compact, connected pieces? In this section,
we show an extension of our algorithm that deals with the case when the sample consists of
a set T of n horizontal triangles with pairwise disjoint xy-projection. (Since any polygon
can be decomposed into triangles, this case also handles polygons.) Our goal is to compute
a polyhedral terrain Σ, so that the vertical distance between any point in T and Σ is at most ε. We produce a terrain Σ with O(K_o log K_o) vertices, where K_o is the number of vertices in an optimal such surface.
Let T = {T_1, ..., T_n} be the input set of n horizontal triangles in R^3 with the property
that their vertical projections on the plane are pairwise disjoint. We will consistently
use the following notational convention: for an object s 2 R 3 , -
s denotes its orthogonal
projection on the plane z = 0, and for a subset g ' -
denotes the portion of s in R 3 such
that g. Abusing the notation slightly, we say that a set \Xi of trapezoids (or triangles)
in R^3 ε-approximates T within a region Q ⊆ R^2 if the vertical distance between T and Ξ in Q is at most ε and the vertical projections of trapezoids of Ξ are disjoint on z = 0.
Let S denote the set of vertices of the triangles in T , and let -
S be their orthogonal
projection on z = 0. We set P = -S as the set of points in our abstract problem. The constraint function is defined as follows. Given a trapezoid Δ in R^3, we have C(Δ) = 1 if and only if the vertical distance between Δ and any point in T directly above/below Δ is at most ε. (Thus, while the point set P includes only the vertices of T, the constraint
set takes into consideration the whole triangles.) The constraint C satisfies (3.1), and it can
be computed in polynomial time.
It is also clear that the size of an optimal trapezoidal partition of P with respect to
C is a lower bound on the size of a similar partition for T , the set of triangles. We first
apply Theorem 6.3 to obtain a family Δ of O(k log k) trapezoids that ε-approximates P with respect to C; clearly k ≤ K_o. The next step of our algorithm is to extend Δ to a
polyhedral terrain that "-approximates the triangles of T . Care must to be exercised in this
step if one wants to add only O(k log new trapezoids. In the second step, we work with
the projection of T and \Delta in the plane
Figure 5: (i) -T and -Δ, (ii) R_1 and G, (iii) R_2 and the Q_i's.
\Deltag and -
g. Let R be the set of connected components
of the closure of
\Delta. That is, R is the portion of
T lying in the
common exterior of -
\Delta, as shown in Figure 5 (i). R is a collection of simple polygons, each
of which is contained in a triangle of -
T . Since the corners of the triangles of -
are covered
by -
\Delta, the vertices of all polygons in R lie on the boundary of -
\Delta, and each edge of R is
contained in an edge of -
\Delta or of -
T . Let R 1 ' R be the subset of polygons that touch at
least three edges of trapezoids in -
\Delta, and let R
For each polygon P i 2 R 1 , we compute a set of triangles that "-approximate T within
. For a vertex lying on the boundary of a trapezoid -
\Delta, let -
v denote the point
on \Delta whose xy-projection is v. Let -
T i be the triangle containing P i . We triangulate P i ,
and, for each triangle 4abc in the triangulation, we pick 4-a - b-c. Since T i is parallel to the
xy-plane, it can be proved that the maximum vertical distance between 4-a - b-c and T i is ".
We repeat this step for all polygons in R 1 . The number of triangles generated in this step
is proportional to the number of vertices in the polygons of R 1 , which we bound in the
following lemma.
Lemma 7.1 The total number of vertices in the polygons of R 1 is O(k log k).
Proof: Each vertex of a polygon in R 1 is either (i) a vertex of a trapezoid in -
\Delta, or (ii) an
intersection point of an edge of -
\Delta with an edge of a triangle in T . There are only O(k log
vertices of type (i), so it remains to bound the number of vertices of type (ii).
We construct an undirected graph be the
set of edges in -
\Delta. 1 To avoid confusion, we will call the edges of \Gamma segments and those of E
arcs. For each segment fl i , we place a point i close to fl i , inside the trapezoid bounded by
. The set of resulting points forms the node set V . If there is an edge pq of a polygon in
R 1 such that we add the arc (i; j) to E; see Figure 5 (ii). It is easily seen
that G is a planar graph, and that log k). Fix a pair of segments
that (1; be the set of edges in R 1 , sorted either left to
right or top to bottom, as the case may be, that are incident to fl 1 and fl 2 . Let jE 12
Assume that for every 1 - lies on fl 1 and q i lies on fl 2 . The number of vertices
of type (ii) is obviously 2
We call two edges equivalent if the
interior of the convex hull of p i q i and p j q j does not intersect any trapezoid of -
\Delta. This
equivalence relation partitions E 12 into equivalent classes, each consisting of a contiguous
subsequence of E 12 . Let - ij denote the number of equivalence classes in E ij .
1: There are at most two edges in each equivalence class of E 12 .
Proof: Assume for the sake of a contradiction that three edges
belong to an equivalence class. Further, assume that the triangle -
T bounded by p i q i
lies below p i q i (see Figure 6 (i)). Since the quadrilateral Q defined by p i q i and q
does not contain any trapezoid of -
\Delta, p i+1 q i+1 is also an edge of -
T . But then Q
is a connected component of R and it touches only two edges of -
\Delta, thereby implying that
is not an edge of a polygon of R 1 , a contradiction. 2
Thus,
1 The segments of \Gamma may overlap, because the trapezoids of -
can touch each other. If a segment fl i of
\Gamma is an edge of two trapezoids, then no edge of R1 can be incident to fl i .
Figure 6: (i) Edges in an equivalence class of E_{12}, and (ii) edges in different equivalence classes.
Next, we bound the quantity
12 be two consecutive
equivalent classes of E 12 , let p i q i be the bottom edge of E j
12 , and let p i+1 q i+1 be the top
edge of E j+1
12 . The quadrilateral Q
contains at least one trapezoid -
of -
\Delta.
We call the triangle edges of Q, and p i the trapezoidal edges of Q.
The triangle edges of Q
are adjacent in E 12 . Let
be
the set of resulting quadrilaterals. Since
suffices to bound the
number of quadrilaterals in Q.
Consider the planar subdivision induced by Q and call it A(Q). For each bounded face
Q(f) be the smallest quadrilateral of Q that contains f . Since the boundaries
of quadrilaterals do not cross, Q(f) is well defined.
2: Every face f of A(Q) can be uniquely associated with a trapezoid
\Delta such
that
Proof: The claim is obviously true for the unbounded face, so assume that f is a bounded
does not contain any other quadrilateral of Q, then so by
definition f contains a trapezoid of -
\Delta. If Q does contain another quadrilateral
of Q, let Q j be the largest trapezoid that lies inside Q i -that is, @Q j is a part of @f . If
none of the trapezoidal edges of Q j lies in the interior of Q i , then the trapezoidal edges of
both lie on the same segments of \Gamma, say, . Consequently, the triangle edges of
both belong to E 12 , which is impossible because then the triangle edges of Q i are
not adjacent in E 12 . Hence, one of the trapezoidal edges of Q j lies in the interior of Q i . Let
\Delta be the trapezoid bounded by this edge. Since the triangle edges of Q j lie outside -
the interior of -
\Delta does not intersect any edge of R 1 , -
lies in f . This completes the proof
of Claim 2. 2
By Claim 2, the number of faces in A(Q) is at most j -
log k). This completes
the proof of the lemma. 2
Next, we partition the polygons of R 2 into equivalence classes in the same way as we
partitioned the edges of E 12 in the proof of Lemma 7.1. That is, we call two polygons
endpoints lie on the same pair of edges in -
\Delta, and (ii) the
interior of the convex hull of does not intersect any trapezoid of -
\Delta. Using the same
argument as in the proof of the above lemma, the following lemma can be established.
Lemma 7.2 The edges of R 2 can be partitioned into O(k log equivalence classes.
For each equivalence class be the convex hull of E i -observe that Q i
is a convex quadrilateral, as illustrated in Figure 5 (iii). Each quadrilateral Q i can be "-
approximated using at most three triangles in R 3 in the same way as we approximated each
polygon P i of R 1 . By Lemma 7.2, the total number of triangles created in this step is also
O(k log k).
Putting together these pieces, we obtain the following lemma.
Lemma 7.3 The family of trapezoids Δ can be supplemented with O(k log k) additional
trapezoids in R 3 so that all the triangles of T are "-approximated. The orthogonal projection
of all the trapezoids on the plane disjoint.
The area not covered by the projection of trapezoids found in the preceding lemma, of
course, can be approximated without any regards to the triangles of T . The final surface
has O(K trapezoids and it "-approximates the family of triangles T . We finish with
a statement of our main theorem in this section.
Theorem 7.4 Given a set of n horizontal triangles in R 3 , with pairwise disjoint projection
on the plane z = 0, and a parameter " ? 0, we can compute in polynomial time a "-
approximate polyhedral terrain of size O(K o log K is the size of an optimal
"-approximate terrain.
8 Closing Remarks
We presented an approximation technique for certain geometric covering problems with
a disjointness constraint. Our algorithm achieves a logarithmic performance guarantee
on the size of the cover, thus matching the bound achieved by the "greedy set cover"
heuristic for arbitrary sets and no disjointness constraint. Applications of our result include
polynomial time algorithms to approximate a monotone, polyhedral surface in 3-space, and
to approximate the disjoint cover by triangles of red-blue points. We also proved that these
problems are NP-Hard .
The surface approximation problem is an important problem in visualization and computer
graphics. The state of theoretical knowledge on this problem appears to be rather
slim. Except for the convex surfaces, no approximation algorithms with good performance
guarantees are known [6, 16]. For the approximation of convex polytopes, it turns out that
one does not need disjoint covering, and therefore the greedy set cover heuristic works.
We conclude by mentioning some open problems. An obvious open problem is to reduce
the running time of our algorithm for it to be of any practical value. Finding efficient
heuristics with good performance guarantees seems hard for most of the geometric partitioning
problems, and requires further work. A second problem of great practical interest
is to "-approximate general polyhedra-this problem arises in many real applications of
computer modeling. To the best of our knowledge, the latter problem remains open even
for the special case where one wants to find a minimum-vertex polyhedral surface that lies
between two monotone surfaces. The extension of our algorithm presented in Section 7 does
not work because we do not know how to handle the last fill-in stage.
--R
Fast greedy algorithms for geometric covering and other problems
An algorithm for piecewise linear approximation of an implicitly defined two-dimensional surfaces
An algorithm for piecewise linear approximation of an implicitly defined manifold
Decision trees for geometric models
Polygonization of implicit surfaces
Algorithms for polytope covering and approximation
Simplification of objects rendered by polygonal approximations
Several hardness results on problems of point separation and line stabbing
Optimal packing and covering in the plane are NP-complete
Piecewise linear approximations of digitized space curves with applications Scientific Visualization of Physical Phenomena pp.
Approximation algorithms for combinatorial problems
Computing 11
A high resolution 3D surface construction algo- rithm
On the ratio of optimal integral and fractional cover
Separation and approximation of polyhedral surfaces
Efficient binary space partitions for hidden-surface removal and solid modeling
Optimal binary space partitions for orthogonal objects
Computational Geometry: An Introduction
Decimation of triangle meshes
An adaptive subdivision method for surface fitting from sampled data
On some link distance problems in a simple polygon
--TR
--CTR
Gabriel Peyr , Stphane Mallat, Surface compression with geometric bandelets, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
David Cohen-Steiner , Pierre Alliez , Mathieu Desbrun, Variational shape approximation, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Pankaj K. Agarwal , Boris Aronov , Vladlen Koltun, Efficient algorithms for bichromatic separability, ACM Transactions on Algorithms (TALG), v.2 n.2, p.209-227, April 2006
Pankaj K. Agarwal , Boris Aronov , Vladlen Koltun, Efficient algorithms for bichromatic separability, Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, January 11-14, 2004, New Orleans, Louisiana
Michiel Smid , Rahul Ray , Ulrich Wendt , Katharina Lange, Computing large planar regions in terrains, with an application to fracture surfaces, Discrete Applied Mathematics, v.139 n.1-3, p.253-264,
Mark de Berg , Micha Streppel, Approximate range searching using binary space partitions, Computational Geometry: Theory and Applications, v.33 n.3, p.139-151, February 2006
Bernard Chazelle , C. Seshadhri, Online geometric reconstruction, Proceedings of the twenty-second annual symposium on Computational geometry, June 05-07, 2006, Sedona, Arizona, USA
Joseph S. B. Mitchell , Joseph O'Rourke, Computational geometry, ACM SIGACT News, v.32 n.3, p.63-72, 09/01/2001 | simplification;visualization;approximation algorithms;levels of detail;dynamic programming;machine learning;terrains |
284973 | Computational Complexity and Knowledge Complexity. | We study the computational complexity of languages which have interactive proofs of logarithmic knowledge complexity. We show that all such languages can be recognized in ${\cal BPP}^{\cal NP}$. Prior to this work, for languages with greater-than-zero knowledge complexity only trivial computational complexity bounds were known. In the course of our proof, we relate statistical knowledge complexity to perfect knowledge complexity; specifically, we show that, for the honest verifier, these hierarchies coincide up to a logarithmic additive term. | Introduction
The notion of knowledge-complexity was introduced in the seminal paper of Goldwasser
Micali and Rackoff [GMR-85, GMR-89]. Knowledge-complexity (KC) is intended to measure
the computational advantage gained by interaction. Satisfactory formulations of knowledge-
complexity, for the case that it is not zero, have recently appeared in [GP-91]. A natural
suggestion, made by Goldwasser, Micali and Rackoff, is to classify languages according to
the knowledge-complexity of their interactive-proofs [GMR-89]. We feel that it is worthwhile
to give this suggestion a fair try.
The lowest level of the knowledge-complexity hierarchy is the class of languages having
interactive proofs of knowledge-complexity zero, better known as zero-knowledge. Actually,
there are three hierarchies extending the three standard definitions of zero-knowledge; namely
An extended abstract of this paper appeared in the 26th ACM Symposium on Theory of Computing
(STOC 94), held in Montreal, Quebec, Canada, May 23-25, 1994.
y Department of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot,
Israel. E-mail: oded@wisdom.weizmann.ac.il. Supported by grant no. 92-00226 from the United States
Israel Binational Science Foundation, Jerusalem, Israel.
z Computer Science Division, University of California at Berkeley, and International Computer Science
Institute, Berkeley, CA 94720. E-mail: rafail@cs.Berkeley.EDU. Supported by an NSF Postdoctoral
Fellowship and ICSI.
x Computer Science Department, Technion - Israel Institute of Technology, Haifa 32000, Israel. E-mail:
erez@cs.technion.ac.il.
perfect, statistical and computational. Let us denote the corresponding hierarchies by PKC(\Delta),
SKC(\Delta), and CKC(\Delta). Assuming the existence of one-way functions, the third hierarchy
collapses, namely differently,
the zero level of computational knowledge-complexity extends to the maximum possible.
Anyhow, in the rest of this paper we will be only interested in the other two hierarchies.
Previous works have provided information only concerning the zero level of these hierar-
chies. Fortnow has pioneered the attempts to investigate the computational complexity of
(perfect/statistical) zero-knowledge [F-89], and was followed by Aiello and Hastad [AH-87].
Their results can be summarized by the following theorem that bounds the computational
complexity of languages having zero-knowledge proofs.
Theorem [F-89, AH-87]:
co-AM
Hence, languages having statistical zero-knowledge must lie in the second level of the
polynomial-time hierarchy. Needless to say that function k
and in particular for k j 0.
On the other hand, if we allow polynomial amount of knowledge to be revealed, then every
language in IP can be proven.
Theorem [LFKN-90, Sh-90]:
As indicated in [GP-91], the first equality is a property of an adequate definition (of knowledge
complexity) rather than a result.
In this paper we study the class of languages that have interactive-proofs with logarithmic
knowledge-complexity. In particular, we bound the computational complexity of such
languages, showing that they can be recognized by probabilistic polynomial-time machines
with access to an NP oracle.
Main Theorem:
We recall that BPP NP is contained in the third level of the polynomial-time hierarchy
(PH). It is believed that PH is a proper subset of PSPACE. Thus, assuming PH ae
PSPACE, our result yields the first proof that there exist languages in PSPACE which
cannot be proven by an interactive-proof that yields O(log n) bits of knowledge. In other
words, there exist languages which do have interactive proofs but only interactive proofs
with super-logarithmic knowledge-complexity.
Prior to our work, there was no solid indication 1 that would contradict the possibility
that all languages in PSPACE have interactive-proofs which yield only one bit of knowledge.
Alas, if one had been willing to assume that all languages in PSPACE have interactive proofs of logarithmically
many rounds, an assumption that we consider unreasonable, then the result in [BP-92] would
have yielded a proof that PSPACE is not contained in SKC(1), provided (again) that PH ae PSPACE .
The only attempt to bound the computational complexity of languages having interactive
proofs of low knowledge-complexity was done by Bellare and Petrank. Yet, their work
refers only to languages having interactive proofs that are both of few rounds and of low
knowledge complexity [BP-92]. Specifically, they showed that if a language L has a r(n)-round
interactive-proof of knowledge-complexity O( log n
then the language can be recognized in
BPP NP .
Our proof of the Main Theorem consists of two parts. In the first part, we show that the
procedure described by Bellare and Petrank [BP-92] suffices for recognizing languages having
interactive proofs of logarithmic perfect knowledge complexity. To this end, we use a more
careful analysis than the one used in [BP-92]. In the second part of our proof we transform
interactive proofs of statistical knowledge complexity k(n) into interactive proofs of perfect
knowledge complexity k(n)+log n. This transformation refers only to knowledge-complexity
with respect to the honest verifier, but this suffices since the first part of our proof applies
to the knowledge-complexity with respect to the honest verifier. Yet, the transformation is
interesting for its own sake, and a few words are in place.
The question of whether statistical zero-knowledge equals perfect zero-knowledge is one
of the better known open problems in this area. The question has been open also for the
case of zero-knowledge with respect to the honest verifier. We show that for every poly-time
computable function k : N7!N (and in particular for k j
This result may be considered an indication that these two hierarchies may collide.
Techniques Used
As stated above, the first part of our proof consists of presenting a more careful analysis of
an existing procedure, namely the procedure suggested by Bellare and Petrank in [BP-92].
Their procedure, in turn, is a culmination of two sequences of works discussed bellow.
The first sequence originates in Fortnow's definition of a simulator-based prover [F-89].
Fortnow [F-89], and consequently Aiello and Hastad [AH-87], used the simulator-based prover
in order to infer, by way of contradiction, bounds on the sizes of specific sets. A more
explicit usage of the simulator-based prover was introduced by Bellare, Micali and Ostrovsky
specifically, they have suggested to use a PSPACE-implementation of the
simulator-based prover, instead of using the original prover (of unbounded complexity) witnessing
the existence of a zero-knowledge interactive proof system. (Thus, they obtained
a bound on the complexity of provers required for zero-knowledge proof systems.) Ostrovsky
[Ost-91] suggested to use an implementation of the interaction between the verifier
and the simulation-based prover as a procedure for deciding the language. Furthermore,
assuming that one-way functions do not exist, he used "universal extrapolation" procedures
of [ILu-90, ILe-90] to approximate the behavior of the simulator-based prover. (Thus, assuming
that one-way function do not exists, he presented an efficient procedure that decides
languages in SKC(0) and inferred that one-way functions are essential to the non-triviality
of statistical zero-knowledge). Bellare and Petrank distilled the decision procedure from the
context of one-way functions, showing that the simulator-based prover can be implemented
using a perfect universal extrapolator (also known as a "uniform generation" procedure)
[BP-92]. The error in the implementation is directly related to the deviation of the uniform
generation procedure.
The second sequence of works deals with the two related problems of approximating the
size of sets and uniformly generating elements in them. These problems were related by
Jerrum et. al. [JVV-86]. Procedures for approximating the size of sets were invented by
Sipser [Si-83] and Stockmeyer [St-83], and further improved in [GS-89, AH-87], all using the
"hashing paradigm". The same hashing technique, is the basis of the "universal extrapo-
lation" procedures of [ILu-90, ILe-90]. However, the output of these procedures deviates
from the objective (i.e., uniform distribution on the target set) by a non-negligible amount
(i.e., 1=poly(T ) when running for time T ). On the other hand, Jerrum et. al. have also
pointed out that (perfect) uniform generation can be done by a BPP \Sigma P
Bellare and Petrank combined the hashing-based approximation methods with the ideas of
[JVV-86] to obtain a BPP NP -procedure for uniform generation with exponentially vanishing
error probability [BP-92]. Actually, if the procedure is allowed to halt with no output
with constant (or exponentially vanishing) probability, then its output distribution is exactly
uniform on the target set.
Motivation for studying KC
In addition to the self-evident fundamental appeal of knowledge complexity, we wish to point
out some practical motivation for considering knowledge-complexity greater than zero. In
particular, cryptographic protocols that release a small (i.e., logarithmic) amount of knowledge
may be of practical value, especially if they are only applied once or if one can obtain
sub-additive bounds on the knowledge complexity of their repeated executions. Note that
typically, a (single application of a) sub-protocol leaking logarithmically many bits (of knowl-
edge) does not compromise the security of the entire protocol. The reason being that these
(logarithmically many) bits can be guessed with non-negligible probability, which in turn
means that any attack due to the "leaked bits" can be simulated with non-negligible probability
without them.
But why use low knowledge-complexity protocols when one can use zero-knowledge ones
(see, [GMW-86, GMW-87])? The reason is that the non-zero-knowledge protocols may be
more efficient and/or may require weaker computational assumptions (see, for example,
[OVY-91]).
Remarks
A remark concerning two definitions. Throughout the paper, SKC(k(\Delta)) and PKC(k(\Delta))
denote the classes of knowledge-complexity with respect to the honest verifier. Note that the
Main Theorem is only strengthen by this, whereas the transformation (mentioned above) is
indeed weaker. Furthermore, by an interactive proof we mean one in which the error probability
is negligible (i.e., smaller than any polynomial fraction). A few words of justification
appear in Section 2.
A remark concerning Fortnow's paper [F-89]. In course of this research, we found out
that the proof that SKC(0) ' co-AM as it appears in [F-89] is not correct. In particular,
there is a flaw in the AM-protocol presented in [F-89] for the complement language (see
Appendix
A). However, the paper of Aiello and Hastad provides all the necessary machinery
for proving Fortnow's result as well [AH-87, H-94]. Needless to say that the basic approach
presented by Fortnow (i.e., looking at the "simulator-based prover") is valid and has inspired
all subsequent works (e.g., [AH-87, BMO-90, Ost-91, BP-92, OW-93]) as well as the current
one.
Preliminaries
Let us state some of the definitions and conventions we use in the paper. Throughout this
paper we use n to denote the length of the input x. A function f called
negligible if for every polynomial p and all sufficiently large n's
p(n) .
2.1 Interactive proofs
Let us recall the concept of interactive proofs, presented by [GMR-89]. For formal definitions
and motivating discussions the reader is referred to [GMR-89]. A protocol between a
(computationally unbounded) prover P and a (probabilistic polynomial-time) verifier V constitutes
an interactive proof for a language L if there exists a negligible function ffl
such that
1. Completeness: If x 2 L then
2. Soundness: If x 62 L then for any prover P
Remark: Usually, the definition of interactive proofs is robust in the sense that setting the
error probability to be bounded away from 1does not change their expressive power, since
the error probability can be reduced by repetitions. However, this standard procedure is not
applicable when knowledge-complexity is measured, since (even sequential) repetitions may
increase the knowledge-complexity. The question is, thus, what is the right definition. The
definition used above is quite standard and natural; it is certainly less arbitrary then setting
the error to be some favorite constant (e.g., 1) or function (e.g., 2 \Gamman ). Yet, our techniques
yield non-trivial results also in case one defines interactive proofs with non-negligible error
probability (e.g., constant error probability). For example, languages having interactive
proofs with error probability 1=4 and perfect knowledge complexity 1 are also in BPP NP .
For more details see Appendix B. Also note that we have allowed two-sided error probability;
this strengthens our main result but weakens the statistical to perfect transformation 2 .
Suppose you had a transformation for the one-sided case. Then, given a two-sided interactive proof
of some statistical knowledge complexity you could have transformed it to a one-sided error proof of the
same knowledge complexity (cf., [GMS-87]). Applying the transformation for the one-sided case would have
yielded an even better result.
2.2 Knowledge Complexity
Throughout the rest of the paper, we refer to knowledge-complexity with respect to the honest
verifier; namely, the ability to simulate the (honest) verifier's view of its interaction with the
prover. (In the stronger definition, one considers the ability to simulate the point of view of
any efficient verifier while interacting with the prover.)
We let denote the random variable that represents V 's view of the interaction
with P on common input x. The view contains the verifier's random tape as well as the
sequence of messages exchanged between the parties.
We begin by briefly recalling the definitions of perfect and statistical zero-knowledge. A
perfect zero-knowledge (resp., statistical zero-knowledge) over a language L
if there is a probabilistic polynomial time simulator M such that for every x 2 L the random
variable M(x) is distributed identically to the statistical difference between
M(x) and (P; V )(x) is a negligible function in jxj).
Next, we present the definitions of perfect (resp., statistical) knowledge-complexity which
we use in the sequel. These definitions extend the definition of perfect (resp., statistical) zero-
knowledge, in the sense that knowledge-complexity zero is exactly zero-knowledge. Actually,
there are two alternative formulations of knowledge-complexity, called the oracle version and
the fraction version. These formulations coincide at the zero level and differ by at most an
additive constant otherwise [GP-91]. For further intuition and motivation see [GP-91]. It
will be convenient to use both definitions in this paper 3 .
By the oracle formulation, the knowledge-complexity of a protocol is the number
of oracle (bit) queries that are needed to simulate the protocol efficiently.
Definition 2.1 (knowledge complexity - oracle version): Let k: N ! N. We say that an
interactive proof language L has perfect (resp., statistical) knowledge complexity
k(n) in the oracle sense if there exists a probabilistic polynomial time oracle machine M and
an oracle A such that:
1. On input x 2 L, machine M queries the oracle A for at most k(jxj) bits.
2. For each x 2 L, machine M A produces an output with probability at least 1, and given
that M A halts with an output, M A (x) is identically distributed (resp., statistically close)
to
In the fraction formulation, the simulator is not given any explicit help. Instead, one
measures the density of the largest subspace of simulator's executions (i.e., coins) which is
identical (resp., close) to the
Definition 2.2 (knowledge complexity - fraction version): Let ae: N ! (0; 1]. We say that an
interactive proof language L has perfect (resp., statistical) knowledge-complexity
log 2
(1=ae(n)) in the fraction sense if there exists a probabilistic polynomial-time machine M
with the following "good subspace" property. For any x 2 L there is a subset of M's possible
random tapes S x , such that:
3 The analysis of the [BP-92] procedure is easier when using the fraction version, whereas the transformation
from statistical to perfect is easier when using the oracle version.
1. The set S x contains at least a ae(jxj) fraction of the set of all possible coin tosses of M(x).
2. Conditioned on the event that M(x)'s coins fall in S x , the random variable M(x) is
identically distributed (resp., statistically close) to )(x). Namely, for the perfect
case this means that for every -
c
where M(x;!) denotes the output of the simulator M on input x and coin tosses sequence
!.
As mentioned above, these two measures are almost equal.
Theorem [GP-91]: The fraction measure and the oracle measure are equal up to an additive
constant.
Since none of our results is sensitive to a difference of an additive constant in the measure, we
ignore this difference in the subsequent definition as well as in the statement of our results.
Definition 2.3 (knowledge complexity classes):
languages having interactive proofs of perfect knowledge complexity k(\Delta).
languages having interactive proofs of statistical knowledge complexity k(\Delta).
2.3 The simulation based prover
An important ingredient in our proof is the notion of a simulation based prover, introduced
by Fortnow [F-89]. Consider a simulator M that outputs conversations of an interaction
between a prover P and a verifier V . We define a new prover P , called the simulation based
prover, which selects its messages according to the conditional probabilities induced by the
simulation. Namely, on a partial history h of a conversation, P outputs a message ff with
probability
denotes the distribution induced by M on t-long prefixes of conversations. (Here,
the length of a prefix means the number of messages in it.)
It is important to note that the behavior of P is not necessarily close to the behavior
of the original prover P . Specifically, if the knowledge complexity is greater than 0 and
we consider the simulator guaranteed by the fraction definition, then P and P might have
quite a different behavior. Our main objective will be to show that even in this case P still
behaves in a manner from which we can benefit.
3 The Perfect Case
In this section we prove that the Main Theorem holds for the special case of perfect knowledge
complexity. Combining this result with the transformation (Theorem 2) of the subsequent
section, we get the Main Theorem.
Theorem 1 PKC(O(log n)) ' BPP NP
Our proof follows the procedure suggested in [BP-92], which in turn follows the approach
of [F-89, BMO-90, Ost-91] while introducing a new uniform generation procedure which
builds on ideas of [Si-83, St-83, GS-89, JVV-86] (see introduction).
Suppose that is an interactive proof of perfect knowledge complexity
O(log n) for the languages L, and let M be the simulator guaranteed by the fraction formulation
(i.e., Definition 2.2). We consider the conversations of the original verifier V with
the simulation-based-prover P (see definition in Section 2.3). We are going to show that
the probability that the interaction (P accepting is negligible if x 62 L and greater
than a polynomial fraction if x 2 L. Our proof differs from [BP-92] in the analysis of the
case x 2 L (and thus we get a stronger result although we use the same procedure). This
separation between the cases x 62 L and x 2 L can be amplified by sequential repetitions of
the protocol (P remains to observe that we can sample the (P
in probabilistic polynomial-time having access to an NP oracle. This observation originates
from [BP-92] and is justified as follows. Clearly, V 's part of the interaction can be produced
in polynomial-time. Also, by the uniform generation procedure of [BP-92] we can implement
by a probabilistic polynomial time machine that has access to an NP oracle. Actually,
the implementation may fail with negligible probability, but this does not matter. Thus, it
remains only to prove the following lemma.
Lemma 3.1
1. If x 2 L then the probability that (P outputs an accepting conversation is at least2
2. If x 62 L then the probability that (P outputs an accepting conversation is negligible.
Remark: In [BP-92], a weaker lemma is proven. Specifically, they show that the probability
that (P output an accepting conversation (on x 2 L) is related to 2 \Gammak \Deltat , where t is the
number of rounds in the protocol. Note that in our proof t could be an arbitrary polynomial
number of rounds.
proof: The second part of the lemma follows from the soundness property as before. We
thus concentrate on the first part. We fix an arbitrary x 2 L for the rest of the proof and
allow ourselves not to mention it in the sequel discussion and notation. Let
q be the number of coin tosses made by M . We denote
q the set of all possible
coin tosses, and by S the "good subspace" of M (i.e., S has density 2 \Gammak
in\Omega and for ! chosen
uniformly in S the simulator outputs exactly the distribution of the interaction
Consider the conversations that are output by the simulator on ! 2 S. The probability
to get such a conversation when the simulator is run on ! uniformly selected in \Omega\Gamma is at
least 2 \Gammak . We claim that the probability to get these conversations in the interaction (P
is also at least 2 \Gammak . This is not obvious, since the distribution produced by (P
not be identical to the distribution produced by M on a uniformly selected ! 2 \Omega\Gamma Nor is
it necessarily identical to the distribution produced by M on a uniformly selected ! 2 S.
However, the prover's moves in (P are distributed as in the case that the simulator
selects ! uniformly in \Omega\Gamma whereas the verifier's moves (in (P are distributed as in the
case that the simulator selects ! uniformly in S. Thus, it should not be too surprising that
the above claim can be proven.
However, we need more than the above claim: It is not enough that the (P conversations
have an origin in S, they must be accepting as well. (Note that this is not obvious
since M simulates an interactive proof that may have two-sided error.) Again, the density
of the accepting conversations in the "good subspace" of M is high (i.e.,
need to show that this is the case also for the (P Actually, we will show
that the probability than an (P conversation is accepting and "has an origin" in S is at
least
Let us begin the formal argument with some notations. For each possible history of the
interaction, h, we define subsets of the random tapes of the simulator (i.e., subsets of \Omega\Gamma
as
h is the set of !
2\Omega which cause the simulator to output a conversation with
prefix h. S h is the subset of !'s
in\Omega h which are also in S. A h is the set of !'s in S h which
are also accepting.
Thus, letting M t (!) denote the t-message long prefix output by the simulator M on coins
!, we get
A h
Let C be a random variable representing the (P be an indicator so
that the conversation -
c is accepting and Our aim is to prove
that . Note that
-c
-c
The above expression is exactly the expectation value of jAc j
. Thus, we need to show that:
where the expectation is over the possible conversations - c as produced by the interaction
Once Equation (1) is proven, we are done. Denote the empty history by -. To
prove Equation (1) it suffices to prove that
since using jA - j
The proof of Equation (2) is by induction on the number of rounds. Namely, for each round
i, we show that the expected value of jA h j
over all possible histories h of i rounds (i.e.,
length i) is greater or equal to the expected value of this expression over all histories h 0 of
rounds. In order to show the induction step we consider two cases:
1. the current step is by the prover (i.e., P ); and
2. the current step is by the verifier (i.e., V ).
In both cases we show, for any history h,
where the expectation is over the possible current moves m, given history h, as produced by
the interaction (P
Technical Claim
The following technical claim is used for deriving the inequalities in both cases.
positive reals. Then,
Proof: The Cauchy-Schwartz Inequality asserts:
a i!
Setting a i
can do this since y i is positive) and b i
a i
, and rearranging the terms,
we get the desired inequality. 2
Prover Step - denoted ff
Given history h, the prover P sends ff as its next message with probability
. Thus,
ff
ff
The inequality is justified by using the Technical Claim and noting that
and
Verifier Step - denoted fi
By the perfectness of the simulation, when restricted to the good subspace S, we know that
given history h, the verifier V sends fi as its next message with probability jS hffifi j
. Thus,
The inequality is justified by using the Technical Claim and noting that
and
j\Omega h j.
Having proven Equation (3) for both cases, Equation (2) follows and so does the lemma. 2
4 The Transformation
In this section we show how to transform statistical knowledge complexity into perfect knowledge
complexity, incurring only a logarithmic additive term. This transformation combined
with Theorem 1 yields the Main Theorem.
Theorem 2 For every (poly-time computable) k : N 7! N,
We stress again that these knowledge complexity classes refer to the honest verifier and that
we don't know whether such a result holds for the analogous knowledge complexity classes
referring to arbitrary (poly-time) verifiers.
proof: Here we use the oracle formulation of knowledge complexity (see Definition 2.1). We
start with an overview of the proof. Suppose we are given a simulator M which produces
output that is statistically close to the real prover-verifier interaction. We change both the
interactive proof and its simulation so that they produce exactly the same distribution space.
We will take advantage of the fact that the prover in the interactive proof and the oracle that
"assists" the simulator are both infinitely powerful. Thus, the modification to the prover's
program and the augmentation to the oracle need not be efficiently computable. We stress
that the modification to the simulator itself will be efficiently computable. Also, we maintain
the original verifier (of the interactive proof), and thus the resulting interactive proof is still
sound. Furthermore, the resulting interaction will be statistically close to the original one
(on any x 2 L) and therefore the completeness property of the original interactive proof is
maintained (although the error probability here may increase by a negligible amount).
Preliminaries
be the guaranteed interactive proof. Without loss of gener-
ality, we may assume that all messages are of length 1. This message-length convention is
merely a matter of encoding.
Recall that Definition 2.1 only guarantees that the simulator produces output with probability
- 1. Yet, employing Proposition 3.8 in [GP-91], we get that there exists an oracle
machine M , that after asking k(n) log log n queries, always produces an output so that
the output is statistically close to the interaction of (P; V ). Let A denote the associated or-
acle, and let be the simulation-based prover and verifier 4 induced
by M 0 (i.e.,
In the rest of the presentation, we fix a generic input x 2 L and omit it from the notation.
notations: Let [A; B] i be a random variable representing the i-message (i-bit) long prefix of
the interaction between A and B (the common input x is implicit in the notation). We denote
by A(h) the random variable representing the message sent by A after interaction-history
h. Thus, if the i th message is sent by A, we can write [A; B]
Y we denote the fact that the random variables X and Y are statistically close.
Using these notations we may write for every h 2 f0; 1g i and oe 2 f0; 1g:
and similarly,
4.1 The distribution induced by (P statistically close to the distributions induced
by both M
proof: By definition, the distributions produced by M are statistically
close. Thus, we have
s
We prove that [P statistically close to [P by induction on the length of the
interaction. Assuming that [P
s
we wish to prove it for i + 1. We distinguish
two cases. In case the i st move is by the prover, we get
s
(use the induction hypothesis for s
=). In case the i st move is by the verifier, we get
s
s
s
4 A simulator-based verifier is defined analogously to the simulator-based prover. It is a fictitious entity
which does not necessarily coincide with V .
where the first s
is justified by the induction hypothesis and the two others by Eq. (4).
We stress that since the induction hypothesis is used only once in the induction step, the
statistical distance is linear in the number of induction steps (rather than exponential). 2
Motivating discussion: Note that the statistical difference between the interaction (P
the simulation M due solely to the difference between the proper verifier (i.e.,
and the verifier induced by the simulator (i.e., V 0 ). This difference is due to V 0 putting
too much probability weight on certain moves and thus also too little weight on their sibling
messages (recall that a message in the interaction contains one bit). In what follows we deal
with two cases.
The first case is when this difference between the behavior of V 0 (induced by M 0 ) and
the behavior of the verifier V is "more than tiny". This case receives most of our attention.
We are going to use the oracle in order to move weight from a verifier message fi that gets
too much weight (after a history h) to its sibling message fi \Phi 1 that gets too little weight
(after the history h) in the simulation. Specifically, when the new simulator M 00 invokes M 0
and comes up with a conversation that has h ffi fi as a prefix, the simulator M 00 (with the
help of the oracle) will output (a different) conversation with the prefix h ffi (fi \Phi 1) instead
of outputting the original conversation. The simulator M 00 will do this with probability that
exactly compensates for the difference between V 0 and V . This leaves one problem. How
does the new simulator M 00 come up with a conversation that has a prefix h ffi (fi \Phi 1)? The
cost of letting the oracle supply the rest of the conversation (after the known prefix hffi(fi \Phi1))
is too high. We adopt a "brutal" solution in which we truncate all conversations that have
as a prefix. The truncation takes place both in the interaction (P
stops the conversation after fi \Phi 1 (with a special STOP message) and in the simulation
where the oracle recognizes cases in which the simulator M 00 should output a truncated
conversation. These changes make M 00 and V behave exactly the same on messages for
which the difference between V 0 and V is more than tiny. Naturally, V immediately rejects
when P 00 stops the interaction abruptly, so we have to make sure that this change does not
foil the ability of P 00 to convince V on an input x 2 L. It turns out that these truncations
happen with negligible probability since such truncation is needed only when the difference
between V and V 0 is more than tiny. Thus, P 00 convinces V on x 2 L almost with the same
probability as P 0 does.
The second possible case is that the difference between the behavior of V and V 0 is tiny.
In this case, looking at a full conversation -
c, we get that the tiny differences sum up to a
small difference between the probability of -
c in the distributions of M 0 and in the distribution
of We correct these differences by lowering the probabilities of all conversations in
the new simulator. The probability of each conversation is lowered so that its relative weight
(relatively to all other conversations) is equal to its relative weight in the interaction (P
Technically, this is done by M 00 not producing an output in certain cases that M 0 did produce
an output.
Technical remark: The oracle can be used to allow the simulator to toss bias coins when the
simulator does not "know" the bias. Suppose that the simulator needs to toss a coin so that
it comes-up head with probability N
and both N and m are integers. The
simulator supplies the oracle with a uniformly chosen r 2 f0; 1g m and the oracle answers
head if r is among the first N strings in f0; 1g m and tail otherwise. A similar procedure
is applicable for implementing a lottery with more than two a-priori known values. Using
this procedure, we can get extremely good approximations of probability spaces at a cost
related to an a-priori known upper bound on the size of the support (i.e., the oracle answer
is logarithmic in the size of the support).
O(t)
where t is the number of rounds in the interaction
ffl Let h be a partial history of the interaction and fi be a possible next move by the verifier.
We say that fi is weak with respect to h if
ffl A conversation - with respect to
it is i-good. (Note that a conversation can be i-weak only if the i th move is a verifier
move.)
ffl A conversation -
it is i-weak but j-good for every
A conversation - i-co-critical if the conversation obtained from - c, by
complementing (only) the i th bit, is i-critical. (Note that a conversation can be i-critical
only for a single i, yet it may be i-co-critical for many i's.)
ffl A conversation is weak if it is i-weak for some i, otherwise it is good.
conversations with negligible probability.
proof: Recall that [P
and that the same holds also for prefixes of the conver-
sations. Namely, for any 1 - i - t, [P
s
us define a prefix h 2 f0; 1g i of
a conversation to be bad if either
or
ffl'
The claim follows by combining two facts.
Fact 4.3 The probability that (P outputs a conversation with a bad prefix is negligible.
to be the set of bad prefixes of length i. By the statistical closeness of
we get that
for some negligible fraction fl. On the other hand, \Delta can be bounded from bellow by
which by definition of B i is at least
Thus,
and the fact follows. 2
Fact 4.4 If a conversation -
contains a bad prefix.
proof: Suppose that fi is a bad prefix then
we are done. Otherwise it holds that
Using the fact that fi is weak with respect to h, we get
which implies that h ffi fi is a bad prefix of - c. 2
Combining Facts 4.3 and 4.4, Claim 4.2 follows. 2
conversation. Then, the probability that - c is
output by M 0 is at least (1 \Gamma ffl) dt=2e \Delta Prob([P
is i-good for every
proof: To see that this is the case, we write the probabilities step by step conditioned on
the history so far. We note that the prover's steps happen with equal probabilities in both
sides of the inequality, and therefore can be reduced. Since the relevant verifier's steps are
not weak, we get the mentioned inequality. The actual proof proceeds by induction on k \Gamma l.
Clearly, the claim holds. We note that if k \Gamma l = 1 the claim also holds since
step k in the conversation is either a prover step or a k-good verifier step.
To show the induction step we use the induction hypothesis for 2. Namely,
include one prover message and one verifier message. Assume, without
loss of generality, that the prover step is k \Gamma 1. Since P 0 is the simulator based prover, we
get
Since step k of the verifier is good, we also have:
Combining Equations 5, 6, and 7, the induction step follows and we are done. 2
Dealing with weak conversations
We start by modifying the prover P 0 , resulting in a modified prover, denoted P 00 , that stops
once it gets a verifier message which is weak with respect to the current history; otherwise,
Namely,
Definition (modified prover - P 00
STOP if fi is weak with respect to
We assume that the verifier V stops and rejects immediately upon receiving an illegal message
from the prover (and in particular upon receiving this STOP message).
Next, we modify the simulator so that it outputs either good conversations or truncated
conversations which are originally i-critical. Jumping ahead, we stress that such truncated
i-critical conversations will be generated from both i-critical and i-co-critical conversations.
The modified simulator, denoted M 00 , proceeds as follows 5 . First, it invokes M 0 and obtains
a conversation - queries the augmented oracle on -
c. The oracle answers
probabilistically and its answers are of the form (i; oe), where i 2 f1; :::; tg and oe 2 f0; 1g.
The probability distribution will be specified below, at this point we only wish to remark
that the oracle only returns pairs (i; oe) for which one of the following three conditions holds
1. -
c is good, is good and is not i-co-critical for any i's then the oracle
always answers this way);
2. -
c is i-critical and
3. -
c is i-co-critical and oe = 1.
Finally, the new simulator (M 00 ) halts outputting which in case
not a prefix of -
c. Note that i may be smaller than t, in which case M 00 outputs a truncated
conversation which is always i-critical; otherwise, M 00 outputs a non-truncated conversation.
Note that this oracle message contains at most 1 log t bits where t is the length of the
interaction between P 0 and V . It remains to specify the oracle's answer distribution.
Let us start by considering two special cases. In the first case, the conversation generated
by M 0 is i-critical, for some i, but is not j-co-critical for any j ! i. In this case the oracle
always answers (i; 0) and consequently the simulator always outputs the i-bit long prefix.
However, this prefix is still being output with too low probability. This will be corrected by
the second case hereby described. In this ("second") case, the conversation - c generated by M 0
is good and i-co-critical for a single i. This means that the i-bit long prefix is given too much
probability weight whereas the prefix obtained by complimenting the i th bit gets too little
weight. To correct this, the oracle outputs (i; 1) with probability q and (t;
q will be specified. What happens is that the M 00 will output the "i-complimented prefix"
with higher probability than with which it has appeared in M 0 . The value of q is determined
as follows. Denote
Then, setting q so that
allows the simulator to output
the prefix with the right probability.
5 We stress that P 00 is not necessarily the simulator-based prover of M 00 .
In the general case, the conversation generated by M 0 may be i-co-critical for many
i's as well as j-critical for some (single) j. In case it is j-critical, it can be i-co-critical
only for us consider the sequence of indices, (i 1 ; :::; i l ), for which the generated
conversation is critical or co-critical (i.e., the conversation is i k -co-critical for all k ! l and
is either i l -critical or i l -co-critical). We consider two cases. In both cases the q k 's are set as
in the above example; namely, q
\Phi 1) and
\Phi 1).
1. The generated conversation, -
-co-critical for every k ! l and is i l -
critical. In this case, the distribution of the oracle answers is as follows. For every
l, the pair (i k ; 1) is returned with probability (
the pair
appears with probability
We stress that no other pair appears in this
distribution. 6
2. The generated conversation, -
-co-critical for every k - l. In this case,
the distribution of the oracle answers is as follows. For every k - l, the pair (i
is returned with probability (
the pair (t; 0) appears with
probability
appears in this distribution.
1. [P
2. Each conversation of (P )-conversation or a truncated (i.e.,
critical) one, is output by M 00 with probability that is at least a 3fraction of
the probability that it appears in [P
proof: The weak conversations are negligible in the output distribution of (P
4.2). The only difference between [P originates from a different behavior
of P 00 on weak conversations, specifically P 00 truncates them while P 0 does not. Yet,
the distribution on the good conversations remains unchanged. Therefore the distribution
of [P statistically close to the distribution of [P and we are done with Part (1).
For Part (2) let us start with an intuitive discussion which may help reading through the
formal proof that follows. First, we recall that the behavior of the simulation M 0 in prover
steps is identical to the behavior of the interaction (P steps. This follows
simply from the fact that P 0 is the simulation based prover of M 0 . We will show that this
property still holds for the new interaction (P and the new simulation M 00 . We will do
this by noting two different cases. In one case, the prover step is conducted by P 00 exactly
as it is done by P 0 and then M 00 behaves exactly as M 0 . The second possible case is that the
prover step contains the special message STOP. We shall note that this occurs with exactly
the same probability in the distribution (P in the distribution of M 00 .
Next, we consider the verifier steps. In the construction of M 00 and P 00 we considered the
behavior of M 0 and V on verifier steps and made changes when these differences were not
"tiny". We called a message fi weak with respect to a history h, if the simulator assigns the
message fi (after outputting h) a probability which is smaller by a factor of more than
from the probability that the verifier V outputs the message fi on history h. We did not
6 Indeed the reader can easily verify that these probabilities sum up to 1.
make changes in messages whose difference in weight (between the simulation M 0 and the
interaction were smaller than that. In the proof, we consider two cases. First, the
message fi is weak with respect to the history h. Clearly, the sibling message fi \Phi 1 is getting
too much weight in the simulation M 0 . So in the definition of M 00 we made adjustments to
move weight from the prefix h ffi (fi \Phi 1) to the prefix h ffi fi. We will show that this transfer
of weight exactly cancels the difference between the behavior of V and the behavior of M 0 .
Namely, the weak messages (and their siblings) are assigned exactly the same probability
both in M 00 and by V . Thus, we show that when a weak step is involved, the behavior of
and the behavior of M 00 are exactly equivalent. It remains to deal with messages for
which the difference between the conditional behavior of V and M 0 is "tiny" and was not
considered so far. In this case, M 00 behaves like M 0 . However, since the difference is so tiny,
we get that even if we accumulate the differences throughout the conversation, they sum up
to at most the multiplicative factor 3=4 stated in the claim.
Let us begin the formal proof by writing again the probability that (P 00
c as
the product of the conditional probabilities of the t steps. Namely,
Y
We do the same for the probability that M 00 outputs a conversation
c. We will show by induction that each step of any conversation is produced by M 00 with at
least times the probability of the same step in the (P )-interaction. Once we have
shown this, we are done. Clearly this claim holds for the null prefix. To prove the induction
step, we consider the two possibilities for the party making the st step.
st step is by the prover: Consider the conditional behavior of M 00 given the history so
far. We will show that this behavior is identical to the behavior of P 00 on the same partial
history.
A delicate point to note here is that we may talk about the behavior of M 00 on a prefix
only if this prefix appears with positive probability in the output distribution [M 00
However, by the induction hypothesis any prefix that is output by [P appears with
positive probability in [M 00
We partition the analysis into two cases.
1. First, we consider the case in which the last message of the verifier is weak with respect
to the history that precedes it. Namely, and fi is weak with respect to h 0 . In
this case, both in the interaction (P in the simulation M 00 , the next message of
the prover is set to STOP with probability 1. Namely,
2. The other possible case is that the last message of the verifier is not weak with respect
to its preceding history. In this case, the simulator M 00 behaves like M 0 and the prover
(Note that the changes in critical and co-critical steps apply only to
verifier steps.) Thus,
To summarize, the conditional behavior of M 00 in the prover steps and the conditional
behavior of P 00 are exactly equal.
st step is by the verifier: Again, we consider the conditional behavior of M 00 given the
history so far. Let us recall the second modification applied to M 0 when deriving M 00 . This
modification changes the conditional probability of the verifier steps in the distribution of M 0
in order to add weight to steps having low probability in the simulation. We note that this
modification is made only in critical or co-critical steps of the verifier. Consider a history h i
which might appear in the interaction (P possible response fi of V to h i . Again,
by the induction hypothesis, h i has a positive probability to be output by the simulation
M 00 and therefore we may consider the conditional behavior of M 00 on this history h i . There
are three cases to be considered, corresponding to whether either fi or fi \Phi 1 or none is weak
with respect to h i .
We start with the simplest case in which neither fi nor fi \Phi 1 is weak (w.r.t. h i ). In this
case, the behavior of M 00 is identical to the behavior of M 0 since the oracle never sends the
message (i in this case. However, by the fact that fi is not weak, we get that
and we are done with this simple case.
We now turn to the case in which fi is weak (w.r.t. h i ). In this case, given that M 00 has
produced the prefix h i , it produces h i ffifi whenever M 0 produces the prefix h i ffifi. Furthermore,
with conditional probability q (as defined above), M 00 produces the prefix h i ffi fi also in case
produces the prefix h i ffi (fi \Phi 1). As above, we define
is the simulation (M 0 ) based verifier, we may also write
Also, recall that q was defined as p\Gammap 0
using these notations:
Using Equation (8), we get
Finally, we turn to the case in which fi \Phi 1 is weak (w.r.t. h i ). Again, this means that fi is
co-critical in - c. Given that M 00 has produced the prefix h i , it produces h i ffi fi only when M 0
produces the prefix h i ffi fi, and furthermore, M 00 does so only with probability
q is again as defined above). We denote p and p 0 , with respect to the critical message fi \Phi 1.
Namely,
Thus, recalling that
This completes the proof of Claim 4.6. 2
Lowering the probability of some simulator outputs
After handling the differences between M 0 and (P which are not tiny, we make the last
modification, in which we deal with tiny differences. We do that by lowering the probability
that the simulator outputs a conversation, in case it outputs this conversation more frequently
than it appears in (P 00 ; V ). The modified simulator, denoted M 000 , runs M 00 to obtain a
conversation - c. (Note that M 00 always produces output.) Using the further-augmented
oracle, M 000 outputs -
c with probability
c
Note that p - c - 1 holds due to Part 2 of Claim 4.6.
1. M 000 produces output with probability 3;
2. The output distribution of M 000 (i.e., in case it has output) is identical to the distribution
proof: The probability that M 000 produces an output is exactly:
As for part (2), we note that the probability that a conversation -
c is output by M 000 is exactly4
the simulator halts with an output with probability exactly 3,
we get that given that M 000 halts with an output, it outputs -
c with probability exactly
and we are done. 2
An important point not explicitly addressed so far is whether all the modifications applied to
the simulator preserve its ability to be implemented by a probabilistic polynomial-time with
bounded access to an oracle. Clearly, this is the case with respect to M 00 (at the expense of
additional
regarding the last modification there
is a subtle points which needs to be addressed. Specifically, we need to verify that the
definition of M 000 is implementable; namely, that M 000 can (with help of an augmented oracle)
"sieve" conversations with exactly the desired probability. Note that the method presented
above (in the "technical remark") may yield exponentially small deviation from the desired
probability. This will get very close to a perfect simulation, but yet will not achieve it.
To this end, we modify the "sieving process" suggested in the technical remark to deal
with the specific case we have here. But first we modify P 00 so that it makes its random
choices (in case it has any) by flipping a polynomial number of unbiased coins. 7 This rounding
does change a bit the behavior of P 00 , but the deviation can be made so small that the above
assertions (specifically Claim 4.6) still hold.
Consider the specific sieving probability we need here.
c=d
, where
a
d
observation is that c is the number
of coin tosses which lead M 00 to output -
c (i.e., using the notation of the previous section,
j). Observing that b is the size of probability space for [P using the above
modification to P 00 , we rewrite p - c as 3ad
c
are some
non-negative integers.
We now note, that the oracle can allow the simulator to sieve conversations with probability
e
c
in the following way. M 000 sends to the oracle the
random tape ! that it has tossed for M 00 , and the oracle sieves only e out of the possible c
random tapes which lead M 00 to output - c. The general case of p -
c2 f is deal by writing
c
To implement this sieve, M 000 supplies
the oracle with a uniformly chosen f-bit long string (in addition to !). The oracle sieves out
q random-tapes (of M 00 ) as before, and uses the extra bits in order to decide on the sieve in
case ! equals a specific (different) random-tape.
Combining Claims 4.1, 4.6 (part 1), and 4.7, we conclude that (P 00 is an interactive proof
system of perfect knowledge complexity O(log n) for L. This completes the proof of
Theorem 2.
7 The implementation of P 00 was not discussed explicitly. It is possible that P 00 uses an infinite number
of coin tosses to select its next message (either 0 or 1). However, an infinite number of coin tosses is not
really needed since rounding the probabilities so that a polynomial number of coins suffices, causes only
exponentially small rounding errors.
Concluding Remarks
We consider our main result as a very first step towards a classification of languages according
to the knowledge complexity of their interactive proof systems. Indeed there is much to be
known. Below we first mention two questions which do not seem too ambitious. The first
is to try to provide evidence that NP-complete languages cannot be proven within low
(say logarithmic or even constant) knowledge complexity. A possible avenue for proving this
conjecture is to show that languages having logarithmic knowledge complexity are in co-AM,
rather than in BPP NP (recall that NP is unlikely to be in co-AM - see also [BHZ-87]). The
second suggestion is to try to provide indications that there are languages in PSPACE which
do not have interactive proofs of linear (rather than logarithmic) knowledge complexity. The
reader can easily envision more moderate and more ambitious challenges in this direction.
Another interesting question is whether all levels greater then zero of the knowledge-
complexity hierarchy contain strictly more languages than previous levels, or if some partial
collapse occurs. For example, it is open whether constant or even logarithmic knowledge
complexity classes do not collapse to the zero level.
Regarding our transformation of statistical knowledge complexity into perfect knowledge
complexity (i.e., Theorem 2), a few interesting questions arise. Firstly, can the cost of the
transformation be reduced to bellow O(log n) bits of knowledge? A result for the special
case of statistical zero-knowledge will be almost as interesting. Secondly, can one present an
analogous transformation that preserves one-sided error probability of the interactive proof?
(Note that our transformation introduces a negligible error probability into the completeness
condition.) Finally, can one present an analogous transformation that applies to knowledge
complexity with respect to arbitrary verifiers? (Our transformation applies only to knowledge
complexity with respect to the honest verifier.)
6
Acknowledgement
We thank Leonard Shulman for providing us with a simpler proof of Claim 3.2.
--R
The (True) Complexity of Statistical Zero-Knowledge
Making Zero-Knowledge Provers Efficient
The Complexity of Perfect Zero-Knowledge
Interactive Proof Systems: Provers that never Fail and Random Selection.
"Proofs that Yield Nothing But their Validity and a Methodology of Cryptographic Protocol Design"
"How to Play any Mental Game or a Completeness Theorems for Protocols of Honest Majority"
Quantifying Knowledge Complexity.
The Knowledge Complexity of Interactive Proofs.
The Knowledge Complexity of Interactive Proofs.
Public Coins in Interactive Proof Systems
Better Ways to Generate Hard NP Instances than Picking Uniformly at Random
Direct Minimum-Knowledge computations
Random Generation of Combinatorial Structures from a Uniform Distribution.
Algebraic Methods for Interactive Proof Systems.
Fair Games Against an All-Powerful Adversary
A Complexity Theoretic Approach to Randomness.
The Complexity of Approximate Counting.
--TR
--CTR
Amos Beimel , Paz Carmi , Kobbi Nissim , Enav Weinreb, Private approximation of search problems, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Oded Goldreich , Salil Vadhan , Avi Wigderson, On interactive proofs with a laconic prover, Computational Complexity, v.11 n.1/2, p.1-53, January
Amit Sahai , Salil Vadhan, A complete problem for statistical zero knowledge, Journal of the ACM (JACM), v.50 n.2, p.196-249, March | knowledge complexity;interactive proofs;randomness;cryptography;complexity classes;zero knowledge |
284976 | Guaranteeing Fair Service to Persistent Dependent Tasks. | We introduce a new scheduling problem that is motivated by applications in the area of access and flow control in high-speed and wireless networks. An instance of the problem consists of a set of persistent tasks that have to be scheduled repeatedly. Each task has a demand to be scheduled "as often as possible." There is no explicit limit on the number of tasks that can be scheduled concurrently. However, such limits are imposed implicitly because some tasks may be in conflict and cannot be scheduled simultaneously. These conflicts are presented in the form of a conflict graph. We define parameters which quantify the fairness and regularity of a given schedule. We then proceed to show lower bounds on these parameters and present fair and efficient scheduling algorithms for the case where the conflict graph is an interval graph. Some of the results presented here extend to the case of perfect graphs and circular-arc graphs as well. | Introduction
In this paper we consider a new form of a scheduling
problem which is characterized by two features:
Persistence of the tasks: A task does not simply go
away once it is scheduled. Instead, each task must
be scheduled innitely many times. The goal is to
schedule every task as frequently as possible.
Dependence among the tasks: Some tasks con
ict with
each other and hence cannot be scheduled concur-
rently. These con
icts are given by a con
ict graph.
This graph imposes constraints on the sets of tasks
that may be scheduled concurrently. Note that these
constraints are not based simply on the cardinality of
the sets, but rather on the identity of the tasks within
the sets.
Extended summary
y IBM { Research Division, T. J. Watson Research Center,
Yorktown Heights, NY 10598.
Email: famotz,sbar,madhug@watson.ibm.com.
z Dept. of Computer Science, Columbia University, New
York, NY 10027. Email: mayer@cs.columbia.edu. Part of this
work was done while the author was at the IBM T. J. Watson
Research Center. Partially supported by an IBM Graduate
Fellowship, NSF grant CCR-93-16209, and CISE Institutional
Infrastructure Grant CDA-90-24735
We consider both the problems of allocation, i.e.,
how often should a task be scheduled and regularity,
i.e., how evenly spaced are lengths of the intervals
between successive scheduling of a specic task. We
present a more formal description of this problem
next and discuss our primary motivation immediately
afterwards. While all our denitions are presented for
general con
ict graphs, our applications, bounds, and
algorithms are for special subclasses { namely, perfect
graphs, interval graphs and circular arc-graphs 1 .
Problem statement An instance of the scheduling
problem consists of a con
ict graph G with n vertices.
The vertices of G are the tasks to be scheduled and
the edges of G dene pairs of tasks that cannot be
scheduled concurrently. The output of the scheduling
algorithm is an innite sequence of subsets of the
vertices, I 1 ; I lists the tasks that are
scheduled at time t. Notice that for all t, I t must be
an independent set of G.
In the form above, it is hard to analyze the running
time of the scheduling algorithm. We consider instead
a nite version of the above problem and use it to
analyze the running time.
Input: A con
ict graph G and a time t.
Output: An independent set I t denoting the set of tasks
scheduled at time unit t.
The objective of the scheduling algorithm is to
achieve a fair allocation and a regular schedule. We
next give some motivation and describe the context
of our work. As we will see, none of the existing
measures can appropriately capture the \goodness"
of a schedule in our framework. Hence we proceed to
introduce measures which allow for easier presentation
of our results.
1 A graph is perfect if for all its induced subgraphs the size of
the maximum clique is equal to the chromatic number (cf. [11]).
A graph is an interval graph (circular-arc graph) if its vertices
correspond to intervals on a line (circle), and two vertices are
adjacent if the corresponding intervals intersect (cf. [20]).
A. Bar-Noy, A. Mayer, B. Schieber, and M. Sudan
1.1 Motivation
Session scheduling in high-speed local-area net-
works. MetaRing ([7]) is a recent high-speed local-area
ring-network that allows \spatial reuse", i.e., concurrent
access and transmission of user sessions, using
only minimal intermediate buering of packets. The
basic operations in MetaRing can be approximated by
the following: if some node has to send data to some
other node a session is established between the source
and the destination. Sessions typically last for a while
and can be active only if they have exclusive use of
all the links in their routes. Hence, sessions whose
routes share at least one link are in con
ict. These
con
icts need to be regulated by breaking the data
sent in a session into units of quotas that are transmitted
according to some schedule. This schedule has
to be e-cient and fair. E-cient means that the total
number of quotas transmitted (throughput) is maximized
whereas fair means that the throughput of each
session is maximized, and that the time between successive
activation of a session is minimized, so that
large buers at the source nodes can be avoided. It
has been recognized ([5]) that the access and
control in such a network should depend on locality in
the con
ict graph. However, no rm theoretical basis
for an algorithmic framework has been proposed
up to now. To express this problem as our scheduling
problem we create a circular-arc graph whose vertices
are the sessions, and in which vertices are adjacent if
the corresponding paths associated with the sessions
intersect in a link.
Time sharing in wireless networks. Most indoor
designs of wireless networks are based on a cellular architecture
with a very small cell size. (See, e.g., [13].)
The cellular architecture comprises two levels { a stationary
level and a mobile level. The stationary level
consists of xed base stations that are interconnected
through a backbone network. The mobile level consists
of mobile units that communicate with the base stations
via wireless links. The geographic area within
which mobile units can communicate with a particular
base station is referred to as a cell. Neighboring
cells overlap with each other, thus ensuring continuity
of communications. The mobile units communicate
among themselves, as well as with the xed information
networks, through the base stations and the back-bone
network. The continuity of communications is a
crucial issue in such networks. A mobile user who
crosses boundaries of cells should be able to continue
its communication via the new base-station. To ensure
this, base-stations periodically need to transmit their
identity using the wireless communication. In some
implementations the wireless links use infra-red waves.
Therefore, two base-station whose cells overlap are in
con
ict and cannot transmit their identity simulta-
neously. These con
icts have to be regulated by a
time-sharing scheme. This time sharing has to be ecient
and fair. E-cient means that the scheme should
accommodate the maximal number of base stations
whereas fair means that the time between two consecutive
transmissions of the same base-station should
be less then the time it takes a user to cross its corresponding
cell. Once again this problem can be posed
as our graph-scheduling problem where the vertices of
the graph are the base-stations and an edge indicates
that the base stations have overlapping cells.
1.2 Relationship to past work
Scheduling problems that only consider either persistence
of the tasks or dependence among the tasks (but
not both) have been dealt with before.
The task of scheduling persistent tasks has been
studied in the work of Baruah et al. [2]. They consider
the problem of scheduling a set of n tasks with given
(arbitrary) frequencies on m machines. (Hence,
yields an instance of our problem where the con
ict
graph is a clique.) To measure \regularity" of a
schedule for their problem they introduce the notion
of P -fairness. A schedule for this problem is P -fair
(proportionate-fair) if at each time t for each task i
the absolute value of the dierence in the number of
times i has been scheduled and f i t is strictly less than
1, where f i is the frequency of task i. They provide
an algorithm for computing a P -fair solution to their
problem. Their problem fails to capture our situation
due to two reasons. First, we would like to constrain
the sets of tasks that can be scheduled concurrently
according to the topology of the con
ict graph and
not according to their cardinality. Moreover, in their
problem every \feasible" frequency requirement can
be scheduled in a P -fair manner. For our scheduling
problem we show that such a P -fair schedule cannot
always be achieved. To deal with feasible frequencies
that cannot be scheduled in a P -fair manner, we dene
weaker versions of \regularity".
The dependency property captures most of the
work done based on the well-known \Dining Philoso-
phers" paradigm, see for example [9], [18], [6], [1], [8],
and [4]. In this setting, Lynch [18] was the rst to explicitly
consider the response time for each task. The
goal of successive works was to make the response time
of a node to depend only on its local neighborhood in
the con
ict graph. (See, e.g., [4].) While response time
in terms of a node's degree is adequate for \one-shot"
tasks, it does not capture our requirement that a task
Guaranteeing Fair Service to Persistent Dependent Tasks 3
should be scheduled in a regular and fair fashion over
a period of time.
1.3 Notations and denitions
A schedule S is an innite sequence of independent
sets I 1 ; I We use the notation S(i; t) to
represent the schedule: S(i;
lim inf t!1 ff (t)
g. We refer to f i as the frequency of
the i-th task in schedule S.
Definition 1.1. A vector of frequencies ^
feasible if there exists a schedule S such
that the frequency of the i-th task under schedule S is
at least f i .
Definition 1.2. A schedule S realizes a vector of
f if the frequency of the i-th task under
schedule S is at least f i . A schedule S c-approximates
a vector of frequencies ^
f if the frequency of the i-th
task under schedule S is at least f i =c.
A measure of fairness Fairness is determined via a
partial order that we dene on the set of frequency
vectors.
Definition 1.3. Given two frequency vectors ^
f is less fair
there exists an index j and a threshold f
such that f j < f g j and for all i such that g i f ,
Definition 1.4. A vector of frequencies ^
f is max-min
fair if no feasible vector ^
g.
Less formally, in a max-min fair frequency vector
one cannot increase the frequency of some task at
the expense of more frequently scheduled tasks. This
means that our goal is to let task i have more of the
resource as long as we have to take the resource away
only from tasks which are better o, i.e., they have
more of the resource than task i.
Measures of regularity Here, we provide two measures
by which one can evaluate a schedule for its reg-
ularity. We call these measures the response time and
the drift.
Given a schedule S, the response time for task i,
denoted r i , is the largest interval of time for which the
i-th task waits between successive schedulings. More
precisely,
For any time t, the number of expected occurrences
of task i can be expressed as f i t. But note that if r i is
larger than 1=f i , it is possible that, for some period of
time, a schedule allows a task to \drift away" from its
expected number of occurrences. In order to capture
this, we introduce a second measure for the regularity
of a schedule. We denote by d i the drift of a task
i. It indicates how much a schedule allows task i to
drift away from its expected number of scheduled units
(based on its frequency):
Note that if a schedule S achieves drift d i < 1 for all
then it is P-fair as dened in [2].
Finally, a schedule achieves its strongest form of
regularity if each task i is scheduled every 1=f i time-units
(except for its rst appearance). Hence we say
that a schedule is rigid if for each task i there exists
a starting point s i such that the task is scheduled on
exactly the time units
1.4 Results
In Section 2 we motivate our denition of max-min
fairness and show several of its properties. First, we
provide an equivalent alternate denition of feasibility
which shows that deciding feasibility of a frequency
vector is computable. We prove that every graph has a
unique max-min fair frequency vector. Then, we show
that the task of even weakly-approximating the max-min
fair frequencies on general graphs is NP-hard.
As we mentioned above many practical applications
of this problem arise from simpler networks, such
as buses and rings (i.e., interval con
ict graphs and
circular-arc con
ict graphs). For the case of perfect-
graphs (and hence for interval graphs), we describe
an e-cient algorithm for computing max-min fair
frequencies. We prove that the period T of a schedule
realizing such frequencies satises and that
there exist interval graphs such that
n) .
The rest of our results deal with the problem of
nding the most \regular" schedule (under the above
mentioned measures) that realizes any feasible frequency
vector. Section 3 shows the existence of interval
graphs for which there is no P -fair schedule
that realizes their max-min fair frequencies. In Section
4 we introduce an algorithm for computing a
schedule that realizes any given feasible frequencies
on interval graphs. The schedule computed by the
algorithm achieves response-time of d4=f i e and drift
of O(
log slight modication of this algorithm
yields a schedule that 2-approximates the given
frequencies. The advantage of this schedule is that
4 A. Bar-Noy, A. Mayer, B. Schieber, and M. Sudan
it achieves a bound of 1 on the drift and hence a
bound of d2=f i e on the response time. In Section 5 we
present an algorithm for computing a schedule that
12-approximates any given feasible frequencies on interval
graphs and has the advantage of being rigid.
All algorithms run in polynomial time. In Section 6
we show how to transform any algorithm for computing
a schedule that c-approximates any given feasible
frequencies on interval graphs into an algorithm
for computing a schedule that 2c-approximates any
given feasible frequencies on circular-arc graphs. (The
response-time and drift of the resulting schedule are
doubled as well.) Finally, in Section 7 we list a number
of open problems and sketch what additional properties
are required to obtain solutions for actual net-
works. Due to space constraints some of the proofs are
either omitted or sketched in this extended summary.
Allocation
Our denition for max-min fair allocation is based
on the denition used by Jae [14] and Bertsekas
and Gallager [3], but diers in one key ingredient
{ namely our notion of feasibility. We study some
elementary properties of our denition in this section.
In particular, we show that the denition guarantees a
unique max-min fair frequency vector for every con
ict
graph. We also show the hardness of computing the
frequency vector for general graphs. However, for the
special case of perfect graphs our notion turns out to
be the same as of [3].
The denition of [14] and [3] is considered the
traditional way to measure throughput fairness and
is also based on the partial order as used in
our denition. The primary dierence between our
denition and theirs is in the denition of feasibility.
Bertsekas and Gallager [3] use a denition, which we
call clique feasible, that is dened as follows:
A vector of frequencies (f
feasible for a con
ict graph G, if
for all cliques C in the graph G.
The notion of max-min fairness of Bertsekas and
Gallager [3] is now exactly our notion, with feasibility
replaced by clique feasibility.
The denition of [3] is useful for capturing the
notion of fractional allocation of a resource such as
bandwidth in a communication networks. However, in
our application we need to capture a notion of integral
allocation of resources and hence their denition does
not su-ce for our purposes. It is easy to see that every
frequency vector that is feasible in our sense is clique
feasible. However, the converse is not true. Consider
the case where the con
ict graph is the ve-cycle. For
this graph the vector (1=2; 1=2; 1=2; 1=2; 1=2) is clique
feasible, but no schedule can achieve this frequency.
2.1 An alternate denition of feasibility
Given a con
ict graph G, let I denote the family of
all independent sets in G. For I 2 I, let (I) denote
the characteristic vector of I .
Proposition 2.1. A vector of frequencies ^
f is
feasible if and only if there exist weights f I g I2I , such
that
I2I I
f .
The main impact of this assertion is that it shows
that the space of all feasible frequencies is well behaved
(i.e., it is a closed, connected, compact space).
Immediately it shows that determining whether a frequency
vector is feasible is a computable task (a fact
that may not have been easy to see from the earlier
denition). We now use this denition to see the following
connection:
Proposition 2.2. Given a con
ict graph G, the
notions of feasibility and clique feasibility are equivalent
if and only if G is perfect.
Proof (sketch): The proof is follows directly from well-known
polyhedral properties of perfect graphs. (See
[12], [16].) In the notation of Knuth [16] the space
of all feasible vectors is the polytope STAB(G) and
the space of all clique-feasible vectors is the polytope
QSTAB(G). The result follows from the theorem on
page 38 in [16] which says that a graph G is perfect if
and only if
2.2 Uniqueness and computability of max-min
fair frequencies
In the full paper we prove the following theorem.
Theorem 2.3. There exists a unique max-min fair
frequency vector.
Now, we turn to the issue of the computability
of the max-min fair frequencies. While we do not
know the exact complexity of computing max-min fair
frequencies 2 it does seem to be a very hard task in gen-
eral. In particular, we consider the problem of computing
the smallest frequency assigned to any vertex
by a max-min allocation and show the following:
Theorem 2.4. There exists an > 0, such that
given a con
ict graph on n vertices approximating the
In particular, we do not know if deciding whether a frequency
vector is feasible is in NP [ coNP
Guaranteeing Fair Service to Persistent Dependent Tasks 5
smallest frequency assigned to any vertex in a max-min
fair allocation, to within a factor of n , is NP-hard.
Proof (sketch): We relate the computation of max-min
fair frequencies in a general graph to the computation
of the fractional chromatic number of a graph. The
fractional chromatic number problem (cf. [17]) is
dened as follows: To each independent set I in the
graph, assign a weight w I , so as to minimize the
quantity
I w I , subject to the constraint that for
every vertex v in the graph, the quantity
I3v w I is
at least 1. The quantity
I w I is called the fractional
chromatic number of the graph. Observe that if the
w I 's are forced to be integral, then the fractional
chromatic number is the chromatic number of the
graph.
The following claim shows a relationship between
the fractional chromatic number and the assignment
of feasible frequencies.
2.5. Let (f be a feasible assignment
of frequencies to the vertices in a graph G. Then
is an upper bound on the fractional chromatic
number of the graph. Conversely, if k is the
fractional chromatic number of a graph, then a schedule
that sets the frequency of every vertex to be 1=k is
feasible.
The above claim, combined with the hardness of
computing the fractional chromatic number [17], suf-
ces to show the NP-hardness of deciding whether a
given assignment of frequencies is feasible for a given
graph. To show that the claim also implies the hardness
of approximating the smallest frequency in the
max-min fair frequency vector we inspect the Lund-
Yannakakis construction a bit more closely. Their construction
yields a graph in which every vertex participates
in a clique of size k such that deciding if the
(fractional) chromatic number is k or kn is NP-hard.
In the former case, the max-min fair frequency assignment
is 1=k to every vertex. In the latter case at least
some vertex will have frequency smaller that 1=(kn ).
Thus this implies that approximating the smallest frequency
in the max-min fair frequencies to within a
factor of n is NP-hard. 2
2.3 Max-min fair frequencies on perfect
graphs
We now turn to perfect graphs. We show how to compute
in polynomial time max-min fair frequencies for
this class of graphs and give bounds on the period of
a schedule realizing such frequencies. As our main focus
of the subsequent sections will be interval graphs,
we will give our algorithms and bounds rst in terms
Figure
1: An interval graph for which
n) .
of this subclass and then show how to generalize the
results to perfect graphs.
We start by describing an algorithm for computing
max-min fair frequencies on interval graphs. As
we know that clique-feasibility equals feasibility (by
Proposition 2.2), we can use an adaptation of [3]:
Algorithm 1: Let C be the collection of maximal
cliques in the interval graph. (Notice that C has at
most n elements and can be computed in polynomial
time.) For each clique C 2 C the algorithm maintains
a residual capacity which is initially 1. To each vertex
the algorithm associates a label assigned/unassigned.
All vertices are initially unassigned. Dividing the
residual capacity of a clique by the number of unassigned
vertices in this clique yields the relative residual
capacity. Iteratively, we consider the clique with the
smallest current relative residual capacity and assign
to each of the clique's unassigned vertices this capacity
as its frequency. For each such vertex in the clique we
mark it assigned and subtract its frequency from the
residual capacity of every clique that contains it. We
repeat the process till every vertex has been assigned
some frequency.
It is not hard to see that Algorithm 1 correctly
computes max-min fair frequencies in polynomial-
time. We now use its behavior to prove a tight bound
on the period of a schedule for an interval graph. The
following theorem establishes this bound. (See also
Figure
1.)
Theorem 2.6. Let f be the frequencies
in a max-min fair schedule for an interval graph G,
are relatively prime. Then, the period
for the schedule
Furthermore, there exist interval graphs for which T =n) .
6 A. Bar-Noy, A. Mayer, B. Schieber, and M. Sudan
It is clear that Algorithm 1 works for all graphs
where clique feasibility determines feasibility, i.e., perfect
graphs. However, the algorithm does not remain
computationally e-cient. Still, Theorem 2.6 can be
directly extended to the class of perfect graphs. We
now use this fact to describe a polynomial-time algorithm
for assigning max-min fair frequencies to perfect
graphs.
Algorithm 2: This algorithm maintains the labelling
procedure assigned/unassigned of Algorithm 1. At
each phase, the algorithm starts with a set of assigned
frequencies and tries to nd the largest f such that
all unassigned vertices can be assigned the frequency
f . To compute f in polynomial time, the algorithm
uses the fact that deciding if a given set of frequencies
is feasible is reducible to the task of computing the
size of the largest weighted clique in a graph with
weights on vertices. The latter task is well known to
be computable in polynomial-time for perfect graphs.
Using this decision procedure the algorithm performs
a binary search to nd the largest achievable f . (The
binary search does not have to be too rened due
to Theorem 2.6). Having found the largest f , the
algorithm nds a set of vertices which are saturated
under f as follows: Let be some small number, for
instance
is su-cient. Now it raises, one at a
time, the frequency of each unassigned vertex to f +,
while maintaining the other unassigned frequencies
at f . If the so obtained set of frequencies is not
feasible, then it marks the vertex as assigned and its
frequency is assigned to be f . The algorithm now
repeats the phase until all vertices have been assigned
some frequency.
3 Non-existence of P-fair allocations
Here we show that a P -fair scheduling realizing max-min
fair frequencies need not exist for every interval
graph.
Theorem 3.1. There exist interval graphs G for
which there is no P-fair schedule that realizes their
max-min frequency assignment.
In order to prove this theorem we construct such
a graph G as follows. We choose a parameter k and
for every permutation of the elements
dene an interval graph G . We show a necessary condition
that must satisfy if G has a P-fair schedule.
Lastly we show that there exists a permutation of
12 elements which does not satisfy this condition.
Given a permutation on k elements, G consists
of 3k intervals. For the
Figure
2: The graph G for
that the max-min
frequency assignment to G is the following: All the
tasks B(i) have frequency 1=k; all the tasks A(i) have
frequency all the tasks C(i) have
frequency i=k. (See Figure 2.)
We now observe the properties of a P-fair schedule
for the tasks in G . (i) The time period is k. (ii)
The schedule is entirely specied by the schedule for
the tasks B(i). (iii) This schedule is a permutation
of k elements, where (i) is the time unit for which
B(i) is scheduled. To see what kind of permutations
constitute P-fair schedules of G we dene the notion
of when a permutation is fair for another permutation.
Definition 3.1. A permutation 1 is fair for a
permutation 2 if for all
the conditions cond ij dened as follows:
3.2. If a permutation is a P-fair schedule
for G then is fair for the identity permutation and
for permutation .
be a permutation
on 12 elements. In the full paper we show that
no permutation is fair to both and the identity.
Realizing frequencies exactly
In this section we rst show how to construct a
schedule that realizes any feasible set of frequencies
(and hence in particular max-min frequencies) exactly
on an interval graph. We prove its correctness and
demonstrate a bound of d4=f i e on the response time
for each interval i. We then proceed to introduce
a potential function that is used to yield a bound
of O(n 1+ ) on the drift for every interval. We also
prove that if the feasible frequencies are of the form
then the drift of the schedule can be bounded
by 1 and thus the waiting time can be bounded by
We use this property to give an algorithm for
Guaranteeing Fair Service to Persistent Dependent Tasks 7
computing a schedule that 2-approximates any feasible
set of frequencies with high regularity.
Input to the Algorithm: A unit of time t and a con
ict
graph G which is an interval graph. Equivalently, a set
of intervals on the unit interval [0; 1]
of the x-coordinate, where I
Every interval I i has a frequency f with the
following constraint:
I
For simplicity, we assume from now on that these
constraints on the frequencies are met with equality
and that t g.
Output of the Algorithm: An independent set I t which
is the set of tasks scheduled for time t such that the
scheduled S, given by fI t g T
realizes frequencies f i .
The algorithm is recursive. Let s i denote the
number of times a task i has to appear in T time
units, i.e., s . The algorithm has log T levels
of recursion. (Recall that log T is O(n) for max-min
fair frequencies.) In the rst level we decide on the
occurrences of the tasks in each half of the period.
That is, for each task we decide how many of its
occurrences appear in the rst half of the period and
how many in the second half. This yields a problem
of a recursive nature in the two halves. In order to
nd the schedule at time t, it su-ces to solve the
problem recursively in the half which contains t. (Note
that in case T is odd one of the halves is longer than
the other.) Clearly, if a task has an even number of
occurrences in T it would appear the same number of
times in each half in order to minimize the drift. The
problem is with tasks that have an odd number of
occurrences s i . Clearly, each half should have at least
bs i c of the occurrences. The additional occurrence
has to be assigned to a half in such a way that both
resulting sub-problems would still be feasible. This is
the main di-culty of the assignment and is solved in
the procedure Sweep.
Procedure Sweep: In this procedure we compute
the assignment of the additional occurrence for all
tasks that have an odd number of occurrences. The
input to this procedure is a set of intervals I
(with odd s i 's) with the restriction that each clique in
the resulting interval graph is of even size. (Later, we
show how to overcome this restriction.) The output
is a partition of these intervals into two sets such
that each clique is equally divided among the sets.
This is done by a sweep along the x-coordinate of
the intervals. During the sweep every interval will
be assigned a variable which at the end is set to 0
or 1 (i.e., rst half of the period or second half of the
period). Suppose that we sweep point x. We say that
an interval I i is active while we sweep point x if x 2 I i .
The assignment rules are as follows.
For each interval I i that starts at x:
If the current number of active intervals is even:
A new variable is assigned to I i (I i is unpaired).
If the current number of active intervals is odd:
I i is paired to the currently unpaired interval I j
and it is assigned the negation of I j 's variable.
Thus no matter what value is later assigned to this
variable, I i and I j will end up in opposite halves.
For each interval I i that ends at x:
If the current number of active intervals is even:
Nothing is done.
If the current number of active intervals is odd:
If I i is paired with I
I j is now being paired with the currently unpaired
interval I k . Also, I j 's variable is matched
with the negation of I k 's variable. This will ensure
that I j and I k are put in opposite halves,
or equivalently, I i and I k are put in the same
halves.
If I i is unpaired:
Assign arbitrarily 0 or 1 to I i 's variable.
These operations ensure that whenever the number
of active intervals is even, then exactly half of the
intervals will be assigned 0 and half will be assigned
this will be proven later.
Recall that we assumed that the size of each clique
is even. Let us show how to overcome this restriction.
For this we need the following simple lemma. For
by C x the set of all the input intervals
(with odd and even s i 's) that contain x; C x will be
referred to as a clique.
Lemma 4.1. The period T is even if and only if
is oddgj is even for every clique C.
This lemma implies that if T is even then the size of
each clique in the input to procedure Sweep is indeed
even. If T is odd, then a dummy interval I n+1 which
extends over all other intervals and which has exactly
one occurrence is added to the set I before calling
Sweep. Again, by Lemma 4.1, we are sure that in
this modied set I the size of all cliques is even. This
would increase the period by one. The additional time
unit will be allotted only to the dummy interval and
thus can be ignored. We note that to produce the
schedule at time t we just have to follow the recursive
calls that include t in their period. Since there are
no more than log T such calls, the time it takes to
produce this schedule is polynomial in n for max-min
fair frequencies.
8 A. Bar-Noy, A. Mayer, B. Schieber, and M. Sudan
Lemma 4.2. The algorithm produces a correct
schedule for every feasible set of frequencies.
Lemma 4.3. If the set of frequencies is of the form
then the drift can be bounded by 1 and hence the
response time can be bounded by d2=f i e.
Proof: Since our algorithm always divides even s i into
equal halves, the following invariant is maintained: At
any recursive level, whenever s i > 1, then s i is even.
Also note that thus
we can express each f i as 2 i k . Now, following the
algorithm, it can be easily shown that there is at least
one occurrence of task i in each time interval of size
Hence
c and P -fairness
Lemma 4.4. The response time for every interval
I i is bounded by d4=f i e.
Proof: The proof is based on the Lemma 4.3. This
lemma clearly implies the case in which the frequencies
are powers of two. Moreover, in case the frequencies
are not powers of two, we can virtually partition
each task into two tasks with frequencies p i and r i
respectively, so that f a power of
two, and r i < p i . Then, the schedule of the task with
frequency p i has drift 1. This implies that its response
time is d2=p i e d4=f i e. 2
We remark that it can be shown that the bound of
the above lemma is tight for our algorithm.
We summarize the results in this section in the
following theorem:
Theorem 4.5. Given an arbitrary interval graph
as con
ict graph, the algorithm exactly realizes any
feasible frequency-vector and guarantees that r i
4.1 Bounding the drift
Since the algorithm has O(log T ) levels of recursion
and each level may increase the drift by one, clearly
the maximum drift is bounded by O(log T ). In this
section we prove that we can decrease the maximum
drift to be O(
log xed , where n is the
number of tasks. By Theorem 2.6 this implies that in
the worst case the drift for a max-min fair frequencies
is bounded by O(n 1+ ).
Our method to get a better drift is based on the
following observation: At each recursive step of the
algorithm two sets of tasks are produced such that
each set has to be placed in a dierent half of the
time-interval currently considered. However, we are
free to choose which set goes to which half. We are
using this degree of freedom to decrease the drift. To
make the presentation clearer we assume that T is a
power of two and that the time units are
Consider a sub-interval of size T=2 j starting after
time and ending at t
for 1. In the rst j recursion levels we
already xed the number of occurrences of each task
up to t ' . Given this number, the drift d ' at time t ' is
xed. Similarly, the drift d r at time t r is also xed.
At the next recursion level we split the occurrences
assigned to the interval thus xing
the drift dm at time t Optimally,
we would like the drifts after the next recursion level
at each time unit to be the weighted
average of the drifts d ' and d r . In other words, let
would like the drift
at time t to be d r In particular, we
would like the drift at t m to be (d ' This drift
can be achieved for t m only if the occurrences in the
interval can be split equally. However, in
case we have an odd number of occurrences to split,
the drift at t m is (d ' depending on
our decision in which half interval to put the extra
occurrence. Note that the weighted average of the
drifts of all other points changes accordingly. That
is, if the new dm is (d '
then the weighted average in t 2
d r +(1 )d ' +2x, where
and the weighted average in t 2 [(t r
d r +(1 )d ' +(2 2)x, where
1=2.
Consider now the two sets of tasks S 1 and S 2 that
we have to assign to the two sub-intervals (of the
same size) at level k of the recursion. For each of the
possible two assignments, we compute a \potential"
based on the resulting drifts at time t m . For a given
possibility let D[tm ; i; k] denote the resulting drift of
the i-th task at t m after k recursion levels. Dene
the potential of t m after k levels as
xed even constant . We
choose the possibility with the lowest potential.
Theorem 4.6. Using the policy described above
the maximum drift is bounded by O(
log T n ), for
any xed .
Realizing frequencies rigidly
In this section we show how to construct a schedule
that 12-approximates any feasible frequency-vector in
a rigid fashion on an interval graph. We reduce
our Rigid Schedule problem to the Dynamic Storage
Allocation problem. The Dynamic Storage Allocation
Guaranteeing Fair Service to Persistent Dependent Tasks 9
problem is dened as follows. We are given objects to
be stored in a computer memory. Each object has
two parameters: (i) its size in terms of number of
cells needed to store it, (ii) the time interval in which
it should be stored. Each object must be stored in
adjacent cells. The problem is to nd the minimal
size memory that can accommodate at any given time
all of the objects that are needed to be stored at that
time. The Dynamic Storage Allocation problem is a
special case of the multi-coloring problem on intervals
graphs which we now dene.
A multi-coloring of a weighted graph G with the
weight function
such that for all v 2 V the size of F (v) is w(v), and
such that if (v; u) 2 E then F (v) \ F ;. The
multi-coloring problem is to nd a multi-coloring with
minimal number of colors. This problem is known to
be an NP-Hard problem [10].
Two interesting special cases of the Multi-Coloring
problem are when the colors of a vertex either must
be adjacent or must be \spread well" among all colors.
We call the rst case the AMC problem and the second
case the CMC problem. More formally, in a solution
to AMC if F
for all 1 i < k. Whereas in a solution to CMC which
uses T colors, if F
divides T , and (ii) x
and
It is not hard to verify that for interval graphs the
AMC problem is equivalent to the Dynamic Storage
Allocation problem described above. Simply associate
each object with a vertex in the graph and give it a
weight equal to the number of cells it requires. Put an
edge between two vertices if their time intervals inter-
sect. The colors assigned to a vertex are interpreted
as the cells in which the object is stored.
On the other hand, the CMC problem corresponds
to the Rigid Schedule problem as follows. First, we
replace the frequency f(v) by a weight w(v). Let
Now, assume that the output for the CMC problem
uses T colors and let the colors of v be fx 1 < < x k g
We interpret this as follows: v
is scheduled in times x It is not
di-cult to verify that this is indeed a solution to the
Rigid Scheduling problem.
Although Dynamic Storage Allocation problem is
a special case of the multi-coloring problem it is
still known to be an NP-Hard problem [10] and
for similar reasons the Rigid Scheduling problem is
also NP-Hard. Therefore, we are looking for an
approximation algorithm. In what follows we present
an approximation algorithm that produces a rigid
scheduling that 12-approximates the given frequencies.
For this we consider instances of the AMC and CMC
problems in which the input weights are powers of two.
Definition 5.1. A solution for an instance of
AMC is both aligned and contiguous if for all
In [15], Kierstead presents an algorithm for AMC
that has an approximation factor 3. A careful inspection
of this algorithm shows that it produces solutions
that are both aligned and contiguous for all instances
in which the weights are power of two.
We show how to translate a solution for such an
instance of the AMC problem that is both aligned and
contiguous into a solution for an instance of the CMC
problem with the same input weights.
For be the k-bit number whose
binary representation is the inverse of the binary
representation of x.
Lemma 5.1. For
1)g.
Consider an instance of the CMC problem in which
all the input weights are powers of two. Apply the
solution of Kierstead [15] to solve the AMC instance
with the same input. This solution is both aligned and
contiguous, and uses at most 3T 0 colors where T 0 is the
number of colors needed by an optimal coloring. Let
be the smallest power of 2 that is greater
than T 0 . It follows that T 6T 0 . Applying the
transformation of Lemma 5.1 on the output of the
solution to AMC yields a solution to CMC with at
most T colors. This in turn, yields an approximation
factor of at most 12 for the Rigid Scheduling problem,
since w(v)=T f(v)=2.
Theorem 5.2. The above algorithm computes a
rigid schedule that 12-approximates any feasible
frequency-vector on an interval graph.
6 Circular-Arc graphs
In this section we show how to transform any algorithm
A for computing a schedule that c-approximates
any given feasible frequency-vector on interval graphs
into an algorithm A 0 for computing a schedule that
2c-approximates any given feasible frequencies on
circular-arc graphs.
A. Bar-Noy, A. Mayer, B. Schieber, and M. Sudan
f be a feasible frequency-vector on a circular-arc
graph G.
1: Find the maximum clique C in G.
is an interval graph.
2 be the frequency-vectors resulting from
restricting
f to the vertices of G 0 and C, respectively.
Note that ^
are feasible on G 0 and C, respectively
Step 2: Using A, nd schedules S 1 and S 2 that c-
g 2 on G 0 and C, respectively.
Step 3: Interleave S 1 and S 2 .
Clearly, the resulting schedule 2c-approximates ^
f
on the circular-arc graph G.
7 Future research
Many open problems remain. The exact complexity
of computing a max-min fair frequency assignment in
general graphs is not known and there is no characterization
of when such an assignment is easy to compute.
All the scheduling algorithms in the paper use the inherent
linearity of interval or circular-arc graphs. It
would be interesting to nd scheduling algorithms for
the wider class of perfect graphs. The algorithm for interval
graphs that realizes frequencies exactly exhibits
a considerable gap in its drift. It is not clear from
which direction this gap can be closed.
Our algorithms assume a central scheduler that
makes all the decisions. Both from theoretical and
practical point of view it is important to design
scheduling algorithms working in more realistic environments
such as high-speed local-area networks and
wireless networks (as mentioned in Section 1.1). The
distinguishing requirements in such an environment
include a distributed implementation via a local signaling
scheme, a con
ict graph which may change with
time, and restrictions on space per node and size of a
signal. The performance measures and general setting,
however, remain the same. A rst step towards such
algorithms has been recently carried out by Mayer,
Ofek and Yung in [19].
Acknowledgment
. We would like to thank Don
Coppersmith and Moti Yung for many useful discussions
--R
A Dining Philosophers Algorithm with Polynomial Response Time.
A Notion of Fairness in Resource Allocation.
Data Networks.
Distributed Resource Allocation Algorithms.
A Local Fairness Algorithm for Gigabit LANs/MANs with Spatial Reuse.
The Drinking Philosophers Problem.
Hierarchical Ordering of Sequential Processes.
Computers and Intractability
Algorithmic Graph Theory and Perfect Graphs.
Cellular Packet Communica- tions
Bottleneck Flow Control.
A Polynomial Time Approximation Algorithm for Dynamic Storage Allocation.
The Sandwich Theorem
On the Hardness of Approximating Minimization Problems.
Fast Allocation of Nearby Resources in a Distributed System.
Distributed Scheduling Algorithm for Fairness with Minimum Delay.
Matrix characterizations of circular-arc graphs
--TR
--CTR
Sanjoy K. Baruah , Shun-Shii Lin, Pfair Scheduling of Generalized Pinwheel Task Systems, IEEE Transactions on Computers, v.47 n.7, p.812-816, July 1998
Francesco Lo Presti, Joint congestion control: routing and media access control optimization via dual decomposition for ad hoc wireless networks, Proceedings of the 8th ACM international symposium on Modeling, analysis and simulation of wireless and mobile systems, October 10-13, 2005, Montral, Quebec, Canada
Sandy Irani , Vitus Leung, Scheduling with conflicts on bipartite and interval graphs, Journal of Scheduling, v.6 n.3, p.287-307, May/June
Ami Litman , Shiri Moran-Schein, On distributed smooth scheduling, Proceedings of the seventeenth annual ACM symposium on Parallelism in algorithms and architectures, July 18-20, 2005, Las Vegas, Nevada, USA
Tracy Kimbrel , Baruch Schieber , Maxim Sviridenko, Minimizing migrations in fair multiprocessor scheduling of persistent tasks, Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, January 11-14, 2004, New Orleans, Louisiana
Tracy Kimbrel , Baruch Schieber , Maxim Sviridenko, Minimizing migrations in fair multiprocessor scheduling of persistent tasks, Journal of Scheduling, v.9 n.4, p.365-379, August 2006
Violeta Gambiroza , Edward W. Knightly, Congestion control in CSMA-based networks with inconsistent channel state, Proceedings of the 2nd annual international workshop on Wireless internet, p.8-es, August 02-05, 2006, Boston, Massachusetts | dining philosophers problem;fairness;interval graphs;scheduling |
284994 | An Algorithm for Finding the Largest Approximately Common Substructures of Two Trees. | AbstractOrdered, labeled trees are trees in which each node has a label and the left-to-right order of its children (if it has any) is fixed. Such trees have many applications in vision, pattern recognition, molecular biology and natural language processing. We consider a substructure of an ordered labeled tree T to be a connected subgraph of T. Given two ordered labeled trees and an integer d, the largest approximately common substructure problem is to find a substructure U1 of T1 and a substructure U2 of T2 such that U1 is within edit distance d of U2 and where there does not exist any other substructure V1 of T1 and V2 of T2 such that V1 and V2 satisfy the distance constraint and the sum of the sizes of V1 and V2 is greater than the sum of the sizes of U1 and U2. We present a dynamic programming algorithm to solve this problem, which runs as fast as the fastest known algorithm for computing the edit distance of two trees when the distance allowed in the common substructures is a constant independent of the input trees. To demonstrate the utility of our algorithm, we discuss its application to discovering motifs in multiple RNA secondary structures (which are ordered labeled trees). | Introduction
Ordered, labeled trees are trees in which each node has a label and the left-to-right order of its children (if
it has any) is fixed. 1 Such trees have many applications in vision, pattern recognition, molecular biology
and natural language processing, including the representation of images [12], patterns [2, 10] and secondary
structures of RNA [14]. They are frequently used in other disciplines as well.
A large amount of work has been performed for comparing two trees based on various distance measures
recently generalized one of the most commonly used distance measures, namely
the edit distance, for both rooted and unrooted unordered trees. These works laid out a foundation that
is useful for comparing graphs [15, 24].
In this paper we extend the previous work by considering the largest approximately common sub-structure
problem for ordered labeled trees. Various biologists [5, 14] represent RNA secondary structures
as trees. Finding common patterns (also known as motifs) in these secondary structures helps both in
predicting RNA folding [5] and in functional studies of RNA processing mechanisms [14].
Previous methods for detecting motifs in the RNA molecules (trees) are based on one of the following
two approaches: (1) transforming the trees to sequences and then using sequence algorithms [13]; (2)
representing the molecules using a highly simplified tree structure and then searching for common nodes
in the trees [5]. Neither of the two approaches satisfactorily takes the full tree structure into account. By
contrast, utilizing the proposed algorithm for pairs of trees enables one to locate tree-structured motifs
occurring in multiple RNA secondary structures. Our experimental results concerning RNA classification
show the significance of these motifs [23].
Preliminaries
2.1 Edit Distance and Mappings
We use the edit distance [17] to measure the dissimilarity of two trees. There are three types of edit
operations, i.e., relabeling, delete, and insert a node. Relabeling node n means changing the label on n.
Deleting a node n means making the children of n become the children of the parent of n and removing n.
Insert is the inverse of delete. Inserting node n as the child of node n 0 makes n the parent of a consecutive
subsequence of the current children of n 0 . Fig. 1 illustrates the edit operations. For the purpose of this
work, we assume that all edit operations have a unit cost. The edit distance, or simply the distance, from
tree T 1 to tree T 2 , denoted is the cost of a minimum cost sequence of edit operations transforming
to T 2
[17].
The notion of edit distance is best illustrated through the concept of mappings. A mapping is a
graphical specification of which edit operations apply to each node in the two trees. For example, the
mapping in Fig. 2 shows a way to transform T 1 to T 2 . The transformation includes deleting the two nodes
labeled a and m in T 1
and inserting them into T 2
1 Throughout the paper, we shall refer to ordered labeled trees simply as trees when no ambiguity occurs.
a c
r
a b
r
r
r
a f c
r
r
f
e
f
f
Fig. 1. (i) Relabeling: To change one node label (a) to another (b). (ii) Delete: To delete a node;
all children of the deleted node (labeled b) become children of the parent (labeled r). (iii) Insert:
To insert a node; a consecutive sequence of siblings among the children of the node labeled r (here,
f and the left g) become the children of the newly inserted node labeled b.
r
a m
b c d e pT
r
a m
b c d e pT
Fig. 2. A mapping from tree T1 to tree T2 .
We use a postorder numbering of nodes in the trees. Let t[i] represent the node of T whose position in
the left-to-right postorder traversal of T is i. When there is no confusion, we also use t[i] to represent the
label of node t[i]. Formally, a mapping from T 1
to T 2
is a triple (M;
simply M if the context is
clear), where M is any set of ordered pairs of integers (i;
(ii) For any pair of (i
(one-to-one
is
to the left of t 1
] is to the left of t 2
preservation
is an
ancestor of t 1 is an ancestor of t 2 [j 2 ] (ancestor order preservation condition). The cost of M is
the cost of deleting nodes of T 1
not touched by a mapping line plus the cost of inserting nodes of T 2
not
touched by a mapping line plus the cost of relabeling nodes in those pairs related by mapping lines with
different labels. It can be proved [17] that ffi(T 1
equals the cost of a minimum cost mapping from tree
T 1 to tree T 2 .
2.2 Cut Operations
We define a substructure U of tree T to be a connected subgraph of T . That is, U is rooted at a node n
in T and is generated by cutting off some subtrees in the subtree rooted at n. Formally, let T [i] represent
the subtree rooted at t[i]. The operation of cutting at node t[i] means removing T [i]. A set S of nodes of
T [k] is said to be a set of consistent subtree cuts in T [k] if (i) t[i] 2 S implies that t[i] is a node in T [k],
and (ii) t[i]; t[j] 2 S implies that neither is an ancestor of the other in T [k]. Intuitively, S is the set of all
roots of the removed subtrees in T [k].
We use Cut(T ; S) to represent the tree T with subtree removals at all nodes in S. Let Subtrees(T )
be the set of all possible sets of consistent subtree cuts in T . Given two trees T 1
and T 2
and an integer
d, the size of the largest approximately common root-containing substructures within distance
[i] and T 2
[j], denoted fi(T 1
[j]; (or simply fi(i; j; when the context is
clear), is maxfjCut(T 1
)jg subject to ffi(Cut(T 1
[j]). Finding the largest approximately common substructure (LACS),
within distance d, of T 1
[i] and T 2
[j] amounts to calculating max 1-u-i;1-v-j ffi(T 1
[v]; d)g and locating
the Cut(T 1
[v]) that achieve the
maximum size. The size of LACS, within distance d, of T 1
and T 2
is
d)g.
We shall focus on computing the maximum size. By memorizing the size information during the computation
and by a simple backtracking technique, one can find both the maximum size and one of the
corresponding substructure pairs yielding the size in the same time and space complexity.
3 Our Algorithm
3.1 Notation
We use desc(i) to represent the set of postorder numbers of the descendants of the node t[i] and l(i)
denotes the postorder number of the leftmost leaf of the subtree T [i]. When T [i] is a leaf,
is an ordered forest of tree T induced by the nodes numbered i to j inclusive (see Fig. 3). If i ? j, then
;. The definition of mappings for ordered forests is the same as for trees. Let F 1
and F 2
be two
forests. The distance from F 1 to F 2 , denoted \Delta(F 1 the cost of a minimum cost mapping from
to F 2
[25].
set S of nodes of F is said to be a set of consistent subtree cuts in F if (i) t[p] 2 S
implies that i - p - j, and (ii) t[p]; t[q] 2 S implies that neither is an ancestor of the other in F . We
use Cut(F; S) to represent the sub-forest F with subtree removals at all nodes in S. Let Subtrees(F )
be the set of all possible sets of consistent subtree cuts in F . Define the size of the largest approximately
common root-containing substructures, within distance k, of F 1
and F 2
, denoted \Psi(F 1
; k), to
be subject to \Delta(C ut(F 1
[l(i)::s] and F 2
[l(j)::t], we also represent
there is no confusion.
[10]
[3]
[5] [6]
[8]
[6]
[3]
Fig. 3. An induced ordered forest.
3.2 Basic Properties
Lemma 3.1. Suppose s 2 desc(i) and t 2 desc(j). Then
Proof. Immediate from definitions.
Lemma 3.2. Suppose s 2 desc(i) and t 2 desc(j). Then for all k, 1 - k - d,
ae \Psi(T 1
ae \Psi(;; T 2
Proof. (i) follows from the definition. For (ii), suppose
[l(i)::s]) is a smallest set of
consistent subtree cuts that maximizes jCut(T 1
)j where \Delta(Cut(T 1
of the following two cases must hold: (1) t 1
. If (1) is true, then \Psi(T 1
[l(i)::s], ;,
1. (iii) is proved
similarly as for (ii).
Lemma 3.3. Suppose s 2 desc(i) and t 2 desc(j). If (l(s) 6= l(i) or l(t) 6= l(j)), then
Proof. Suppose
[l(i)::s]) and S 2 2 Subtrees(T 2
[l(j)::t]) are two smallest sets of consistent
subtree cuts that maximize jCut(T 1
)j where \Delta(C ut(T 1
at least one of the following cases must hold:
Case 1. t 1
(i.e., the subtree T 1
[s] is removed). So, \Psi(l(i)::s; l(j)::t;
Case 2. t 2
(i.e., the subtree T 2
[t] is removed). So, \Psi(l(i)::s; l(j)::t;
Case 3. t 1
and t 2
[t]
(i.e., neither T 1
[t] is removed) (Fig. 4). Let M be the
mapping (with cost 0) from Cut(T 1
) to Cut(T 2
In M , T 1
[s] must be mapped to
[t] because otherwise we cannot have distance zero between Cut(T 1
Therefore \Psi(l(i)::s; l(j)::t;
Since these three cases exhaust all possible mappings yielding \Psi(l(i)::s; l(j)::t; 0), we take the maximum
of the corresponding sizes, which gives the formula asserted by the lemma.
Fig. 4. Illustration of the case in which one of T1[l(i)::s] and T2[l(j)::t] is a forest and neither T1[s]
nor T2[t] is removed.
Lemma 3.4. Suppose s ∈ desc(i) and t ∈ desc(j). Suppose both T1[l(i)::s] and T2[l(j)::t] are trees (i.e.,
l(s) = l(i) and l(t) = l(j)). Then \Psi(l(i)::s; l(j)::t; 0) = \Psi(l(i)::s-1; l(j)::t-1; 0) + 2 if t1[s] = t2[t],
and \Psi(l(i)::s; l(j)::t; 0) = 0 otherwise.
Proof. Since T1[l(i)::s] and T2[l(j)::t] are trees, T1[l(i)::s] = T1[s] and T2[l(j)::t] = T2[t]. First, consider the
case where t1[s] = t2[t]. Suppose S1 ∈ Subtrees(T1[s]) and S2 ∈ Subtrees(T2[t]) are two smallest sets of
consistent subtree cuts that maximize |Cut(T1[s]; S1)| + |Cut(T2[t]; S2)| where ffi(Cut(T1[s]; S1); Cut(T2[t]; S2)) = 0.
Neither T1[s] nor T2[t] is removed entirely, since keeping just the two equal roots already gives a larger size. Let
M be the mapping (with cost 0) from Cut(T1[s]; S1) to Cut(T2[t]; S2) (Fig. 5). Clearly,
in M, t1[s] must be mapped to t2[t]. Furthermore, the largest common root-containing substructures of
the forests T1[l(i)::s-1] and T2[l(j)::t-1], together with the two roots, must be the largest common root-containing substructures
of T1[s] and T2[t]. This means that \Psi(l(i)::s; l(j)::t; 0) = \Psi(l(i)::s-1; l(j)::t-1; 0) + 2, where the 2 is
obtained by including the two nodes t1[s] and t2[t].
Next consider the case where t1[s] != t2[t] (i.e., the roots of the two trees T1[s] and T2[t] differ). In
order to get distance zero between the two trees after applying cut operations to them, we have to remove
both trees entirely. Thus, \Psi(l(i)::s; l(j)::t; 0) = 0.
Fig. 5. Illustration of the case in which both T1[l(i)::s] and T2[l(j)::t] are trees and t1[s] = t2[t].
Lemma 3.5. Suppose s ∈ desc(i) and t ∈ desc(j). If (l(s) != l(i) or l(t) != l(j)), then for all k, 1 <= k <= d,
\Psi(l(i)::s; l(j)::t; k) = max{\Psi(l(i)::l(s)-1; l(j)::t; k), \Psi(l(i)::s; l(j)::l(t)-1; k),
\Psi(l(i)::s-1; l(j)::t; k-1) + 1, \Psi(l(i)::s; l(j)::t-1; k-1) + 1,
max over 0 <= h <= k of {\Psi(l(i)::l(s)-1; l(j)::l(t)-1; k-h) + fi(s; t; h)}}.
Proof. Suppose S1 ∈ Subtrees(T1[l(i)::s]) and S2 ∈ Subtrees(T2[l(j)::t]) are two smallest sets of consistent
subtree cuts that maximize |Cut(T1[l(i)::s]; S1)| + |Cut(T2[l(j)::t]; S2)| where
\Delta(Cut(T1[l(i)::s]; S1); Cut(T2[l(j)::t]; S2)) <= k. Then at least one of the following cases must hold:
Case 1. t1[s] ∈ S1. So, \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::l(s)-1; l(j)::t; k).
Case 2. t2[t] ∈ S2. So, \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::s; l(j)::l(t)-1; k).
Case 3. t1[s] is not in S1 and t2[t] is not in S2. Let M be a minimum cost mapping from Cut(T1[l(i)::s]; S1) to
Cut(T2[l(j)::t]; S2). There are three subcases to examine:
(a) t1[s] is not touched by a line in M. Then, \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::s-1; l(j)::t; k-1) + 1.
(b) t2[t] is not touched by a line in M. Then, \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::s; l(j)::t-1; k-1) + 1.
(c) t1[s] and t2[t] are both touched by lines in M. Then (s; t) ∈ M. So, there exists an h
such that \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::l(s)-1; l(j)::l(t)-1; k-h) + fi(s; t; h). The value of h ranges
from 0 to k. Therefore we take the maximum of the corresponding sizes, i.e., \Psi(l(i)::s; l(j)::t; k) =
max over 0 <= h <= k of {\Psi(l(i)::l(s)-1; l(j)::l(t)-1; k-h) + fi(s; t; h)}.
Lemma 3.6. Suppose s ∈ desc(i) and t ∈ desc(j). Suppose both T1[l(i)::s] and T2[l(j)::t] are trees
(i.e., l(s) = l(i) and l(t) = l(j)). Then for all k, 1 <= k <= d,
\Psi(l(i)::s; l(j)::t; k) = max{\Psi(l(i)::s-1; l(j)::t; k-1) + 1, \Psi(l(i)::s; l(j)::t-1; k-1) + 1,
\Psi(l(i)::s-1; l(j)::t-1; k-q) + 2},
where
q = 0 if t1[s] = t2[t], and q = 1 otherwise.
Proof. Since T1[l(i)::s] and T2[l(j)::t] are trees, T1[l(i)::s] = T1[s] and T2[l(j)::t] = T2[t]. We first show that
removing either T1[s] or T2[t] would not yield the maximum size. There are three cases to be considered:
Case 1. Both T1[s] and T2[t] are removed. Then, \Psi(l(i)::s; l(j)::t; k) = 0. However, cutting
at just the children of t1[s] and t2[t] would cause \Psi(l(i)::s; l(j)::t; k) >= 2. Therefore removing both T1[s]
and T2[t] cannot yield the maximum size.
Case 2. Only T1[s] is removed. Then, \Psi(l(i)::s; l(j)::t; k) = \Psi(∅; T2[t]; k). Assume without loss of
generality that |T2[t]| > k. The above equation implies that we have to remove some subtrees from T2[t]
so that there are no more than k nodes left in T2[t]. Thus, \Psi(l(i)::s; l(j)::t; k) <= k. On the
other hand, if we just cut at the children of t1[s] and leave t1[s] in the tree, we would map t1[s] to t2[t].
This would lead to \Psi(l(i)::s; l(j)::t; k) >= k + 1. Thus, removing T1[s] alone
cannot yield the maximum size.
Case 3. Only T2[t] is removed. The proof is similar to that in Case 2.
The above arguments lead to the conclusion that in order to obtain the maximum size, neither T1[s] nor
T2[t] can be removed. Now suppose S1 ∈ Subtrees(T1[s]) and S2 ∈ Subtrees(T2[t]) are two smallest
sets of consistent subtree cuts that maximize |Cut(T1[s]; S1)| + |Cut(T2[t]; S2)| where
ffi(Cut(T1[s]; S1); Cut(T2[t]; S2)) <= k. Let M
be a minimum cost mapping from Cut(T1[s]; S1) to Cut(T2[t]; S2). Then at
least one of the following cases must hold:
Case 1. t1[s] is not touched by a line in M. So, \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::s-1; l(j)::t; k-1) + 1.
Case 2. t2[t] is not touched by a line in M. So, \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::s; l(j)::t-1; k-1) + 1.
Case 3. Both t1[s] and t2[t] are touched by lines in M. By the ancestor order preservation and sibling
order preservation conditions on mappings (cf. Section 2.1), (s; t) must be in M. Thus, if t1[s] = t2[t], we
have \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::s-1; l(j)::t-1; k) + 2; if t1[s] != t2[t], since mapping t1[s] to t2[t] costs 1,
we have \Psi(l(i)::s; l(j)::t; k) = \Psi(l(i)::s-1; l(j)::t-1; k-1) + 2.
3.3 The Algorithm
From Lemma 3.4 and Lemma 3.6, we observe that when s is on the path from l(i) to i and t is on
the path from l(j) to j, we need not compute fi(s; t; k), 0 - k - d, separately, since they can be
obtained during the computation of fi(i; j; k). Thus, we will only consider nodes that are either the
roots of the trees or have a left sibling. Let keynodes(T) contain all such nodes of a tree T, i.e.,
keynodes(T) = {k | there is no k' > k such that l(k') = l(k)}. For each
i in keynodes(T1) and j in keynodes(T2), Procedure Find-Largest-1 in
Fig. 6 computes fi(s; t; 0) for all s on the path from l(i) to i and all t on the path from l(j) to j, and
Procedure Find-Largest-2 in Fig. 6 computes fi(s; t; k) for the same s and t and all k, 1 <= k <= d. The main algorithm is summarized
in Fig. 6.
Now, to calculate the size of the largest approximately common substructures (LACSs), within distance
d, of T1[i] and T2[j], we build, in a bottom-up fashion, another array fl(i; j; d), 1 <= i <= |T1|, 1 <= j <= |T2|,
using fi(i; j; d) as follows. Let i_1, ..., i_p be the postorder numbers of the
children of t1[i] (p = 0 if t1[i] is a leaf), and let j_1, ..., j_q be the postorder
numbers of the children of t2[j] (q = 0 if t2[j] is a leaf). Then
fl(i; j; d) = max{fi(i; j; d), max over 1 <= u <= p of {fl(i_u; j; d)}, max over 1 <= v <= q of {fl(i; j_v; d)}}. The
size of LACSs, within distance d, of T1[i] and T2[j] is fl(i; j; d). The size of LACSs, within distance d, of
T1 and T2 is fl(|T1|; |T2|; d).
Consider the complexity of the algorithm. We use an array to hold \Psi, fi and fl, respectively. These
arrays require O(d x |T1| x |T2|) space. Regarding the time complexity, given fi(i; j; d) for all i and j,
calculating fl(i; j; d) requires O(|T1| x |T2|) time. For a fixed i and j, Procedures Find-Largest-1 and
Find-Largest-2 take O(d^2 x |T1[i]| x |T2[j]|) time, so the total time is bounded by
O( sum over i in keynodes(T1), j in keynodes(T2) of d^2 x |T1[i]| x |T2[j]| ).
From [25, Theorem 2], the last term above is bounded by O(d^2 x |T1| x |T2| x min(H1, L1) x min(H2, L2)),
where H_i is the height of T_i and L_i is the number of leaves in T_i.
. When d is a constant, this
is the same as the complexity of the best current algorithm for tree matching based on the edit distance
[11, 25], even though the problem at hand appears to be harder than tree matching.
Note that to calculate max over 1 <= i <= |T1|, 1 <= j <= |T2| of {fi(i; j; 0)}, one could use a faster algorithm that runs in
time O(|T1| x |T2|). However, the reason for considering the keynodes and the formulas as specified in
Lemmas 3.3 and 3.4 is to prepare the optimal sizes from forests to forests and store these size values in the
\Psi array to be used in calculating fi(s; t; k), k >= 1. Even if one could incorporate the faster algorithm into
the Find-Largest algorithm, the overall time complexity would not be changed, because the calculation of
fi(s; t; k), k >= 1, dominates the cost.
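The control structure of the method can be paraphrased in a few lines of Python. This is our sketch, not the authors' implementation: beta(i, j, k) is assumed to be the table filled in by Procedures Find-Largest-1 and Find-Largest-2, and the sketch only shows the keynode selection and the bottom-up combination into fl (here called gamma).

def keynodes(parent, children, n):
    # postorder numbers of nodes that are roots or have a left sibling
    keys = []
    for i in range(1, n + 1):
        p = parent.get(i)
        if p is None or children[p].index(i) > 0:
            keys.append(i)
    return keys

def lacs_size(n1, n2, children1, children2, beta, d):
    # gamma[(i, j)] = largest size achievable, within distance d, over all subtree
    # pairs T1[u], T2[v] with u in the subtree of i and v in the subtree of j
    gamma = {}
    for i in range(1, n1 + 1):           # postorder numbers, hence bottom-up
        for j in range(1, n2 + 1):
            best = beta(i, j, d)
            for ci in children1[i]:
                best = max(best, gamma[(ci, j)])
            for cj in children2[j]:
                best = max(best, gamma[(i, cj)])
            gamma[(i, j)] = best
    return gamma[(n1, n2)]

if __name__ == "__main__":
    # two 3-node paths and a toy beta, purely to exercise the control flow
    children = {1: [], 2: [1], 3: [2]}
    parent = {1: 2, 2: 3}
    sizes = {1: 1, 2: 2, 3: 3}
    beta = lambda i, j, k: 2 * min(sizes[i], sizes[j])
    print(keynodes(parent, children, 3))                    # [3]
    print(lacs_size(3, 3, children, children, beta, d=1))   # 6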
4 Implementation and Discussion
We have applied our algorithm to find motifs in multiple RNA secondary structures. In this experiment, we
examined three phylogenetically related families of mRNA sequences chosen from GenBank [1] pertaining
to the poliovirus, human rhinovirus and coxsackievirus. Each family contained two sequences, as shown
in Table 1.
Algorithm Find-Largest
Input: Trees T1 and T2 and an integer d.
Output: fi(i; j; k) for all i in keynodes(T1), all j in keynodes(T2) and 0 <= k <= d.
for i := 1 to |keynodes(T1)| do
  for j := 1 to |keynodes(T2)| do
    begin
      run Procedure Find-Largest-1 on input (i; j; 0);
      run Procedure Find-Largest-2 on input (i; j; d);
    end

Procedure Find-Largest-1
Input: (i; j; 0), where i is in keynodes(T1) and j is in keynodes(T2).
Output: fi(s; t; 0) for all s on the path from l(i) to i and all t on the path from l(j) to j.
\Psi(∅; ∅; 0) := 0;
for s := l(i) to i do \Psi(T1[l(i)::s]; ∅; 0) := 0;
for t := l(j) to j do \Psi(∅; T2[l(j)::t]; 0) := 0;
for s := l(i) to i do
  for t := l(j) to j do
    if (l(s) != l(i) or l(t) != l(j)) then
      compute \Psi(l(i)::s; l(j)::t; 0) as in Lemma 3.3;
    else begin /* l(s) = l(i) and l(t) = l(j) */
      compute \Psi(l(i)::s; l(j)::t; 0) as in Lemma 3.4;
      fi(s; t; 0) := \Psi(l(i)::s; l(j)::t; 0);
    end

Procedure Find-Largest-2
Input: (i; j; d), where i is in keynodes(T1) and j is in keynodes(T2).
Output: fi(s; t; k) for all s on the path from l(i) to i, all t on the path from l(j) to j and 1 <= k <= d.
for k := 1 to d do \Psi(∅; ∅; k) := 0;
for k := 1 to d do
  for s := l(i) to i do
    compute \Psi(T1[l(i)::s]; ∅; k) as in Lemma 3.2 (ii);
for k := 1 to d do
  for t := l(j) to j do
    compute \Psi(∅; T2[l(j)::t]; k) as in Lemma 3.2 (iii);
for k := 1 to d do
  for s := l(i) to i do
    for t := l(j) to j do
      if (l(s) != l(i) or l(t) != l(j)) then
        compute \Psi(l(i)::s; l(j)::t; k) as in Lemma 3.5;
      else begin /* l(s) = l(i) and l(t) = l(j) */
        compute \Psi(l(i)::s; l(j)::t; k) as in Lemma 3.6;
        fi(s; t; k) := \Psi(l(i)::s; l(j)::t; k);
      end

Fig. 6. Algorithm for computing fi(i; j; k).
Family             Sequence              # of trees   File #
poliovirus         polio3 sabin strain   3,026        file 1
                   pol3mut               3,000        file 2
human rhinovirus   rhino 2               3,000        file 3
coxsackievirus     cox5                  3,000        file 5

Table 1. Data used in the experiment.
Under physiological conditions, i.e., at or above the room temperature, these RNA molecules do not
take on only a single structure. They may change their conformation between structures with similar free
energies or be trapped in local minima. Thus, one has to consider not only the optimal structure but all
structures within a certain range of free energies. On the other hand, a loose rule of thumb is that the
"real" structure of an RNA molecule appears in the top 5% - 10% of suboptimal structures of the sequence
based on the ranking of their energies with the minimum energy one (i.e. the optimal one) being at the
top. Therefore, we folded the 5' non-coding region of the selected mRNA sequences and collected (roughly)
the top 3,000 suboptimal structures for each sequence. We then transformed these suboptimal structures
into trees using the algorithms described in [13, 14]. Fig. 7 illustrates an RNA secondary structure and
its tree representation.
The structure is decomposed into five terms: stem, hairpin, bulge, internal loop and multi-branch loop
[14]. In the tree, H represents hairpin nodes, I represents internal loops, B represents bulge loops, M
represents multi-branch loops, R represents helical stem regions (shown as connecting arcs) and N is a
special node used to make sure the tree is connected. The tree is considered to be an ordered one where
the ordering is imposed based upon the 5' to 3' nature of the molecule. The resulting trees for each mRNA
sequence selected from GenBank were stored in a separate file, where the trees had between 70 and 180
nodes (cf. Table 1). Each tree is represented by a fully parenthesized notation where the root of every
subtree precedes all the nodes contained in the subtree. Thus, for example, the tree depicted in Fig. 7(ii)
is represented as (N(R(I(R(M(R(B(R(M(R(H))(R(H))))))(R(H))))))).
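The fully parenthesized encoding can be turned back into a tree with a small recursive-descent parser. The routine below is ours and assumes single-character node labels (N, R, I, B, M, H), as in the example above.

def parse(s):
    # return (label, [children]) for a string such as "(N(R(H))(R(H)))"
    pos = [0]
    def node():
        assert s[pos[0]] == "("
        pos[0] += 1
        label = s[pos[0]]
        pos[0] += 1
        children = []
        while s[pos[0]] == "(":
            children.append(node())
        assert s[pos[0]] == ")"
        pos[0] += 1
        return (label, children)
    return node()

def count_nodes(t):
    return 1 + sum(count_nodes(c) for c in t[1])

if __name__ == "__main__":
    t = parse("(N(R(I(R(M(R(B(R(M(R(H))(R(H))))))(R(H)))))))")
    print(count_nodes(t))    # 15 nodes in the tree of Fig. 7(ii)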
For each pair of trees T1 and T2 in a file, we ran the algorithm Find-Largest on T1 and T2, finding the size of
the largest approximately common substructures, within distance 1, for each subtree pair T1[i] and T2[j],
and locating one of the corresponding substructure pairs yielding the size.
These substructures constituted candidate motifs. Then we calculated the occurrence number 2 of each
candidate motif M by adding variable length don't cares (VLDCs) to M as the new root and leaves to
form a VLDC pattern V and then comparing V with each tree T in the file using the pattern matching
technique developed in [26]. (A VLDC, conventionally denoted by "|", can be matched, at no cost, with
a path or portion of a path in T . The technique calculates the minimum distance between V and T after
implicitly computing an optimal substitution for the VLDCs in V , allowing zero or more cuttings at nodes
from T (see Fig. 8).) This way we can locate the motifs approximately occurring in all (or the majority
2 The occurrence number of a motif M with respect to distance k refers to the number of trees of the file in which M
approximately occurs (i.e. these trees approximately contain M) within distance k.
of) the trees in the file. 3

Fig. 7. Illustration of a typical RNA secondary structure and its tree representation. (i)
Normal polygonal representation of the structure. (ii) Tree representation of the structure.
Table 2 summarizes the results where the motifs occur within distance 0 in at least 350 trees in the
corresponding file. The table shows the number of motifs discovered for each sequence, the number of
distinct motifs found in common between both sequences of each family, and the minimum and maximum
sizes of these common motifs. Table 3 shows some big motifs found in common in all the three families
and the number of each sequence's secondary structures that contain the motifs. These motifs serve as a
starting point to conduct further study of common motif analysis [3, 22].
3 One can speed up this method by encoding the candidate motifs into a suffix tree and then using the statistical sampling
and optimization techniques described in [23] to find the motifs.
Fig. 8. Matching a VLDC pattern V and a tree T (both the pattern and tree are hypothetical
ones solely used for illustration purposes). The root in V would be matched
with nodes and the two leaves in V would be matched with nodes
in T , respectively. Nodes would be cut. The distance of V and T would be
1 (representing the cost of changing c in V to d in T ).
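The occurrence counting itself is a simple loop once a matcher is available. In the sketch below (ours), vldc_distance(pattern, tree) stands for the VLDC matching technique of [26]; it is assumed, not implemented here, and the threshold of 350 trees mirrors the selection used for Table 2.

def occurrence_number(vldc_pattern, trees, k, vldc_distance):
    # number of trees in which the motif approximately occurs within distance k
    return sum(1 for t in trees if vldc_distance(vldc_pattern, t) <= k)

def frequent_motifs(candidates, trees, k, threshold, vldc_distance):
    # keep candidate motifs whose occurrence number reaches the threshold
    result = []
    for m in candidates:
        occ = occurrence_number(m, trees, k, vldc_distance)
        if occ >= threshold:
            result.append((m, occ))
    return result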
Family           Sequence              # of motifs found   # of common motifs   min size   max size
poliovirus       polio3 sabin strain   836                 347                  3          101
                 pol3mut               793
rhinovirus       rhino 2               287
coxsackievirus   cox5                  306                 136                  3          20
                 cvb305pr              391

Table 2. Statistics concerning motifs discovered from the secondary structures of the mRNA sequences used in
the experiment.
Motifs found polio3 pol3mut rhino 2 rhino 14 cox 5 cvb305pr
(R(B(R(B(R(B(R))))))) 2,272 1,822 3,000 2,252 2,997 2,979
Table 3. Motifs found in common in the secondary structures of the poliovirus, human rhinovirus and
coxsackievirus sequences. The motifs are represented in a fully parenthesized notation where the root of every subtree
precedes all the nodes contained in the subtree. For each motif, the table also shows the number of each sequence's
suboptimal structures that contain the motif.
The proposed algorithm and the discovered motifs have also been applied to RNA classification successfully
[23]. Our experimental results showed that one can get more intersections of motifs from sequences
of the same family. This indicates that closeness in motif corresponds to closeness in family. Another
application of our algorithm is to apply it to a tree T and itself and calculate fi(i; j;
This allows one to find repeatedly occurring substructures (or repeats for short) in T . Finding repeats in
secondary structures across different RNA sequences may help understand the structures of RNA. Readers
interested in obtaining these programs may send a written request to any one of the authors.
Our work is based on the edit distance originated in [17]. This metric is more permissive than other
worthy metrics (e.g. [18, 19, 20]) and therefore helps to locate subtle motifs existing in RNA secondary
structures. The algorithm presented here assumes a unit cost for all edit operations. In practice, a more
refined non-unit cost function can reflect more subtle differences in the RNA secondary structures [14]. It
would then be interesting to score the measures in detecting common substructures or repeats in trees.
Another interesting problem is to find a largest consensus motif T 3 in two input trees T 1 and T 2 where T 3
is a largest tree such that each of T 1
and T 2
has a substructure that is within a given distance to T 3
. A
comparison of the different types of common substructures (see also [6, 7, 8]), probably based on different
metrics (e.g. [18, 19, 20]), as well as their applications remains to be explored.
Acknowledgments
We wish to thank the anonymous reviewers for their constructive suggestions and pointers to some relevant
papers. We also thank Wojcieok Kasprzak (National Cancer Institute), Nat Goodman (Whitehead
Institute of MIT) and Chia-Yo Chang for their useful comments and implementation efforts. This work
was supported by the National Science Foundation under Grants IRI-9224601, IRI-9224602, IRI-9531548,
IRI-9531554, and by the Natural Sciences and Engineering Research Council of Canada under Grant
OGP0046373.
--R
Nucleic Acids Research
Waveform correlation by tree matching.
Secondary structure computer prediction of the polio virus 5
Alignment of trees - An alternative to tree edit
RNA secondary struc- tures: Comparison and determination of frequently recurring substructures by consensus
A largest common similar substructure problem for trees embedded in a plane.
Largest common similar substructures of rooted and unordered trees.
The largest common similar substructure problem.
A tree system approach for fingerprint pattern recognition.
A unified view on tree metrics.
Distance transform for images represented by quadtrees.
An algorithm for comparing multiple RNA secondary structures.
Comparing multiple RNA secondary structures using tree comparisons.
Structural descriptions and inexact matching.
Exact and approximate algorithms for unordered tree matching.
The tree-to-tree correction problem
The metric between rooted and ordered trees based on strongly structure preserving mapping and its computing method.
A metric between unrooted and unordered trees and its bottom-up computing method
"A metric on trees and its computing method."
The tree-to-tree editing problem
The cardiovirulent phenotype of coxsackievirus B3 is determined at a single site in the genomic 5
Automated discovery of active motifs in multiple RNA secondary structures.
An algorithm for graph optimal monomorphism.
Simple fast algorithms for the editing distance between trees and related problems.
Approximate tree matching in the presence of variable length don't cares.
On the editing distance between undirected acyclic graphs.
--TR
--CTR
Roger Keays , Andry Rakotonirainy, Context-oriented programming, Proceedings of the 3rd ACM international workshop on Data engineering for wireless and mobile access, September 19-19, 2003, San Diego, CA, USA
M. Vilares , F. J. Ribadas , J. Graa, Approximately common patterns in shared-forests, Proceedings of the tenth international conference on Information and knowledge management, October 05-10, 2001, Atlanta, Georgia, USA
Rolf Backofen , Sven Siebert, Fast detection of common sequence structure patterns in RNAs, Journal of Discrete Algorithms, v.5 n.2, p.212-228, June, 2007
S. Bhowmick , Wee Keong Ng , Sanjay Madria, Constraint-driven join processing in a web warehouse, Data & Knowledge Engineering, v.45 n.1, p.33-78, April
D. C. Reis , P. B. Golgher , A. S. Silva , A. F. Laender, Automatic web news extraction using tree edit distance, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA
Ada Ouangraoua , Pascal Ferraro , Laurent Tichit , Serge Dulucq, Local similarity between quotiented ordered trees, Journal of Discrete Algorithms, v.5 n.1, p.23-35, March, 2007
Thomas Kmpke, Distance Patterns in Structural Similarity, The Journal of Machine Learning Research, 7, p.2065-2086, 12/1/2006
N. Bourbakis , P. Yuan , S. Makrogiannis, Object recognition using wavelets, L-G graphs and synthesis of regions, Pattern Recognition, v.40 n.7, p.2077-2096, July, 2007
S. Bhowmick , Sanjay Kumar Madria , Wee Keong Ng, Detecting and Representing Relevant Web Deltas in WHOWEDA, IEEE Transactions on Knowledge and Data Engineering, v.15 n.2, p.423-441, February
Dmitriy Bespalov , Ali Shokoufandeh , William C. Regli , Wei Sun, Scale-space representation of 3D models and topological matching, Proceedings of the eighth ACM symposium on Solid modeling and applications, June 16-20, 2003, Seattle, Washington, USA
Jiang , Andreas Munger , Horst Bunke, On Median Graphs: Properties, Algorithms, and Applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.10, p.1144-1151, October 2001
Yanhong Zhai , Bing Liu, Web data extraction based on partial tree alignment, Proceedings of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Jun Huan , Wei Wang , Deepak Bandyopadhyay , Jack Snoeyink , Jan Prins , Alexander Tropsha, Mining protein family specific residue packing patterns from protein structure graphs, Proceedings of the eighth annual international conference on Resaerch in computational molecular biology, p.308-315, March 27-31, 2004, San Diego, California, USA | trees;pattern recognition;dynamic programming;pattern matching;computational biology |
285006 | On the Power of Finite Automata with both Nondeterministic and Probabilistic States. | We study finite automata with both nondeterministic and random states (npfa's). We restrict our attention to those npfa's that accept their languages with a small probability of error and run in polynomial expected time. Equivalently, we study Arthur--Merlin games where Arthur is limited to polynomial time and constant space.Dwork and Stockmeyer [SIAM J. Comput., 19 (1990), pp. 1011--1023] asked whether these npfa's accept only the regular languages (this was known if the automaton has only randomness or only nondeterminism). We show that the answer is yes in the case of npfa's with a 1-way input head. We also show that if L is a nonregular language, then either L or $\bar{L}$ is not accepted by any npfa with a 2-way input head.Toward this end, we define a new measure of the complexity of a language L, called its 1-tiling complexity. For each $n$, this is the number of tiles needed to cover the 1's in the "characteristic matrix" of L, namely, the binary matrix with a row and column for each string of length $\le n$, where entry [x,y]=1 if and only if the string $xy \in L$. We show that a language has constant 1-tiling complexity if and only if it is regular, from which the result on 1-way input follows. Our main result regarding the general 2-way input tape follows by contrasting two bounds: an upper bound of polylog(n) on the 1-tiling complexity of every language computed by our model and a lower bound stating that the 1-tiling complexity of a nonregular language or its complement exceeds a function in $2^{\Omega (\sqrt{\log n})}$ infinitely often.The last lower bound follows by proving that the characteristic matrix of every nonregular language has rank n for infinitely many n. This is our main technical result, and its proof extends techniques of Frobenius and Iohvidov developed for Hankel matrices [Sitzungsber. der Knigl. Preuss. Akad. der Wiss., 1894, pp. 407--431], [Hankel and Toeplitz Matrices and Forms: Algebraic Theory, Birkhauser, Boston, 1982]. | Introduction
The classical subset construction of Rabin and Scott [25] shows that finite state automata with
just nondeterministic states (nfa's) accept exactly the regular languages. Results of Rabin
[24], Dwork and Stockmeyer [7] and Kaneps and Freivalds [17] show that the same is true of
probabilistic finite state automata which run in polynomial expected time. Here and throughout
the paper, we restrict attention to automata which accept languages with error probability which
is some constant ffl less than 1=2.
However, there has been little previous work on finite state automata which have both
probabilistic and nondeterministic states. Such automata are equivalent to the Arthur-Merlin
games of Babai and Moran [3], restricted to constant space, with an unbounded number of
rounds of communication between Arthur and Merlin. In this paper, we refer to them as npfa's.
In the computation of an npfa, each transition from a probabilistic state is chosen randomly
according to the transition probabilities from that state, whereas from a nondeterministic state,
it is chosen so as to maximize the probability that an accepting state is eventually reached. We
let 1NPFA and 2NPFA-polytime denote the classes of languages accepted by npfa's which have a
1-way or a 2-way input head, respectively, and which run in polynomial expected time. Dwork
and Stockmeyer [8] asked whether 2NPFA-polytime is exactly the set of regular languages,
which we denote by Regular.
In this paper, we prove the following two results on npfa's.
Theorem 1.1
Theorem 1.2 If L is nonregular, then either L or -
L is not in 2NPFA-polytime.
Thus, we resolve the question of Dwork and Stockmeyer for npfa's with 1-way head, and
in the case of the 2-way head model, we reduce the question to that of deciding whether
2NPFA-polytime is closed under complement. Theorem 1.1 also holds even if the automaton
has universal, as well as nondeterministic and probabilistic states. Moreover, Theorem 1.2 holds
even for Arthur-Merlin games that use o(log log n) space.
In proving the two results, we introduce a new measure of the complexity of a language L
called its 1-tiling complexity. Tiling complexity arguments have been used previously to prove
lower bounds for communication complexity (see e.g. Yao [29]). With each language L ⊆ \Sigma^*,
we associate an infinite binary matrix ML , whose rows and columns are labeled by the strings
of \Sigma^*. Entry ML [x; y] is 1 if the string xy ∈ L and is 0 otherwise. Denote by ML (n) the finite
submatrix of ML , indexed by strings of length - n. Then, the 1-tiling complexity of L (and of
the matrix ML (n)) is the minimum size of a set of 1-tiles of ML (n) such that every 1-valued
entry of ML (n) is in at least one 1-tile of the set. Here, a 1-tile is simply a submatrix (whose
rows and columns are not necessarily contiguous) in which all entries have value 1.
In Section 3, we prove the following theorems relating language acceptance of npfa's to tiling
complexity. The proofs of these theorems build on previous work of Dwork and Stockmeyer [8]
and Rabin [24].
Theorem [3.1] A language L is in 1NPFA only if the 1-tiling complexity of L is O(1).
Theorem [3.3] A language L is in 2NPFA-polytime only if the 1-tiling complexity of L is
bounded by a polynomial in log n.
What distinguishes our work on tiling is that we are interested in the problem of tiling the
matrices ML (n), which have distinctive structural properties. If L is a unary language, then
ML (n) is a matrix in which all entries along each diagonal from the top right to the bottom left
are equal. Such a matrix is known as a Hankel matrix. An elegant theory on properties of such
Hankel matrices has been developed [15], from which we obtain strong bounds on the rank of
ML (n) if L is unary. In the case that L is not a unary language, the pattern of 0's and 1's in
ML (n) is not as simple as in the unary case, although the matrix still has much structure. Our
main technical contribution, presented in Section 4, is to prove new lower bounds on the rank
of ML (n) when L is not unary. Our proof uses techniques of Frobenius and Iohvidov developed
for Hankel matrices.
Theorem [4.4] If L is nonregular, then the rank of ML (n) is at least n+ 1 infinitely often.
By applying results from communication complexity relating the rank of a matrix to its tiling
complexity, we can obtain a lower bound on the 1-tiling complexity of non-regular languages.
Theorem [4.5] If L is nonregular, then the 1-tiling complexity of either L or its complement exceeds a
function in 2^{\Omega(\sqrt{\log n})} infinitely often.
However, there are nonregular languages, even over a unary alphabet, with 1-tiling complexity
O(log n) (see Section 4). Thus the above lower bound on the 1-tiling complexity of L
or -
L does not always hold for L itself. A simpler theorem holds for regular languages.
Theorem [4.1] The 1-tiling complexity of L is O(1) if and only if L is regular.
By combining these theorems on the 1-tiling complexity of regular and non-regular languages
with the theorems relating 1-tiling complexity to acceptance by npfa's, our two main results
(Theorems 1.1 and 1.2) follow as immediate corollaries.
The rest of the paper is organized as follows. In Section 2, we define our model of the npfa,
and the tiling complexity of a language. We conclude that section with a discussion of related
work on probabilistic finite automata and Arthur-Merlin games. In Section 3, we present
Theorems 3.1 and 3.3, which relate membership of a language L in the classes 1NPFA and
2NPFA-polytime to the 1-tiling complexity of L. A similar theorem is presented for the class
2NPFA, in which the underlying automata are not restricted to run in polynomial expected time.
In Section 4, we present our bounds on the tiling complexity of both regular and nonregular
languages. Theorems 1.1 and 1.2 are immediate corollaries of the main results of Sections 3
and 4. Extensions of these results to alternating automata and to Turing machines with small
space are presented in Section 5. Conclusions and open problems are discussed in Section 6.
Preliminaries
We first define our npfa model in Section 2.1. This model includes as special cases the standard
models of nondeterministic and probabilistic finite state automata. In Section 2.2 we define our
notion of the tiling complexity of a language. Finally, in Section 2.3, we discuss previous work
on this and related models.
2.1 Computational Models and Language Classes
A two-way nondeterministic probabilistic finite automaton (2npfa) consists of a set of states Q,
an input alphabet \Sigma, and a transition function ffi, with the following properties. The states Q
are partitioned into three subsets: the nondeterministic states N , the probabilistic (or random)
states R, and the halting states H . H consists of two states: the accepting state q a and the
rejecting state q r . There is a distinguished state q 0 , called the initial state. There are two
special symbols
2 \Sigma, which are used to mark the left and right ends of the input string,
respectively.
The transition function ffi has the form
ffi : Q x (\Sigma ∪ {6 c, $}) x Q x {-1, 0, 1} → [0, 1].
For each fixed q in R, the set of random states, and oe ∈ \Sigma ∪ {6 c, $}, the sum of ffi(q; oe; q'; d)
over all q 0 and d equals 1. The meaning of ffi in this case is that if the automaton is in state q
reading symbol oe, then with probability ffi(q; oe; q d) the automaton enters state q 0 and moves its
input head one symbol in direction d (left if stationary if 0). For each
fixed q in N , the set of nondeterministic states, and oe
all q 0 and d. The meaning of ffi in this case is that if the automaton is in state q reading symbol
oe, then the automaton nondeterministically chooses some q 0 and d such that ffi(q; oe; q
enters state q 0 and moves its input head one symbol in direction d. Once the automaton enters
state q a (resp. q r ), the input head moves repeatedly to the right until the right endmarker
is read, at which point the automaton halts. In other words, for q ∈ {q a , q r }, ffi(q; oe; q; 1) = 1
for all oe ∈ \Sigma ∪ {6 cg. On a given input, the
automaton is started in the initial configuration, that is, in the initial state with the head at
the left end of the input. If the automaton halts in state q a on the input, we say that it accepts
the input, and if it halts in state q r , we say that it rejects the input.
Fix some input string w. A nondeterministic
strategy (or just strategy) on w is a function S w that assigns to each state q ∈ N and each position j of
6 cw$ a pair S w (q; j) = (q'; d) such that ffi(q; w j ; q'; d) > 0, where w j denotes the j'th symbol of 6 cw$.
The meaning of S w is that
if the automaton is in state q ∈ N reading w j , then if S w (q; j) = (q'; d), the automaton enters
nondeterministic choice should be made in each configuration.
A language L ⊆ \Sigma^* is accepted with bounded error probability if for some constant ffl < 1/2,
1. for all w 2 L, there exists a strategy Sw on which the automaton accepts with probability
2. for all
2 L, on every strategy Sw , the automaton accepts with probability - ffl.
Language acceptance could be defined with respect to a more general type of strategy, in
which the nondeterministic choice made from the same configuration at different times may be
different. It is known (see [4, Theorem 2.6]) that if L is accepted by an npfa with respect to this
more general definition, then it is also accepted with respect to the definition above. Hence,
our results also hold for such generalized strategies.
A one-way nondeterministic probabilistic finite automaton (1npfa) is a 2npfa which can
never move its input head to the left; that is, ffi(q; oe; q Also, a
probabilistic finite automaton (pfa) and a nondeterministic finite automaton (nfa) are special
cases of an npfa in which there are no nondeterministic and no probabilistic states, respectively.
We denote by 1NPFA and 2NPFA the classes of languages accepted with bounded error
probability by 1npfa's and 2npfa's, respectively. If, on all inputs w and all nondeterministic
strategies, the 2npfa halts in polynomial expected time, we say that L is in the class 2NPFA-
polytime. The classes 1PFA, 2PFA and 2PFA-polytime are defined similarly, with pfa replacing
npfa. Finally, Regular denotes the class of regular languages.
Our model of the 2npfa is equivalent to an Arthur-Merlin game in which Arthur is a 2pfa, and
our classes 2NPFA and 2NPFA-polytime are identical to the classes AM(2pfa) and AM(ptime-
2pfa), respectively, of Dwork and Stockmeyer [8].
2.2 The Tiling Complexity of a Language
We adapt the notion of the tiling complexity of a function, used in communication complexity
theory, to obtain a new measure of the complexity of a language. Given a finite, two-dimensional
matrix M , a tile is a submatrix of M in which all entries have the same value. A tile is specified
by a pair (R; C) where R is a nonempty set of rows and C is a nonempty set of columns.
The entries in the tile are said to be covered by the tile. A tile is a b-tile if all entries of the
submatrix are b. A set of b-tiles is a b-tiling of M if every b-valued entry of M is covered by
at least one tile in the set. If M is a binary matrix, the union of a 0-tiling and a 1-tiling of M
is called a tiling of M . Let T (M) be the minimum size of a tiling of M . Let T 1 (M) be the
minimum size of a 1-tiling of M , and let T 0 (M) be the minimum size of a 0-tiling of M . Then,
Note that in these definitions it is permitted for tiles of the same
type to overlap.
We can now define the tiling complexity of a language. Associated with a language L over
alphabet \Sigma is an infinite binary matrix ML . The rows and columns of ML are indexed (say,
in lexicographic order), by the strings in \Sigma^*. Entry ML [x; y] = 1 if and only if xy ∈ L. Let
L n be the strings of L of length - n. Let ML (n) be the finite submatrix of ML whose rows
and columns are indexed by the strings of length <= n. The 1-tiling complexity of a language
L is defined to be the function T^1_L(n) = T^1(ML (n)). Similarly, the 0-tiling complexity of L is
T^0_L(n) = T^0(ML (n)), and the tiling complexity of L is T_L(n) = T(ML (n)).
A tiling of a matrix M is disjoint if every entry [x; y] of M is covered by exactly one tile.
The disjoint tiling complexity of a matrix M , denoted ~T(M), is the minimum size of a disjoint tiling of
M . Also, the disjoint tiling complexity of a language, ~T_L(n), is ~T(ML (n)).
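For experimenting with these notions on concrete languages, the matrix ML (n) can be built directly from a membership predicate. The sketch below is ours; it is exponential in n and meant only for small examples.

from itertools import product

def strings_up_to(alphabet, n):
    out = [""]
    for length in range(1, n + 1):
        out.extend("".join(p) for p in product(alphabet, repeat=length))
    return out

def M_L(in_L, alphabet, n):
    # M[x][y] = 1 iff xy is in L, for all strings x, y of length <= n
    idx = strings_up_to(alphabet, n)
    return idx, [[1 if in_L(x + y) else 0 for y in idx] for x in idx]

if __name__ == "__main__":
    # example: L = set of strings over {0,1} of even length
    idx, M = M_L(lambda w: len(w) % 2 == 0, "01", 2)
    print(len(idx), "rows and columns;", sum(map(sum, M)), "1-entries")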
Tilings are often used in proving lower bounds in communication complexity. Let f : X x Y → {0, 1}.
The function f is represented by a matrix M f whose rows are indexed by
elements of X and whose columns are indexed by elements of Y , such that M f [x; y] = f(x; y).
Let T f denote T(M f ). Suppose that two cooperating parties, P 1 and P 2 , get inputs x ∈ X and y ∈ Y,
respectively, and want to compute f(x; y). They can do so by exchanging information
according to some protocol (precise definitions of legal protocols can be found in [13]). If
the protocol is deterministic, then the worst case number of bits that need to be exchanged
(that is, the deterministic communication complexity) is bounded below by log ~T_f. If the
protocol is non-deterministic, then the lower bound is log T f [1]. Finally, if the object of the
non-deterministic protocol is only to verify that f(x; y) = 1 (when that is indeed the case), then
the lower bound on the number of bits exchanged is log T^1_f.
2.3 Related Work
Our work on npfa's builds on a rich literature on probabilistic finite state automata. Rabin [24]
was the first to consider probabilistic automata with bounded error probability. He showed that
1PFA = Regular. However, with a 2-way input head, pfa's can recognize nonregular languages.
This was shown by Freivalds [10], who constructed a 2pfa for the language f0 n 1
Greenberg and Weiss [12] showed that exponential expected time is required by any 2pfa accepting
this language. Dwork and Stockmeyer [7] and independently Kaneps and Freivalds [17]
showed that in fact any 2pfa which recognizes a nonregular language must run in exponential
expected time. It follows that 2PFA-polytime = Regular.
Roughly, Rabin's proof shows that any language L accepted by a 1pfa has only finitely
many equivalence classes. Here, two strings x, x' are equivalent if and only if for all y, xy ∈ L exactly when x'y ∈
L. The Myhill-Nerode theorem [14] states that a language has a finite number of
equivalence classes if and only if it is regular. This, combined with Rabin's result, implies that
1PFA = Regular. Decades later, this idea was extended to 2pfa's. A strengthened version
of the Myhill-Nerode theorem is needed for this extension. Given a language L, we say that
two strings x, x' are pairwise n-inequivalent if for some y, exactly one of xy and x'y is in L, and furthermore,
|xy| <= n and |x'y| <= n. Let NL (n) (the nonregularity of L) be the size of the largest set of pairwise n-
inequivalent strings.
many n. (It is interesting to note that to prove their bound, Kaneps and Freivalds first showed
that NL (n) equals the number of states of the minimal deterministic 1-way finite automaton
that accepts all words of length - n that are in L and rejects all words of length - n that are
not in L. Following Karp [19], we denote the latter measure by OE L (n). Karp [19] previously
proved that OE L (n) infinitely many n. Combining this with the fact that NL (n)
and OE L (n) are equal, it follows immediately that NL (n) ? n=2+1 for infinitely many n. This is
stronger (by 1) for even n than Kaneps and Freivalds' lower bound. We also note that Dwork
and Stockmeyer [7] obtained a weaker bound on NL (n) without using OE L (n).) Using tools from
Markov chain theory, Dwork and Stockmeyer [7] and Kaneps and Freivalds [17] showed that
if a language is accepted by a 2pfa in polynomial expected time, then the language has "low"
nonregularity. In fact, NL (n) is bounded by some polynomial in log n. This, combined with
the result of Kaneps and Freivalds, implies that 2PFA-polytime = Regular.
Models of computation with both nondeterministic and probabilistic states have been studied
intensively since the work of Papadimitriou [23] on games against nature. Babai and Moran
[3] defined Arthur-Merlin games to be Turing machines with both nondeterministic and probabilistic
states, which accept their languages with bounded error probability. Their work on
polynomial time bounded Arthur-Merlin games laid the framework for the remarkable progress
on interactive proof systems and their applications (see for example [2] and the references
therein). Space bounded Arthur-Merlin games were first considered by Condon and Ladner
[6]. Condon [4] showed that AM(log-space), that is, the class of languages accepted by Arthur-
Merlin games with logarithmic space, is equal to the class P. However, it is not known whether
the class AM(log-space, polytime) - the subclass of AM(log-space) where the verifier is also
restricted to run in polynomial time - is equal to P, or whether it is closed under complement.
Fortnow and Lund [9] showed that NC is contained in AM(log-space,poly-time).
Dwork and Stockmeyer [8] were the first to consider npfa's, which are Arthur-Merlin games
restricted to constant space. They described conditions under which a language is not in the
classes 2NPFA or 2NPFA-polytime. The statements of our Theorems 3.2 and 3.3 generalize and
simplify the statements of their theorems, and our proofs build on theirs. In communication
complexity theory terms, their proofs roughly show that languages accepted by npfa's have low
"fooling set complexity". This measure is defined in a manner similar to the tiling complexity
of a language, based on the following definition. Define a 1-fooling set of a binary matrix A to
be a set of entries
The size of a 1-fooling set of a binary matrix is always at most the 1-tiling complexity of the
matrix, because no two distinct entries in the 1-fooling set, [x can be in the
same tile. However, the 1-tiling complexity may be significantly larger than the 1-fooling set
complexity; in fact, for a random n x n binary matrix, the expected size of the largest 1-fooling set
is O(log n) whereas the expected number of tiles needed to tile the 1-entries is \Omega(n / log n).
3 NPFA's and Tiling
Three results are presented in this section. For each of the classes 1NPFA, 2NPFA and 2NPFA-
polytime, we describe upper bounds on the tiling complexity of the languages in these classes.
The proof for 1NPFA's is a natural generalization of Rabin's proof that
The other two proofs build on previous results of Dwork and Stockmeyer [8] on 2npfa's.
3.1 1NPFA and Tiling
Theorem 3.1 A language L is in 1NPFA only if the 1-tiling complexity of L is O(1).
Proof: Suppose L is accepted by some 1npfa M with error probability ffl ! 1=2. Let the
states of M be {1, 2, ..., c}.
Consider the matrix ML . For each 1-entry [x; y] of ML , fix a nondeterministic strategy that
causes the string xy to be accepted with probability at least 1 \Gamma ffl. With respect to this strategy,
define two vectors of dimension c. Let p xy be the state probability vector at the step when the
input head moves off the right end of x. That is, the i'th entry of the vector is the probability of
being in state i at that moment, assuming that the automaton is started at the left end of the
input
6 cxy$ in the initial state. Let r xy be the column vector whose i'th entry is the probability
of accepting the string xy, assuming that the automaton is in state i at the moment that the
head moves off the right end of x. Then the probability of accepting the string xy is the inner
product p_xy · r_xy, which is at least 1 - ffl.
Let λ = (1/2 - ffl)/c. Partition the space [0, 1]^c into cells of size λ x λ x · · · x λ (the final
entry in the cross product should actually be less than λ if 1 is not a multiple of λ). Associate
each 1-entry [x; y] with the cell containing the vector p xy ; we say that [x; y] belongs to this cell.
With each cell C, associate the rectangle RC defined as
fxj there exists y such that [x; y] belongs to Cg
\Theta
fyj there exists x such that [x; y] belongs to Cg:
This is the minimal submatrix that covers all of the entries associated with cell C.
We claim that RC is a valid 1-tile - that is, RC covers only 1-entries. To see this, suppose
[x, y] ∈ RC. If [x, y] belongs to C, then it must be a 1-entry. Otherwise, there exist x' and y'
such that [x, y'] and [x', y] belong to C; that is, xy' and x'y are in the same
cell.
We claim that xy is accepted with probability at least 1=2 on some strategy, namely the
strategy that while reading x, uses the strategy for xy 0 , and while reading y, uses the strategy
for x'y. To see this, note that
|p_xy' · r_x'y - p_x'y · r_x'y| <= sum over i = 1, ..., c of |p_xy'[i] - p_x'y[i]| · r_x'y[i] <= λc = 1/2 - ffl,
by our choice of λ. Hence, the probability that xy is accepted on the strategy described above is
p_xy' · r_x'y >= p_x'y · r_x'y - (1/2 - ffl) >= (1 - ffl) - (1/2 - ffl) = 1/2.
Because xy is accepted with probability greater than ffl on this strategy, it cannot be that xy 62 L.
Hence, for all [x; y] 2 RC , xy must be in L. Therefore RC is a 1-tile in ML .
Every 1-entry [x; y] is associated with some cell C, and is covered by the 1-tile RC that is
associated with C. Thus, every 1-entry of ML is covered by some RC .
Hence L can be 1-tiled using one tile per cell, which is a total of at most ⌈1/λ⌉^c tiles, a constant independent of n. 2
3.2 2NPFA and Tiling
We next show that if L 2 2NPFA, then T 1
L (n) is bounded by a polynomial.
Theorem 3.2 A language L is in 2NPFA only if the 1-tiling complexity of L is bounded by a
polynomial in n.
Proof: Suppose L is accepted by some 2npfa M with error probability
c be the number of states of M . As in Theorem 3.1, for each 1-entry [x; y] of ML (n), fix a
nondeterministic strategy that causes M to accept the string xy with probability at least 1 \Gamma ffl.
We construct a stationary Markov chain H xy that models the computation of M on xy
using this strategy.
This Markov chain has states. 2c of the states are labeled (q; l), where q is a
state of M and l 2 f0; 1g. The other states are labeled Initial, Accept, Reject, and Loop. The
state (q; 0) of H xy corresponds to M being in state q while reading the rightmost symbol of
6 cx.
The state (q; 1) of H xy corresponds to M being in state q while reading the leftmost symbol of
y$. The state Initial corresponds to the initial configuration of M . The states Accept, Reject,
and Loop are sink states of H xy .
A single step of the Markov chain H xy corresponds to running M on input xy (using the
fixed nondeterministic strategy) from the appropriate configuration for one or more steps, until
M enters a configuration corresponding to one of the chain states (q; l). If M halts in the
accepting (resp., rejecting) state before entering one of these configurations, H xy enters the
Accept (resp., Reject) state. If M does not halt and never again reads the rightmost symbol of
6 cx or the leftmost symbol of y$, then H xy enters the Loop state. The transition probabilities
are defined accordingly.
Consider the transition matrix of H xy . Collect the rows corresponding to the chain states
Initial and (q; 0) (for all q) and call this submatrix P xy . Collect the rows corresponding to the
chain states (q; 1) and call this submatrix R xy . Then the transition matrix looks like this:
R xy
Initial
Accept
Reject
Loop
where I 3 denotes the identity matrix of size 3. (We shall engage in a slight abuse of notation
by using H xy to refer both to the transition matrix and to the Markov chain itself.) Note that
the entries of P xy depend only on x and the nondeterministic strategy used; these transition
probabilities do not depend on y. This assertion appears to be contradicted by the fact that
our choice of nondeterministic strategy may depend on y; however, the idea here is that if we
replace y with y 0 while maintaining the same nondeterministic strategy we used for xy, then
will be identical to P xy , because the transitions involved simulate computation of M on
the left part of its input only. Similarly, R xy depends only on y and the strategy, and not on x.
We now show that if |x| <= n and if p is a nonzero element of P_xy, then p >= 2^{-cn-1}. Consider
a second Markov chain K(6 cx) with states of the form (q; l), where q is a state of M and
1 <= l <= |6 cx| + 1. The chain state (q; l) with l <= |6 cx| corresponds to M being in state q
scanning the l'th symbol of
6 cx. Transition probabilities from these states are obtained from the
transition probabilities of M in the obvious way. Chain states of the form (q; cxj + 1) are sink
states of K(6cx) and correspond to the head of M falling off the right end of
6 cx with M in state
q. Now consider a transition probability p in P xy . Suppose that, in the Markov chain H xy , p
is the transition probability from (q; 0) to (q 0 ; 1). Then p 2 f0; 1=2; 1g, since if H xy makes this
transition, it must be simulating a single computation step of M . Suppose p is the transition
probability from (q; 0) to (q 0 ; 0). If p ? 0, then there must be some path of nonzero probability
in K(6 cx) from state (q; cxj) to (q 0 ; cxj) that visits no state (q 00 ; cxj), and since K(6 cx) has at
most cn states that can be on this path, there must be such a path of length at most cn + 1.
Since 1/2 is the smallest nonzero transition probability of M , it follows that p - 2 \Gammacn\Gamma1 . The
cases where p is a transition probability from the Initial state are similar.
Similarly, if jyj - n and if r is a nonzero element of R xy , then r - 2 \Gammacn\Gamma1 .
Next we present a lemma which bounds the effect of small changes in the transition probabilities
of a Markov chain. This lemma is a slight restatement of a lemma of Greenberg and
Weiss [12]. This version is due to Dwork and Stockmeyer [8].
If k is a sink state of a Markov chain R, let a(k; R) denote the probability that R is
(eventually) trapped in state k when started in state 1. Let fi - 1. Say that two numbers r
and r' are fi-close if either (i) r = r' = 0, or (ii) r > 0, r' > 0 and fi^{-1} <= r/r' <= fi. Two Markov
chains R = (r_{ij})_{i,j=1}^{s} and R' = (r'_{ij})_{i,j=1}^{s}
are fi-close if r_{ij} and r'_{ij}
are fi-close for all pairs i, j.
Lemma 3.1 Let R and R 0 be two s-state Markov chains which are fi-close, and let k be a sink
state of both R and R 0 . Then a(k; R) and a(k; R 0 ) are fi 2s -close.
The proof of this lemma is based on the Markov chain tree theorem of Leighton and Rivest
[20], and can be found in [8].
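Lemma 3.1 is easy to probe numerically on small chains. The following sketch is ours and assumes NumPy; it builds two fi-close four-state chains with absorbing Accept/Reject states, computes their absorption probabilities by solving the standard linear system, and checks that the two values are within a factor fi^{2s} of each other.

import numpy as np

def absorption_prob(P, sink, start=0):
    # probability that the chain with transition matrix P, started in `start`,
    # is eventually trapped in the absorbing state `sink`
    s = P.shape[0]
    transient = [i for i in range(s) if P[i, i] < 1.0]
    Q = P[np.ix_(transient, transient)]
    R = P[np.ix_(transient, [sink])]
    B = np.linalg.solve(np.eye(len(transient)) - Q, R)   # (I - Q)^(-1) R
    return B[transient.index(start), 0]

if __name__ == "__main__":
    fi, accept = 1.2, 2
    P = np.array([[0.0, 0.50, 0.30, 0.2],
                  [0.4, 0.00, 0.40, 0.2],
                  [0.0, 0.00, 1.00, 0.0],    # absorbing Accept state
                  [0.0, 0.00, 0.00, 1.0]])   # absorbing Reject state
    Pp = P.copy()
    Pp[0, 1], Pp[0, 2] = 0.55, 0.25          # every entry within a factor fi of P
    a, ap = absorption_prob(P, accept), absorption_prob(Pp, accept)
    s = P.shape[0]
    print(a, ap, max(a / ap, ap / a) <= fi ** (2 * s))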
Our approach is to partition the 1-entries of ML (n) into equivalence classes, as in the proof
of Theorem 3.1, but this time we will make entries [x; y] and [x equivalent only if the
corresponding Markov chains H xy and H x 0 y 0 are fi-close, where fi will be chosen small enough
that we can use Lemma 3.1 to show that xy 0 and x 0 y are accepted with high probability by
combining the strategies for xy and x 0 y 0 .
If [x; y] is a 1-entry such that jxj - n and jyj - n, then for any nonzero p of P xy (or r of
R xy
By partitioning each coordinate interval into subintervals of length -, we divide
the space
into at most d(cn
cells, each of size at most - \Theta - \Theta \Delta \Delta \Delta -.
Partition the 1-entries in ML (n) into equivalence classes by making xy and x 0 y 0 equivalent
have the property that for each state transition, if p and p 0 are the respective
transition probabilities, either log p and log p 0 are in the same (size -) subinterval
of
Note that the number of equivalence classes is at most (d(cn
We claim that if - is chosen small enough, these equivalence classes induce a 1-tiling of
ML (n) of size at most the number of equivalence classes. As in Theorem 3.1, we associate with
each equivalence class C the rectangle RC defined by
fxjthere exists y such that [x; y] 2 Cg \Theta fyj there exists x such that [x; y] 2 Cg.
We claim that for each [x; y] in RC , xy 2 L. That is, all entries in the rectangle are 1, so
the rectangle forms a 1-tile. Let [x; y] be in RC . There must be some y 0 such that [x; y 0
and some x 0 such that [x 0 ; y] 2 C. Consider the associated Markov chains H xy 0 and H x 0 y , and
in particular, consider the transition submatrices P xy 0 and R x 0 y . The first is associated with a
particular nondeterministic strategy on x, namely one which assumes the input is xy 0 and tries
to cause xy 0 to be accepted with high probability. The second is associated with a particular
nondeterministic strategy on y, namely one which assumes the input is x 0 y and tries to cause x 0 y
to be accepted with high probability. The two matrices P xy 0 and R x 0 y taken together correspond
to a hybrid strategy on xy: while reading x, use the strategy for xy 0 , and while reading y, use
the strategy for x 0 y. We will argue that this hybrid strategy causes xy to be accepted with
probability - 1=2.
We construct a hybrid Markov chain H xy using P xy 0 and R x 0 y . This chain models the
computation of M on xy using the hybrid strategy.
Since the 1-entries [x; y 0 ] and [x 0 ; y] are in the same equivalence class C, it follows that if p
and p 0 are corresponding transition probabilities in the Markov chains H xy 0 and H x 0 y , then either
Therefore, H xy 0 and H x 0 y are 2 -close, and it immediately
follows that H xy is 2 -close to H xy 0 (and to H x 0 y ). Let a xy 0 be the probability that M accepts
input xy 0 on the strategy for xy 0 , and let a xy be the probability that M accepts input xy using
the hybrid strategy. Then a xy 0 (resp., a xy ) is exactly the probability that the Markov chain
eventually trapped in the Accept state, when started in the Initial state.
Now xy 0 2 L implies a xy are 2 -close, Lemma 3.1 implies that
a xy
a xy 0
which implies
a xy -
Since ffl and d are constants, and since ffl ! 1=2, we can choose - to be a constant so small
that a xy - 1=2. Therefore xy must be in L.
Since each 1-entry [x; y] is in some equivalence class, the matrix ML (n) can be 1-tiled using
at most (d(cn
tiles. Therefore,
Since c; d, and - are constants independent of n, this shows that T 1
L (n) is bounded by a
polynomial in n. 2
3.3 2NPFA-polytime and Tiling
We now show that if L 2 2NPFA-polytime, then T 1
L (n) is bounded by a polylog function.
Theorem 3.3 A language L is in 2NPFA-polytime only if the 1-tiling complexity of L is
bounded by a polynomial in log n.
Proof: Suppose L is accepted by some 2npfa M with error probability ffl ! 1=2 in expected
time at most t(n). Let c be the number of states of M . For each 1-entry [x; y] of ML (n), fix a
nondeterministic strategy that causes M to accept the string xy with probability at least 1 \Gamma ffl.
We construct the Markov chain H xy just as in Theorem 3.2.
Say that a probability p is small if p ! t(n) \Gamma2 ; otherwise, p is large. Note that if p is a large
transition probability, then dividing the 1-
entries of ML (n) into equivalence classes, make xy and x 0 y 0 equivalent if H xy and H x 0 y 0 have the
property that for each state transition, if p and p 0 are the respective transition probabilities,
either p and p 0 are both small, or log p and log p 0 are in the same (size -) subinterval of
This time the number of equivalence classes is at most (d2 log
Model the computation of M on inputs x 0 y, xy 0 , and xy by Markov chains H x 0 y , H xy 0 , and
H xy , respectively, as before.
If p and p 0 are corresponding transition probabilities in any two of these Markov chains, then
either p and p 0 are 2 -close or p and p 0 are both small. Let E x 0 y be the event that, when H x 0 y
is started in state Initial, it is trapped in state Accept or Reject before any transition labeled
with a small probability is taken; define E xy 0 and E xy similarly. Since M halts in expected
time at most t(n) on the inputs x 0 y, xy 0 , and xy, the probabilities of these events go to 1 as n
increases. Therefore, by changing all small probabilities to zero, we do not significantly change
the probabilities that H x 0 y , H xy 0 , and H xy enter the Accept state, provided that n is sufficiently
large. A formal justification of this argument can be found in Dwork and Stockmeyer [8].
After these changes, we can argue that
a xy -
and choose - so that a xy - 1=2, as before. It then follows that
(1)
for all sufficiently large n, establishing the result. 2
4 Bounds on the Tiling Complexity of Languages
In this section, we obtain several bounds on the tiling complexity of regular and nonregular
languages. In Section 4.1, we prove several elementary results. First, all regular languages have
constant tiling complexity. Second, the 1-tiling complexity of all nonregular languages is at least
log infinitely often. We also present an example of a (unary) non-regular language
which has 1-tiling complexity O(log n). In Section 4.2, we use a rank argument to show that for
all nonregular languages L, either L or its complement has "high" 1-tiling complexity infinitely
often.
4.1 Simple Bounds on the Tiling Complexity of Languages
The following lemma is useful in proving some of the theorems in this section. Its proof is
implicit in work of Mehlhorn and Schmidt [21]; we include it for completeness.
Lemma 4.1 Any binary matrix A that can be 1-tiled with m tiles has at most 2 m distinct rows.
Proof: Let A be a binary matrix that can be 1-tiled by m tiles T_1 = (R_1, C_1), ..., T_m = (R_m, C_m).
For each row r of A, let I(r) = {j | r ∈ R_j}. Suppose
r_1 and r_2 are rows such that I(r_1) = I(r_2). We show that in this case, rows r_1 and r_2 are
identical. To see this, consider any column c of A. Suppose that entry [r_1, c] has value 1, and
is covered by some tile T_j = (R_j, C_j); then c ∈ C_j. Since I(r_1) = I(r_2), we have r_2 ∈ R_j,
and therefore [r_2, c] is covered by tile T_j. Hence entry [r_2, c] must have value 1, since
T j is a 1-tile. Hence, if [r 1 ; c] has value 1, so does [r 2 ; c]. Similarly, if [r 2 ; c] has value 1, then
so does entry [r 1 ; c]. Therefore r 1 and r 2 are identical rows. Since there are only 2 m possible
values for I(r), A can have at most 2 m distinct rows. 2
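The argument in this proof can be checked mechanically for any explicit matrix and 1-tiling. The sketch below is ours; a tile is encoded as a pair (set of row indices, set of column indices), and I(r) is the signature used in the proof.

def row_signatures(num_rows, tiles):
    return {r: frozenset(t for t, (R, C) in enumerate(tiles) if r in R)
            for r in range(num_rows)}

def check_lemma_4_1(A, tiles):
    sig = row_signatures(len(A), tiles)
    for r1 in range(len(A)):
        for r2 in range(len(A)):
            if sig[r1] == sig[r2]:
                assert A[r1] == A[r2], "equal signatures but different rows"
    distinct = len({tuple(row) for row in A})
    assert distinct <= 2 ** len(tiles)
    return distinct

if __name__ == "__main__":
    A = [[1, 1, 0],
         [1, 1, 0],
         [0, 1, 1]]
    tiles = [({0, 1}, {0, 1}), ({2}, {1, 2})]   # a valid 1-tiling of A
    print(check_lemma_4_1(A, tiles))            # 2 distinct rows <= 2**2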
Theorem 4.1 The 1-tiling complexity of L is O(1) if and only if L is regular.
Proof: By the Myhill-Nerode theorem [14, Theorem 3.6], L is regular if and only if ML
has a finite number of distinct rows.
Suppose L is regular. Then by the above fact there exists a constant k such that ML has
at most k distinct rows. Consider any (possibly infinite) set R of identical rows in ML . Let C b
be the set of columns which have bit b in the rows of R, for 1. Then the subset specified
by (R; C b ) is a b-tile and covers all the b-valued entries in the rows of R. It follows that the
1-valued entries of R can be covered by a single tile, and hence there is a 1-tiling of ML (n) of
size k. (Similarly, there is a 0-tiling of ML (n) of size k.)
Suppose L is not regular. Since L is not regular, ML has an infinite number of distinct rows.
It follows immediately from Lemma 4.1 that M cannot be tiled with any constant number of
tiles. 2
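The construction used in the first half of the proof (one 1-tile per class of identical rows) is also easy to carry out explicitly; the routine below is our sketch and works for any binary matrix given row by row.

from collections import defaultdict

def tile_by_identical_rows(A):
    # one 1-tile per distinct row pattern: (rows with that pattern, columns equal to 1)
    groups = defaultdict(list)
    for r, row in enumerate(A):
        groups[tuple(row)].append(r)
    tiles = []
    for pattern, rows in groups.items():
        cols = [c for c, bit in enumerate(pattern) if bit == 1]
        if cols:
            tiles.append((set(rows), set(cols)))
    return tiles

if __name__ == "__main__":
    A = [[1, 0, 1],
         [1, 0, 1],
         [0, 1, 0]]
    for R, C in tile_by_identical_rows(A):
        print(sorted(R), sorted(C))
    # for a regular language, the number of distinct rows of ML(n), and hence the
    # number of tiles produced, is bounded by a constant independent of n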
The above theorem uses the simple fact that the 1-tiling complexity
L (n) of a language L
is a lower bound on the number of distinct rows of ML (n). In fact, the number of distinct rows
of ML (n), for a language L, is closely related to a measure that has been previously studied by
many researchers. Dwork and Stockmeyer called this measure non-regularity, and denoted the
non-regularity of L by NL (n) [7]. NL (n) is the maximum size of a set of n-dissimilar strings of
L. Two strings, w and w 0 , are considered n-dissimilar if jwj - n and jw 0 j - n, and there exists
a string v such that jwvj - n, It is easy to show
that the number of distinct rows of ML (n) is between NL (n) and NL (2n). Previously, Kaneps
and Freivalds [16] showed that NL (n) is equal to the number of states of the minimal 1-way
deterministic finite state automaton which accepts a language L' for which L'_n = L_n, where L_n
is the set of strings of L of length <= n.
Shallit [28] introduced a similar measure: the nondeterministic nonregularity of L, denoted
by NNL(n), is the minimal number of states of a 1-way nondeterministic finite automaton
accepting a language L' that agrees with L on all strings of length ≤ n.
In fact, it is not hard to show that T^1_L(n) ≤ NNL(2n).
To see this, suppose that M is an automaton with NNL(2n) states which accepts a language
agreeing with L on all strings of length ≤ 2n. We construct a 1-tiling of ML(n) with one tile T_q per state q of M,
where entry [x; y] is covered by T q if and only if there is an accepting path of M on xy which
enters state q as the head falls off the rightmost symbol of x. It is straightforward to verify the
set of tiles defined in this way is indeed a valid 1-tiling of ML (n). A similar argument was used
by Schmidt [27] to prove lower bounds on the number of states in an unambiguous nfa.
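The construction of one tile per NFA state can be made concrete. The sketch below (a hypothetical automaton and helper names of our own; it illustrates the argument above rather than reproducing code from the paper) builds, for each state q, the rectangle of prefixes that can reach q and suffixes that q can extend to acceptance, and checks that these rectangles cover exactly the 1-entries of ML(n).

```python
# One 1-tile per NFA state: rows = prefixes reaching q, cols = suffixes accepted from q.
from itertools import product

def strings(alphabet, max_len):
    yield ""
    for k in range(1, max_len + 1):
        for t in product(alphabet, repeat=k):
            yield "".join(t)

def run(delta, states, w):
    """Set of NFA states reachable from 'states' on string w."""
    for a in w:
        states = {q for s in states for q in delta.get((s, a), ())}
    return states

def nfa_tiles(delta, start, accept, alphabet, n):
    words = list(strings(alphabet, n))
    states = ({s for (s, _) in delta} |
              {q for v in delta.values() for q in v} | {start} | set(accept))
    tiles = []
    for q in states:
        rows = {x for x in words if q in run(delta, {start}, x)}
        cols = {y for y in words if run(delta, {q}, y) & accept}
        tiles.append((rows, cols))
    covered = {(x, y) for r, c in tiles for x in r for y in c}
    ones = {(x, y) for x in words for y in words
            if run(delta, {start}, x + y) & accept}
    assert covered == ones     # the tiles cover exactly the 1-entries
    return tiles

# Hypothetical 2-state NFA over {a, b} for "contains at least one b".
delta = {(0, 'a'): {0}, (0, 'b'): {0, 1}, (1, 'a'): {1}, (1, 'b'): {1}}
print(len(nfa_tiles(delta, 0, {1}, 'ab', 3)))   # 2 tiles, one per state
```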
We next turn to simple lower bounds on the 1-tiling complexity of nonregular languages.
From Theorem 4.1, it is clear that if L is nonregular, then T^1_L(n) is unbounded. We now use
a known lower bound on the nonregularity of nonregular languages to prove a lower bound for
T^1_L(n).
Theorem 4.2 If L is not regular, then T^1_L(n) ≥ log_2 ⌊(n+3)/2⌋ for infinitely many n.
Proof: Kaneps and Freivalds [16] proved that if L is not regular, then NL(n) ≥ ⌊(n+3)/2⌋
for infinitely many n. By the definition of NL(n), the matrix ML(n) must have at least NL(n)
distinct rows. Therefore, by Lemma 4.1, T^1_L(n) ≥ log_2 NL(n). The theorem follows immediately. 2
We next present an example of a unary nonregular language with 1-tiling complexity
O(log n). Thus, the lower bound of Theorem 4.2 is optimal to within a constant factor.
Theorem 4.3 Let L be the complement of the language {a^(2^k) : k ≥ 0}. Then L has 1-tiling
complexity O(log n).
Proof: We show that the 1-valued entries of ML(n) can be covered with O(log n) 1-tiles.
Let lg n denote ⌊log_2 n⌋ + 1, and view x and y as binary numbers of length at
most lg n. Number the bits of these numbers from right to left, starting with 1, so that, for
example, bit 1 is the least significant bit. For any binary number q, lg q is the maximum index i such that q_i = 1;
in particular, lg q = j + 1 if q is equal to 2^j. The next fact follows easily.
only if for all j such that j - maxflg x; lg yg,
Roughly, we construct a 1-tiling of ML (n), corresponding to the following nondeterministic
communication protocol. The party P 1 guesses an index j and sends j and x j to P 2 . Also P 1
sends indicating whether or not j - lg x. If j - lg x, then P 2 checks that y
checks that j - lg y and that y or equivalently, that y In either case,
can conclude that y of ML (n) is 1. The number of bits sent from
2.
We now describe the 1-tiling corresponding to this protocol. It is the union of two sets of
tiles. The first set has one tile T j;b for each j; b such that lg n -
The second set of tiles has one tile S j;0 , for all j such that dlog ne - j - 1.
To see that all the 1's in the matrix are covered by one of these tiles, note that if entry
[a x ; a y ] of the matrix is 1, then by the Fact, there exists an index j such that j - maxflg x; lg yg
and either x y, and j is such that
covered by tile T j;0 . 2
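Since parts of the construction above are easiest to appreciate on a concrete instance, here is one possible O(log n) tiling in the same spirit (a reconstruction under our own indexing conventions, not necessarily the exact tiles T_{j,b} and S_{j,0} of the proof). It rests on the observation that, for x, y ≥ 1, x + y is a power of two exactly when x and y − 1 are bitwise complementary, so a bit position on which they agree certifies a 1-entry and yields a valid rectangle.

```python
# Entry [a^x, a^y] of M_L(n) is 1 iff x + y is not a power of two.
def is_pow2(m):
    return m > 0 and m & (m - 1) == 0

def tiles(n):
    """Return a list of 1-tiles, each as (row_set, col_set) over {0..n}."""
    ts = []
    for i in range(n.bit_length()):
        # Both x and y-1 have bit i set: they cannot be complementary.
        ts.append(({x for x in range(1, n + 1) if x >> i & 1},
                   {y for y in range(1, n + 1) if (y - 1) >> i & 1}))
        # Bit i is 0 in both, and a higher bit of x (resp. of y-1) is set.
        ts.append(({x for x in range(1, n + 1) if not x >> i & 1 and x >= 2 << i},
                   {y for y in range(1, n + 1) if not (y - 1) >> i & 1}))
        ts.append(({x for x in range(1, n + 1) if not x >> i & 1},
                   {y for y in range(1, n + 1) if not (y - 1) >> i & 1 and y - 1 >= 2 << i}))
    # Border tiles for the empty-string row and column (x = 0 or y = 0).
    ts.append(({x for x in range(n + 1) if not is_pow2(x)}, {0}))
    ts.append(({0}, {y for y in range(n + 1) if not is_pow2(y)}))
    return ts

def check(n):
    ts = tiles(n)
    covered = {(x, y) for r, c in ts for x in r for y in c}
    ones = {(x, y) for x in range(n + 1) for y in range(n + 1) if not is_pow2(x + y)}
    assert covered == ones, "tiles must cover exactly the 1-entries"
    return len(ts)

print(check(64))   # 3*(floor(log2 64)+1) + 2 = 23 tiles for n = 64
```

Running check(64) confirms that 23 = O(log n) rectangles suffice and that none of them touches a 0-entry.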
The nondeterministic communication protocol in the above proof is a slight variation of a
simple (and previously known) protocol for the complement of the set distinctness problem. In
the set distinctness problem, the two parties each hold a subset of a common ground set and must
determine whether the two subsets are distinct. In our application, the problem is to determine
whether the subset of bit positions whose corresponding values in x are
0 is distinct from the subset of bit positions whose corresponding values in y are 1.
4.2 Lower Bounds on the Tiling Complexity of Nonregular Languages
In this section we prove that if a language L is nonregular, then the 1-tiling complexity of either
L or -
L is "high" infinitely often. To prove this, we first prove lower bounds on the rank of ML
when L is nonregular. We then apply theorems from communication complexity relating rank
to tiling complexity.
The proofs of the lower bounds on the rank of ML are heavily dependent on distinctive
structural properties of ML . Consider first the case where L is a unary language over the
alphabet {a}. In this case, for all x and y, entry [a^x, a^y] of ML depends only on the sum x + y.
It follows that for every n, ML(n) is such that its auxiliary
diagonal (the diagonal from the top right to the bottom left) consists of equal elements, as do
all diagonals parallel to that diagonal. An example is shown in Figure 1. Such matrices are
classically known as Hankel matrices, and have been extensively studied [15]. In fact, a direct
application of known results on the rank of Hankel matrices shows that if L is nonregular, then
infinitely often. This was first proved by Iohvidov (see [15, Theorem
11.3]), based on previous work of Frobenius [11].
Figure 1: The Hankel matrix ML(6) for a unary language; the entry in row a^x and column a^y depends only on x + y, so every anti-diagonal is constant.
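The Hankel structure is easy to experiment with. The sketch below (not from the paper; the rank routine is a plain exact Gaussian elimination over the rationals) builds ML(n) for a unary language from its membership predicate and reports the rank, which for a nonregular language keeps growing with n.

```python
from fractions import Fraction

def hankel(n, in_L):
    """M_L(n): entry [x][y] = 1 iff a^(x+y) is in L, for 0 <= x, y <= n."""
    return [[1 if in_L(x + y) else 0 for y in range(n + 1)] for x in range(n + 1)]

def rank(mat):
    """Exact rank by Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# L = { a^m : m is not a power of two } (the language of Theorem 4.3).
not_pow2 = lambda m: not (m > 0 and m & (m - 1) == 0)
for n in (4, 8, 16, 32):
    print(n, rank(hankel(n, not_pow2)))
```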
If L is a non-unary language, then ML does not have the simple diagonal structure of a
Hankel matrix. Nevertheless, ML still has structural properties that we are able to exploit. In
fact, the term Hankel matrix has been extended from its classical meaning to refer to matrices
ML of non-unary languages (see [26]). In what follows, we generalize the results on the rank
of classical Hankel matrices, and prove that for any nonregular language L over an arbitrary
alphabet, rank(ML(n)) ≥ n + 1 for infinitely many n.
4.2.1 Notation and basic facts
Let L be a language over an arbitrary alphabet, and let M = ML.
Consider a row of M indexed by a string w. This row corresponds to strings that have the
prefix w. For any string s, row ws corresponds to strings with the prefix ws. Thus the entries
in row ws can be determined by looking at those entries in row w whose columns are indexed by
strings beginning with s (see Figure 2). In what follows, we consider this relationship between
the rows of M more formally.
Let M(n;m) denote the set of vectors (finite rows) of M which are indexed by strings x of
length - n and whose columns are indexed by strings of length - m. Let -
M(n; m) denote the
subset of vectors of M(n;m) which are indexed by strings x of length exactly n. If v 0 is row x
of M(n;m+ i), where i ? 0 and v is row x of M(n;m), then v 0 is called an extension of v.
Suppose s is a string over Σ of length ≤ m (possibly the empty string,
ε). Define split^(s)(v) to be the subvector formed from v by selecting exactly those columns
whose labels have s as a prefix. Also, relabel the columns of split^(s)(v) by removing the prefix
s. Note that split^(ε)(v) = v. Note also that if Σ is unary, say {σ}, then split^(σ)(v) is v with
the first column removed. Let |v| denote the dimension (number of entries) of vector v. If Σ is
binary and σ ∈ Σ, then |split^(σ)(v)| = (|v| − 1)/2.
Figure 2: The matrix M(3) for L = {w ∈ {0,1}* : w is a palindrome}. The bold entries in
row 110 are determined by the bold entries in row 11; they comprise
split^(0) of row 11 in M(2, 3).
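The following sketch (illustrative, using the palindrome language of Figure 2) implements split^(s) on finite rows and checks the relationship just described: the entries of row ws are exactly the entries of row w in the columns prefixed by s.

```python
from itertools import product

def words(m):
    """All binary strings of length at most m, including the empty string."""
    return [""] + ["".join(t) for k in range(1, m + 1)
                   for t in product("01", repeat=k)]

def row(x, n_cols, in_L):
    """Row of M indexed by x, restricted to columns of length <= n_cols."""
    return {y: int(in_L(x + y)) for y in words(n_cols)}

def split(s, v):
    """Keep the columns whose label has prefix s, and strip that prefix."""
    return {y[len(s):]: bit for y, bit in v.items() if y.startswith(s)}

is_pal = lambda w: w == w[::-1]

v = row("11", 3, is_pal)            # row 11 of M(2, 3)
w = row("110", 2, is_pal)           # row 110 of M(3, 2)
print(split("0", v) == w)           # True: split^(0) of row 11 equals row 110
```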
More generally, if
c
Also, the vector v consists of the first entry (indexed by the empty string, ffl), plus an
"interleaving" of the entries of split (oe) (v), for each oe 2 \Sigma. More precisely, we have the following
Fact 4.1 Let
We generalize the definition of the split function to sets of vectors. If V is a set of vectors
in M(n, m) and |s| ≤ m, let split^(s)(V) = { split^(s)(v) : v ∈ V }. Then we have the following.
Fact 4.2 [ jsj=i split
(a) -
In what follows, the vectors we consider are assumed to be elements of vector spaces over
an arbitrary field F (e.g., our proofs hold if F is taken to be the field of rationals). All
references to rank, span, and linear independence apply to vector spaces over F.
Lemma 4.2 Suppose that b and that
where the ff i are in the field F. Suppose that for 1
k is an extension in M(n;m+ 1)
of b k and that v 0 is an extension of v to the same length as the b 0
k .
Suppose also that for some it is the case that for all s of length i,
split
.
is a string of length - m. Consider a
string j 0 of length m+ 1. Let j
Also,
By the hypothesis of the lemma,
split
Putting the last three equalities together, v 0 [j
Let rank(M(n; m)) be the rank of the set of vectors M(n;m) and let span(M(n; m)) be the
vector space generated by the vectors in M(n;m). The next lemma follows immediately from
the definitions.
Lemma 4.3 If v 0 2 span(M(n; m)); m? 0 and
4.2.2 A Lower Bound on the Rank of M(n) when L is Nonregular
A trivial lower bound on the rank of M(n) is given by the following fact.
Fact 4.3 L is nonregular if and only if there is an infinite sequence of integers p r satisfying
This is easily shown using the Myhill-Nerode theorem. Clearly, such a sequence exists if
and only if the rank of M(n) (as n increases) is unbounded. Moreover, the rank of M(n) is
unbounded if and only if the number of distinct rows in M(n) is unbounded. The Myhill-Nerode
theorem states that the number of equivalence classes of L (equivalently, the number of distinct
rows of M) is finite if and only if L is regular. It follows that L is nonregular if and only if
the rank of M(n) is unbounded. This conclusion has already been noted (see Sections II.3 and
II.5 of the book by Salomaa and Soittola [26], which describes results from the literature on
rational power series and regular languages).
The above lower bound is very weak. In what follows, we significantly improve it by using
the special structure of M(n). Namely, we show that there is an infinite sequence of values
of n such that rank(M(n)) ≥ n + 1. We define the first value of n in our sequence to be the
length of the shortest word in L (clearly rank(M(n)) ≥ n + 1 in this case). To construct the
remainder of the sequence, we show (in Lemma 4.5) that because L is nonregular, for any value
of n, there is some m ≥ n such that rank(M(n+1, m+1)) > rank(M(n, m+1)). We then
prove (in Lemma 4.6 and the proof of Theorem 4.4) that if n is such that rank(M(n)) ≥ n + 1,
and we choose the smallest m ≥ n with rank(M(n+1, m+1)) > rank(M(n, m+1)), then
in fact rank(M(m+1)) ≥ m + 2.
We begin with the following useful lemma.
Lemma 4.4 Let n - 0; m - 1. Suppose that M(n
Proof: By induction on i. The result is true by hypothesis of the lemma in the case
and that the lemma is true for
It follows from the induction hypothesis that if v 2 M(n
must also be the case that if v 2 M(n+
then It remains to consider the vectors in -
1). By
Fact 4.2 (a), each such vector v is of the form split (oe) (v 0 ), where
for some oe; 1. By the inductive hypothesis,
Then, by Fact 4.2 (b), all of the vectors in split (oe) (M(n; are in M(n+1;m \Gamma i+1).
Hence, Finally, by the hypothesis of the lemma, span(M(n
Corollary 4.1 For any n - 0, if rank(M(n+1;
r.
Proof: If n - p then M(p) is a submatrix of M(n; 2p) so the result follows trivially.
Otherwise, choose i so that n a submatrix of M(n
and hence by Lemma 4.4, the rows of M(p) are contained in span(M(n; p)). Thus again
The following lemma shows the existence of an m ≥ n such that rank(M(n+1, m+1)) > rank(M(n, m+1)).
Lemma 4.5 Let L be a nonregular language. Then for any n, there exists an m ≥ n such that
rank(M(n+1, m+1)) > rank(M(n, m+1)).
Proof: Let r be the number of strings of length ≤ n. Clearly, rank(M(n, m)) ≤ r for all
m, since there are r rows in M(n;m). Let r as in Fact 4.3, that is,
Hence, by Corollary 4.1, it must be the case that rank(M(n
2p is one possible value of m that satisfies the lemma. 2
It remains to show that if n is such that rank(M(n)) ≥ n + 1, and m is the smallest number
such that m ≥ n and rank(M(n+1, m+1)) > rank(M(n, m+1)), then rank(M(m+1)) ≥ m + 2.
This is clearly true if for all
in this case rank(M(n; m+ 1)) - m+ 2. The difficult case is when there exist values of i such
that To help deal with this case, we prove the
following lemma.
Lemma 4.6 Suppose that the following properties hold:
2. m is the smallest number ? n such that M(n
3. i is a number in the range
Then, there is some vector in M(n+ i which is not in span(M(n;
is the extension of some
Then, we claim that for some s;
split Fact 4.2 (b), this is sufficient to prove the lemma.
Suppose to the contrary that for all s of length i, split
be a basis of M(n;m). Let fb 0
p g be an extension of this basis in
1). By Properties 1 and 2 of the lemma, v is in span(M(n; m)). Let
applying Fact 4.1, we see that for all s;
split
We want to show that for all s of length i,
split
It follows from this and from Lemma 4.2 that
contradicting the fact that v 0 62 span(M(n; m+ 1)).
Consider the vectors split
k ). These are in M(n+ Fact 4.2 (b). If
this is clearly in span(M(n; m+ 1))). If and by Property 2 of this
lemma, these vectors are in span(M(n; l be a basis for span(M(n;
and for 1 - k - l, let c 0
k be an extension in M(n;m . Clearly the set fc 0
l g
is also linearly independent, and since rank(M(n; set is
a basis for span(M(n;
split
Then, also
split
Also, since v 2 M(n m), from Fact 4.2 (b) it must be that the vectors split (s) (v) are
in M(n Hence, again by Property 2 of this lemma, and by Lemma 4.4, these
vectors are in span(M(n;
l is a basis for span(M(n; follows that there exists a unique sequence
of coefficients - l such that
split
Also, by combining Equation 2 with Equation 4, we see that
split
1;l
2;l
p;l c l ]:
Thus
p;k for all k 2
We claim
split
l
2;l
l
l ]:
We now justify the claim. By our initial assumption, split
Thus for some unique coefficients - 0
l ,
split
l c 0
Each c 0
k is an extension of c k , and there is a unique linear combination of c l that
is equal to split (s) (v). It follows that each - 0
This proves the claim.
Combining the claim with Equation 3 yields
split
as desired. 2
We now prove the lower bound.
Theorem 4.4 If L is nonregular, then rank(M(n)) ≥ n + 1 for infinitely many n.
Proof: The base case is n such that the shortest word in the language is of length n.
Suppose that rank(M(n)) - fixed n. Let m be the smallest number - n
such that rank(M(n there is such an m. We
claim that rank(M(m 2.
the claim is clearly true. Suppose m ? n.
be a basis for M(n; k), n - k - m+ 1, where the extensions of all vectors in B k are
in B k+1 . Let B 0
denote the subset of B k which are extensions of vectors in B k\Gamma1 .
We construct a set of m linearly independent vectors in M(m + 1) as follows. For k
from n to m+ 1, we define a linearly independent set C k of vectors in M(m+ 1; k), of size at
least k + 1. Then, Cm+1 is the desired set.
Let C . This is by definition a linearly independent set, and it has size - n
because (by our initial assumption) rank(M(n)) - n + 1. Suppose that n -
that C k is already constructed and is linearly independent. Construct C k+1 as follows.
k be the set of extensions in M(m+ of the vectors in C k . Add C 0
k to C k+1 .
to C k+1 . (Thus, C k+1 is expanded to contain those vectors in B k+1 which
are not in B 0
.)
(iii) Finally, suppose nothing is added to C k+1 in step (ii); that is, rank(M(n;
is such that then this is equivalent to: rank(M(n;
Thus, we can apply Lemma 4.6 to obtain a vector v
which is not in span(M(n; but is not in
k ).) Add v 0 to C k+1 .
We claim that the vectors in C k+1 are linearly independent. Clearly the set C 0
k is linearly
independent. Consider each vector u 0 added to C k+1 , which is not in C 0
k . By the construction,
u 0 is not in span(B 0
be the extension of vector u in M(m+ 1; k). We claim that the
vector u must be linearly dependent on the set B k . This is true if u 0 is added in step (ii), since
in this case u is in M(n; is a basis for M(n; k). It is also true in the case that u
the vector added in step (iii), since then by Lemma 4.4,
Hence, Moreover, u can be expressed as a unique linear
combination of the vectors of C k , with non-zero coefficients only on those vectors in B k .
If u 0 were in span(C 0
k ), then since it is an extension of u, it would also be expressible as a
unique linear combination of the vectors of C 0
k , with non-zero coefficients only on those vectors
in B 0
k . But that contradicts the fact that u 0 62 span(B 0
4.2.3 The Tiling Complexity Lower Bound
Theorem 4.5 If L is nonregular, then there is a constant c > 0 such that the 1-tiling complexity of either L
or its complement is at least 2^(c·√(log n)) infinitely often.
Proof: Mehlhorn and Schmidt, and independently Orlin, showed that for any binary matrix
A, any partition of A into monochromatic rectangles has at least rank(A) rectangles [21, 22]. Their result holds for the rank of A over any field. Halstenberg and Reischuk,
refining a proof of Aho et al., showed that ⌈log c̃(A)⌉, where c̃(A) is the minimum size of such a partition, is at most proportional to the product of the logarithms of the number of 1-tiles and the number of 0-tiles in an optimal tiling of A.
By Theorem 4.4, if L is nonregular, then the rank of M(n) is at least n + 1 for infinitely many n.
Since the 0-tiles of ML(n) are the 1-tiles of the matrix of the complement of L, it follows that for infinitely many n,
the product of the logarithms of the 1-tiling complexities of L and of its complement is at least proportional to log n, and hence one of the two is at least 2^(c·√(log n)) for some constant c > 0. 2
5 Variations on the Model
In this section, we discuss extensions of our main results to other related models.
We first show that Theorem 1.1 also holds for the following "alternating probabilistic" finite
state automaton model. In this model, which we call a 2apfa, the nondeterministic states N
are partitioned into two subsets, NE and NU of existential and universal states, respectively.
Accordingly, for a fixed input, there are two types of strategy, defined as follows for a fixed
input string An existential (universal) strategy on w is a function
such that ffi(q; oe; q
A language L ⊆ Σ* is accepted with bounded error probability if for some constant ε < 1/2,
1. for all w ∈ L, there exists an existential strategy Ew on which the automaton accepts
with probability ≥ 1 − ε on all universal strategies Uw, and
2. for all w ∉ L, on every existential strategy Ew, the automaton accepts with probability
≤ ε on some universal strategy Uw.
The complexity classes 1APFA, 1APFA-polytime, and so on, are defined in the natural way,
following our conventions for the npfa model.
Theorem 5.1
Proof: As in Theorems 1.1 and 3.1, we show that if L is a language accepted by a 1APFA,
then the tiling complexity of L is bounded. We first extend the notation of Theorem 3.1.
If E is an existential strategy on xy and U is a universal strategy on xy, let p xy (E; U) be
the state probability (row) vector at the step when the input head moves off the right end of x,
on the strategies E; U . Let r xy (E; U) be the column vector whose i'th entry is the probability
of accepting the string xy, assuming that the automaton is in state i at the moment that the
head moves off the right end of x, on the strategies E; U . For each 1-entry [x; y] of ML , fix an
existential strategy E xy , that causes xy to be accepted with probability at least 1 \Gamma ffl, for all
universal strategies.
Partition the space [0; 1] c into cells of size - \Theta - \Theta -, as before. Let C be a nonempty
subset of the cells. We say that entry [x; y] of ML belongs to C if xy 2 L, and C is the smallest
set of cells which contain all the vectors p xy strategies U .
With each nonempty subset C of the cells, associate a rectangle R C defined as follows.
fx j there exists y such that [x; y] belongs to Cg
\Theta
fy j there exists x such that [x; y] belongs to Cg:
R C is a valid 1-tile. To see this, suppose that [x; y] 2 R C . If [x; y] belongs to C, then
it must be a 1-entry. Otherwise, there exist x 0 and y 0 such that [x; y 0 belong to C.
Consider the strategy E that while reading x, uses the strategy E xy 0 , and while reading y,
uses the strategy E x 0 y . We claim that xy is accepted with probability at least 1=2 on existential
strategy E and any universal strategy U on xy. The probability that xy is accepted on strategies
E; U is
belong to the same set of cells C, are in
the same cell, for some universal strategy U 0 . Moreover,
This is because this quantity is the probability that x 0 y is accepted on existential strategy
and a universal strategy which is a hybrid of U and U 0 ; also by definition of E x 0 y , the probability
that x 0 y is accepted with respect to E x 0 y and any universal strategy is
-c
our choice of -:
Hence, the probability that xy is accepted on the strategies E; U is
Since U is arbitrary, it follows that there is an existential strategy E such that on all strategies
U , the probability that xy is accepted on the strategies E; U is greater than ffl, and so it cannot
be that xy 62 L. Hence, for all [x; y] 2 R C , xy must be in L. Therefore R C is a 1-tile in ML .
The proof is completed as in Theorem 3.1. 2
In the same way, Theorem 3.3 can also be extended to obtain the following.
Theorem 5.2 A language L is in 2APFA-polytime only if the 1-tiling complexity of L is
bounded by 2 polylog(n) .
Thus, for example, the language Pal, consisting of all strings over f0; 1g which read the
same forwards as backwards, is not in the class 2APFA-polytime. To see this, consider the
submatrix of ML(n) consisting of all rows and columns labeled by strings of length exactly n.
This matrix contains a fooling set of size 2^n; hence a 1-tiling of ML(n) requires at least 2^n tiles.
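The fooling-set claim is easy to verify mechanically for small n. The sketch below (illustrative only) enumerates the 1-entries (x, reverse(x)) of the length-n submatrix and confirms that no two of them can share a 1-tile, because completing the rectangle always produces a non-palindrome.

```python
from itertools import product

def fooling_set_size(n):
    words = ["".join(t) for t in product("01", repeat=n)]
    ones = [(x, x[::-1]) for x in words]          # the 1-entries of the submatrix
    for (x1, y1), (x2, y2) in product(ones, repeat=2):
        if (x1, y1) != (x2, y2):
            # A 1-tile containing both would also contain (x1, y2) and (x2, y1),
            # at least one of which is a 0-entry (a non-palindrome).
            assert not (x1 + y2 == (x1 + y2)[::-1] and x2 + y1 == (x2 + y1)[::-1])
    return len(ones)

print(fooling_set_size(3))   # 8 pairwise-incompatible 1-entries for n = 3
```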
We next extend Theorem 1.2 to automata with o(log log n) space. We refer to these as
Arthur-Merlin games, since this is the usual notation for such automata which are not restricted
to a finite number of states [7]. The definition of an Arthur-Merlin game is similar to that of an
npfa, except that the machine has a fixed number of read/write worktapes. The Arthur-Merlin
game runs within space s(n) if on any input w with jwj - n, at most s(n) tape cells are used
on any worktape. Thus, the number of different configurations of the Arthur-Merlin game is
Theorem 5.3 Let M and -
M be Arthur-Merlin games which recognize a nonregular language L
and its complement -
L, respectively, within space o(log log n). Suppose that the expected running
time of both M and -
M is bounded by t(n). Then, for all b ! 1=2, log log t(n) - (log n) b . In
particular, t(n) is not bounded by any polynomial in n.
Proof: The proof of Theorem 1.2 can be extended to space bounded Arthur-Merlin games,
to yield the following generalization of Equation 1. Let c(n) be an upper bound on the number
of different configurations of M on inputs of length n, and let
sufficiently large n, the number of 1-tiles needed to cover ML (n) is at most
Since M uses o(log log n) space, for any constant c ? 0, d(n) - (log n) c , for sufficiently large n.
Now, suppose to the contrary that for some b ! 1=2, log log t(n) ! (log n) b for sufficiently
large n. Then,
log n):
Hence, the number of tiles needed to cover the 1-valued entries of ML (n) is 2 o(
log n) . The
same argument for -
M shows that also for for sufficiently large n, the number of tiles needed to
cover the 1-valued entries of M- L (n) is 2 o(
log n) .
Hence, by Theorem 4.5 L must be regular, contradiction. 2
Finally, we consider a restriction of the 2npfa model, which, given polynomial time, can only
recognize regular languages. A restricted 2npfa is a 2npfa for which there is some ffl ! 1=2 such
that on all inputs w and strategies Sw , the probability that the automaton accepts is either
Theorem 5.4 Any language accepted by a restricted 2npfa with bounded error probability in
polynomial time is regular.
Proof: Let L be accepted by a 2npfa M with bounded error probability in polynomial
expected time. Let \Sigma be the alphabet, ffi the transition function, the set
of states and N ae Q the set of nondeterministic states of M . Without loss of generality, let
g.
We first define a representation of strategies as strings over a finite alphabet. Let \Sigma
loss of generality, assume that \Sigma"\Sigma string S
corresponds to a strategy on
6 cw$, where
is of the
and
to be the set of strings of the form oe each oe i is in the
alphabet \Sigma, each S i is in the alphabet \Sigma 0 , and furthermore, corresponds to
a strategy of M on input causes w to be accepted.
Then, L 0 is accepted by a 2pfa with bounded error probability in polynomial time. Thus,
L 0 is regular [7]. Moreover, note that a string of the form is in L if and only
if for some choice of S 0 is in L 0 . Let M 0 be a one-way
deterministic finite state automaton for L 0 , and assume without loss of generality that the set
of states in which M 0 can be when the head is at an even position, is disjoint from the set of
states in which M 0 can be when the head is at an odd position. Then, from M 0 we can construct
a one-way nondeterministic finite state automaton for L, by replacing the even position states
by nondeterministic states. Hence, L is regular. 2
6 Conclusions
We have introduced a new measure of the complexity of a language, namely its tiling complexity,
and have proved a gap between the tiling complexity of regular and nonregular languages. We
have applied these results to prove limits on the power of finite state automata with both
probabilistic and nondeterministic states.
An intriguing question left open by this work is whether the class 2NPFA-polytime is closed
under complement. If it is, we can conclude that 2NPFA-polytime = Regular. Recall that the
class 2NPFA does contain nonregular languages, since it contains the class 2PFA, and Freivalds
[10] showed that {0^n 1^n : n ≥ 1} is in this class. However, Kaneps [18] showed that the class
2PFA does not contain any nonregular unary language. Another open question is whether
the class 2NPFA contains any nonregular unary language. It is also open whether there is a
nonregular language in 2APFA-polytime.
There are several other interesting open problems. Can one obtain a better lower bound on
the tiling complexity of nonregular languages than that given by Theorem 4.5, perhaps by an
argument that is not based on rank? We know of no nonregular language with tiling complexity
less
n) infinitely often, so the current gap is wide.
--R
On notions of information transfer in VLSI circuits
Proof verification and hardness of approximation problems
Computational Models of Games
On the Power of finite automata with both nondeterministic and probabilistic states
Probabilistic game automata
Finite state verifiers I: the power of interaction
Interactive proof systems and alternating time-space complexity
Probabilistic two-way machines
A lower bound for probabilistic algorithms for finite state machines
On different modes of communication
Introduction to Automata Theory
Hankel and Toeplitz Matrices and Forms: Algebraic Theory
Minimal nontrivial space complexity of probabilistic one-way Turing machines
Running Time to Recognize Nonregular Languages by 2- Way Probabilistic Automata
Regularity of one-letter languages acceptable by 2-way finite probabilistic au- tomata
Some bounds on the storage requirements of sequential machines and Turing machines
The Markov chain tree theorem
Las Vegas is better than determinism in VLSI and distributed computing
Contentment in Graph Theory: Covering Graphs with Cliques.
Games against nature
Probabilistic automata
Finite automata and their decision problems
Succinctness of description of context free
Automaticity: properties of a measure of descriptional complexity
Some complexity questions related to distributed computing
Lower bounds by probabilistic arguments
| nondeterministic probabilistic finite automata;interactive proof systems;matrix tiling;hankel matrices;arthur-merlin games |
285057 | Formal verification of complex coherence protocols using symbolic state models. | Directory-based coherence protocols in shared-memory multiprocessors are so complex that verification techniques based on automated procedures are required to establish their correctness. State enumeration approaches are well-suited to the verification of cache protocols but they face the problem of state space explosion, leading to unacceptable verification time and memory consumption even for small system configurations. One way to manage this complexity and make the verification feasible is to map the system model to verify onto a symbolic state model (SSM). Since the number of symbolic states is considerably less than the number of system states, an exhaustive state search becomes possible, even for large-scale systems and complex protocols. In this paper, we develop the concepts and notations to verify some properties of a directory-based protocol designed for non-FIFO interconnection networks. We compare the verification of the protocol with SSM and with the Stanford Murphi, a verification tool enumerating system states. We show that SSM is much more efficient in terms of verification time and memory consumption and therefore holds the promise of verifying much more complex protocols. A unique feature of SSM is that it verifies protocols for any system size and therefore provides reliable verification results in one run of the tool. | Introduction
Caching data close to the processor dynamically is an important technique for reducing
the latency of memory accesses in a shared-memory multiprocessor system. Because multiple
copies of the same memory block may exist, a cache coherence protocol often maintains coherence
among all data copies [29]. In large-scale systems directory-based protocols [4, 5, 19]
remain the solution of choice: They do not rely on efficient broadcast mechanisms, and moreover
they can be optimized and adapt to various sharing patterns. The current trend is towards more
complex protocols, usually implemented in software on a protocol processor [18]. Because of this
flexibility, proposals even exist to let users define their own protocol on a per-application basis
[28].
One major problem is to prove that a protocol is correct. When several coherence transactions
bearing on the same block are initiated at the same time by different processors, messages
may enter a race condition, from which the protocol behavior is often hard to predict [2] because
the protocol designer can not visualize all possible temporal interleavings of coherence messages.
Automated procedures for verifying a protocol are therefore highly desirable.
There are several approaches to verify properties of cache protocols. A recent paper surveys
these approaches [24]. One important class of verification techniques derives from state enumeration
methods (reachability or perturbation analysis), which explore all possible system states
[7, 15]. Generally, the method starts with a system model in which finite state machines specify
the behavior of components in the protocol. A global state is the composition of the states of all
components. A state expansion process starts in a given initial state and exercises all possible
transitions leading to a number of new states. The same process is applied repeatedly to every new
state until no new state is generated. At the end, a global state transition diagram or a reachability
graph showing the transition relations among global states is reported.
The major drawback of state enumeration approaches is that the size of the system state
space increases quickly with the number and complexity of the components in the protocol, often
creating a state space explosion problem [15]. Verifying a system with increasing numbers of
caches becomes rapidly impractical in terms of computation time and memory requirement. As
protocols become more complex, it is not clear whether verifying a small-scale system model can
provide a reliable error coverage for all system sizes [25].
Recently, we have introduced a new approach called SSM (Symbolic State Model) to
address the state space explosion problem [23] and we have applied it to simple snoopy protocols
on a single bus [2]. SSM is a general framework for the verification of systems composed of
homogeneous, communicating finite state machines and thus is applicable to the verification of
cache protocols in homogeneous shared-memory multiprocessors. SSM takes advantage of equivalences
among global states. More precisely, with respect to the properties to verify such as data
consistency, SSM exploits an abstract correspondence relation among global states so that a
metastate can represent a very large set of states [23]. Based on the observation that the behavior of
all caches are characterized by the same finite state machine, caches in the same state are combined
into an class; a global state is then composed of classes. Moreover, the number of caches in
a state class is abstractly represented by a set of repetition constructors indicating 0, 1, or multiple
instances of caches in that class. An abstracted global state represents a family of global states and
can be efficiently expanded because expanding an abstracted state is equivalent to expanding a
very large set of states. SSM verifies properties of a protocol for any system size and therefore the
verification is more reliable than verification relying on state enumeration for small system sizes.
We have developed a tool to apply this new approach. To illustrate its application in a concrete
case, we verify in this paper three important coherence properties of a protocol designed for
non-FIFO interconnection networks. A non-FIFO network is a network in which messages
between two nodes can be received in a different order than they are sent. Therefore, the number
of possible races among coherence messages is much larger than in a system with a FIFO net-
work. To demonstrate the efficiency of our tool, we compare it with Murphi [8]. We show that SSM
is much more efficient in terms of verification time and memory consumption and therefore holds
the promise of verifying much more complex protocols.
The paper is structured as follows. Section 2 provides an outline of the protocol for non-FIFO
networks. Verification model, correctness issues and mechanisms for detecting various
types of errors are discussed in section 3. We then develop the verification method in sections 4
and 5. Results of our study are in section 6. Section 7 contains the conclusion.
Directory-Based Protocol for Non-FIFO Networks
The protocol is inspired from Censier and Feautrier's write-invalidate protocol [4]. Every
memory block is associated with a directory entry containing a vector of presence bits, each of
which indicates whether a cache has a copy of the block. The presence bit is set when the copy is
first loaded in cache and is reset when the copy is invalidated or replaced. When multiple copies
exist in different caches, they must be identical to the memory copy. An extra dirty bit per block in
the directory entry indicates whether or not a dirty cached copy exists. In this case, there cannot
be more than one cached copy and we say that the copy is Exclusive. The cache with the exclusive
copy is also often called the Owner of the line. To enforce ownership of the block, invalidations
must be sent to caches with their presence bits set. Finally the replacement of a Shared copy is
silent in the sense that the presence bit is not reset at the memory.
This protocol is applicable in general to CC-NUMAs (Cache-Coherent Non-Uniform
Memory Access machines). In a CC-NUMA the shared memory is distributed equally among processor
nodes and is cached in the private cache of each processor. In this case, a directory is
attached to each memory partition and covers the memory blocks allocated to it. Thus each block
has a Home memory where its directory entry resides. The implementation of this (conceptually)
simple protocol requires careful synchronizations between caches and directory, involving many
cache states, memory states and messages between caches and memory. A complete specification
of the design can be found in [26]. The coherence messages exchanged between memory and
caches are given in table 1. Messages are basically of two types: control and data. Control messages
include requests and acknowledgments. These messages and their role are self-explanatory,
except for SAck, a synchronization message whose role will become clear later.
In the following, we only describe the salient features of the protocol used in the verifica-
tion. To simplify the following description we refer to the "state of the block in the cache" as the
"state of the cache". The same convention applies to the state of the block in the directory and in
other state machines throughout the paper.
2.1 Cache States
Caches can be in three stable states: Invalid (I), Shared (S; clean copy potentially
shared with other caches), and Owner (O; modified and only cached copy-also called Exclu-
sive). However, since cache state transitions are not instantaneous, three transient states are
added to keep track of requests issued by the cache but not yet completed.
1. Read-Miss-Pending (RMP) state: the block frame is empty pending the reception of the block
after a read miss.
2. Write-Miss-Pending (WMP) state: the block frame is empty pending the reception of the block
with ownership after a write miss.
Table 1. Coherence Messages
Memory to caches:
Inv: Request to invalidate the local copy.
InvO: Request to invalidate the local copy and write it back to memory.
UpdM: Request to update the main memory copy and change the local copy to the Shared state.
O-ship: Ownership grant.
Data: Block copy supplied by the memory controller.
NAck: Negative acknowledgment indicating that a request was rejected because of a locked directory entry.
Caches to memory:
ReqSC: Request a Shared copy.
ReqO: Request Ownership.
ReqOC: Request Ownership and block copy.
DxM: Block copy supplied by an owner in response to an UpdM message from memory.
DOxMR: Block copy supplied by an owner after replacement.
DOxMU: Block copy supplied by an owner in response to an InvO message from memory.
IAck: Acknowledgment indicating invalidation complete.
SAck: Synchronization message.
3. Write-Hit-Pending (WHP) state: the block frame contains a shared copy pending the reception
of ownership rights to complete a write access.
These states are sufficient in a system with a FIFO network. With a non-FIFO network,
possible races exist between coherence requests sent at the same time by two different caches to
the same block. Such requests are serialized by the home node but the responses they generate
may enter a race and reach caches out of order. Consider the following case. Assume that two processing
nodes p 1 and p 2 issue both a request for an exclusive copy of a block at the same time. The
request from p 1 reaches the home node first and is granted the copy. Then the request from p 2 is
processed by the home and invalidations are sent to p 1 . In a non-FIFO network, it is possible that
the invalidation will reach p 1 before the exclusive copy. A similar scenario can occur if p 1
requests a shared copy at the same time as p 2 wants an exclusive copy or if p 1 requests an exclusive
copy while p 2 wants a shared copy.
To deal with these three race conditions the protocol uses three additional transient cache
states which synchronize the interactions between caches and memory. These states are: Transient
Owner-to-Invalid (TxOI), Transient Shared-to-Invalid (TxSI), and Transient Owner-to-Shared
(TxOS). To resolve the race between two processing nodes requesting an exclusive copy, a cache
in state WMP moves to state TxOI when it receives an invalidation so that, when it receives the
data block, it executes its pending write, writes the block back to memory and invalidates its copy
to end up in state I. States TxSI and TxOS solve the other two races in a similar fashion.
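The race-resolution states can be summarized as a small transition table. In the sketch below only the WMP/Inv chain is taken directly from the text; the TxSI and TxOS rows are our reading of "in a similar fashion" and should be checked against the complete specification in [26].

```python
from enum import Enum, auto

class C(Enum):
    I = auto(); S = auto(); O = auto()          # stable states
    RMP = auto(); WMP = auto(); WHP = auto()    # miss/upgrade pending
    TxOI = auto(); TxSI = auto(); TxOS = auto() # race-resolution states

# (current state, incoming message) -> (next state, local actions)
RACE_TRANSITIONS = {
    (C.WMP, "Inv"):   (C.TxOI, []),               # invalidation outran the data
    (C.TxOI, "Data"): (C.I, ["write", "write_back", "invalidate"]),
    (C.RMP, "Inv"):   (C.TxSI, []),               # assumed: shared copy to be invalidated
    (C.TxSI, "Data"): (C.I, ["read", "invalidate"]),
    (C.WMP, "UpdM"):  (C.TxOS, []),               # assumed: ownership to be downgraded
    (C.TxOS, "Data"): (C.S, ["write", "update_memory"]),
}

def step(state, msg):
    """Apply one race-resolution transition; other pairs are left unchanged."""
    return RACE_TRANSITIONS.get((state, msg), (state, []))

print(step(C.WMP, "Inv"))   # -> (C.TxOI, [])
```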
2.2 Memory States
The stable memory states are indicated by the presence and the dirty bits in the directory
and are Shared, Exclusive or Uncached. When a memory block is in a stable state it is free
or unlocked, meaning that the memory controller may accept new requests for the block. However
memory state transitions are not instantaneous. Between the time the directory controller starts
processing an incoming request and the time it considers the request completed, the directory
entry is in a transient state and is locked to maintain a semi-critical section on each memory block
[30]. Requests reaching a locked (busy) directory entry are nacked. The three corresponding transient
memory states are: XData, XOwn, XOwnC, which indicate that the transaction in progress is
for a shared copy, for ownership rights or for an exclusive copy (respectively), and are typical in
systems with FIFO networks.
The protocol is based on intervention forwarding: When a processing node requests a
copy and the block is exclusive in a remote cache, the home node first requests the copy of the
dirty block, updates the memory, and forwards it to the requester. When an owner victimizes its
modified copy for replacement, the memory state remains Exclusive until the write-back message
reaches the memory controller. Between the transmission and the reception of the write-back
message the memory controller may receive a request for a shared copy issued by another cache
and forward it to the owner. When the memory controller receives the block copy sent at the time
of replacement, it will "believe" that the block copy was sent in response to its forwarded request;
meanwhile, the forwarded request is still pending. This problem was also identified in [2]. The
solution suggested in [2] counts on the presumed owner to ignore the forwarded request. How-
ever, in a large-scale system with unpredictable network delays, intractable problems can be
caused by the forwarded request if it is further outpaced by other messages [25].
Figure 1. Directory state transitions to synchronize owner and memory.
To solve the problem of ambiguous write-back messages, we use different message IDs
for a cache write back caused by a replacement (DOxMR) or by an invalidation (DOxMU) (see table
1). Moreover we add two transient states to the directory: Synch1 and Synch2. The memory
controller unlocks the directory entry only when it has received both the replaced data block and
the synchronization message. For example, when the memory controller receives a request for a
shared copy (ReqSC), the request is forwarded to the owner and the memory state is changed to
XData. If the presumed owner has written back the block, it replies with a synchronization message
when it receives the request forwarded by the memory. If the memory controller
receives a block message of type DOxMR or a synchronization message (SAck) from the owner,
the directory enters the transient state Synch2 and waits for the synchronization message
(SAck) or for the write-back message (DOxMR) from the owner respectively. State Synch1 takes
care of the similar case when the original request message was ReqOC, as shown in figure 1.
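A compact way to read Figure 1 is as a lookup table on (directory state, incoming message). The sketch below is a simplification: the "unlock" entries stand for whatever stable state and pending actions complete the original transaction, and the normal-completion rows follow the message roles of Table 1.

```python
# Directory synchronization with a presumed owner that may have replaced the block.
DIRECTORY_SYNC = {
    # ReqSC was forwarded to the presumed owner (directory locked in XData).
    ("XData", "DxM"):    "unlock",   # owner still had the copy: normal completion
    ("XData", "DOxMR"):  "Synch2",   # a replacement write-back arrived instead
    ("XData", "SAck"):   "Synch2",   # the owner had already written the block back
    ("Synch2", "SAck"):  "unlock",
    ("Synch2", "DOxMR"): "unlock",
    # Same pattern when the original request was ReqOC (directory in XOwnC).
    ("XOwnC", "DOxMU"):  "unlock",
    ("XOwnC", "DOxMR"):  "Synch1",
    ("XOwnC", "SAck"):   "Synch1",
    ("Synch1", "SAck"):  "unlock",
    ("Synch1", "DOxMR"): "unlock",
}

def directory_step(state, msg):
    """Return the next directory state; unknown pairs leave the state unchanged."""
    return DIRECTORY_SYNC.get((state, msg), state)
```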
3 Protocol Model and Correctness Properties
3.1 System Model
The first step in any verification is to construct a system model with manageable verification
complexity. The model should leave out the details which are peculiar to an implementation,
while retaining the features essential to the properties to verify. In the early stage of protocol
design, this approach also facilitates the rapid design and modifications of the model. In this section
we describe the system model used in the verification of the protocol. First, a single block is
modeled, which is sufficient to check properties related to cache coherence [20]. Replacements
can take place at any time and are modeled as processor accesses.
Second, we abstract the directory-based CC-NUMA architecture by the system model of
figure 2.a, which is appropriate since we model a single block. The model consists of a directory
and of multiple processor-cache pairs. Each processor is associated with one message sending
channel (CH!) and one message receiving channel (CH?) to model the message flow between
caches and main memory. The message channels do not preserve the execution order of memory
accesses (in order to model non-FIFO interconnections). Messages are never lost but they may be
received in a different order than they were issued. When the cache protocol does not treat differently
messages from the local processor and from remote processors, the model of figure 2.a is
equivalent to the model of figure 2.b, in which the home memory is modeled as an independent
active entity. We will use the system model of figure 2.b throughout this paper.
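The non-FIFO channels of the model can be thought of as multisets of messages in transit, from which any pending message may be delivered next; this is exactly what exposes the races discussed in Section 2. A minimal sketch (names are ours):

```python
from collections import Counter
import random

class Channel:
    """Non-FIFO, lossless channel: in-flight messages form a multiset."""
    def __init__(self):
        self.in_transit = Counter()     # message -> number of copies in flight

    def send(self, msg):
        self.in_transit[msg] += 1

    def deliver_any(self):
        """Non-deterministically deliver one pending message (never lost)."""
        pending = list(self.in_transit.elements())
        if not pending:
            return None
        msg = random.choice(pending)
        self.in_transit[msg] -= 1
        return msg
```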
Figure 2. Verification model for directory-based CC-NUMA architectures: (a) abstract model for a CC-NUMA multiprocessor (single block); (b) refined model.
Third, values of data copies are tracked by the same abstraction as we first proposed in
[22]. A cache may have a data block in one of three status: nodata (the cache has no valid copy),
fresh (the cache has an up-to-date copy), and obsolete (the cache has an out-of-date copy); the
memory copy is either fresh or obsolete. During the course of verification, the expansion process
keeps track of the status of all block copies in conformance with the protocol semantics.
This third abstraction is necessary, as we discovered in the verification of the S3.mp protocol
[21, 25] (which is different from the protocol used in this paper). Consider the protocol transactions
illustrated in figure 3. Initially, cache A has a dirty copy of the block, replaces it, and
performs a write-back to the home node. Cache A keeps a valid copy of the block until it receives
an acknowledgment from the memory in order to guarantee that the memory receives the block
safely. Meanwhile, cache B sends a request for an exclusive copy to home. Subsequently, cache A
processes the data-forward-request from home, considers it as an acknowledgment for the prior
write-back request and sends the block to B. B then executes its write and replaces the block due
to some other miss. As shown in figure 3, a race condition exists between the two write-back
requests. If the write-back from B wins the race, the stale write-back from A overwrites the values
updated by B. Note that in this example all state transitions are permissible. To overcome this
problem, the verification model need to maintain a global variable to remember which write-back
carries the latest value.
Figure 3. A stale write-back error (message sequence: 1. write-back from A; 2. exclusive-request from B; 3. data-forward-request from home to A; 4. exclusive-forward from A to B; 5. write-back from B).
3.2 Formal Protocol Model
Given the architectural model of figure 2, we now formally define the constituent finite
state machines interacting in the protocol. A convenient language to specify such machines is
CSP [14]. Message transmission is represented by the postfix '!', and message reception by the
postfix '?'.
Definition 1. (Receiving Channel) The receiving channel machine recording the messages
received from the memory and in transit to the cache has structure RChM= (Q r , S r , Xm !, d1 r , Xc ?,
set of memory-to-cache messages (table 1),
(messages from memory),
(messages to cache),
are the messages issued by the memory controller and consumed by the cache,
respectively.
Definition 2. (Sending Channel) The sending channel machine recording the messages issued by
the cache and in transit to the memory controller has structure SChM= (Q s , S s , Xc !, d1 s , Xm ?, d2 s ),
where
set of cache-to-memory messages (table 1),
,(messages from cache),
(messages to memory),
are the messages issued by the cache and consumed by the memory, respectively.
The state of a channel machine is made of all the messages in transit. At each state expansion
step, a receiving (or sending) channel may record the command sent by the memory (or its
cache), or may propagate a command to its cache (or the memory). The behavior of each cache
controller is given in definition 3.
Definition 3. (Cache Machine) The state machine characterizing the cache behavior has
structure CM= (Q c , S r , S s , Xc ?, d1 c , Xc !, d2 c ), where
coherence messages as defined in definitions 1 and 2,
Xc ? and Xc ! are the messages consumed and produced by the cache, respectively.
Upon receiving a message, a cache controller may or may not respond by generating
response messages according to d1 c . Additionally, we embed the processor machine in the cache
machine. The processor may issue accesses to its local cache, which may cause cache state
changes and issuance of coherence messages as specified by d2 c . The finite state machines for
main memory and for the protocol are formalized as follows.
Definition 4. (Memory-Directory) The main memory machine keeping the directory has
structure
messages as defined in definitions 1 and 2,
Xm ? and Xm ! are the caches-to-memory and memory-to-caches commands respectively. When the
memory machine consumes a message, response messages may or may not be sent to caches. Q BM
denotes the set of possible states of a base machine as defined below.
Definition 5. (Base Machine) The base machine is the composition of the cache machine and of
its two corresponding channel machines, that is, BM = (CM, RChM, SChM).
Definition 6. (Protocol Machine) The protocol machine is defined as the composition of all base
machines and of the memory machine, that is, PM is the composition of BM_1, ..., BM_n and the memory machine for a system
with n caches.
The state tables used in the verification for d1 c , d2 c , and d m can be found in [26]. The
memory controller consumes messages from caches and responds according to the block state and
the message type. Finally, the state of the protocol machine is also referred to as the global state in
this paper.
3.3 Correctness Properties of the Protocol
In this paper we verify three properties: data consistency, incomplete protocol specification
and livelock, with the following definitions.
3.3.1 Data Consistency
The basic condition for cache coherence was given in [4]: All loads must always return the
value which was updated by the latest store with the same address. We formulate this condition
within the framework of the reachability expansion as follows.
Definition 7. (Data Consistency) With respect to a particular memory location, the protocol
preserves data consistency if and only if the following condition is always true during the
reachability analysis: the family of global states originated from G', including G' itself,
consistently return on a load the value written by a STORE access t which writes the most recent
value to the memory location and brings a global state G to G' or the value written by STORE
transitions after t. That is, states reached by expanding G' are not allowed to access the old value
defined before t.
In the architectural model of figure 2 memory accesses are made of several consecutive
events and thus are not atomic. We do not constrain in any way the sequences of access generated
by processors. Moreover the hardware does not distinguish between synchronization instructions
and regular load/store instructions. So, in this paper, latency tolerance mechanisms in the processors
and in the caches are not modeled and we assume that the mechanisms are correct and
enforce proper sequencing and ordering of memory accesses in cooperation with the software.
Based on the model of data values in section 3.1, data inconsistency is detected when a
processor is allowed to read data with obsolete values.
Definition 8. (Detection of Data Inconsistency) All data copies are tagged with values in the set
{nodata, fresh, obsolete} and data transfers are emulated in the expansion. Data inconsistency is
detected when a processor is allowed to read data with obsolete values.
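The value-tracking abstraction is simple to emulate. The following sketch (illustrative; message-level transfers are omitted) tags each copy with nodata, fresh, or obsolete, lets a store mark every other copy obsolete, and raises an error whenever an obsolete copy would be read, which is exactly the detection rule of Definition 8.

```python
NODATA, FRESH, OBSOLETE = "nodata", "fresh", "obsolete"

class BlockCopies:
    """Tags for the memory copy and the cached copies of one block."""
    def __init__(self, n_caches):
        self.memory = FRESH
        self.cache = [NODATA] * n_caches

    def store(self, i):
        """Cache i writes: its copy becomes the only fresh one."""
        self.cache = [OBSOLETE if t != NODATA else NODATA for t in self.cache]
        self.memory = OBSOLETE
        self.cache[i] = FRESH

    def load(self, i):
        """Reading an obsolete copy is flagged as a data-consistency violation."""
        if self.cache[i] == OBSOLETE:
            raise AssertionError("data inconsistency: obsolete copy read")
        return self.cache[i]
```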
3.3.2 Incomplete Protocol Specification
Because of unforeseen interleavings of events, the protocol specification is often incom-
plete, especially in the early phases of its development. This flaw manifests itself as an unspecified
message reception, i.e., some entity in the protocol receives a message which is unexpected given
its current state [31].
State machine models are very effective at detecting unspecified message reception. The
procedure is simple and is directly tied to the structure of the reachability graph: an unspecified
message reception is detected when the system is in a state and a message is received for which no
transition out of the state is specified in the protocol description. Besides detecting the error, the
state enumeration shows the path leading to the erroneous state.
3.3.3 Deadlock and Livelock
A protocol is deadlocked when it enters a global state without possible exit. In a livelock
situation the processes interacting in the protocol could theoretically make progress but are
trapped in a loop of states (e.g, a processor keeps on re-trying a request which is always rejected
by another processor). Deadlocks are easy to detect in a state enumeration since they are states
without exits to other states, but it is very difficult to detect livelocks.
At the level of abstraction adopted in this paper, protocol components communicate via
messages. Thus we can only detect deadlocks and livelocks derived from the services (functional-
ity) provided by the cache coherence protocol. An example of a protocol-intrinsic livelock is a
blocked processor waiting for a message (e.g., an invalidation acknowledgment) which is never
sent by another processor in the protocol specification. Deadlock and livelock conditions due to a
particular implementation of the protocol such as finite message buffers or the fairness of serving
memory requests cannot be detected at this level of abstraction.
Definition 9. (Livelock) In the context of coherence protocols, a livelock is a condition in which a
given block is locked by some processor(s) so that some processor is permanently prevented from
accessing the block [20].
In our state expansion process, we check the following conditions in order to detect livelocks
and correct the protocol:
Conditions for Livelock-freeness:
(a) The protocol can visit every state in the global state transition diagram infinitely many
times, that is, the global state transition diagram is strongly-connected. Given a global state,
every other state in the global state transition diagram is reachable [2].
(b) If a processor issues a memory access to a block, this memory access must eventually
be satisfied (e.g., a value is always returned on a load to resume processor execution). Specifi-
cally, given an initial global state in which a cache is in an "invalid" state, there must exist reachable
global states in which the cache state becomes "shared" or "dirty" after a read miss or a write
access [20].
Conditions (a) and (b) are sufficient to avoid livelocks as defined in definition 9 because
they assure that every processor can read and modify a block an arbitrary number of times. Condition
(a) is stronger than necessary because it assumes that the cache protocol operates in steady
state. A cache protocol machine might start from an initial state and never return to it later. In this
case, the global state graph would comprise two sub-graphs: One sub-graph consisting of the initialization
state would have exits to a second sub-graph corresponding to the steady state operation
of the protocol, but not vice versa. This special case can be identified by careful analysis of
the state graph after a livelock is reported.
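Condition (a) amounts to a strong-connectivity check on the reachability graph produced by the expansion. A minimal sketch (assuming every reachable global state appears as a key of the adjacency map):

```python
def strongly_connected(graph, start):
    """graph maps each global state to the set of its successor states."""
    def reach(adj):
        seen, stack = {start}, [start]
        while stack:
            for v in adj.get(stack.pop(), ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    reverse = {}
    for u, vs in graph.items():
        for v in vs:
            reverse.setdefault(v, set()).add(u)
    # Strongly connected iff every state is reachable from 'start' and can reach it.
    return reach(graph) == reach(reverse) == set(graph)
```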
4 The Verification Method
In the models of figure 2 the order of the states of base machines in a global state representation
is irrelevant to protocol correctness. Because of this symmetry, the size of state space can be
reduced by a factor n!, given a system with n processors. The symbolic state model (SSM)
exploits more powerful abstraction relations than the symmetric relation in order to further reduce
the size of the state space. To be reliable, the new abstraction must be equivalent to the system
model with respect to the properties to verify.
4.1 Equivalence Between State Transition Systems
In general, we formalize the system to verify as a finite state transition system:
Definition 10. (Finite State Transition System) With respect to a cache block, the behavior of a
cache system with m local cache automata is modeled by a finite state transition system M: (s_0, A, S, Σ, δ), where
s_0 is the initial state,
A is the set of state symbols,
S is the global state space (a subset of A × A × ... × A, m times),
Σ is the set of operations causing state transitions, and
δ represents the state transition function, δ: S × Σ → S.
Consider a state transition system M: (s_0, A, S, Σ, δ). With respect to the properties P to
verify, we want to find a more abstract state transition system M_r: (s_0r, A_r, S_r, Σ, δ_r) that
corresponds to M, such that S_r is smaller than S and error states of M are mapped into error states of M_r.
Definition 11. (Correspondence) Given two state transition systems M: (s_0, A, S, Σ, δ) and M_r: (s_0r, A_r, S_r, Σ, δ_r), M_r
corresponds to M if there exists a correspondence relation j such that:
1. s_0r corresponds to s_0, i.e., s_0r j s_0,
2. For each s ∈ S, at least one state s_r ∈ S_r corresponds to s, i.e., s_r j s.
3. If M in state s makes a transition to state t on an enabled operation τ, and state s_r of M_r corresponds
to state s, then there exists a state t_r such that M_r moves from s_r to t_r by τ and t_r corresponds
to t.
Figure
4 illustrates this correspondence relation.
Figure 4. Correspondence Relation.
Definition 12. (Equivalence) Two state transition systems M and M_r such that M_r corresponds to M
are equivalent with respect to a property P to verify iff the following conditions are verified at
any step during the expansion of M_r. Let s_r be the current (correct) state of M_r, and let t_r be the
next state of M_r after a transition τ.
1. If P is verified in t r then P holds in all states t of M such that t r jt.
2. If P does not hold in t for some t such that t r jt then P does not hold in t r
3. If P does not hold in t r then there must exist states s and t such that s r js and t r jt and P does not
hold in t.
The first condition of definition 12 establishes that if the expansion of M r completes without
violating property P, then the expansion of M would also complete without violating P. The
second and third conditions of definition 12 ensure that an error state is discovered in the expansion
of M r iff an error state exists in M and the error state of M r corresponds to the error state of
M. In the following we first specify an abstract machine M r corresponding to the protocol
machine M of definition 6. We will then prove that M r is equivalent to M with respect to the cor-
rectness properties of section 3.3.
4.2 Abstract SSM Models with Atomic Memory Accesses
The SSM method was first introduced in [22] under the assumption of atomic memory
accesses. We developed an abstraction relation among global states based on the observation that,
in order to model cache protocols, the state must keep track of whether there exist 0, 1, or multiple copies in the Exclusive state, which holds the latest copy of the data. On the other hand, the number of read-only shared data copies does not affect protocol behavior, provided there is at least one
cached copy. Symbolic states can be represented by using repetition constructors.
Definition 13. (Repetition Constructors-Atomic Memory Accesses)
1. The Null (0) specifies zero instances.
2. The Singleton (1) specifies one and only one instance. This constructor can be omitted in the
state representation.
3. The Plus (+) specifies one or multiple instances.
4. The Star (*) specifies zero, one or multiple instances.
With these repetition constructors we can represent for example the set of global states
such that "one or multiple caches are in the Invalid state, and zero, one or multiple caches are in
the Shared state" as metastate such as corresponds to a set of explicit states in
M.
Figure 5. Ordering Relations among Repetition Constructors.
Repetition constructors are ordered by the sets of states they represent. Thus, 1 < +, 1 < *, 0 < *, and + < * (figure 5). These ordering relations extend to the metastates (called composite states in [22]) such that, for example, one composite state is contained by another when the set of global states represented by the first composite state is a subset of those represented by the second composite state.
Because of this containment relation among composite states, only the composite states which are
not contained by any other composite state are kept during the verification. At the end of the state
expansion, the state space of M is collectively represented by a relatively small number of essential
composite states in M r [23].
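To make the containment relation concrete, the following Python sketch (not part of the paper; the dictionary encoding, function names and example states are purely illustrative) encodes the constructors of definition 13 together with the ordering of figure 5, and checks whether one composite state is contained by another.

```python
# Illustrative encoding of the repetition constructors of definition 13
# (0: zero instances, 1: exactly one, +: one or more, *: zero or more),
# their ordering (figure 5), and containment between composite states.

# (r1, r2) is in ORDER iff every count allowed by r1 is also allowed by r2.
ORDER = {
    ("0", "0"), ("1", "1"), ("+", "+"), ("*", "*"),
    ("0", "*"), ("1", "+"), ("1", "*"), ("+", "*"),
}

def class_contained(r1, r2):
    return (r1, r2) in ORDER

def composite_contained(s1, s2):
    """True if composite state s1 (a dict {base_state: constructor}) is
    contained by s2; a base state missing from a dict counts as Null (0)."""
    states = set(s1) | set(s2)
    return all(class_contained(s1.get(q, "0"), s2.get(q, "0")) for q in states)

if __name__ == "__main__":
    # (I+, S1) is contained by (I+, S*), but not the other way around.
    print(composite_contained({"I": "+", "S": "1"}, {"I": "+", "S": "*"}))  # True
    print(composite_contained({"I": "+", "S": "*"}, {"I": "+", "S": "1"}))  # False
```

Under these assumptions, pruning a composite state amounts to finding another recorded state for which composite_contained returns True.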
4.3 SSM Models with Non-atomic Memory Accesses
To model protocols with non-atomic accesses, we need to define the elements forming the
basis for the repetition abstraction and to add a new repetition constructor called the Universe
Constructor.
In the model of figure 2, base machines naturally form the units of repetition for the abstraction. Henceforth, a set of base machines in the same state will be represented by a tuple in which C is the cache state, p is the value of the presence bit in the directory, r is the number of base machines in the set (specified by one of the repetition constructors above), R is the state of the receiving channel, and S is the state of the sending channel. R and S are specified by the messages in transit in the channels. Since the channels model non-FIFO networks, the order of messages in each channel is irrelevant. Often, when there is no confusion, part of the notation may be omitted. For example, we will use the notation q r , where q combines the cache state with the states of its two message channels.
Although the singleton, the plus, and the star are useful to represent an unspecified number
of instances of a given construct (such as base machines in a given global state), they are not precise
enough to model intermediate states in complex protocol transactions triggered by event
counting. Consider an abstract state (S * , I + ). When a write miss occurs, all caches in shared state
S must be invalidated and the ultimate state is (O, I + ) in which the processor in state O has the
exclusive dirty copy. At the behavioral level [23], this state transition is done in one step because
memory accesses are assumed atomic. However, when accesses are no longer atomic, invalidations
are sent to caches in state S and the number of shared copies is counted down one-by-one
upon receiving invalidation acknowledgments. As a result, we need to distinguish the two states contained by metastate (S * , I + ); these two states correspond to the cases where either some or no caches are in state S. To deal with this problem, we first define the invalidation-set:
Definition 14. (Invalidation-Set) The invalidation-set (Inv-Set) contains the set of caches with
their presence bits set and which must be invalidated before the memory grants an exclusive copy.
When a request for an exclusive copy (such as request-for-ownership ReqO or request-for-
owner-copy ReqOC in our protocol) is pending at the memory, copies must be invalidated and the
state expansion process needs to keep track of whether the invalidation-set is empty. Since all
caches in the same state are specified by repetition constructors, the exact number of caches in a
particular state is unknown and using the * constructor alone to represent any number of copies
may prevent the expansion of some possible states.
Consider a composite state in which the invalidation-set is written between brackets and where Q denotes all other base machines with their presence bits reset.
Figure 6. Expansion Steps with Null and Non-null Instances Covered by the * Constructor.
When the memory receives the request for an exclusive copy (ReqOC) from the cache in
state C, it cannot determine whether the invalidation-set is empty because the definition of *
includes the cases of null and non-null instances. One way to solve this shortcoming in the notation
is to explore both cases in the expansion process. When the global state is expanded, two
states, corresponding to an empty and to a non-empty invalidation-set are generated. The expansion
steps are shown in figure 6. A typical expansion step is of the form q 1 * → (q 2 * , q 1 * ), which means that some machines in state q 1 change state to q 2 and others remain in q 1 .
1. In s0, suppose that memory receives a request for an exclusive copy from the cache in state C.
Two states corresponding to an empty and to a non-empty invalidation-set are generated. In s2,
invalidations are sent to caches in the invalidation-set, whereas in s1, the requester obtains an
exclusive copy (the new owner) and the invalidation-set is empty.
2. In the expansion of s2, caches in state C2 receive invalidations, respond with an invalidation
acknowledgment, and change state to C2'.
3. When the memory receives invalidation acknowledgments from caches in state C2' in s3, two
states with an empty and a non-empty invalidation-sets are again generated.
4. In global state s5, assume that caches in state C1 do not acknowledge invalidations because of
an incorrect design. In s6, when the acknowledgment messages from caches in state C2' are
received by the memory, the expansion may consider the invalidation-set as empty again and
make a transition to s4. However, the case where the invalidation-set is not empty is also covered
by * and must also be expanded. Either the process never stops or some errors go undetected.
In order to solve this problem, the expansion process needs to remember which expansion
path it followed. In figure 6, transitions (s0 → s1) and (s0 → s2) correspond to an empty and a non-empty invalidation-set, respectively. However, the invalidation-set in s2 should in fact cover only three cases, namely those in which at least one of the two classes in the set is non-empty. Unfortunately, splitting states such as s2 results in a combinatorial explosion of the state space. A more
efficient solution is to work on state s2 and keep track of whether the invalidation-set is empty.
To this end, we introduce a new constructor called the universe constructor or u constructor. When a transition is applied to a non-empty invalidation-set whose components are specified by the * constructor, the null case is not generated. Rather, the components inside the invalidation-set are expanded one by one without considering the null case. To keep track of the fact that we have expanded a component at least once without considering the null case, we use the u constructor: when a component is expanded, its * constructor is replaced by u. The u constructor is similar to * except that the transition to the null case can now be exercised in the expansion of the invalidation-set. An invalidation-set may be considered empty if and only if every component in it is specified by the u constructor.
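As a rough illustration of the emptiness condition above, the sketch below (our own illustrative Python, not the paper's notation; the dictionary encoding of an invalidation-set is an assumption) replaces a component's * constructor by u when it is expanded and tests whether the set may be considered empty.

```python
# Illustrative sketch of the u constructor and the emptiness test for an
# invalidation-set; the set is modeled as {cache_state_class: constructor}.

def expand(inv_set, state_class):
    """One expansion step on `state_class` (e.g., its caches respond to an
    invalidation).  The null case is not generated; instead the component's
    constructor becomes 'u', recording that the class has been expanded at
    least once and that its null case may be exercised later."""
    updated = dict(inv_set)
    updated[state_class] = "u"
    return updated

def may_be_empty(inv_set):
    """The set may be considered empty iff every component carries 'u'."""
    return all(r == "u" for r in inv_set.values())

if __name__ == "__main__":
    inv = {"C1": "*", "C2": "*"}          # as in figure 6
    print(may_be_empty(inv))              # False
    inv = expand(inv, "C2")
    print(may_be_empty(inv))              # False: C1 was never expanded
    inv = expand(inv, "C1")
    print(may_be_empty(inv))              # True: the pending request may complete
```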
Let's examine the expansion steps using the u constructor to see how the procedure works
(figure 7).
1. In global state s1, the expansion process explores the path in which some caches respond to
invalidations. In global state s2, the constructor * is replaced by u for the class of caches
remaining in C2, so that the next time it is expanded, the expansion process will consider the
null case.
Figure 7. Resolution Provided by the u Constructor.
2. In global state s3, the expansion process chooses to expand the class of caches in state C1, considering
only the case of the non-empty set. If caches in state C1 do not acknowledge invalida-
tions, the process moves into state s4. According to the condition for emptiness of the
invalidation-set, the pending request for an exclusive copy in state s4 is never resumed
(because of the caches in class C1'). This situation can be easily detected as a livelock situation
in which the protocol is trapped in a loop.
3. On the other hand, if all caches in the invalidation-set acknowledge memory, the expansion
process takes another path through states s4', s5 and s6.
The only occurrence of event counting in write-invalidate protocols is the collection of
acknowledgments for invalidations. In write-update protocols updates must be acknowledged in
the same way: the equivalent of the invalidation-set could be called the update-set.
4.4 Symbolic State Model
Combining the basic framework of section 4.2 and the refinement of section 4.3, in a system
with an unspecified number of caches, we group base machines in the same state into state
classes and specify their number in each class by one of the following repetition constructors.
Definition 15. (Enhanced Set of Repetition Constructors)
1. The Null (0) specifies zero instances.
2. The Singleton (1) specifies one and only one instance. This constructor can be omitted in the
state representation.
3. The Plus (+) specifies one or multiple instances.
4. The Star (*) specifies zero, one or multiple instances. However, the case of "zero instance" is
not explored for transactions depending on event counting in the expansion.
5. The Universe (u) specifies zero, one or multiple instances. The case of "zero instance" is
explored for transactions depending on event counting in the expansion.
Definition 16. (Composite State) A composite state represents the state of the protocol machine
for a system with an arbitrary number of base machines. It is constructed over state classes of the form (q 1 r 1 , q 2 r 2 , ..., q n r n , q MM ), where n = |Q BM | is the number of states of a base machine, q i ∈ Q BM , r i ∈ {0, 1, +, *, u}, and q MM ∈ Q m is the memory machine state.
Repetition constructors are again ordered by the possible states they specify, extending the order of figure 5 to the u constructor (figure 8). This order leads to the definition of state containment.
Figure 8. Ordering Relations Among the Repetition Constructors (Enhanced Set).
Definition 17. (Containment) We say that composite state S 2 contains composite state S 1 , or S 1 ⊆ S 2 , if the two composite states have the same memory machine state and every state class q r 1 of S 1 is matched by a state class q r 2 of S 2 (for the same base state q) with r 1 ≤ r 2 in the constructor order.
The consequence of containment is that, if S 1 ⊆ S 2 , then the family of states represented by S 2 is a superset of the family of states represented by S 1 . Therefore, S 1 can be discarded during the verification process provided we keep S 2 . We will prove that the expansion process based on the expansion rules of section 4.4.1 is a monotonic operator on the set of composite states S; that is, S 1 ⊆ S 2 implies τ(S 1 ) ⊆ τ(S 2 ), where τ is a memory event applied to S 1 and S 2 .
4.4.1 Rules for the Expansion Process
The set of operators applicable to composite states during the state generation process is
defined as follows, where '/' stands for "or" and → denotes a state transition.
1. Aggregation: base machines in the same state are merged into one state class; for example, (q 0 , q r ) → q r .
2. Coincident Transition: q 1 r → q 2 r , where r ∈ {1, +, *, u} and the observed transition τ is taken simultaneously by all machines in the class.
3. One-step Transition:
(a) (Q, q 1 1 ) → (Q', q 2 ),
(b) (Q, q 1 + ) → (Q', q 2 , q 3 * ),
where all machines not in state q 1 are denoted by Q in the tuple, τ is a transition applied to one base machine in state q 1 such that q 1 → q 2 , and τ causes all other base machines in q 1 to move to state q 3 . After the transition, some machines in Q may be affected as shown by the change from Q to Q'.
4. N-steps Transitions: this rule specifies the repetitive application of the same transition N times, where N is an arbitrary positive integer; its variants (a), (b) and (c) cover the possible outcomes of the repetition, depending on whether some or all of the machines in state q 1 have taken the transition.
5. Progress Transitions: provided q 1 → q 2 , variant (a) moves machines of the invalidation-set from q 1 to q 2 one by one (replacing their * constructor by u), and variant (b) removes the last machines from the invalidation-set; the states between brackets form the invalidation-set and base machines not in the invalidation-set are denoted by Q in the tuples.
Aggregation rules group base machines in the same state. One example of a coincident
transition is when the memory controller sends an invalidation signal to every cache with a valid
copy. A one-step transition occurs for example when the memory receives a request for an exclusive
copy from a base machine in class q 1 ; this base machine changes its state to q 2 (the request
message is removed from the sending channel) because the memory normally processes only one
request for an exclusive copy at a time; in this case all other machines in q 1 and in Q may stay in
the same state or may change state because new invalidation messages are sent to their receiving
channel.
Rules (b) and (c) of an N-steps transition correspond to two chains of applications of the same one-step transition. The same transition q 1 → q 2 can be applied an unlimited number of times as long as there are base machines in state q 1 . The transition τ has no effect on other machines (denoted by Q in the
tuple). Typical examples are: (1) processors replacing their copy in state shared, (2) processors
receiving the same type of messages, and (3) processors issuing the same memory access independently
Two additional rules with similar interpretation as N-steps transitions are required for the
progress of the expansion process. These progress transitions deal with protocol transactions
involving event counting, as explained in section 4.3. In our protocol, they model the processing of a request for an exclusive copy at the memory. Transition τ is the reception of an invalidation acknowledgment (such as IAck in table 1). Inv-Set is
the set of caches with their presence bits set at the memory and which must be invalidated before
the memory grants an exclusive copy. Rule (a) applies during the invalidation process whereas
rule (b) applies after the successful invalidation of all copies.
During the state expansion process, all cache transactions possible in the current state are
explored. A state expansion step has two phases. First, a new composite state is produced by
applying one of the above transition rules to the current state. Second, the aggregation rule is
applied to lump base machines in the same state (see for example figure 11).
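The two-phase expansion step can be sketched as follows in Python (illustrative only: the simplified one-step rule, the combine function and the example state are our assumptions, and the u constructor is omitted for brevity):

```python
# Illustrative two-phase expansion step: apply a (simplified) transition rule,
# then aggregate classes that share the same base state.
# A composite state is a list of (base_state, constructor) pairs.

def combine(r1, r2):
    """Aggregate two classes of the same base state: the result must cover
    the sum of the counts allowed by r1 and r2 (e.g., '1' + '*' gives '+',
    '*' + '*' gives '*')."""
    if r1 == "0":
        return r2
    if r2 == "0":
        return r1
    if r1 == "*" and r2 == "*":
        return "*"
    return "+"

def aggregate(classes):
    merged = {}
    for state, r in classes:
        merged[state] = combine(merged[state], r) if state in merged else r
    return sorted(merged.items())

def one_step(classes, q1, q2):
    """One machine of the (non-empty) class q1 takes the transition and moves
    to q2; the machines left in q1 are covered by '*' since none may remain."""
    out = []
    for state, r in classes:
        if state == q1:
            out.append((q1, "*"))
            out.append((q2, "1"))
        else:
            out.append((state, r))
    return aggregate(out)

if __name__ == "__main__":
    # A read miss: one Invalid cache obtains a Shared copy.
    print(one_step([("I", "+"), ("S", "*")], "I", "S"))  # [('I', '*'), ('S', '+')]
```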
4.5 Monotonicity of the State Expansion
In general, a system to verify is composed of finite state machines so that one machine can
communicate with all other machines directly, and a composite state of any SSM is of the form (q 1 r 1 , ..., q n r n ), where the q i are the possible states of each machine and the r i are repetition constructors. A partial
order exists among repetition constructors such as the one in figure 8. State expansion rules
include aggregation rules, one-step transition rules and compound transition rules corresponding
to multiple applications of the same one-step transition rule. Aggregation rules are the rules used
to represent symbolic states as compactly as possible, based on the partial order on the repetition
constructors of the abstract state representation. Containment of composite states is based on the
partial order among constructors.
In this context, we can prove that the expansion rules in SSM are monotonic operators. Intuitively, if an SSM state S 1 is contained by S 2 and if an
expansion step is done correctly, then the next states of all the states included in S 1 must also be
contained in S 2 . This expansion and containment of the abstract states in SSM are independent of
the properties to verify. Properties such as data consistency (see definition 7) are formulated by
users, and then are checked on the reduced state space.
Lemma 1. The aggregation process is monotonic; that is, if r 11 ≤ r 21 and r 12 ≤ r 22 , then the class obtained by aggregating q r 11 and q r 12 is contained by the class obtained by aggregating q r 21 and q r 22 , where q is a possible state of each state machine and all r ij are repetition constructors.
Proof: The proof follows from the ordering relation among the repetition constructors and from
checking all possible combinations of r 11 , r 12 , r 21 and r 22 subject to the constraints of this lemma
and to the aggregation rule. q
Lemma 2. The immediate successor S 1 ' originated from state S 1 = (q 1 r 11 , ..., q n r 1n , q MM ) is contained by the immediate successor S 2 ' originated from state S 2 = (q 1 r 21 , ..., q n r 2n , q MM ) with S 1 ⊆ S 2 , if the same expansion rule taken on the same memory event τ is applied to S 1 and S 2 .
Proof: We only need to consider the effect of applying τ to machines in state q i in S 1 and S 2 . To simplify the notation, all classes q j (j ≠ i) are lumped in Q. Provided q i → q k , two successor states are generated when a one-step transition rule is applied to S 1 and S 2 , where Q' indicates that the transition may cause state changes of other machines. Since the constructor of q i in S 2 includes the case of a single base machine, the constructor of the machines remaining in q i in S 2 ' must include the case of zero base machines, and it is clear that S 1 ' ⊆ S 2 '. The containment relation is also true when compound rules involving multiple one-step transitions such as the N-steps rule and the Progress rule are applied to S 1 and S 2 . q
Lemma 3. The claim S 1 ' ⊆ S 2 ' holds if S 1 ⊆ S 2 , where S 1 ' and S 2 ' are composite states derived from S 1 and S 2 by the same expansion step (a transition rule followed by aggregation).
Proof: Because the aggregation process is monotonic by lemma 1, lemma 3 simply extends the
results of lemma 2. q
Theorem 1. (Monotonicity) If S 1 ⊆ S 2 , then for every S 1 ' reachable from S 1 there exists S 2 ' reachable from S 2 such that S 1 ' ⊆ S 2 '.
Proof: This is an immediate result of lemma 3. q
The algorithm for the state expansion process is shown in figure 9. Two lists keep track of
non-expanded and visited states. At each step, a new state is produced and states which are contained
by any other states are pruned. The final output is a set of essential states.
Definition 18. (Essential State) Composite state S is essential if and only if there does not exist another composite state S' such that S ⊆ S'.
Readers should be aware of the fact that the generation of all essential states is successful
only when the verified system is correct. If the system is incorrect, expanding error states which
lead to unpredictable states is practically meaningless. We assume that the state expansion process
terminates whenever an error is detected. As illustrated in figure 10, the state space reported at the
end of an error-free expansion process is partitioned into several families of states (which may be overlapping) represented by essential composite states [23].
Figure 9. Algorithm for Generating Essential States.
Theorem 2. The essential composite states generated by the algorithm of figure 9 are complete.
They symbolically represent all states which can be produced by a basic state enumeration
method with no state abstraction.
Proof: Consider states s, t such that s → t in the state enumeration method, and composite SSM states s r , t r with s r → t r in the symbolic form such that s r covers s. The resulting next state t r also
covers t, because during the generation of composite states from s to t the same transition functions
are applied and the same information is accumulated as in the expansion of s into t. q
4.5.1 Uniqueness of the Set of Essential States
The set of essential states is unique provided the state graph connecting the essential states
is strongly connected; namely, there exists at least one path from every essential state to all other
essential states.
Algorithm: Essential States Generation.
W: list of working composite states.
H: list of visited composite states (output: essential states).
while (W is not empty) do
begin
  get current state A from W.
  for all state classes v in A
    for all applicable operations τ on v
    begin
      generate the successor A' of A by applying τ to v, followed by aggregation.
      if A' ⊆ P for some state P in W, or A' ⊆ Q for some state Q in H,
        then discard A'
        else begin
          remove P from W if P ⊆ A'.
          remove Q from H if Q ⊆ A'.
          add A' to W.
        end.
      if A ⊆ A', discard A and terminate all FOR loops, starting a new run.
    end.
  insert A into H if A is fully expanded and is not contained.
end.
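A compact executable rendering of this worklist scheme is sketched below in Python (our own code, not the authors' tool; successors and contained are assumed to be supplied by the protocol model, e.g., the expansion rules of section 4.4.1 and the containment test of definition 17):

```python
# Illustrative worklist algorithm mirroring the pruning logic of figure 9.
# `successors(state)` and `contained(s1, s2)` are supplied by the protocol
# model (the expansion rules of section 4.4.1 and definition 17).

def essential_states(initial, successors, contained):
    W = [initial]   # working (not yet fully expanded) composite states
    H = []          # visited composite states: the reported essential states

    def covered(s, pool):
        return any(contained(s, p) for p in pool)

    while W:
        A = W.pop()
        for A_next in successors(A):
            if covered(A_next, W) or covered(A_next, H):
                continue                                     # A_next adds nothing new
            W = [p for p in W if not contained(p, A_next)]   # prune subsumed states
            H = [q for q in H if not contained(q, A_next)]
            W.append(A_next)
        if not covered(A, W) and not covered(A, H):
            H.append(A)                                      # A is kept as essential
    return H
```

With these two parameters plugged in, the returned list plays the role of the set ES in theorem 3; like the original algorithm, the sketch assumes the expansion eventually stabilizes.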
Theorem 3. If a successful run of the verification starting with a legal initial state generates a set
of essential states ES such that the state transition graph formed by essential states in ES is
strongly connected, then the set ES is unique in the sense that the state expansion process always
produces the same set of essential states ES if it starts with any legal and reachable state S.
Proof: The set of essential states defines a fixpoint where the state expansion process terminates.
From theorem 2, the states in ES represent all possible configurations that the system can reach.
Therefore, S must be contained by at least one S e in ES. Because the symbolic state expansion is
monotonic, all states derived from S are contained by states derived from S e . When the state transition
graph of ES is strongly connected, there must exist at least one path from S e to all other
essential states. Hence it is impossible for the expansion starting from S to miss any essential state in ES. q
Figure 10. Representation of the State Space by Essential States.
Theorem 3 does not hold when the state graph is not strongly connected. Consider the
simple case in which the state graph consists of two subgraphs, G1 and G2. G1 and G2 are individually
strongly connected and paths exist from G1 to G2, but not vice versa. If the state expansion
process starts from a state which is contained by states in G2 but not by states in G1, then
only subgraph G2 is produced. In order to generate the entire state graph, the state expansion must
start with a state in G1. However, a livelock error, namely that G2 has no transition back to G1, may be reported in the above case according to the conditions in section 3.3.3. To overcome the problem, we can
isolate the subgraphs and analyze them.
Protocol designers cannot determine whether the state graph is strongly connected in
advance. It is, however, normally safe to start the state expansion process with an initial state in
which all caches are invalid because this is usually the state when the system is turned on.
4.6 Accumulation of State Information
The accumulation and compaction of state information in composite states is a major
strength of the SSM method over other approaches. Consider the simple state transitions caused by read misses under the assumption of atomic memory accesses:
(Invalid * ) → (Shared, Invalid * ) → (Shared + , Invalid * ), where the last composite state is essential.
Initially, no processor has a copy of the block. On each read miss, a cache receives a shared data copy and all other caches remain in the Invalid state. In order to reach the state (Shared, Shared, Invalid), which is covered by (Shared + , Invalid * ), a traditional state enumeration
method would need to model at least three caches. In general, it is difficult to predict the number
of caches needed in a model to reach all the possible states of a protocol. The SSM method eliminates
this uncertainty since it verifies a protocol model independently of the number of processors.
5 Correspondence between State Enumeration and SSM Models
We have shown that the SSM expansion is monotonic. We still need to prove that the
abstract SSM state transition system M r : (s 0r , A, S r , Σ, δ r ) is equivalent to the explicit state transition system M: (s 0 , A, S, Σ, δ) with respect to the properties of section 3.3. The correspondence
relation j in SSM is as follows.
Definition 19. (Correspondence Relation) State s r = (q 1 r 1 , ..., q n r n ) corresponds to state s = (a 1 , a 2 , ..., a m ), i.e., s r j s, if s is one of the states abstractly represented by s r , where a i is the state of the local automaton i and each a i ∈ {q 1 , ..., q n }. The number of local automata of s in state q j must be a case covered by the repetition constructor r j .
We can always find an abstract initial state s 0r which corresponds to the initial state s 0 in
the explicit model. For instance, it is normal to start the verification with an initial state in which
no cached copy exists. In this case, all caches are invalid and (Invalid * ) j (Invalid, Invalid, ..., Invalid).
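The correspondence relation of definition 19 can be checked mechanically. The following sketch (illustrative Python; the dictionary and list encodings and the ALLOWS table are assumptions) counts how many local automata of an explicit state fall in each class of a symbolic state and verifies that each count is admitted by the class's repetition constructor.

```python
# Illustrative check of the correspondence relation s_r j s (definition 19).
from collections import Counter

ALLOWS = {
    "0": lambda n: n == 0,
    "1": lambda n: n == 1,
    "+": lambda n: n >= 1,
    "*": lambda n: n >= 0,
    "u": lambda n: n >= 0,
}

def corresponds(symbolic, explicit):
    """symbolic: {local_state: constructor}; explicit: list of local states,
    one per automaton.  Every local state must belong to some class of the
    symbolic state, and each class's count must be covered by its constructor."""
    counts = Counter(explicit)
    if any(state not in symbolic for state in counts):
        return False
    return all(ALLOWS[r](counts.get(state, 0)) for state, r in symbolic.items())

if __name__ == "__main__":
    s_r = {"Shared": "+", "Invalid": "*"}
    print(corresponds(s_r, ["Shared", "Shared", "Invalid"]))  # True
    print(corresponds(s_r, ["Invalid", "Invalid"]))           # False: no Shared copy
```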
Theorem 4. Consider the state transition system M: (s 0 , A, S, Σ, δ) of an explicit model with (an arbitrary number of) m local automata and the abstract state transition model M r : (s 0r , A, S r , Σ, δ r ) in the SSM. Consider two states s = (a 1 , ..., a m ) and s r = (q 1 r 1 , ..., q n r n ), where a i is the state of the local automaton i and each a i ∈ {q 1 , ..., q n }. Given s r j s and a transition s → t in M, we can find t r such that s r → t r in M r and t r j t. Then, t is a state which violates a property of section 3.3 iff t r is an error state in M r .
Proof:
(1) In regard to data consistency and completeness of specification, the proof is a direct conse-
quence of theorems 1 and 2. Because s is one of the states represented by s r (i.e. s r js), the monotonic
operation of SSM guarantees that t is a state characterized by t r . Furthermore, data
consistency and completeness of specification are properties checked on the current states independent
of other states (for instance, data inconsistency is found when a processor is allowed to
read stale data; definition 8). Thus, t r must be an error state if t is an error state, and vice versa.
(2) To show absence of simple deadlocks and livelocks as defined in definition 9, we need to show
that processors are never trapped and are able to complete their reads and writes eventually (sec-
tion 3.3). Consider that the explicit model M is trapped in a subset of states: (s1 -> s2 -> s3 -> ... -> sn -> s1). In the abstract SSM model M r , we must have a corresponding set of states (s1 r -> s2 r -> s3 r -> ... -> sn r -> s1 r ) such that si r j si for all i because of theorem 1 and theorem 2. Suppose that the circular loop (s1 r -> s2 r -> s3 r -> ... -> sn r -> s1 r ) is broken because of some enabled transition from si r to t r . A corresponding exit from si to t must exist, with t r j t, because both M and M r have the same constituent finite state machines. q
6 Protocol Error Detection
Since unexpected message reception errors are easy to detect, we only describe the model
and the procedure for detecting inconsistencies. We also present a subtle livelock error found during
the course of this verification. Finally, we compare the performance of the SSM method with
Murj in terms of time complexity and memory usage. In all verification results reported here, the
expansion process starts with an initial state with no cached copy, empty message channels, and a free directory state. In the SSM method, the initial state is the composite state in which all caches are grouped in the Invalid class (with the * constructor) and the directory is free.
6.1 Data Inconsistency
The detection mechanism for data inconsistency is based on the model described in section
3.3.1. A status variable is added to caches and channel messages carrying data with possible
values of nodata (n), fresh (f), and obsolete (o). The status of the memory copy can be fresh (f) or
obsolete (o). Movements of data copies are modeled by assigning the status of one variable to another variable.
In figure 11 the state of each class has been augmented, between parentheses, by the status associated with every data value. The figure illustrates the state transitions triggered by a read miss request (ReqSC) and a transaction ending with an owner copy in a cache. In accordance with definition 7, the owner has the fresh copy, whereas all other copies, including the memory copy, become obsolete. Data inconsistency is detected whenever a processor can read obsolete data.
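A minimal sketch of this status-variable instrumentation, under our own simplified encoding (the Copy class and function names are assumptions; only the status values n, f, o follow the text):

```python
# Illustrative data-status instrumentation for detecting inconsistency.
# Status values follow section 6.1: nodata ('n'), fresh ('f'), obsolete ('o').

class Copy:
    def __init__(self, holder, status="n"):
        self.holder = holder        # e.g., "memory", "cache0", "msg:Data"
        self.status = status        # 'n', 'f' or 'o'

def grant_ownership(owner, others):
    """When a cache obtains the exclusive (owner) copy, it holds the fresh
    data; every other existing copy, including the memory copy, becomes obsolete."""
    owner.status = "f"
    for c in others:
        if c.status != "n":
            c.status = "o"

def check_read(copy):
    """Data inconsistency is flagged whenever a processor can read obsolete data."""
    if copy.status == "o":
        raise AssertionError(f"inconsistency: {copy.holder} reads obsolete data")
    return copy.status

if __name__ == "__main__":
    memory = Copy("memory", "f")
    c0, c1 = Copy("cache0", "f"), Copy("cache1", "f")   # two shared fresh copies
    grant_ownership(c0, [memory, c1])                    # c0 becomes the owner
    check_read(c0)                                       # fine: fresh copy
    try:
        check_read(c1)                                   # c1 still readable -> error
    except AssertionError as e:
        print(e)
```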
6.2 Livelock
The expansion steps leading to a livelock in the original protocol are now described. Ini-
tially, consider a system state with an owner and no request in progress (directory entry is free).
The state has the form (., free).
Figure 11. Data Transfer and Detection of Data Inconsistency in SSM.
Consider the following scenario.
1. The owner replaces its copy and writes the block back to memory. The resulting state indicates that a write-back message is in the cache's output channel.
2. Next, the same cache experiences a write miss and sends a request for an exclusive copy. The
new state is (., free) and a race exists between the write-back and the
ownership messages in the case of a non-FIFO network.
Figure 12. Livelock Detection in SSM.
3. The memory receives the ownership message before the write-back message; in this case the
memory state is changed to XOwnC and an invalidation (InvO) is sent to the cache because the
memory still records that the cache is an owner. The resulting state is (.,
XOwnC).
4. The cache receives the InvO message and changes its state to TxOI. The system state
becomes (., XOwnC).
5. Finally, when the memory receives the write-back message, it enters the synchronization state
Synch1 and expects a synchronization message SAck from the cache (figure 1). The system
state is (., Synch1). However, the synchronization message will never be sent
by the cache, which locks the directory entry forever.
In the SSM method, this error was successfully detected by reporting that a cycle exists
between four global states without exit to a state outside the loop, as shown in figure 12 (the global
state transition diagram is not strongly connected). This error was not detected by the present
Murj system because checking the connectivity of the global state diagram is overwhelmingly
complex when the size of the global state diagram is large.
The livelock condition originates from the fact that memory does not check the presence
bits when it receives an ownership request. The livelock can be removed by the following correction
to the protocol. When the memory receives a ReqOC message, it checks whether the processor
identifier of the message corresponds to the current owner. If it does, the memory state is
changed to the synchronization state Synch1 directly (following the state diagram in figure 1).
Later, when the write-back message arrives, the memory updates its copy of the block, supplies
the cache with the copy of the block and unlocks the directory entry.
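The correction can be sketched as follows (illustrative Python; the state and message names XOwnC, Synch1, InvO and ReqOC follow the text, while the directory dictionary, the helper functions and the overall handler structure are our assumptions rather than the protocol's actual specification):

```python
# Illustrative sketch of the corrected memory-side handling of ReqOC.
# The directory is modeled as a plain dictionary; helper functions only print.

def send_invalidation(owner):
    print(f"send InvO to {owner}")

def supply_copy(requester, data):
    print(f"supply block copy to {requester}")

def on_reqoc(directory, requester):
    """Corrected rule: if the requester is already recorded as the owner, a
    write-back from it may still be in flight, so go to Synch1 directly and
    wait for the write-back instead of sending an InvO back to the requester."""
    directory["requester"] = requester
    if directory["owner"] == requester:
        directory["state"] = "Synch1"
    else:
        directory["state"] = "XOwnC"
        send_invalidation(directory["owner"])

def on_writeback(directory, data):
    """On the pending write-back in Synch1: update the memory copy, supply
    the requester and unlock the directory entry."""
    if directory["state"] == "Synch1":
        directory["memory_copy"] = data
        supply_copy(directory["requester"], data)
        directory["state"] = "free"

if __name__ == "__main__":
    d = {"state": "free", "owner": "cache0", "memory_copy": None}
    on_reqoc(d, "cache0")          # the former owner asks for a new exclusive copy
    on_writeback(d, "block-data")
    print(d["state"])              # free: the directory entry is not locked forever
```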
6.3 Comparison with the Murj System
The Murj system, developed by Dill et al. [8], is based on state enumeration. There are
two versions of Murj: the non-symmetric Murj system (Murj-ns) and the symmetric Murj system
(Murj-s). In Murj-ns, two system states are equivalent if and only if they are identical
whereas Murj-s exploits the symmetry of the system by using a characteristic state to represent
states which are permutations of each other [16]. For example, two system states composed of
three local cache states, (shared, shared, invalid) and (invalid, shared, shared), are deemed equivalent
because the order of cache states in the global state representation is irrelevant to the correctness
of the protocol.
The time complexity and memory usage of a verification are closely related to the size of
the system state space. Generally, an exhaustive search algorithm performs three fundamental
operations.
1. Generate a new state, if there is any left; otherwise terminate and report the final set of global
states.
2. Compare the new state against the set of previously visited states.
3. Keep the new state for future expansion if the new state was not visited before.
The most time-consuming step is comparing the new state to previously visited states. The
time complexity grows in proportion to the size of the search space (the set of states generated and
analyzed during the procedure), while the memory usage increases with the size of the global
state space (the set of states saved and reported at the end). Since the search space is a direct
expansion of the global state space, reducing the size of the global state space is particularly
important. Murj incorporates state encoding to reduce memory usage and hash tables to speed up
the search and comparison operations. These optimizations are not implemented in SSM.
Table 2 shows performance comparisons between Murj-ns, Murj-s, and SSM for the verification of the protocol. We
make the following observations. First, for small-scale systems with less than five processors, the
time complexity and the memory usage of Murj-ns and Murj-s are tolerable. Second, the sizes of
both the global state space and the search space of Murj-s are significantly less than those of
Murj-ns, but there is little difference in the times taken by both methods. In the case of four processor
systems, we observe that Murj-s takes longer than Murj-ns. (The extra overhead due to
the state permutation mappings in Murj-s may explain this.) As more processors are added in the
model, the verification time and the memory usage increase drastically in both cases. As compared
to Murj, SSM is very efficient. The verification based on SSM runs in 0.9 seconds with 0.02 Mbytes of memory, and the state space (123 global states) is comparatively very small.
The fact that the performance of classical enumeration techniques is acceptable for small
system sizes raises the question of whether more elaborate approaches such as SSM are really
needed. Since the final set of essential states reported in the SSM covers all possible states the system
can reach, essential states with the maximum number of base machines in different states represent
the most complex states of the system. In the verification using SSM, the most complex
essential states consisted of 25 base machines in different states. This means that a system model
of at least 25 processors is required to obtain a 100% error coverage in a state enumeration
method. In the case of Murj-ns, we observe that the size of the search space increases roughly by a large constant factor each time one more processor is added to the model. If this trend continues up to 25 processors, the search space could reach a size of 10^37 states for a model with 25 processors. The time
Table 2. Comparison between SSM, Murj-ns and Murj-s.
Columns: Method; number of processors; size of global state space; size of search space; verification time (seconds); memory usage (Mbytes).
Murj-ns, 5 processors: excessive memory usage (over 200 Mbytes).
SSM, any n > 1: 123 global states; search space of 4,205 states; 0.9 seconds; 0.02 Mbytes.
and the memory space needed by a verification of such complexity would be prohibitive on any
existing machine.
In the protocol verified in this paper, the number of messages floating in any one of the
message channels at any time is bounded in spite of the fact that the number of processors in the
model is arbitrary. However, the SSM method does not preclude the possibility that a protocol
may allow processors to send multiple or even an arbitrary number of messages of the same type
[25]. As a result, the model for message channels may need to be adapted by using some finite
variables to represent infinite system behavior [13]. In such cases, repetition constructors might be
useful to keep track abstractly of the number of messages of the same type.
The SSM method can detect protocol-intrinsic livelocks (section 3.3.3). Because the number
of global states reported is relatively small (123 states in this case), the time complexity of
checking the connectivity of the global state transition diagram is more manageable than in Murj.
7 Conclusion
Cache coherence protocols designed for systems assuming non-FIFO networks are
required in systems with adaptive routing and fault-tolerant interconnection networks. In this
paper, we have verified a directory-based cache coherence protocol for non-FIFO networks. The
verification of the protocol was done by the Murj system and the SSM method. Generally speak-
ing, from the study, we have found that the Murj system is effective in verifying small-scale systems
with manageable complexity. However, we have shown that, for the protocol verified in this
paper, a system model with at least 25 processors is required in order to reach 100% error cover-
age. With this many processors, the complexity of the state space search would be prohibitive for
the Murj system, whereas the performance of SSM shows that it could deal with much more
complex protocols than the one used in this paper.
Overall, the SSM method offers three advantages over classic state enumeration methods
with no state abstraction. First, it overcomes the state explosion problem. Second, since the entire
global state space is symbolically represented by a small number of essential states, the time complexity
of checking the connectivity of the global state transition diagram (needed for livelock
detection) is manageable. Third, it verifies the protocol for any system size.
Recently, Ip and Dill have integrated a variation of the SSM method in Murj [17]. Their
tool expands explicit states and then infers abstract states based on generated explicit states,
whereas our tool works directly on the abstract states. Therefore, the new Murj tool may require
multiple runs (adding one more processor to the model in each consecutive run) to reach the complete
verification results obtained with our method. Their experience confirms that classical state
enumeration approaches will be sufficient to verify protocols for systems with small numbers of
processors, whereas methods based on symbolic state representations such as SSM will be critical
in the future for the design of complex protocols in large-scale multiprocessors.
In the architectural model of figure 2 memory accesses are made of several consecutive
events and thus are not atomic. We do not constrain in any way the sequences of access generated
by processors. Moreover the hardware does not distinguish between synchronization instructions
and regular load/store instructions. So, in this paper, latency tolerance mechanisms in the processors
and in the caches are not modeled and we assume that the mechanisms are correct and
enforce proper sequencing and ordering of memory accesses [9]. However, the methodology of
SSM does not preclude the verification of consistency in the presence of latency tolerance hard-
ware. In order to include latency tolerance hardware, synchronization accesses must be modeled and the sequences of accesses generated by the processors must be constrained by the memory consistency
model [11]. This approach was applied in [26] and in [27] for the delayed consistency protocol
specified in [10].
Whereas state enumeration approaches are appropriate for verifying coherence properties,
they do not seem to be applicable to the verification of memory access orders. The reason is that
no one has found a way so far to formulate the verification property for memory order over the
state enumeration graph. Thus, the verification of memory access orders must still rely on testing
procedures [6] or on manual proofs [1, 12].
Acknowledgments
This research was supported by the National Science Foundation under Grant No. CCR-
9222734. We also want to acknowledge the contributions of David L. Dill and C. Norris Ip who
provided invaluable information on the Murj system.
--R
"A Lazy Cache Algorithm"
"The Cache Coherence Problem in Shared-Memory Multiprocessors"
"Cache Coherence Protocols: Evaluation Using a Multiprocessor Simulation Model"
"A new solution to coherence problems in multicache sys- tems"
"Directory-Based Cache Coherence in Large Scale Multiprocessors"
Reasoning About Parallel Architectures
"Protocol Representation with Finite-State Models"
"Protocol Verification as a Hardware Design Aid"
"Memory Access Buffering in Multiprocessors"
"Delayed Consistency and Its Effects on the Miss Rate of Parallel Programs"
"Mem- ory Consistency and Event Ordering in Shared-Memory Multiprocessors"
"Proving Sequential Consistency of High Performance Shared Memories"
"Verification of a Distributed Cache Memory by Using Abstractions"
"Communicating Sequential Processes"
"Algorithms for Automated Protocol Verification"
"Better Verification Through Symmetry"
"Verifying Systems with Replicated Components in Murj"
"The Stanford Flash Multiprocessor Design,"
"The Directory-Based Cache Coherence Protocol for the DASH Multiprocessor"
"Formal Verification of the Gigamax Cache Consistency Protocol"
"The S3.mp Scalable Shared Memory Multiprocessor"
"The Verification of Cache Coherence Protocols,"
"A New Approach for the Verification of Cache Coherence Proto- cols"
"A Survey of Techniques for Verifying Cache Coherence Proto- cols"
"Verifying Distributed Directory-based Cache Coherence Protocols: S3.mp, a Case Study"
"Symbolic State Model: A New Approach for the Verification of Cache Coherence Protocols,"
"Formal Verification of Delayed Consistency Protocols"
"Tempest and Typhoon: User-Level Shared Memory,"
"A Survey of Cache Coherence Schemes for Multiprocessors"
"Data Coherence Problem in a Multicache System"
"Towards Analyzing and Synthesizing Protocols"
--TR
Cache coherence protocols: evaluation using a multiprocessor simulation model
Memory access buffering in multiprocessors
The cache coherence problem in shared-memory multiprocessors
A lazy cache algorithm
A Survey of Cache Coherence Schemes for Multiprocessors
Directory-Based Cache Coherence in Large-Scale Multiprocessors
Proving sequential consistency of high-performance shared memories (extended abstract)
Delayed consistency and its effects on the miss rate of parallel programs
Reasoning about parallel architectures
The verification of cache coherence protocols
The Stanford FLASH multiprocessor
Tempest and typhoon
Symbolic state model
Verification techniques for cache coherence protocols
Communicating sequential processes
A New Approach for the Verification of Cache Coherence Protocols
Protocol Verification as a Hardware Design Aid
Formal Verification of Delayed Consistency Protocols
Verifying Distributed Directory-Based Cahce Coherence Protocols
Verification of a Distributed Cache Memory by Using Abstractions
Verifying Systems with Replicated Components in Murphi
Better Verification Through Symmetry | state enumeration methods;formal methods;cache coherence protocols;shared-memory multiprocessors;state abstraction |
285060 | Property testing and its connection to learning and approximation. | In this paper, we consider the question of determining whether a function f has property P or is ε-far from any function with property P. A property testing algorithm is given a sample of the value of f on instances drawn according to some distribution. In some cases, it is also allowed to query f on instances of its choice. We study this question for different properties and establish some connections to problems in learning theory and approximation. In particular, we focus our attention on testing graph properties. Given access to a graph G in the form of being able to query whether an edge exists or not between a pair of vertices, we devise algorithms to test whether the underlying graph has properties such as being bipartite, k-Colorable, or having a ρ-Clique (clique of density ρ with respect to the vertex set). Our graph property testing algorithms are probabilistic and make assertions that are correct with high probability, while making a number of queries that is independent of the size of the graph. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph that correspond to the property being tested, if it holds for the input graph. | Introduction
We are interested in the following general question of Property Testing: Let P be a fixed property of functions, and f be an unknown function. Our goal is to determine (possibly probabilistically) if f has property P or if it is far from any
function which has property P, where distance between functions
is measured with respect to some distribution D on the
domain of f . Towards this end, we are given examples of the
form (x; f(x)), where x is distributed according to D. We
may also be allowed to query f on instances of our choice.
The problem of testing properties emerges naturally in the
context of program checking and probabilistically checkable
proofs as applied to multi-linear functions or low-degree polynomials
[14, 7, 6, 19, 21, 36, 5, 4, 10, 11, 8, 9]. Property testing
per se was considered in [36, 35]. Our definition of property
testing is inspired by the PAC learning model [37]. It allows
the consideration of arbitrary distributions rather than uniform
ones, and of testers which utilize randomly chosen instances
only (rather than being able to query instances of their own
choice).
Full version available from http://theory.lcs.mit.edu/~oded/
y Dept. of Computer Science and Applied Math., Weizmann Institute of
Science, ISRAEL. E-mail: oded@wisdom.weizmann.ac.il. On sabbatical
leave at LCS, MIT.
z Laboratory for Computer Science, MIT, 545 Technology Sq., Cambridge,
MA 02139. E-mail: shafi@theory.lcs.mit.edu.
x Laboratory for Computer Science, MIT, 545 Technology Sq., Cambridge,
MA 02139. E-mail: danar@theory.lcs.mit.edu. Supported by an NSF
postdoctoral fellowship.
We believe that property testing is a natural notion whose
relevance to applications goes beyond program checking, and
whose scope goes beyond the realm of testing algebraic prop-
erties. Firstly, in some cases one may be merely interested
in whether a given function, modeling an environment, (resp.
a given program) possesses a certain property rather than be
interested in learning the function (resp. checking that the program
computes a specific function correctly). In such cases,
learning the function (resp., checking the program) as means
of ensuring that it satisfies the property may be an over-kill.
Secondly, learning algorithms work under the postulation that
the function (representing the environment) belongs to a particular
class. It may be more efficient to test this postulation
first before trying to learn the function (and possibly failing
when the postulation is wrong). Similarly, in the context of
program checking, one may choose to test that the program
satisfies certain properties before checking that it computes a
specified function. This paradigm has been followed both in
the theory of program checking [14, 36] and in practice where
often programmers first test their programs by verifying that
the programs satisfy properties that are known to be satisfied
by the function they compute. Thirdly, we show how to apply
property testing to the domain of graphs by considering
several classical graph properties. This, in turn, offers a new
perspective on approximation problems as discussed below.
THE RELEVANT PARAMETERS. Let F be the class of functions
which satisfy property P. Then, testing property P corresponds
to testing membership in the class F. The two parameters relevant to property testing are the permitted distance, ε, and the desired confidence, δ. We require the tester to accept each function in F and reject every function which is further than ε away from any function in F. We allow the tester to be probabilistic and make incorrect positive and negative assertions with probability at most δ. The complexity measures
we focus on are the sample complexity (the number of examples
of the function's values that the tester requires), the query
complexity (the number of function queries made - if at all),
and the running time of the tester.
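As an illustration of these parameters, here is a generic sampling-based tester skeleton in Python (ours, not the paper's; the draw, f and decide parameters and the sample-size formula are assumptions, and the right sample size in general depends on the property):

```python
# Illustrative skeleton of a sampling-based property tester.
# `draw()` returns an instance x distributed according to D; `f(x)` is the
# value oracle; `decide(sample)` is the property-specific rule that accepts
# or rejects based on the labeled sample.
import math

def property_tester(draw, f, decide, eps, delta, sample_size=None):
    """Generic tester: it should accept (w.p. >= 1 - delta) every f with the
    property and reject (w.p. >= 1 - delta) every f that is eps-far from it,
    provided `decide` and `sample_size` are chosen appropriately."""
    if sample_size is None:
        # Indicative choice only; the required size depends on the property.
        sample_size = math.ceil((1.0 / eps) * math.log(1.0 / delta)) + 1
    sample = [(x, f(x)) for x in (draw() for _ in range(sample_size))]
    return decide(sample)

if __name__ == "__main__":
    import random
    # Toy use: test whether f is the all-zero function over {0,...,999}
    # under the uniform distribution (reject if any sampled value is 1).
    f = lambda x: 0
    accepts = property_tester(draw=lambda: random.randrange(1000), f=f,
                              decide=lambda s: all(v == 0 for _, v in s),
                              eps=0.1, delta=0.01)
    print(accepts)   # True
```

For the toy all-zero property the chosen sample size of about (1/ε)·ln(1/δ) does suffice, but this is a property of that particular example rather than a general guarantee.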
1.1. Property Testing and Learning Theory
As noted above, our formulation of testing mimics the standard
frameworks of learning theory. In both cases one is given
access to an unknown target function (either in the form of
random instances accompanied by the function values or in
the form of oracle access to the function). A semantic difference
is that, for sake of uniformity, even in case the functions
are Boolean, we refer to them as functions rather than con-
cepts. However, there are two important differences between
property testing and learning. Firstly, the goal of a learning algorithm
is to find a good approximation to the target function
testing algorithm should only determine
whether the target function is in F or is far away from it. This
makes the task of the testing seem easier than that of learning.
On the other hand, a learning algorithm should perform well
only when the target function belongs to F whereas a testing
algorithm must perform well also on functions far away from
F . Furthermore, (non-proper) learning algorithms may output
an approximation ~
f of the target f 2 F so that ~
f 62 F .
We show that the relation between learning and testing is
non-trivial. On one hand, proper (representation dependent)
learning implies testing. On the other hand, there are function
classes for which testing is harder than (non-proper) learning. Nonetheless, there are also function
classes for which testing is much easier than learning. Further
details are given in Subsection 2.2. In addition, the graph
properties discussed below provide a case where testing (with
queries) is much easier than learning (also with queries).
1.2. Testing Graph Properties
In the main technical part of this paper, we focus our attention
on testing graph properties. We view graphs as Boolean
functions on pairs of vertices, the value of the function representing
the existence of an edge. We mainly consider testing
algorithms which use queries and work under the uniform dis-
tribution. That is, a testing algorithm for graph property P
makes queries of the form "is there an edge between vertices
u and v" in an unknown graph G. It then decide whether G
has property P or is "ffl-away" from any graph with property P,
and is allowed to err with probability 1=3. Distance between
two N -vertex graphs is defined as the fraction of vertex-pairs
which are adjacent in one graph but not in the other.
We present algorithms of poly(1/ε) query-complexity and running-time¹ at most exponential in poly(1/ε), for testing the following graph properties:
k-Colorability for any fixed k ≥ 2. (Here the query-complexity is poly(k/ε).)
ρ-Clique for any ρ > 0. That is, does the N-vertex graph have a clique of size ρN.
ρ-Cut for any ρ > 0. That is, does the N-vertex graph have a cut of size at least ρN². A generalization to k-way cuts works within query-complexity poly((log k)/ε).
ρ-Bisection for any ρ > 0. That is, does the N-vertex graph have a bisection of size at most ρN².
1 Here and throughout the paper, we consider a RAM model in which trivial
manipulation of vertices (e.g., reading/writing a vertex name and ordering
vertices) can be done in constant time.
Furthermore:
• For all the above properties, in case the graph has the desired property, the testing algorithm outputs some auxiliary information which allows one to construct, in poly(1/ε)·N time, a partition which approximately obeys the property. For example, for ρ-Cut, we can construct a partition with at least (ρ - ε)N² crossing edges.
• Except for Bipartite (2-Colorability) testing, running-time of poly(1/ε) is unlikely, as it would imply NP ⊆ BPP.
• None of these properties can be tested without queries when using o(√N) examples.
• The k-Colorability tester has one-sided error: it always accepts k-colorable graphs. Furthermore, when rejecting a graph, this tester always supplies a poly(1/ε)-size subgraph which is not k-colorable. All other algorithms have two-sided error, and this is unavoidable within o(N) query-complexity.
• Our algorithms for k-Colorability, ρ-Clique and ρ-Cut can be easily extended to provide testers with respect to product distributions: that is, distributions of the form Π(u, v) = π(u)·π(v), where π is a distribution on the vertices. In contrast, it is not possible to test any of the graph properties discussed above in a distribution-free manner.
GENERAL GRAPH PARTITION. All of the above properties are
special cases of the General Graph k-Partition property, parameterized
by a set of lower and upper bounds. The parameterized
property holds if there exists a partition of the vertices into k
disjoint subsets so that the number of vertices in each subset
as well as the number of edges between each pair of subsets
is within the specified lower and upper bounds. We present
a testing algorithm for the above general property. The algorithm uses poly(1/ε) queries (for any fixed k), runs in time exponential in its
query-complexity, and makes two-sided error. Approximating
partitions, if they exist, can be efficiently constructed in this
general case as well. We note that the specialized algorithms
perform better than the general algorithm with the appropriate
parameters.
OTHER GRAPH PROPERTIES. Going beyond the general graph
partition problem, we remark that there are graph properties
which are very easy to test (e.g., Connectivity, Hamiltonicity,
and Planarity). On the other hand, there are graph properties
in NP which are extremely hard to test; namely, any testing
algorithm must inspect at least a constant fraction of the vertex pairs. In view of the above, we believe that providing a characterization
of graph properties according to the complexity of
testing them may not be easy.
OUR TECHNIQUES. Our algorithms share some underlying
ideas. The first is the uniform selection of a small sample and
the search for a suitable partition of this sample. In case of
k-Colorability certain k-colorings of the subgraph induced by
this sample will do, and these are found by k-coloring a slightly
augmented graph. In the other algorithms we exhaustively try
all possible partitions. This is reminiscent of the exhaustive
sampling of Arora et al. [3], except that the partitions considered
by us are always directly related to the combinatorial
structure of the problem. We show how each possible partition
of the sample induces a partition of the entire graph so that the
following holds. If the tested graph has the property in question
then, with high probability over the choice of the sample,
there exists a partition of the sample which induces a partition
of the entire graph so that the latter partition approximately
satisfies the requirements established by the property in ques-
tion. For example, in case the graph has a ae-cut, there exists a
2-way-partition of the sample inducing a partition of the entire
graph with (ae \Gamma ffl)N 2 crossing edges. On the other hand,
if the graph should be rejected by the test, then by definition
no partition of the entire graph (and in particular none of the
induced partitions) approximately obeys the requirements.
The next idea is to use an additional sample to approximate the quality of each such induced partition of the graph, and discover whether at least one of these partitions approximately obeys the requirements of the property in question. An important point is that since the first sample is small (i.e., of size poly(1/ε)), the total number of partitions it induces is only exp(poly(1/ε)). Thus, the additional sample must approximate only these many partitions (rather than all possible partitions of the entire graph), and it suffices that this sample be of size poly(1/ε).
The difference between the various algorithms is in the way in which partitions of the sample induce partitions of the entire graph. The simplest case is in testing Bipartiteness. For a partition (S₁, S₂) of the sample, all vertices in the graph which have a neighbor in S₁ are placed on one side, and the rest of the vertices are placed on the other side. In the other algorithms the induced partition is less straightforward. For example, in the case of ρ-Clique, a partition (S₁, S₂) of the sample S with |S₁| = ρ·|S| induces a candidate clique roughly as follows. Consider the set T of graph vertices each neighboring all of S₁. Then the candidate clique consists of the ρN vertices with the highest degree in the subgraph induced by T. In the ρ-Cut, ρ-Bisection and General Partition testing algorithms, we use auxiliary guesses which are implemented by exhaustive search.
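To make the induced-partition idea concrete, the following sketch (Python; written for this text, not taken from the paper) enumerates the two-way partitions of a small sample and, for each, builds the induced bipartition described above, keeping the one with the fewest monochromatic edges. In the actual tester the quality of each induced partition is only estimated on a second sample rather than computed exactly as done here.

```python
import itertools

def best_induced_bipartition(adj, sample):
    """For each 2-way partition (S1, S2) of `sample`, place every vertex having a
    neighbour in S1 on side 2 and all other vertices on side 1, then count the
    edges that stay inside one side (the violating edges of the induced cut)."""
    n = len(adj)
    best = None
    for r in range(len(sample) + 1):
        for S1 in itertools.combinations(sample, r):
            S1 = set(S1)
            side = [2 if any(adj[v][u] for u in S1) else 1 for v in range(n)]
            bad = sum(adj[u][v] for u in range(n) for v in range(u + 1, n)
                      if side[u] == side[v])
            if best is None or bad < best[0]:
                best = (bad, side)
    return best  # (violating-edge count, induced 2-partition)
```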
1.3. Testing Graph Properties and Approximation
The relation of testing graph properties to approximation is best illustrated in the case of Max-CUT. A tester for the class ρ-Cut, working in time T(ε, N), yields an algorithm for approximating the maximum cut in an N-vertex graph, up to an additive error of εN², in time (1/ε)·T(ε, N). Thus, for any constant ε > 0, we can approximate the size of the max-cut to within εN² in constant time. This yields a constant-time approximation scheme (i.e., to within any constant relative error) for dense graphs, improving on Arora et al. [3] and de la Vega [17], who solved this problem in polynomial time (O(N^{1/ε²}) time and exp(Õ(·)) time, respectively). In both works the problem is solved by actually constructing approximate max-cuts. Finding an approximate max-cut does not seem to follow from the mere existence of a tester for ρ-Cut; yet, as mentioned above, our tester can be used to find such a cut in time linear in N (i.e., in poly(1/ε)·N time).
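As an illustration of this reduction, the sketch below (Python) treats a ρ-Cut tester as a black box with the interface of Definition 2.1 and scans ρ over a grid of step ε; the function name, the grid, and the way confidence is split across calls are choices made for this sketch rather than details from the paper.

```python
def approx_max_cut_density(rho_cut_tester, graph_oracle, n, eps, delta):
    """Estimate the max-cut size of an N-vertex graph to within an additive
    eps*N^2 using about 1/eps calls to a rho-Cut tester.  Accepting at density
    rho means (w.h.p.) some cut with roughly (rho - eps/2)*N^2 crossing edges
    exists; rejecting means no cut reaches rho*N^2 crossing edges."""
    calls = int(0.25 / eps) + 1          # cut density never exceeds about 1/4
    best = 0.0
    for i in range(calls):
        rho = i * eps
        if rho_cut_tester(graph_oracle, n, rho, eps / 2, delta / calls):
            best = rho
    return best                           # max-cut size is about best * n**2
```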
One can turn the question around and ask whether approximation algorithms for dense instances can be transformed into corresponding testers as defined above. In several cases this is possible. For example, using some ideas from our work, the Max-CUT algorithm of [17] can be transformed into a tester of complexity comparable to ours. We do not know whether the same is true with respect to the algorithms in [3]. Results on testing graph properties can be derived also from work by Alon et al. [1]. That paper proves a constructive version of the Regularity Lemma of Szemerédi, and obtains from it a polynomial-time algorithm that, given an N-vertex graph, either finds a subgraph of size f(ε, k) which is not k-colorable, or omits at most εN² edges and k-colors the rest. Noga Alon has observed that the analysis can be modified to yield that almost all subgraphs of size f(ε, k) are not k-colorable, which in turn implies a tester for k-Colorability. In comparison with our k-Colorability Tester, which takes a sample of O(k² log k/ε³) vertices, the k-Colorability tester derived from [1] takes a much bigger sample, of size equaling a tower of (k/ε)^20 exponents (i.e., log* f(ε, k) = (k/ε)^20).
A DIFFERENT NOTION OF APPROXIMATION FOR MAX-CLIQUE. Our notion of ρ-Clique testing differs from the traditional notion of Max-Clique approximation. When we talk of testing "ρ-Cliqueness", the task is to distinguish the case in which an N-vertex graph has a clique of size ρN from the case in which it is ε-far from the class of N-vertex graphs having a clique of size ρN. On the other hand, traditionally, when one talks of approximating the size of Max-Clique, the task is to distinguish the case in which the max-clique has size at least ρN from, say, the case in which the max-clique has size at most ρN/2. Whereas the latter problem is NP-Hard for ρ ≤ 1/64 (see [9, Sec. 3.9]), we have shown that the former problem can be solved in exp(O(1/ε²)) time, for any ρ, ε > 0. Furthermore, Arora et al. [3] showed that the "dense-subgraph" problem, a generalization of ρ-Cliqueness, has a polynomial-time approximation scheme (PTAS) for dense instances.
TESTING k-COLORABILITY VS. APPROXIMATING k-COLORABILITY. Petrank has shown that it is NP-Hard to distinguish 3-colorable graphs from graphs in which every 3-partition of the vertex set violates at least a constant fraction of the edges [30]. In contrast, our k-Colorability Tester implies that solving the same promise problem is easy for dense graphs, where by dense graphs we mean N-vertex graphs with Ω(N²) edges. This is the case since, for every ε > 0, our tester can distinguish, in exp(k²/ε³) time, between k-colorable N-vertex graphs and N-vertex graphs which remain non-k-colorable even if one omits at most εN² of their edges. 2
We note that deciding k-Colorability even for N-vertex graphs of minimum degree at least ((k−3)/(k−2))·N is NP-complete (cf. Edwards [18]). On the other hand, Edwards also gave a polynomial-time algorithm for k-coloring k-colorable N-vertex graphs of minimum degree at least αN, for any constant α > (k−3)/(k−2).
2 As noted by Noga Alon, similar results, alas with much worse dependence on ε, can be obtained by using the results of Alon et al. [1].
1.4. Other Related Work
PROPERTY TESTING IN THE CONTEXT OF PCP: Property testing
plays a central role in the construction of PCP systems. Specif-
ically, the property tested is being a codeword with respect to
a specific code. This paradigm, explicitly introduced in [6], has
shifted from testing codes defined by low-degree polynomials
[6, 19, 5, 4] to testing Hadamard codes [4, 10, 11, 8], and
recently to testing the "long code" [9].
PROPERTY TESTING IN THE CONTEXT OF PROGRAM CHECKING:
There is an immediate analogy between program self-testing
[14] and property-testing with queries. The difference is that
in self-testing, a function f (represented by a program) is
tested for being close to a fully specified function g, whereas
in property-testing the test is whether f is close to any function
in a function class G. Interestingly, many self-testers [14, 36]
work by first testing that the program satisfies some properties
which the function it is supposed to compute satisfies
(and only then checking that the program satisfies certain constraints
specific to the function). Rubinfeld and Sudan [36]
defined property testing, under the uniform distribution and
using queries, and related it to their notion of Robust Char-
acterization. Rubinfeld [35] focuses on property testing as
applied to properties which take the form of functional equations
of various types.
PROPERTY TESTING IN THE CONTEXT OF LEARNING THEORY:
Departing from work in Statistics regarding the classification
of distributions (e.g., [24, 16, 41]), Ben-David [12] and Kulkarni
and Zeitouni [28] considered the problem of classifying an
unknown function into one of two classes of functions, given
labeled examples. Ben-David studied this classification problem
in the limit (of the number of examples), and Kulkarni and
Zeitouni studied it in a PAC-inspired model. For any fixed ε, the problem of testing the class F with distance parameter ε can be cast as such a classification problem (with F and the set of functions ε-away from F being the two classes). A different
variant of the problem was considered by Yamanishi [39].
TESTING GRAPH PROPERTIES. Our notion of testing a graph property P is a relaxation of the notion of deciding the graph property P, which has received much attention in the last two decades [29]. In the classical problem there are no margins of error, and one is required to accept all graphs having property P and reject all graphs which lack it. In 1975, Rivest and Vuillemin [33] resolved the Aanderaa-Rosenberg Conjecture [34], showing that any deterministic procedure for deciding any non-trivial monotone N-vertex graph property must examine Ω(N²) entries in the adjacency matrix representing the graph. The query complexity of randomized decision procedures was conjectured by Yao to be Ω(N²). Progress towards this goal was made by Yao [40], King [27] and Hajnal [23], culminating in an Ω(N^{4/3}) lower bound. Our results, that some non-trivial monotone graph properties can be tested by examining a constant number of random locations in the matrix, stand in striking contrast to all of the above.
APPROXIMATION IN DENSE GRAPHS. As stated previously, Arora et al. [3] and de la Vega [17] presented PTASs for dense instances of Max-CUT. The approach of Arora et al. uses Linear Programming and Randomized Rounding, and applies to other problems which can be cast as "smooth" Integer Programs. 3 The methods of de la Vega [17] are purely combinatorial and apply also to similar graph partition problems. Following the approach of Alon et al. [1], but using a modification of the Regularity Lemma (and thus obtaining much improved running times), Frieze and Kannan [20] devise PTASs for several graph partition problems such as Max-CUT and Bisection. We note that, compared to all the above results, our respective graph partitioning algorithms have better running times. Like de la Vega, our methods use elementary combinatorial arguments related to the problem at hand. Still, our methods suffice for dealing with the General Graph Partition Problem.
Important Note: In this extended abstract, we present
only two of our results on testing graph properties: the k-Colorability and the ρ-Clique testers. The definition and theorem regarding the General Graph Partition property appear in Subsection 3.3. All other results, as well as proofs and further details, can be found in our report [22].
2. General Definitions and Observations
2.1. Definitions
Let F = {F_n} be a parameterized class of functions, where the functions 4 in F_n are defined over {0,1}^n, and let D = {D_n} be a corresponding class of distributions (i.e., D_n is a distribution on {0,1}^n). We say that a function f defined on {0,1}^n is ε-close to F_n with respect to D_n if there exists a function g ∈ F_n such that
    Pr_{x ~ D_n}[f(x) ≠ g(x)] ≤ ε.   (1)
3 In [2], the approach of [3] is extended to other problems, such as Graph Isomorphism, using a new rounding procedure for the Assignment Problem.
4 The range of these functions may vary, and for many of the results and discussions it suffices to consider Boolean functions.
Otherwise, f is ε-far from F_n (with respect to D_n).
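For intuition, the distance in Eq. (1) can be estimated empirically from random examples; the sketch below (Python, illustrative only, with a standard Hoeffding-style sample size) estimates the disagreement probability of f and a fixed g to within an additive ε/2 with confidence 1 − δ.

```python
import math

def estimate_distance(f, g, sample_from_D, eps, delta):
    """Estimate Pr_{x ~ D}[f(x) != g(x)] to within +-eps/2 with prob. >= 1-delta.
    `sample_from_D()` returns one point drawn from the (unknown) distribution D."""
    m = math.ceil(2 * math.log(2 / delta) / (eps / 2) ** 2)   # Hoeffding bound
    disagreements = sum(f(x) != g(x) for x in (sample_from_D() for _ in range(m)))
    return disagreements / m
```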
We shall consider several variants of testing algorithms, where
the most basic one is defined as follows.
Definition 2.1 (property testing): Let A be an algorithm which receives as input a size parameter n, a distance parameter ε > 0, and a confidence parameter δ > 0. Fixing an arbitrary function f and distribution D_n over {0,1}^n, the algorithm is also given access to a sequence of f-labeled examples (x_1, f(x_1)), (x_2, f(x_2)), ..., where each x_i is independently drawn from the distribution D_n. We say that A is a property testing algorithm (or simply a testing algorithm) for the class of functions F if for every n, ε and δ, and for every function f and distribution D_n over {0,1}^n, the following holds:
• if f ∈ F_n then with probability at least 1 − δ (over the examples drawn from D_n and the possible coin tosses of A), A accepts f (i.e., outputs 1);
• if f is ε-far from F_n (with respect to D_n) then with probability at least 1 − δ, A rejects f (i.e., outputs 0).
The sample complexity of A is a function of n, ε and δ bounding the number of labeled examples examined by A on input (n, ε, δ).
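Schematically, a tester in the sense of Definition 2.1 consumes labeled examples and outputs a verdict; the skeleton below (Python) is only meant to fix this interface, and the decision rule it delegates to is an illustrative placeholder for the property-specific part.

```python
def run_tester(draw_labeled_example, accepts_sample, m):
    """Skeleton of an example-based tester (cf. Definition 2.1).
    draw_labeled_example() returns one pair (x, f(x)) with x drawn from D_n;
    accepts_sample(sample) is the property-specific decision rule;
    m is the sample complexity, a function of n, eps and delta.
    Returns 1 (accept) or 0 (reject)."""
    sample = [draw_labeled_example() for _ in range(m)]
    return 1 if accepts_sample(sample) else 0
```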
Though it was not stated explicitly in the definition, we shall also be interested in bounding the running time of a property testing algorithm (as a function of the parameters n, δ, ε, and in some cases of a complexity measure of the class F). We consider the following variants of the above definition: (1) D_n may be a specific distribution which is known to the algorithm; in particular, we shall be interested in testing with respect to the uniform distribution. (2) D_n may be restricted to a known class of distributions (e.g., product distributions). (3) The algorithm may be given access to an oracle for the function f, which when queried on x ∈ {0,1}^n returns f(x). In this case we refer to the number of queries made by A (which is a function of n, ε, and δ) as the query complexity of A.
2.2. Property Testing and PAC Learning
A Probably Approximately Correct (PAC) learning algorithm
[37] works in the same framework as that described in
Definition 2.1 except for the following (crucial) differences:
(1) It is given a promise that the unknown function f (referred to as the target function) belongs to F; (2) It is required to output (with probability at least 1 − δ) a hypothesis h which is ε-close to f, where closeness is as defined in Equation (1) (and ε is usually referred to as the approximation parameter). Note that the differences pointed out above affect the tasks in opposite directions. Namely, the absence of a promise makes testing potentially harder than learning, whereas deciding whether a function belongs to a class, rather than finding the function, may make testing easier.
In the learning literature, a distinction is made between
proper (or representation dependent) learning and non-proper
learning [31]. In the former model, the hypothesis output
by the learning algorithm is required to belong to the same
function class as the target function f , i.e. h 2 F , while in
the latter model, no such restriction is made. We stress that
a proper learning algorithm (for F ) may either halt without
output or output a function in F , but it may not output functions
not in F . 5 There are numerous variants of PAC learning
(including learning with respect to specific distributions, and
learning with access to an oracle for the target function f ).
Unless stated otherwise, whenever we refer in this section to
PAC learning we mean the distribution-free no-query model
described above. The same is true for references to property
testing. In addition, apart from one example, we shall restrict
our attention to classes of Boolean functions.
TESTING IS NOT HARDER THAN PROPER LEARNING.
Proposition 2.1 If a function class F has a proper learning algorithm A, then F has a property testing algorithm A' such that m_{A'}(n, ε, δ) = m_A(n, ε/2, δ/2) + O(log(1/δ)/ε). Furthermore, the same relation holds between the running times of the two algorithms.
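A minimal sketch of the standard reduction behind Proposition 2.1 (Python; the ε/2, δ/2 split, the acceptance threshold and the sample-size constant are the usual textbook choices, not quoted from the paper): run the proper learner, then estimate the distance between its hypothesis and the target on a fresh sample.

```python
import math

def tester_from_proper_learner(proper_learner, draw_labeled_example, eps, delta):
    """Test membership in F given a proper learner for F (sketch).
    The learner returns some hypothesis h in F, or None if it fails; since a
    proper learner may only output members of F, failure leads to rejection."""
    h = proper_learner(draw_labeled_example, eps / 2, delta / 2)
    if h is None:
        return 0                                   # reject
    m = math.ceil(16 * math.log(4 / delta) / eps)  # enough to separate eps/2 from eps
    fresh = [draw_labeled_example() for _ in range(m)]
    disagreement = sum(h(x) != y for (x, y) in fresh) / m
    return 1 if disagreement <= 3 * eps / 4 else 0
```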
The proof of this proposition, as well as of all other propositions
in this section, can be found in our report [22]. The above
proposition implies that if for every n, F_n has polynomial (in n) VC-dimension [38, 15], then F has a tester whose sample complexity is poly(n/ε)·log(1/δ). The reason is that classes with polynomial VC-dimension can be properly learned from
a sample of the above size [15]. However, the running time of
such a proper learning algorithm, and hence of the resulting
testing algorithm might be exponential in n.
Corollary 2.2 Every class which is learnable with a poly(n/ε) sample is testable with a poly(n/ε) sample (in at most exponential time).
TESTING MAY BE HARDER THAN LEARNING. In contrast to
Proposition 2.1 and to Corollary 2.2, we show that there are
classes which are efficiently learnable (though not by a proper
learning algorithm) but are not efficiently testable. This is
proven by observing that many hardness results for proper
learning (cf. [31, 13, 32]) actually establish the hardness of
testing (for the same classes). Furthermore, we believe that
it is more natural to view these hardness results as referring
to testing. Thus, the separation between efficient learning
and efficient proper learning translates to a separation between
efficient learning and efficient testing.
5 We remark that in case the functions in F have an easy-to-recognize representation, one can easily guarantee that the algorithm never outputs a function not in F. Standard classes considered in works on proper learning typically have this feature.
Proposition 2.3 If NP ⊄ BPP then there exist function classes which are not poly(n/ε)-time testable but are poly(n/ε)-time (non-properly) learnable.
We stress that while Proposition 2.1 generalizes to learning and
testing under specific distributions, and to learning and testing
with queries, the proof of Proposition 2.3 uses the premise
that the testing (or proper learning) algorithm works for any
distribution and does not make queries.
TESTING MAY BE EASIER THAN LEARNING.
Proposition 2.4 There exist function classes F such that F has a property testing algorithm whose sample complexity and running time are O(log(1/δ)/ε), yet any learning algorithm for F must have sample complexity exponential in n.
The impossibility of learning the function class in Proposition 2.4 is due to its exponential VC-dimension (i.e., it is a purely information-theoretic consideration). We now turn to function classes of exponential (rather than double-exponential) size. Such classes are always learnable with a polynomial sample; the question is whether they are learnable in polynomial time. We present a function class which is easy to test but cannot be learned in polynomial time (even under the uniform distribution), provided trapdoor one-way permutations exist (e.g., factoring is intractable).
Proposition 2.5 If there exist trapdoor one-way permutations, then there exists a family of functions which can be tested in poly(n/ε) time but cannot be learned in poly(n/ε) time, even with respect to the uniform distribution. Furthermore, the functions can be computed by poly(n)-size circuits.
The class presented in Proposition 2.5 consists of multi-valued
functions. We leave it as an open problem whether a similar
result holds for a class of Boolean functions.
LEARNING AND TESTING WITH QUERIES (under the uniform distribution). Invoking known results on linearity testing [14, 7, 19, 10, 11, 8], we conclude that there is a class of 2^n functions which can be tested within query complexity O(log(1/δ)/ε), and yet learning it requires at least n queries. Similarly, using results on low-degree testing [7, 6, 21, 36], there is a class of functions which can be tested within query complexity O((log(1/δ)/ε)·n), and yet learning it requires exp(n) many queries.
AGNOSTIC LEARNING AND TESTING. In a variant of PAC learning, called Agnostic PAC learning [26], there is no promise
concerning the target function f . Instead, the learner is required
to output a hypothesis h from a certain hypothesis class
H, such that h is ffl-close to the function in H which is closest
to f . The absence of a promise makes agnostic learning
closer in spirit to property testing than basic PAC learning. In
particular, agnostic learning with respect to a hypothesis class
H implies proper learning of the class H and thus property
testing of H.
LEARNING AND TESTING DISTRIBUTIONS. The context of learning (cf. [25]) and testing distributions offers a dramatic demonstration of the importance of a promise (i.e., the fact that the learning algorithm is required to work only when the target belongs to the class, whereas the testing algorithm needs to work for all targets which are either in the class or far away from it).
Proposition 2.6 There exist distribution classes which are efficiently
learnable (in both senses mentioned above) but cannot
be tested with a subexponential sample (regardless of the
running-time).
3. Testing Graph Properties
We concentrate on testing graph properties using queries
and with respect to the uniform distribution.
We consider undirected, simple graphs (no multiple edges or self-loops). For a simple graph G, we denote by V(G) its vertex set and assume, without loss of generality, that V(G) = {1, ..., N}, where N = |V(G)|. The graph G is represented by the (symmetric) Boolean function g : V(G) × V(G) → {0,1}, where g(u, v) = 1 if and only if there is an edge between u and v in G. This brings us to associate undirected graphs with directed graphs, where each edge in the undirected graph is associated with a pair of anti-parallel edges. Specifically, for a graph G, we denote by E(G) the set of ordered pairs which correspond to edges in G (i.e., (u, v) ∈ E(G) iff there is an edge between u and v in G). The distance between two N-vertex graphs, G1 and G2, is defined as the number of entries (i.e., ordered pairs) which are in the symmetric difference of E(G1) and E(G2); we denote this quantity by dist(G1, G2). This notation is extended naturally to a set, C, of N-vertex graphs; that is, dist(G, C) := min_{G'∈C} {dist(G, G')}.
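In code, this distance is a simple count over adjacency matrices; the sketch below (Python, illustrative) also extends it to a finite, explicitly given set C by taking the minimum, as in dist(G, C).

```python
def dist(adj1, adj2):
    """Number of ordered pairs (u, v) on which the two adjacency matrices differ."""
    n = len(adj1)
    return sum(adj1[u][v] != adj2[u][v] for u in range(n) for v in range(n))

def dist_to_class(adj, graph_class):
    """dist(G, C) = minimum of dist(G, G') over G' in the (finite) set C."""
    return min(dist(adj, other) for other in graph_class)
```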
3.1. Testing k-Colorability
In this subsection we present an algorithm for testing the k-Colorability property, for any given k. Namely, we are interested in determining whether the vertices of a graph G can be colored by k colors so that no two adjacent vertices are colored by the same color, or whether every k-partition of the graph has at least εN² violating edges (i.e., edges between pairs of vertices which belong to the same side of the partition).
The test itself is straightforward. We uniformly select a sample, denoted X, of O(k² log k/ε³) vertices of the graph, query all pairs of vertices in X to find which are edges in G, and check whether the induced subgraph is k-colorable. In lack of efficient algorithms for k-Colorability, for k ≥ 3, we use the obvious exponential-time algorithm on the induced subgraph. The resulting algorithm is called the k-Colorability Testing Algorithm. Towards analyzing it, we define violating edges and good k-partitions. 6
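A compact rendering of the k-Colorability Testing Algorithm as just described (Python; the sample-size constant and the brute-force colorability check are illustrative choices, and the adjacency-matrix accesses stand in for the pair queries):

```python
import itertools, math, random

def is_k_colorable(adj, vertices, k):
    """Brute-force check: is the subgraph induced by `vertices` k-colorable?"""
    vs = list(vertices)
    for coloring in itertools.product(range(k), repeat=len(vs)):
        color = dict(zip(vs, coloring))
        if all(not adj[u][v] or color[u] != color[v]
               for u, v in itertools.combinations(vs, 2)):
            return True
    return False

def k_colorability_tester(adj, k, eps):
    """Sample O(k^2 log k / eps^3) vertices, query all pairs among them, and
    accept iff the induced subgraph is k-colorable (one-sided error)."""
    n = len(adj)
    m = min(n, math.ceil(4 * k * k * math.log(k + 1) / eps ** 3))
    X = random.sample(range(n), m)
    return 1 if is_k_colorable(adj, X, k) else 0
```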
Definition 3.1.1 (violating edges and good k-partitions): We say that an edge (u, v) ∈ E(G) is a violating edge with respect to a k-partition π : V(G) → [k] if π(u) = π(v). We say that a k-partition is ε-good if it has at most εN² violating edges (otherwise it is ε-bad). The partition is perfect if it has no violating edges.
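In code, the quantities of Definition 3.1.1 read as follows (Python, illustrative; the partition is given as an array assigning each vertex a side in {1, ..., k}):

```python
def violating_edges(adj, partition):
    """Count edges (u, v) with partition[u] == partition[v] (each edge once)."""
    n = len(adj)
    return sum(adj[u][v] for u in range(n) for v in range(u + 1, n)
               if partition[u] == partition[v])

def is_eps_good(adj, partition, eps):
    """A k-partition is eps-good if it has at most eps*N^2 violating edges."""
    n = len(adj)
    return violating_edges(adj, partition) <= eps * n * n
```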
Theorem 3.1 The k-Colorability Testing Algorithm is a property testing algorithm for the class of k-Colorable graphs whose query complexity is poly(k·log(1/δ)/ε) and whose running time is exponential in its query complexity. If the tested graph G is k-Colorable, then it is accepted with probability 1, and with probability at least 1 − δ (over the choice of the sampled vertices), it is possible to construct an ε-good k-partition of V(G) in time poly(k·log(1/δ)/ε)·|V(G)|.
Proof: If G is k-Colorable then every subgraph of G is k-Colorable, and hence G will always be accepted. The crux of the proof is to show that every G which is ε-far from the class of k-Colorable graphs, denoted G_k, is rejected with probability at least 1 − δ. We establish this claim by proving its contrapositive, namely, that every G which is accepted with probability greater than δ must have an ε-good k-partition (and is thus ε-close to G_k). This is done by giving a (constructive) proof of the existence of an ε-good k-partition of V(G). Hence, in case G ∈ G_k, we also get an efficient probabilistic procedure for finding an ε-good k-partition of V(G). Note that if the test rejects G then we have a certificate that G ∉ G_k, in the form of the (small) subgraph induced by X which is not k-colorable.
We view the set of sampled vertices X as a union of two disjoint sets U and S, where U is a union of ℓ (disjoint) sets U_1, ..., U_ℓ, each of size m. The size of S is m as well, where ℓ = 4k/ε. The set U (or rather a k-partition of U) is used to define a k-partition of V(G). The set S ensures that, with high probability, the k-partition of U which is induced by the perfect k-partition of X = U ∪ S defines an ε-good partition of V(G).
In order to define a k-partition of V(G) given a k-partition of U, we first introduce the notion of a clustering of the vertices in V(G) with respect to this partition of U. More precisely, we define the clustering based on the k-partition of a subset U' of U, where this partition, denoted (U'_1, ..., U'_k), is the one induced by the k-partition of U. The clustering is defined so that vertices in the same cluster have neighbors on the same sides of the partition of U'. For every A ⊆ [k], the A-cluster, denoted C_A, contains all vertices in V(G) which have neighbors in U'_i for every i ∈ A (and do not have neighbors in the other U'_j's). The clusters impose restrictions on possible extensions of the partition (U'_1, ..., U'_k) of U' to partitions of all of V(G) which do not have violating edges incident to vertices in U'. Namely, vertices in C_A should not be placed in any V_i such that i ∈ A. As a special case, C_∅ is the set of vertices that do not have any neighbors in U' (and hence can be put on any side of the partition). In the other extreme, C_{[k]} is the set of vertices that in any extension of the partition of U' will cause violations. For each i, the vertices in C_{[k]\{i}} are forced to be put in V_i, and thus are easy to handle. It is more difficult to deal with the clusters C_A where |A| < k − 1. 7
6 k-partitions are associated with mappings of the vertex set into the canonical k-element set [k] = {1, ..., k}. The partition associated with a mapping π is (π^{-1}(1), ..., π^{-1}(k)). We shall use the mapping notation π and the explicit partition notation (V_1, ..., V_k) interchangeably.
Definition 3.1.2 (clusters): Let U' be a set of vertices, and let π' be a perfect k-partition of U'. Define U'_i := {u ∈ U' : π'(u) = i}. For each subset A ⊆ [k] we define the A-cluster with respect to π' as follows:
    C_A := {v ∈ V(G) \ U' : Γ(v) ∩ U'_i ≠ ∅ for every i ∈ A, and Γ(v) ∩ U'_j = ∅ for every j ∉ A},
where Γ(v) denotes the set of neighbors of v.
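The clusters are easy to compute explicitly; the sketch below (Python, illustrative) maps each vertex outside U' to the set A of sides of U' in which it has neighbors, which is exactly the A-cluster decomposition of Definition 3.1.2.

```python
def clusters(adj, U_prime, pi_prime, k):
    """Return a dict mapping each frozenset A (subset of {1,...,k}) to its A-cluster.
    `pi_prime[u]` in {1,...,k} is the (perfect) k-partition of U_prime."""
    n = len(adj)
    C = {}
    for v in range(n):
        if v in U_prime:
            continue
        A = frozenset(pi_prime[u] for u in U_prime if adj[v][u])
        C.setdefault(A, set()).add(v)
    return C
```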
The relevance of the above clusters becomes clear given the
following definitions of extending and consistent partitions.
Definition 3.1.3 (consistent extensions): Let U' and π' be as above. We say that a k-partition π of V(G) extends the k-partition π' of U' if π(u) = π'(u) for every u ∈ U'. The extended partition π is consistent with π' if π(v) ≠ π'(u) for every u ∈ U' and every neighbor v of u with v ∉ U' ∪ C_{[k]}, where C_{[k]} is the [k]-cluster with respect to π'.
Thus, each vertex v in the cluster C_A (w.r.t. π' defined on U') is forced to satisfy π(v) ∉ A in every k-partition π which extends π' in a consistent manner. There are no restrictions regarding vertices in C_∅ and vertices in C_{[k]} (the latter is guaranteed artificially in the definition, and the consequences will have to be treated separately). For v ∈ C_{[k]\{i}}, the consistency condition forces π(v) = i.
We now focus on the main problem of the analysis. Given a k-partition of U, what is a good way to define a k-partition of V(G)? Our main idea is to claim that with high probability the set U contains a subset U' so that the clusters with respect to the induced k-partition of U' determine whatever needs to be determined. That is, if these clusters allow us to place some vertex on a certain side of the partition, then doing so does not introduce too many violating edges. The first step in implementing this idea is the notion of a restricting vertex.
Definition 3.1.4 (restricting vertex): A pair (v, i), where v ∈ V(G) \ U' and i ∈ [k], is said to be restricting with respect to a k-partition π' (of U') if v has at least (ε/4)·N neighbors in ∪_{B : i∉B} C_B. Otherwise, (v, i) is non-restricting. A vertex v ∈ C_A is restricting with respect to π' if for every i ∉ A the pair (v, i) is restricting. Otherwise, v is non-restricting. As always, the clusters are with respect to π'.
7 In the Bipartite case, this is easy too (since C_∅ is likely to contain few vertices of high degree).
Thus, a vertex v ∈ C_A is restricting if for every i ∉ A, adding v to U'_i (and thus to U') will cause many of its neighbors to move to a cluster corresponding to a bigger subset. That is, v's neighbors in the B-cluster (w.r.t. (U', π')), for i ∉ B, move to the (B ∪ {i})-cluster with respect to U' ∪ {v} (with v placed on side i).
Given a perfect k-partition of U, we construct U' in steps, starting with the empty set. At step j we add to U' a vertex of U_j which is a restricting vertex with respect to the k-partition of the current set U'. If no such vertex exists, the procedure terminates. When the procedure terminates (and, as we shall see, it must terminate after at most ℓ steps), we will be able to define, based on the k-partition of the final U', an ε-good k-partition of V(G). The procedure defined below is viewed at this point as a mental experiment. Namely, it is provided in order to show that with high probability there exists a subset U' of U with certain desired properties (which we later exploit).
Restriction Procedure (Construction of U')
Input: a perfect k-partition of U = U_1 ∪ ... ∪ U_ℓ.
1. U' := ∅.
2. For j = 1, ..., ℓ do the following. Consider the current set U' and its partition π' (induced by the perfect k-partition of U).
• If there are less than (ε/8)N restricting vertices with respect to π', then halt and output U'.
• If there are at least (ε/8)N restricting vertices but there is no restricting vertex in U_j, then halt and output error.
• Otherwise (there is a restricting vertex in U_j), add the first (by any fixed order) restricting vertex of U_j to U'.
Claim 3.1.5 For every U and every perfect k-partition of U, after at most ℓ iterations the Restriction Procedure halts and outputs either U' or error.
The proof of this claim, as well as all other missing proofs, can be found in our report [22]. Before we show how U' can be used to define a k-partition π of V(G), we need to ensure that with high probability the Restriction Procedure in fact outputs a set U' and not error. To this end, we first define the notion of a covering set.
Definition 3.1.6 (covering sets - for k-coloring): We say that U is a covering set for V(G) if, for every perfect k-partition of U, the Restriction Procedure, given this partition as input, halts with an output U' ⊂ U (rather than an error message). In other words, U is such that for every perfect k-partition of U and for each of the at most ℓ iterations of the procedure, if there exist at least (ε/8)N restricting vertices with respect to the current partition of U', then U_j includes at least one such restricting vertex.
Lemma 3.1.7 With probability at least 1 − δ, a uniformly chosen set U of size ℓ·m is a covering set.
Definition 3.1.8 (closed partitions): Let U' be a set and π' a k-partition of it. We call (U', π') closed if there are less than (ε/8)N restricting vertices with respect to π'.
Clearly, if the Restriction Procedure outputs a set U', then this set together with its (induced) partition is closed. If (U', π') is closed, then most of the vertices in V(G) are non-restricting. Recall that a non-restricting vertex v, belonging to a cluster C_A with A ≠ [k], has the following property: there exists at least one index i ∉ A such that (v, i) is non-restricting. It follows from Definition 3.1.4 that for every consistent extension of π' to π which satisfies π(v) = i, there are at most εN violating edges incident to v. 8 However, even if v is non-restricting, there might be indices i ∉ A such that (v, i) is restricting, and hence there may exist a consistent extension of π' to π in which there are more than (ε/4)N violating edges incident to v. Therefore, we need to define for each vertex its set of forbidden indices, which will not allow it to have a restricting pair (v, i).
Definition 3.1.9 (forbidden sets): Let (U', π') be closed and consider the clusters with respect to π'. For each v ∈ V(G) \ U' we define the forbidden set of v, denoted F_v, as the smallest set satisfying:
• For every i ∈ [k]: if v has at least (ε/4)N neighbors in the clusters C_B for which i ∈ B, then i is in F_v.
• For v ∈ C_A: A ⊆ F_v.
Lemma 3.1.10 Let (U', π') be an arbitrary closed pair and let the F_v's be as in Definition 3.1.9. Then:
1. The number of vertices v ∉ C_{[k]} with F_v = [k] is at most (ε/8)N.
2. Let π be any k-partition of V(G) such that π(v) ∉ F_v for every v with F_v ≠ [k]. Then, the number of violating edges (v, v') with F_v, F_{v'} ≠ [k] is at most (ε/2)N².
8 First note that, by definition of a consistent extension, no vertex in a cluster C_B with i ∈ B can have π-value i. Thus, all violating edges incident to v are incident to vertices in clusters C_B with i ∉ B. Since the pair (v, i) is non-restricting, there are at most εN such edges.
The lemma can be thought of as saying that any k-partition which respects the forbidden sets is good (i.e., does not have many violating edges). However, the partition applies only to vertices for which the forbidden set is not [k]. The first item tells us that there cannot be many such vertices which do not belong to the cluster C_{[k]}. We next show that, with high probability over the choice of S, the k-partition π' of U' (induced by the k-partition of U ∪ S) is such that C_{[k]} is small. This implies that all the vertices in C_{[k]} (which were left out of the partition in the previous lemma) can be placed on any side without contributing too many violating edges (which are incident to them).
Definition 3.1.11 (useful k-partitions): We say that a pair (U', π') is ε-useful if the corresponding cluster C_{[k]} has size at most (ε/8)N. Otherwise it is ε-unuseful.
The next claim directly follows from our choice of m and the above definition.
Claim 3.1.12 Let U' be a fixed set of size at most ℓ and π' a fixed k-partition of U' so that (U', π') is closed and ε-unuseful. Let S be a uniformly chosen set of size m. Then, with probability at least 1 − (δ/2)·k^{−ℓ}, there exists no perfect k-partition of U' ∪ S which extends π'.
The following is a corollary to the above claim and to the fact that the number of possible closed pairs (U', π') determined by all possible k-partitions of U is at most k^ℓ.
Corollary 3.1.13 If all closed pairs (U', π') determined by all possible k-partitions of U are unuseful, then with probability at least 1 − δ/2 over the choice of S, there is no perfect k-partition of U ∪ S.
We can now wrap up the proof of Theorem 3.1. If G is accepted with probability greater than δ, then by Lemma 3.1.7 the probability that it is accepted and U is a covering set is greater than δ/2. In particular, there must exist at least one covering set U such that, if U is chosen, then G is accepted with probability greater than δ/2 (with respect to the choice of S). That is, with probability greater than δ/2 there exists a perfect partition of U ∪ S. But in such a case (by applying Corollary 3.1.13), there must be a useful closed pair (U', π') (where U' ⊂ U). If we now partition V(G) as described in Lemma 3.1.10, where vertices with forbidden set [k] are placed arbitrarily, then from the two items of Lemma 3.1.10 and the usefulness of (U', π') it follows that there are at most εN² violating edges with respect to this partition. This completes the main part of the proof. (Theorem 3.1)
3.2. Testing Max-Clique
Let ω(G) denote the size of the largest clique in graph G, and let C_ρ := {G : ω(G) ≥ ρ·|V(G)|} be the set of graphs having cliques of density at least ρ. The main result of this subsection is:
Theorem 3.2 Let ℓ := log(1/(εδ)). There exists a property testing algorithm A for the class C_ρ whose edge-query complexity is O(ℓ²ρ²/ε⁶) and whose running time is exp(ℓρ/ε²). In particular, A uniformly selects O(ℓ²ρ²/ε⁴) vertices in G and queries the oracle only on the existence of edges between these vertices. In case G ∈ C_ρ, one can also retrieve, in time poly(1/ε)·|V(G)|, a set of ρ·|V(G)| vertices in G which is almost a clique (in the sense that it lacks at most ε·|V(G)|² edges to being a clique).
Theorem 3.2 is proven by presenting a seemingly unnatural algorithm/tester (see below). However, as a corollary, we observe that the "natural" algorithm, which uniformly selects poly(log(1/δ)/ε) many vertices and accepts iff they induce a subgraph with a clique of density ρ − (ε/2), is a valid C_ρ-tester as well.
Corollary 3.3 Let m = poly(log(1/δ)/ε) and let R be a uniformly selected set of m vertices in V(G). Let G_R be the subgraph (of G) induced by R. Then, if G ∈ C_ρ, with probability at least 1 − δ the subgraph G_R contains a clique of density at least ρ − (ε/2), whereas if G is ε-far from C_ρ, with probability at least 1 − δ it does not.
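The "natural" tester of Corollary 3.3 is simple to state in code (Python; the sample-size expression and the brute-force clique search are illustrative, and the latter takes time exponential in the sample size):

```python
import itertools, math, random

def has_clique_of_size(adj, vertices, s):
    """Brute force: does the subgraph induced by `vertices` contain an s-clique?"""
    s = max(0, s)
    return any(all(adj[u][v] for u, v in itertools.combinations(cand, 2))
               for cand in itertools.combinations(list(vertices), s))

def natural_rho_clique_tester(adj, rho, eps, delta):
    """Sample m = poly(log(1/delta)/eps) vertices and accept iff they induce a
    clique of density at least rho - eps/2 (the rule of Corollary 3.3)."""
    n = len(adj)
    m = min(n, math.ceil((math.log(2 / delta) / eps) ** 2))  # illustrative choice
    R = random.sample(range(n), m)
    return 1 if has_clique_of_size(adj, R, math.ceil((rho - eps / 2) * m)) else 0
```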
In the rest of this subsection we provide a motivating discussion
for the algorithm asserted in Theorem 3.2. Recall that |V(G)| denotes the number of vertices in G.
Our first idea is to select at random a small sample U of V(G), where |U| = poly(1/ε), and to consider all subsets U' of U of size ρ·|U|. For each U' let T(U') be the set of all vertices which neighbor every vertex in U' (i.e., T(U') := ∩_{u∈U'} Γ(u)). In the subgraph induced by T(U'), consider the set Y(U') of the ρN vertices with highest degree in the induced subgraph. Clearly, if G is ε-far from C_ρ, then Y(U') lacks at least εN² edges to being a clique (for every choice of U and U'). On the other hand, we show that if G has a clique C of size ρN then, with high probability over the choice of U, there exists a subset U' ⊂ U such that Y(U') misses at most (ε/3)N² edges to being a clique (in particular, U' = U ∩ C will do).
Assume that for any fixed U' we could sample the vertices in Y(U') and perform edge queries on pairs of vertices in this sample. Then a sample of O(t/ε²) vertices, for t = O(|U| + log(1/δ)), suffices for approximating the edge density in Y(U') to within an ε/3 fraction with probability at least 1 − δ·2^{−|U|}. In particular, such a sample can distinguish between a set Y(U') which is far from being a clique and a set Y(U') which is almost a clique. The point is that we need only consider 2^{|U|} possible sets Y(U'), since |U| is only a polynomial in 1/ε.
The only problem which remains is how to sample from Y(U'). Certainly, we can sample T(U') by sampling V(G) and testing membership in T(U'), but how do we decide which vertices are among those of highest degree? The first idea is to estimate the degrees of vertices in T(U') using an additional sample, denoted W. Thus, instead of considering the ρN vertices of highest degree in T(U'), we consider the ρN vertices in T(U') having the most neighbors in T(U') ∩ W. The second idea is that we can sample T(U'), order the vertices in this sample according to their number of neighbors in T(U') ∩ W, and take the ρ fraction with the most such neighbors.
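A sketch of the candidate-clique construction just outlined (Python, illustrative; it computes T(U') and the degrees exactly, whereas the actual algorithm replaces both by the sampled approximations involving W):

```python
def candidate_clique(adj, U_prime, rho):
    """Given a guess U' for a subset of the clique, form T(U') = common
    neighbours of U', then return the rho*N highest-degree vertices of the
    subgraph induced by T(U') (the set Y(U') of the discussion above)."""
    n = len(adj)
    T = [v for v in range(n) if all(adj[v][u] for u in U_prime)]
    Tset = set(T)
    deg_in_T = {v: sum(adj[v][w] for w in Tset if w != v) for v in T}
    T.sort(key=lambda v: deg_in_T[v], reverse=True)
    return T[:int(rho * n)]
```

The tester then iterates over all candidate sets U' drawn from the small sample U and checks, on a further sample, whether some Y(U') is close to being a clique.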
3.3. The General Partition Problem
The following General Graph Partition property generalizes all the properties considered in the previous subsections. In particular, it captures any graph property which requires the existence of a partition satisfying certain fixed density constraints. These constraints may refer both to the number of vertices on each side of the partition and to the number of edges between each pair of sides.
Let Φ = {ρ_j^{lb}, ρ_j^{ub}}_{j=1}^{k} ∪ {ϱ_{j,j'}^{lb}, ϱ_{j,j'}^{ub}}_{j,j'=1}^{k} be a set of non-negative parameters so that ρ_j^{lb} ≤ ρ_j^{ub} and ϱ_{j,j'}^{lb} ≤ ϱ_{j,j'}^{ub}. Let GP_Φ be the class of graphs which have a k-way partition (V_1, ..., V_k) satisfying
    ρ_j^{lb}·N ≤ |V_j| ≤ ρ_j^{ub}·N   for j = 1, ..., k,   (3)
    ϱ_{j,j'}^{lb}·N² ≤ |E(V_j, V_{j'})| ≤ ϱ_{j,j'}^{ub}·N²   for j, j' = 1, ..., k,   (4)
where E(V_j, V_{j'}) denotes the set of edges with one endpoint in V_j and one in V_{j'}. That is, Eq. (3) places lower and upper bounds on the relative sizes of the various parts, whereas Eq. (4) imposes lower and upper bounds on the density of edges among the various pairs of parts. For example, k-Colorability is expressed by setting ϱ_{j,j}^{ub} = 0 for every j and leaving all other constraints trivial (i.e., setting ρ_j^{lb} = ϱ_{j,j}^{lb} = 0, ρ_j^{ub} = 1, and similarly setting the ϱ_{j,j'}'s for j' ≠ j).
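As a concrete reading of the constraints in Eqs. (3) and (4), the sketch below (Python; the parameter names are my own rendering of the bounds in Φ) checks whether a given k-partition of a graph satisfies them. A graph belongs to GP_Φ exactly when some k-way partition passes this check; the tester's job is to decide, from few queries, whether such a partition exists or whether the graph is ε-far from every graph admitting one.

```python
def satisfies_partition_bounds(adj, parts, vertex_lb, vertex_ub, edge_lb, edge_ub):
    """parts: list of k disjoint vertex sets covering V(G).
    vertex_lb/ub[j] bound |V_j| / N; edge_lb/ub[j][j2] bound |E(V_j, V_j2)| / N^2."""
    n = len(adj)
    k = len(parts)
    for j in range(k):
        if not (vertex_lb[j] * n <= len(parts[j]) <= vertex_ub[j] * n):
            return False
    for j in range(k):
        for j2 in range(j, k):
            e = sum(adj[u][v] for u in parts[j] for v in parts[j2]
                    if j != j2 or u < v)          # count each edge once
            if not (edge_lb[j][j2] * n * n <= e <= edge_ub[j][j2] * n * n):
                return False
    return True
```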
Theorem 3.4 There exists an algorithm A such that, for every given set of parameters Φ, algorithm A is a property testing algorithm for the class GP_Φ. Its query complexity is independent of N (it depends only on k, 1/ε and a log(k/(εδ)) factor), and its running time is exponential in its query complexity.
Recall that better complexities for Max-CUT and Bisection (as well as for k-Colorability and ρ-Clique) are obtained by the custom-made algorithms.
Acknowledgments
We wish to thank Noga Alon, Ravi Kannan, David Karger
and Madhu Sudan for useful discussions.
--R
The algorithmic aspects of the regularity lemma.
A new rounding procedure for the assignment problem with applications to dense graph arrangement problems.
Polynomial time approximation schemes for dense instances of NP-hard problems
Proof verification and intractability of approximation problems.
Probabilistic checkable proofs: A new characterization of NP.
Checking computations in polylogarithmic time.
Linearity testing in characteristic two.
Free bits
Efficient probabilistically checkable proofs and applications to approximation.
Improved non-approximability results
Can finite samples detect singularities of real-valued functions? In 24th STOC
Training a 3-node neural network is NP-complete
Learnability and the Vapnik-Chervonenkis dimension
On determining the rationality of the mean of a random variable.
The complexity of colouring problems on dense graphs.
Approximating clique is almost NP-complete
The regularity lemma and approximation schemes for dense problems.
Property testing and its connection to learning and approximation.
Distinguishability of sets of distributions.
On the learnability of discrete distributions.
Toward efficient agnostic learning.
On probably correct classification of concepts.
Lecture notes on evasiveness of graph properties.
The hardness of approximations: Gap location.
Computational limitations on learning from examples.
The minimum consistent DFA problem cannot be approximated within any polynomial.
On recognizing graph properties from adjacency matrices.
On the time required to recognize properties of graphs: A problem.
Robust functional equations and their applications to program testing.
Robust characterization of polynomials with applications to program testing.
A theory of the learnable.
On the uniform convergence of relative frequencies of events to their probabilities.
Probably almost discriminative learning.
Lower bounds to randomized algorithms for graph properties.
A general classification rule for probability mea- sures
--TR
A theory of the learnable
Using dual approximation algorithms for scheduling problems theoretical and practical results
The complexity of colouring problems on dense graphs
A polynomial approximation scheme for scheduling on uniform processors: Using the dual approximation approach
Computational limitations on learning from examples
Learnability and the Vapnik-Chervonenkis dimension
Training a 3-node neural network in NP-complete
Checking computations in polylogarithmic time
Self-testing/correcting for polynomials and for approximate functions
Approximating clique is almost NP-complete (preliminary version)
Can finite samples detect singularities of real-valued functions?
Toward efficient agnostic learning
The minimum consistent DFA problem cannot be approximated within any polynomial
Small-bias probability spaces
Efficient probabilistically checkable proofs and applications to approximations
On probably correct classification of concepts
Self-testing/correcting with applications to numerical problems
The algorithmic aspects of the regularity lemma
Improved non-approximability results
On the learnability of discrete distributions
The hardness of approximation
Probably Almost Discriminative Learning
Polynomial time approximation schemes for dense instances of <italic>NP</italic>-hard problems
MAX-CUT has a randomized approximation scheme in dense graphs
Testing of the long code and hardness for clique
Adaptively secure multi-party computation
Some optimal inapproximability results
Property testing in bounded degree graphs
Spot-checkers
A sublinear bipartiteness tester for bounded degree graphs
Recycling queries in PCPs and in linearity tests (extended abstract)
Testing problems with sub-learning sample complexity
Fast Probabilistic Algorithms for Verification of Polynomial Identities
Robust Characterizations of Polynomials withApplications to Program Testing
Linearity testing in characteristic two
Free bits, PCPs and non-approximability-towards tight results
Clique is hard to approximate within n1-
A new rounding procedure for the assignment problem with applications to dense graph arrangement problems
The regularity lemma and approximation schemes for dense problems
Probabilistically checkable proofs and the testing of hadamard-like codes
--CTR
Michal Parnas , Dana Ron , Ronitt Rubinfeld, Testing membership in parenthesis languages, Random Structures & Algorithms, v.22 n.1, p.98-138, January
Oren Ben-Zwi , Oded Lachish , Ilan Newman, Lower bounds for testing Euclidean Minimum Spanning Trees, Information Processing Letters, v.102 n.6, p.219-225, June, 2007
Michal Parnas , Dana Ron, Testing the diameter of graphs, Random Structures & Algorithms, v.20 n.2, p.165-183, March 2002
Hana Chockler , Dan Gutfreund, A lower bound for testing juntas, Information Processing Letters, v.90 n.6, p.301-305,
Uriel Feige , Gideon Schechtman, On the integrality ratio of semidefinite relaxations of MAX CUT, Proceedings of the thirty-third annual ACM symposium on Theory of computing, p.433-442, July 2001, Hersonissos, Greece
J. Feigenbaum , S. Kannan , M. Strauss , M. Viswanathan, Testing and spot-checking of data streams (extended abstract), Proceedings of the eleventh annual ACM-SIAM symposium on Discrete algorithms, p.165-174, January 09-11, 2000, San Francisco, California, United States
Eldar Fischer, Testing graphs for colorable properties, Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, p.873-882, January 07-09, 2001, Washington, D.C., United States
Eldar Fischer, The difficulty of testing for isomorphism against a graph that is given in advance, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Oded Goldreich , Luca Trevisan, Three theorems regarding testing graph properties, Random Structures & Algorithms, v.23 n.1, p.23-57, August
Nir Ailon , Bernard Chazelle, Information theory in property testing and monotonicity testing in higher dimension, Information and Computation, v.204 n.11, p.1704-1717, November 2006
Gunnar Andersson , Lars Engebretsen, Property testers for dense constraint satisfaction programs on finite domains, Random Structures & Algorithms, v.21 n.1, p.14-32, August 2002
Beate Bollig , Ingo Wegener, Functions that have read-once branching programs of quadratic size are not necessarily testable, Information Processing Letters, v.87 n.1, p.25-29, July
Uriel Feige , Gideon Schechtman, On the optimality of the random hyperplane rounding technique for max cut, Random Structures & Algorithms, v.20 n.3, p.403-440, May 2002
Kenji Obata, Approximate max-integral-flow/min-multicut theorems, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Nir Ailon , Bernard Chazelle , Seshadhri Comandur , Ding Liu, Estimating the distance to a monotone function, Random Structures & Algorithms, v.31 n.3, p.371-383, October 2007
Eli Ben-Sasson , Prahladh Harsha , Sofya Raskhodnikova, Some 3CNF properties are hard to test, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
Alon , W. Fernandez de la Vega , Ravi Kannan , Marek Karpinski, Random sampling and approximation of MAX-CSP problems, Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada
Eldar Fischer, On the strength of comparisons in property testing, Information and Computation, v.189 n.1, p.107-116, 25 February 2004
Alon , Asaf Shapira, Testing satisfiability, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.645-654, January 06-08, 2002, San Francisco, California
Alon, Testing subgraphs in large graphs, Random Structures & Algorithms, v.21 n.3-4, p.359-370, October 2002
Eldar Fischer , Ilan Newman, Testing versus estimation of graph properties, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
Hana Chockler , Orna Kupferman, -Regular languages are testable with a constant number of queries, Theoretical Computer Science, v.329 n.1-3, p.71-92, 13 December 2004
Eldar Fischer , Arie Matsliah, Testing graph isomorphism, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.299-308, January 22-26, 2006, Miami, Florida
Artur Czumaj , Christian Sohler, Estimating the weight of metric minimum spanning trees in sublinear-time, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Eldar Fischer , Ilan Newman, Testing of matrix properties, Proceedings of the thirty-third annual ACM symposium on Theory of computing, p.286-295, July 2001, Hersonissos, Greece
Alon , Asaf Shapira, Testing satisfiability, Journal of Algorithms, v.47 n.2, p.87-103, July
Ioannis Giotis , Venkatesan Guruswami, Correlation clustering with a fixed number of clusters, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.1167-1176, January 22-26, 2006, Miami, Florida
Michal Parnas , Dana Ron, Testing metric properties, Information and Computation, v.187 n.2, p.155-195, 15 December
Christian Borgs , Jennifer Chayes , Lszl Lovsz , Vera T. Ss , Balzs Szegedy , Katalin Vesztergombi, Graph limits and parameter testing, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Ccile Germain-Renaud , Dephine Monnier-Ragaigne, Grid result checking, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Robert Krauthgamer , Ori Sasson, Property testing of data dimensionality, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Eldar Fischer , Guy Kindler , Dana Ron , Shmuel Safra , Alex Samorodnitsky, Testing juntas, Journal of Computer and System Sciences, v.68 n.4, p.753-787, June 2004
Beate Bollig, A large lower bound on the query complexity of a simple boolean function, Information Processing Letters, v.95 n.4, p.423-428, 31 August 2005
Alon , Asaf Shapira, Linear equations, arithmetic progressions and hypergraph property testing, Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, January 23-25, 2005, Vancouver, British Columbia
Michal Parnas , Dana Ron, Testing metric properties, Proceedings of the thirty-third annual ACM symposium on Theory of computing, p.276-285, July 2001, Hersonissos, Greece
Y. Kohayakawa , V. Rdl , L. Thoma, An optimal algorithm for checking regularity: (extended abstract), Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.277-286, January 06-08, 2002, San Francisco, California
Artur Czumaj , Christian Sohler, Soft kinetic data structures, Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms, p.865-872, January 07-09, 2001, Washington, D.C., United States
Eli Ben-Sasson , Oded Goldreich , Prahladh Harsha , Madhu Sudan , Salil Vadhan, Robust pcps of proximity, shorter pcps and applications to coding, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Cristina Bazgan , W. Fernandez de la Vega , Marek Karpinski, Polynomial time approximation schemes for dense instances of minimum constraint satisfaction, Random Structures & Algorithms, v.23 n.1, p.73-91, August
Eldar Fischer , Eric Lehman , Ilan Newman , Sofya Raskhodnikova , Ronitt Rubinfeld , Alex Samorodnitsky, Monotonicity testing over general poset domains, Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada
Eldar Fischer , Ilan Newman , Ji Sgall, Functions that have read-twice constant width branching programs are not necessarily testable, Random Structures & Algorithms, v.24 n.2, p.175-193, March 2004
P. Drineas , A. Frieze , R. Kannan , S. Vempala , V. Vinay, Clustering Large Graphs via the Singular Value Decomposition, Machine Learning, v.56 n.1-3, p.9-33
Harry Buhrman , Lance Fortnow , Ilan Newman , Hein Rhrig, Quantum property testing, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Eli Ben-Sasson , Madhu Sudan , Salil Vadhan , Avi Wigderson, Randomness-efficient low degree tests and short PCPs via epsilon-biased sets, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
Nikhil Bansal , Avrim Blum , Shuchi Chawla, Correlation Clustering, Machine Learning, v.56 n.1-3, p.89-113
Gereon Frahling , Christian Sohler, Coresets in dynamic geometric data streams, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
Abraham D. Flaxman , Alan M. Frieze, The diameter of randomly perturbed digraphs and some applications, Random Structures & Algorithms, v.30 n.4, p.484-504, July 2007
Ravi Kumar , Ronitt Rubinfeld, Algorithms column: sublinear time algorithms, ACM SIGACT News, v.34 n.4, December
W. Fernandez de la Vega , Marek Karpinski , Claire Kenyon , Yuval Rabani, Approximation schemes for clustering problems, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
V. Rdl , M. Schacht, Property testing in hypergraphs and the removal lemma, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA
Viraj Kumar , Mahesh Viswanathan, Conformance testing in the presence of multiple faults, Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, January 23-25, 2005, Vancouver, British Columbia
Michael A. Bender , Dana Ron, Testing properties of directed graphs: acyclicity and connectivity, Random Structures & Algorithms, v.20 n.2, p.184-205, March 2002
Alon , Asaf Shapira, Every monotone graph property is testable, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
Michal Parnas , Dana Ron , Ronitt Rubinfeld, Tolerant property testing and distance approximation, Journal of Computer and System Sciences, v.72 n.6, p.1012-1042, September 2006
Artur Czumaj , Funda Ergn , Lance Fortnow , Avner Magen , Ilan Newman , Ronitt Rubinfeld , Christian Sohler, Sublinear-time approximation of Euclidean minimum spanning tree, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Ccile Germain-Renaud , Nathalie Playez, Result checking in global computing systems, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
Alon , Asaf Shapira, Testing subgraphs in directed graphs, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
Oded Goldreich , Madhu Sudan, Locally testable codes and PCPs of almost-linear length, Journal of the ACM (JACM), v.53 n.4, p.558-655, July 2006
Oded Goldreich, Property testing in massive graphs, Handbook of massive data sets, Kluwer Academic Publishers, Norwell, MA, 2002
Fast approximate probabilistically checkable proofs, Information and Computation, v.189 n.2, p.135-159, March 15, 2004
Fast approximate PCPs, Proceedings of the thirty-first annual ACM symposium on Theory of computing, p.41-50, May 01-04, 1999, Atlanta, Georgia, United States
M. Kiwi, Algebraic testing and weight distributions of codes, Theoretical Computer Science, v.299 n.1-3, p.81-106,
Artur Czumaj , Christian Sohler, Testing hypergraph colorability, Theoretical Computer Science, v.331 n.1, p.37-52, 15 February 2005
Asaf Shapira, A combinatorial characterization of the testable graph properties: it's all about regularity, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Nina Mishra , Dana Ron , Ram Swaminathan, A New Conceptual Clustering Framework, Machine Learning, v.56 n.1-3, p.115-151 | approximation algorithms;computational learning theory;graph algorithms |
285064 | A Theory for Total Exchange in Multidimensional Interconnection Networks. | AbstractTotal exchange (or multiscattering) is one of the important collective communication problems in multiprocessor interconnection networks. It involves the dissemination of distinct messages from every node to every other node. We present a novel theory for solving the problem in any multidimensional (cartesian product) network. These networks have been adopted as cost-effective interconnection structures for distributed-memory multiprocessors. We construct a general algorithm for single-port networks and provide conditions under which it behaves optimally. It is seen that many of the popular topologies, including hypercubes, k-ary n-cubes, and general tori satisfy these conditions. The algorithm is also extended to homogeneous networks with 2k dimensions and with multiport capabilities. Optimality conditions are also given for this model. In the course of our analysis, we also derive a formula for the average distance of nodes in multidimensional networks; it can be used to obtain almost closed-form results for many interesting networks. | Introduction
Multidimensional (or cartesian product) networks have prevailed in the interconnection network design
for distributed memory multiprocessors, both in theory and in practice. Commercial machines
like the Ncube, the Cray T3D, the Intel iPSC, Delta and Paragon, have a node interconnection
structure based on multidimensional networks such as hypercubes, tori and meshes. These networks
are based on simple basic dimensions: linear arrays in meshes [15], rings in k-ary n-cubes
[6] and general tori, complete graphs in generalized hypercubes [4]. Structures with quite powerful
dimensions have also been proposed, e.g. products of trees or products of graphs based on groups
[21, 9].
One important issue related to multiprocessor interconnection networks is that of information
dissemination. Collective communications for distributed-memory multiprocessors have recently
received considerable attention, as for example is evident from their inclusion in the Message
Passing Interface standard [18] and from their support of various constructs in High Performance
Fortran [12, 16]. This is easily justified by their frequent appearance in parallel numerical algorithms
[11, 13, 3].
Broadcasting, scattering, gathering, multinode broadcasting and total exchange constitute a
set of representative collective communication problems that have to be efficiently solved in order
to maximize the performance of message-passing parallel programs. A general survey regarding
such communications was given in [10]. In total exchange, which is also known as multiscattering
or all-to-all personalized communication, each node in a network has distinct messages to send
to all the other nodes. Various data permutations occurring e.g. in parallel FFT and basic linear
algebra algorithms can be viewed as instances of the total exchange problem [3].
The subject of this work is the development of a general theory for solving the total exchange
problem in multidimensional networks. A multitude of quantities or properties in such networks
can be decomposed to quantities and properties of the individual dimensions. For example, the
degree of a node is the sum of the degrees in each of the dimensions. We show here that the total
exchange problem can also be decomposed to the simpler problem of performing total exchange in
single dimensions. This is a major simplification to an inherently complex problem for inherently
complex networks. We provide general algorithms applicable to any multidimensional network
given that we have total exchange algorithms for each dimension. Optimality conditions are given
and it is seen that they are met for many popular networks, e.g. hypercubes, tori and generalized
hypercubes to name a few.
The results presented here apply to packet-switched networks that follow the so-called constant
model [10]. The assumptions pertaining to the model we will follow are:
- communication links are bidirectional and fully duplex
- a message requires one time unit (or step) to be transferred between two nodes
- only adjacent nodes can exchange messages.
Another parameter of the model is that of port capabilities. Depending on whether a node can
communicate with one or all of its neighbors at the same time unit, two basic possibilities arise:
Single-port: a node can send at most one message and receive at most one message at each step.
Multiport: a node can send and receive messages from all its neighbors simultaneously.
As discussed in [10], the above assumptions constitute the standard model when examining theoretical
aspects of communications in packet-switched networks. Furthermore, results and conclusions
under this model can form the basis of arguments for other models, such as the linear
one which also quantifies the effect of message lengths. Many recent works focus exclusively on
wormhole-routed networks (an excellent survey on collective communications for such machines
was given in [17]). However, we believe that studies should not be limited to one particular type of
architecture: "it is important to consider several types of communication models, given the broad
variety of present and future communication hardware" [2]. In addition, since a circuit-switched
or wormhole routed network can emulate a packet-switched network by performing only nearest-neighbor
communications, the results also constitute a reference point for methods developed for
the former type of networks.
Algorithms to solve the total exchange problem for specific networks and under a variety
of assumptions have appeared in many recent works, mostly concentrating in hypercubes and
two-dimensional tori (e.g. [22, 14, 2, 23]). Under the single-port model we know of two optimal
algorithms, in [3, pp. 81-83] for hypercubes, and in [19] for star graphs. In contrast, our results
are applicable not only to one particular structure but rather provide a general procedure for
solving the problem in any multidimensional network.
This paper is organized as follows. We introduce formally multidimensional networks in the
next section and we give some of their properties related to our study. Section 3 gives lower
bounds on the time required for solving the total exchange problem under both port assumptions.
In the same section we derive a new formula for the single-port bound as applied to the networks
of interest. The result has its own merit as it also provides almost closed-form formulas for the
average distance in networks for which no such formula was known up to now. In Section 4 we
concentrate on single-port networks. We develop a total exchange algorithm and we give conditions
under which it behaves optimally. We also review known results about simple dimensions and
conclude that our method can be optimally applied to hypercubes, k-ary n-cubes and other
popular interconnects. In Section 5 we modify the algorithm and adapt it to the multiport model.
The extension works for networks which have 2^k identical dimensions (homogeneous
networks). Again, we provide optimality conditions and observe that they are satisfied for a
number of interesting topologies. The results are summarized in Section 6.
2. Multidimensional Networks
Let G = (V, E) be an undirected graph 1 [5] with node (or vertex) set V and edge (or link) set
E. This is the usual model of representing a multiprocessor interconnection network: processors
correspond to nodes and communication links correspond to edges in the graph. The number of
nodes in G is n = |V|. An edge in E between nodes v and u is written as the unordered pair
(v, u), and v and u are said to be adjacent to each other, or just neighbors.
A path in G from node v to node u, denoted v -> u, is a sequence of nodes v = w_0, w_1, ..., w_l = u,
such that all vertices are distinct and (w_i, w_{i+1}) is an edge of E for all 0 <= i < l. We say that the length
of a path is l if it contains l vertices apart from v. The distance, dist(v, u), between vertices v
and u is the length of a shortest path between v and u. Finally, the eccentricity of v, e(v), is the
distance to a node farthest from v, i.e. e(v) = max_{u in V} dist(v, u).
The maximum eccentricity in G is known as the diameter of G.
1 The terms 'graph' and `network' are considered synonymous here.
Figure 1. Cartesian product of two graphs
Given k graphs G_1 = (V_1, E_1), ..., G_k = (V_k, E_k), their cartesian product is defined as the graph
G = G_1 x G_2 x ... x G_k = (V, E) whose vertices are labeled by k-tuples (v_1, v_2, ..., v_k), v_i in V_i, and whose edges join vertices that differ in exactly one coordinate, the two differing coordinates being adjacent in the corresponding dimension.
We will call such products of graphs multidimensional graphs and G i will be called the ith
dimension of the product. The ith component of the address tuple of a node will be called the
ith address digit or the ith coordinate. The definition of E above in simple words states that two
nodes are adjacent if they differ in exactly one address digit. Their differing coordinates should
be adjacent in the corresponding dimension. An example is given in Fig. 1. Dimension 1 is a
graph consisting of a two-node path with consists of a three-node
ring with 3g. Their product has node set
(a; 1); (a; 2); (a; 3); (b; 1); (b; 2); (b;
According to the definition, node (a; 1) has the following neighbors: since node a is adjacent to
node b in the first dimension, node (a; 1) will be adjacent to node (b; 1); since node 1 is adjacent
to both nodes 2 and 3 in the second dimension, node (a; 1) will also be adjacent to nodes (a; 2)
and (a; 3).
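The adjacency rule can be made concrete with a few lines of Python (our own illustration, not part of the paper; the variable names are arbitrary). The sketch builds the cartesian product of the two dimensions of Fig. 1 directly from the definition: two tuples are adjacent iff they agree in all coordinates but one, and the differing coordinates form an edge of that dimension.

    from itertools import product

    def cartesian_product(dimensions):
        # dimensions[i] = (node_set_i, edge_set_i); edges are frozensets {x, y}
        nodes = list(product(*(nset for nset, _ in dimensions)))
        edges = set()
        for v in nodes:
            for u in nodes:
                diff = [i for i in range(len(v)) if v[i] != u[i]]
                # adjacent iff exactly one coordinate differs and is an edge there
                if len(diff) == 1 and frozenset((v[diff[0]], u[diff[0]])) in dimensions[diff[0]][1]:
                    edges.add(frozenset((v, u)))
        return nodes, edges

    # dimension 1: two-node path a-b; dimension 2: three-node ring 1-2-3-1
    path = ({'a', 'b'}, {frozenset(('a', 'b'))})
    ring = ({1, 2, 3}, {frozenset((1, 2)), frozenset((2, 3)), frozenset((3, 1))})
    nodes, edges = cartesian_product([path, ring])
    neighbours = [tuple(e - {('a', 1)})[0] for e in edges if ('a', 1) in e]
    print(sorted(neighbours))   # [('a', 2), ('a', 3), ('b', 1)], as in the text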
Hypercubes are products of two-node linear arrays (or rings), tori are products of rings. If
all dimensions of the torus consist of the same ring, we obtain k-ary n-cubes [6]. Meshes are
products of linear arrays [15]. Generalized hypercubes are products of complete graphs [4]. If all
dimensions G_i, i = 1, ..., k, are identical then the network is characterized as homogeneous.
Multidimensional graphs have n = n_1 n_2 ... n_k nodes, where n_i = |V_i| is the number of nodes in
dimension G_i, i = 1, ..., k. It is also known that if dist_i(v_i, u_i) is the distance between v_i and u_i in G_i,
then the distance between v = (v_1, ..., v_k) and u = (u_1, ..., u_k) in G is
   dist(v, u) = sum_{i=1}^{k} dist_i(v_i, u_i).   (1)
It will be convenient to use the don't care symbol '*' as a shorthand notation for a set of
addresses. An appearance of this symbol at an element of an address tuple represents all legal values
of this element. In the previous example, (a, *) = {(a,1), (a,2), (a,3)},
while (*, *) denotes the whole node set of the graph.
3. Lower Bounds for Total Exchange
In the total exchange problem, a node v has to send n - 1 messages, one for each of the
other nodes in an n-node network. Let us first assume that the single-port model is in effect. If
there exist n_d nodes at distance d from v, where 1 <= d <= e(v), then the messages sent by v
must cross
   s(v) = sum_{d=1}^{e(v)} d n_d
links in total. For all messages to be exchanged, the total number of link traversals must be
   S_G = sum_{v in V} s(v).   (2)
The quantity s(v) is known as the total distance or the status [5] of node v.
Every time a message is communicated between adjacent nodes one link traversal occurs.
Under the single-port model nodes are allowed to transmit only one message per step, so that the
maximum number of link traversals in a single step is at most n. Consequently, we can at best
subtract n units from S_G in each step, so that a lower bound on total exchange time is
   T >= S_G / n = AS(G).   (3)
In other words, total exchange under the single-port assumption requires time bounded below by
the average status, AS(G), of the vertices.
For multiport networks tighter bounds are obtained through cuts of the network. Partition
the vertex set V in two disjoint sets V_1 and V_2 such that V = V_1 U V_2, and let C_{V1V2} be the number
of edges in E joining the two parts, i.e. edges (v, u) with v in V_1 and u in V_2. All messages
from nodes in V_1 destined for nodes in V_2 must cross these C_{V1V2} edges. The total number of
such messages is |V_1||V_2|. Since only C_{V1V2} messages are able to pass from V_1 to V_2 at a time, we
obtain the following lower bound for total exchange time:
   T >= |V_1||V_2| / C_{V1V2}.   (4)
We are of course interested in maximizing the fraction in the right-hand side by selecting V_1 and
V_2 appropriately so that the tightest possible bound results. In many cases a bisection of the graph
is the most appropriate choice, although any sensible partition will yield quite tight bounds.
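As a small worked instance of bound (4) (our own arithmetic, not a result from the paper), consider bisecting an n-node ring with n even: the two halves are joined by C = 2 links and each half contains n/2 nodes, so the bound evaluates to ceil((n/2)(n/2)/2) = n^2/8 steps. The same computation in Python:

    import math

    def cut_bound(size1, size2, cut_links):
        # lower bound (4): ceil(|V1||V2| / C_{V1V2})
        return math.ceil(size1 * size2 / cut_links)

    n = 16                                  # ring size (even), chosen only for illustration
    print(cut_bound(n // 2, n // 2, 2))     # 32 = n*n/8 for an n-node ring bisection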
3.1. Status in multidimensional networks
In the course of our analysis on the single-port model we will need to compare the time needed
for total exchange with the lower bound of (3). We present here a formula for the status and the
average status of vertices in multidimensional graphs, as required by (3). The results are based
on the status of vertices in individual dimensions.
Theorem 1 Let G = G_1 x G_2 x ... x G_k. If s_i(v_i) is the status of v_i in G_i, i = 1, ..., k, then
the status of v = (v_1, v_2, ..., v_k) in G is
   s(v) = sum_{i=1}^{k} (n / n_i) s_i(v_i).
Proof. The status of node v can be calculated through (2) or by using the equivalent formula
s(v) = sum_{u in V} dist(v, u), where dist(v, u) is the distance between v and u. Hence, the status of v_i in G_i can be written as
   s_i(v_i) = sum_{u_i in V_i} dist_i(v_i, u_i).   (5)
We know that in a multidimensional network the distance between two vertices is equal to
the sum of distances between the corresponding coordinates (Eq. (1)). Consequently, from (5) we
obtain
   s(v) = sum_{u in V} dist(v, u)
        = sum_{u in V} sum_{i=1}^{k} dist_i(v_i, u_i)
        = sum_{i=1}^{k} { sum_{u in V} dist_i(v_i, u_i) }
        = sum_{i=1}^{k} { (n / n_i) sum_{u_i in V_i} dist_i(v_i, u_i) }
        = sum_{i=1}^{k} (n / n_i) s_i(v_i),
since, for fixed i, every u_i in V_i appears as the ith coordinate of exactly n / n_i tuples u in V,
as claimed.
The quantity s(v)/(n - 1) is known as the average distance of node v, giving the average
number of links that have to be traversed by a message departing from v. It is an important
performance measure of the network since under uniform addressing distributions it is directly
linked with the average delay a message experiences before reaching its destination [20]. Hence,
Theorem 1 can also be used to calculate the average distance of vertices in many graphs for which
no closed-form formula was known up to now. As an example, in generalized hypercubes [4] each
dimension is a complete graph with m_i vertices, i = 1, ..., k. In a complete graph all nodes are
adjacent to each other, so that s_i(v_i) = m_i - 1. Consequently, the average distance in generalized
hypercubes is
   s(v)/(n - 1) = (1/(n - 1)) sum_{i=1}^{k} (n / m_i)(m_i - 1),   where n = m_1 m_2 ... m_k.
In [4] it was possible to derive a formula only for the case where all m i are equal to each other.
In the context of the total exchange problem we are interested in the average status of the
nodes in the network. Let AS(G_i) be the average status of G_i, defined as in (3) by
   AS(G_i) = (1 / n_i) sum_{v_i in V_i} s_i(v_i).
We have the following corollary.
Corollary 1 Let G = G_1 x G_2 x ... x G_k. If AS(G_i) is the average status of G_i, i = 1, ..., k,
then the average status of G is given by
   AS(G) = sum_{i=1}^{k} (n / n_i) AS(G_i).
Proof. From Theorem 1 we obtain
   S_G = sum_{(v_1,...,v_k) in G} s(v) = sum_{(v_1,...,v_k) in G} sum_{i=1}^{k} (n / n_i) s_i(v_i) = n sum_{i=1}^{k} (n / n_i) AS(G_i),
which, divided by n, gives the required result.
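A quick numerical sanity check of Theorem 1 (our own code, not the authors'): for the k-dimensional hypercube every dimension is a two-node graph with s_i(v_i) = 1 and n_i = 2, so the formula predicts s(v) = k 2^{k-1} for every node, which a brute-force sum of Hamming distances confirms.

    from itertools import product

    def hypercube_status(k, v):
        # brute force: sum of Hamming distances from v to all 2^k nodes
        return sum(sum(a != b for a, b in zip(v, u)) for u in product((0, 1), repeat=k))

    k = 5
    n = 2 ** k
    formula = sum((n // 2) * 1 for _ in range(k))    # Theorem 1: sum_i (n/n_i) s_i(v_i)
    print(formula, hypercube_status(k, (0,) * k))    # both print 80 = k * 2**(k-1)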
4. Single-port Algorithm
Let G = A x B. A k-dimensional network G_1 x ... x G_k can still be expressed as the product
of two graphs by taking, e.g., A = G_1 x ... x G_{k-1} and B = G_k, so we may consider two dimensions
without loss of generality. Let V_A = {v_1, v_2, ..., v_{n_1}} and V_B = {u_1, u_2, ..., u_{n_2}}.
Graph G consists of n_2 (interconnected) copies of A. Let A_j be the jth copy of A, with node set
(*, u_j), where the first coordinate takes all values in V_A. Similarly, G can be viewed as n_1 copies of B, and we let
B_i be the ith copy of B with node set (v_i, *). An example is shown in Fig. 2.
Figure 2. A 4 x 3 torus as (a) four copies of a three-node ring or (b) three copies of a four-node ring
We will develop the basic idea behind our algorithm through the example in Fig. 2. Consider
the top node of A 1 . This node belongs to A 1 as well as B 1 . All nodes in A 1 have, among other
messages, messages destined for the rest of the nodes in A 1 . These messages can be distributed
by performing a total exchange within A 1 . In addition, nodes in A 1 have messages for all nodes in
A 2 , A 3 and A 4 . Somehow, these messages have to travel to their appropriate destinations. What
we will do is the following: all messages of the top node of A 1 meant for the nodes in A 2 will be
transferred to the top node of A 2 . All messages of the middle node of A 1 destined for the nodes
in A 2 will be transferred to the middle node of A 2 . Similar will be the case for the bottom node
of A 1 . Once all these messages have arrived in A 2 , the only thing remaining is to perform a total
exchange within A 2 and all these messages will be distributed to the correct destinations.
Next, nodes of A 1 have to transfer their messages meant for A 3 to nodes of A 3 . The procedure
will be identical to the procedure we followed for messages meant for A 2 . Finally, the remaining
messages in A 1 are destined for A 4 and one more repetition of the above procedure will complete
the task. Notice that what we did for messages originating at nodes of A 1 has to be done also for
messages originating at the other copies of A, i.e. A 2 , A 3 and A 4 . We are now ready to formalize
our arguments.
We are going to adopt the following notation: m_{(v_i,u_j)}(v_k, u_l) denotes the message of node
(v_i, u_j) destined for node (v_k, u_l). We will furthermore introduce the '*' symbol to denote a
corresponding set of messages. For example, m_{(v_i,u_j)}(*, u_l) denotes the messages of node (v_i, u_j)
1 For every j, 1 <= j <= n_2
2   For every k != j, 1 <= k <= n_2
3     For every i, 1 <= i <= n_1
4       Transfer the messages m_{(v_i,u_j)}(*, u_k) from node (v_i, u_j) to node (v_i, u_k);
5 For every k, 1 <= k <= n_2
6   Do in parallel for all A_j, 1 <= j <= n_2
7     In A_j perform total exchange of the messages m_{(*,u_k)}(*, u_j)
      (these messages reside in the nodes (v_i, u_j) of A_j after the transfers of lines 1-4);
Figure 3. Algorithm A1
destined for the nodes of A_l, and m_{(*,u_j)}(v_k, u_l) denotes the messages of the nodes of A_j destined for node
(v_k, u_l); m_{(v_i,u_j)}(*, *) denotes all the messages of (v_i, u_j). Notice that this last set normally
includes one message for each of the other n_1 n_2 - 1 nodes. Since no node sends messages to itself, it is
always implied that from any set of messages, we have removed every message whose source and
destination are the same.
Consider the set of messages m_{(*,*)}(*, *); this set represents our total exchange problem:
every node has one message for every other node. Next consider the set m_{(*,u_j)}(*, u_j). This is the
set of messages of nodes in A_j destined for the other nodes in A_j : they can be distributed by a
total exchange operation within A_j. Finally, consider the set m_{(v_i,u_j)}(*, u_k), k != j, i.e. the messages of (v_i, u_j) destined
for the nodes of A_k. This set will be transferred to node (v_i, u_k). Thus, after such transfers,
node (v_i, u_k) will hold the sets m_{(v_i,u_j)}(*, u_k) for every j != k,
and so on. Notice that every node in A_k will have received messages meant for every node in A_k :
these messages clearly can be distributed to the appropriate destinations through a total exchange
operation within A_k.
To recapitulate, we can solve the total exchange problem in G = A x B using Algorithm A1
shown in Fig. 3. First we perform all the transfers we described above and then we perform the
total exchanges within each A_j. The transfers correspond to lines 1-4 in Algorithm A1. After
they are completed, every node (v_i, u_j), for every i, j, will have received all messages meant for
the jth copy of A originating at the nodes (v_i, u_k), k != j.
Lines 5-7 of the algorithm distribute these messages to the correct vertices of A j in n 2 rounds.
In the kth round a total exchange is performed and the exchanged messages have originated from
A k .
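The following Python sketch (our own illustration, not the authors' code) simulates Algorithm A1 at the level of message locations on an n_1 x n_2 product, ignoring routing and timing: the transfer phase moves m_{(v_i,u_j)}(*, u_k) to node (v_i, u_k), and the per-copy total exchanges then deliver everything. The final assertion checks that every message ends at its destination.

    n1, n2 = 4, 3    # |V_A| and |V_B|, as in the 4 x 3 torus of Fig. 2

    # location[(src, dst)] = node currently holding the message src -> dst
    nodes = [(i, j) for i in range(n1) for j in range(n2)]
    location = {(s, d): s for s in nodes for d in nodes if s != d}

    # Lines 1-4: transfer m_{(i,j)}(*, k) from (i, j) to (i, k), for every k != j
    for (si, sj), (di, dj) in location:
        if dj != sj:
            location[((si, sj), (di, dj))] = (si, dj)

    # Lines 5-7: a total exchange within each copy A_k delivers the messages it holds
    for (s, d) in location:
        holder = location[(s, d)]
        assert holder[1] == d[1]      # the message already sits in the right copy of A
        location[(s, d)] = d          # the exchange within that copy moves it to d

    assert all(location[(s, d)] == d for (s, d) in location)
    print("all", len(location), "messages delivered")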
Algorithm A1 solves the total exchange problem but lines 1-4 do not show how the transfer
of messages is exactly implemented. First of all, there may exist path collisions between transfers
from (v_i, u_j) and transfers from (v_l, u_j), i != l, if we try to do them
simultaneously. Let us consider again the example in Fig. 2. At some point all nodes in A_1 want
to transfer their messages, say, for nodes in A 4 . We make the observation that these transfers can
indeed be done in parallel. That is, the top node of A 1 can transfer its messages to the top node
in A 4 , the middle node of A 1 can transfer its own messages to the middle node of A 4 and so on,
without any interference between them. The trick is to use only paths in the second dimension
(B). That is, all the transfers of the top node of A 1 use links in B 1 , all transfers from the bottom
node of A 1 use links in B 3 , etc.
Consequently, a straightforward way of parallelizing line 1 is the following: when transferring
messages from (v_i, u_j) to (v_i, u_k), only allow the use of links in the second dimension. In other
words, the allowable paths (v_i, u_j) -> (v_i, u_k) lie entirely within B_i, so that for different values of i such
paths have no node in common. Consequently, lines
1-4 can be rewritten in the improved form:
1 Do in parallel for all i, 1 <= i <= n_1
2   For every j, 1 <= j <= n_2
3     For every k != j: transfer the messages m_{(v_i,u_j)}(*, u_k) from node (v_i, u_j) to node (v_i, u_k),
      using links in B_i ;
We may still improve matters by further parallelizing lines 1-3. Within B_i we need to transfer
messages from every vertex u_j to every other vertex u_k. In Table 1 we list the
messages to be transferred by some vertex (v_i, u_j) of B_i. Notice that we do not have to transfer
messages meant for A_j anywhere, so the jth column of the table is actually unused (it will only
be used for a total exchange within A_j). Column k contains all messages of (v_i, u_j) meant for A_k; they have
to be transferred first to node (v_i, u_k).
Table 1. Messages to be transferred from node (v_i, u_j). Column j is actually unused since
messages of (v_i, u_j) meant for A_j do not have to be transferred to any other copy of A.
Instead of transferring the messages column by column (i.e. transfer all messages in column
1 to A 1 , then all messages in column 2 to A 2 , etc.) we transfer them horizontally (row by row).
The batch R_r of messages in row r contains all messages m_{(v_i,u_j)}(v_r, u_k), 1 <= k <= n_2. We will transfer all of
them, except of course for m_{(v_i,u_j)}(v_r, u_j), which is meant for a node of A_j. Let us
consider again the network in Fig. 2 and assume that the bottom nodes of A 1 , A 2 , A 3 and A 4
want to transfer their first batch, R 1 . The batch of the bottom node of A 1 contains one message
for each of the bottom nodes of A 2 , A 3 and A 4 . Similarly, batch R 1 for the bottom node of A 2
contains one message for the other three nodes in question. It should be immediately clear that
these messages constitute an instance of the total exchange problem in every node has one
message for every other node in B 1 .
In general, when every node (v_i, u_j) of B_i transfers its own batch R_r of
Table 1, a total exchange within B_i can distribute the messages appropriately. Consequently,
all rows of Table 1 of every node will be transferred where they should by performing n_1 total
exchanges in B_i : at the rth exchange all nodes (v_i, u_j) transfer their rth batch of messages (rth
row of the corresponding tables).
Based on the above discussion, and recalling that transfers within B_i do not interfere with
transfers within B_l, l != i, we may express our total exchange algorithm in its final form, Algorithm
A2, appearing in Fig. 4. Algorithm A2 is a general solution to the total exchange problem
for any multidimensional network. If the network has k > 2 dimensions,
1 For every r, 1 <= r <= n_1
2   Do in parallel for all B_i, 1 <= i <= n_1
3     In B_i perform total exchange with node (v_i, u_j)
      sending the messages m_{(v_i,u_j)}(v_r, u_k) to the nodes (v_i, u_k), k != j;
4 For every k, 1 <= k <= n_2
5   Do in parallel for all A_j, 1 <= j <= n_2
6     In A_j perform total exchange with node (v_i, u_j)
      sending the messages m_{(v_i,u_k)}(*, u_j) (its own messages m_{(v_i,u_j)}(*, u_j) when k = j);
Figure 4. Algorithm A2
Algorithm A2 can be used recursively, by taking A = G_1 x ... x G_{k-1} and B = G_k. The total
exchanges in A_j (lines 4-6) can then be performed by invoking the algorithm with A = G_1 x ... x G_{k-2} and B = G_{k-1}, and so on.
The algorithm is in a highly desirable form: it only utilizes total exchange algorithms for each
of the dimensions. The problem of total exchange in a complex network is now reduced to the
simpler problem of devising total exchange algorithms for single dimensions. For example, we are
in a position to systematically construct algorithms for tori, based on algorithms for rings.
We now proceed to determine the time requirements of the algorithm and the conditions under
which it behaves optimally.
4.1. Optimality conditions
It is not very hard to calculate the time required for Algorithm A2. This is because it is written in
a form suitable for the single-port model: every node participates in one total exchange operation
at a time. When each total exchange is performed under the single-port model, in effect no node
sends/receives more than one message at a time.
Theorem 2 If single-port total exchange algorithms for graphs A and B take T_A and T_B steps
correspondingly, then Algorithm A2 performs total exchange in G = A x B in
   T = n_2 T_A + n_1 T_B
time units.
Proof. The result is straightforward: lines 1-3 perform n_1 total exchanges within B_i (for all
i, 1 <= i <= n_1, in parallel), each requiring T_B steps. Similarly, lines 4-6 perform n_2 total exchanges
within A_j (for all j, 1 <= j <= n_2, in parallel), each requiring T_A steps.
Corollary 2 If G = G_1 x G_2 x ... x G_k and a single-port total exchange algorithm for G_i takes
T_i steps, i = 1, ..., k, then total exchange in G under the single-port model can be performed in T
steps, where
   T = sum_{i=1}^{k} (n / n_i) T_i.
Proof. The proof is by induction. If we only have one dimension then the corollary is trivially
true. Assume as an induction hypothesis that it holds for up to k - 1 dimensions. Then we must
have
   T' = sum_{i=1}^{k-1} (n' / n_i) T_i,
where T' is the time needed for total exchange in G' = G_1 x ... x G_{k-1} and n' = n_1 n_2 ... n_{k-1}.
If we let A = G' and B = G_k, Theorem 2 gives
   T = n_k T' + n' T_k = sum_{i=1}^{k-1} (n / n_i) T_i + (n / n_k) T_k = sum_{i=1}^{k} (n / n_i) T_i,
as required.
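As an illustration of Corollary 2 (our own arithmetic, under the assumption that each ring dimension achieves its bound (3), i.e. T_ring = AS(ring) = k^2/4 for an even k-node ring, as the paper states for Cayley graphs [8]): a k-ary n-cube with N = k^n nodes gets T = sum_i (N/k)(k^2/4) = nNk/4 steps.

    def single_port_time(dim_sizes, dim_times):
        # Corollary 2: T = sum_i (n / n_i) * T_i, with n = n_1 * ... * n_k
        n = 1
        for s in dim_sizes:
            n *= s
        return sum((n // s) * t for s, t in zip(dim_sizes, dim_times))

    k, dims = 8, 3                        # an 8-ary 3-cube with 512 nodes (illustrative)
    ring_time = k * k // 4                # assumed per-dimension time, AS of an even ring
    print(single_port_time([k] * dims, [ring_time] * dims))   # 3072 = 3 * 512 * 8 / 4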
Theorem 3 If single-port total exchange for every dimension G_i of G = G_1 x ... x G_k
can be performed in time equal to the lower bound of (3), then the same is true for G.
Proof. If in G_i total exchange can be performed in time equal to the lower bound of (3), then
T_i = AS(G_i). From Corollary 2, we must have
   T = sum_{i=1}^{k} (n / n_i) AS(G_i),
which, combined with Corollary 1, shows that T = AS(G), and the algorithm is thus optimal.
The last theorem provides the main optimality condition for Algorithm A2. If we have total
exchange algorithms for every dimension and these algorithms achieve the bound of (3) then
Algorithm A2 also achieves this bound. For example, in hypercubes every dimension is a two-node
graph. Trivially, in a two-node graph the time for total exchange is just one step, equal
to the average status. Thus the optimality condition is met and the presented algorithm is an
optimal algorithm for single-port hypercubes.
More generally, we have shown elsewhere [8] that there exist algorithms that need time equal
to (3) for any Cayley [1] network. Consequently, the optimality condition is met for arbitrary
products of Cayley networks. Rings and complete graphs are examples of Cayley networks and
thus Algorithm A2 solves optimally the total exchange problem in k-ary n-cubes, general tori and
generalized hypercubes.
5. Multiport Algorithm
In this section we will modify Algorithm A2 to work better under the multiport model. In its
present form, Algorithm A2 is not particularly efficient under this model. This is because lines
4-6 are executed after lines 1-3 have finished. During execution of lines 1-3 only edges of the
second dimension (B) are used while lines 4-6 use only edges of the first dimension (A). In the
multiport model we try to keep as many edges busy as possible and the behavior of Algorithm A2
does not contribute to that effect. We seek, consequently, to transfer messages in both dimensions
simultaneously. In other words we will reconstruct the algorithm such that lines 1-3 overlap in
time as much as possible with lines 4-6.
The theory we present here applies to homogeneous networks. We recall that a multidimensional
network is homogeneous when all its dimensions are identical. Thus,
G = H^k for some graph H. We will only consider the two-dimensional case, i.e. G = H x H; it will
also be seen that the algorithm we derive is applicable when the dimensionality of the graph is in
general a power of 2, i.e. G = H^d with d = 2^m.
Let H = (V_H, E_H) where V_H = {1, 2, ..., n}; that is, G has n^2 nodes.
Figure 5. A 3 x 3 homogeneous mesh
Table 2. Messages to be transferred from node (1,1) (columns: for A_1, for A_2, for A_3)
The network in Fig. 5 will be used as an example for our arguments. For node (1; 1) we give
the messages it will distribute in Table 2. The messages in the first column are meant for the
other nodes in A 1 . A total exchange within A 1 may thus begin immediately to distribute such
messages. Since this total exchange uses only links in the first dimension, node (1,1) is also
available to participate in some total exchange in the second dimension (i.e. in B 1 ). In a general
network, node (v_i, u_j) can participate in a total exchange within B_i as soon as the first total
exchange in A_j starts. Within A_j the transferred messages are m_{(v_i,u_j)}(*, u_j), listed in column
j of Table 1.
Let us see what messages will be involved in the first total exchange within B_i. Our objective
is the following: we want every node (v_i, u_j) to receive messages so
that after this total exchange in B_i is done, another total exchange can be initiated within A_j.
Consequently, we seek to arrange the transfers so that (v_i, u_j) receives one message for each node
in A_j, i.e. receives messages with destinations (*, u_j). Notice that any node (v_i, u_j) can receive at most n - 1
messages through a total exchange in B_i : since A_j has n nodes (including (v_i, u_j) itself), all the
receptions of (v_i, u_j) should be meant for nodes other than (v_i, u_j).
In the network in Fig. 5, we let for example node (1,1) send m_{(1,1)}(2,2). This message will at
some point be received by node (1,2) and it will provide one message for the forthcoming total
exchange in A_2. If (1,2) sends m_{(1,2)}(2,3) then node (1,3) will also be provided with one message
for total exchange in A_3. Similarly, (1,3) sends m_{(1,3)}(2,1), which is needed by node (1,1).
We define the following operators:
   a ⊕ b = ((a + b - 1) mod n) + 1,    a ⊖ b = ((a - b - 1) mod n) + 1.
These operators work like addition/subtraction modulo n but produce numbers ranging from 1
to n instead of 0 to n - 1, and are better suited for our purposes here. Based on these operators
and the preceding discussion, we see that one effective scheduling is to let node (v_i, u_j) send
m_{(v_i,u_j)}(v_{i⊕1}, u_{j⊕1}), for all i and all j. Hence, this node will also
receive m_{(v_i,u_{j⊖1})}(v_{i⊕1}, u_j), which it will use for the next total exchange in
A_j.
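The 1-to-n modular operators take only a couple of lines; the sketch below (illustrative only, not the authors' code) also reproduces the Fig. 5 example: with n = 3, node (1,1)'s first send is addressed to (1⊕1, 1⊕1) = (2,2), and the message it receives comes from node (1, 1⊖1) = (1,3).

    def oplus(a, b, n):
        # addition "modulo n" that yields values in 1..n rather than 0..n-1
        return (a + b - 1) % n + 1

    def ominus(a, b, n):
        return (a - b - 1) % n + 1

    n = 3
    print(oplus(1, 1, n), oplus(1, 1, n))   # 2 2: destination of node (1,1)'s first message
    print(ominus(1, 1, n))                  # 3: the sender in B_1 whose message (1,1) receives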
Let us see what other messages will be sent during this first total exchange in B_i. In our
example it is seen that since node (1,1) decided to send m_{(1,1)}(2,2), it cannot send another
message to node (1,2). Thus it has to send a message to node (1,3). Since this node will receive
m_{(1,2)}(2,3), which covers one destination in A_3, the only choice for (1,1) is to send m_{(1,1)}(3,3).
This message completes the set of messages needed by (1,3) for the next total exchange in A_3,
since all other vertices in A_3 are now covered. Similarly, (1,2) and (1,3) must send m_{(1,2)}(3,1) and
m_{(1,3)}(3,2). After this first exchange, all three nodes will have a complete set of messages, suitable for total exchanges
within A_1, A_2 and A_3.
In general, the second message that node (v_i, u_j) sends is m_{(v_i,u_j)}(v_{i⊕2}, u_{j⊕2}); it will
provide node (v_i, u_{j⊕2}) with a second message for the total exchange in A_{j⊕2}. The pattern
should now be clear: during the first total exchange in B_i, every node (v_i, u_j)
sends the messages m_{(v_i,u_j)}(v_{i⊕1}, u_{j⊕1}), m_{(v_i,u_j)}(v_{i⊕2}, u_{j⊕2}), ..., m_{(v_i,u_j)}(v_{i⊕(n-1)}, u_{j⊕(n-1)}),
or, in a compact form:
   { m_{(v_i,u_j)}(v_{i⊕l}, u_{j⊕l}) : l = 1, ..., n - 1 }.   (6)
This node will provide node (v_i, u_{j⊕l}) with the lth message it needs (i.e. a message destined for
node (v_{i⊕l}, u_{j⊕l})). Notice that the above set contains one message to be received by
each node (v_i, u_{j⊕l}), l != 0, i.e. it is a perfect set for participation in the first total exchange in B_i.
Also, it should be clear that (v_i, u_j) will receive the following messages:
   { m_{(v_i,u_{j⊖l})}(v_{i⊕l}, u_j) : l = 1, ..., n - 1 }.
Again notice that this set contains one message for each node (v_{i⊕l}, u_j), l != 0. Thus we
achieved our goal: every node in B i receives a full set of messages to be used for the subsequent
total exchange in A j .
Since A = B = H, the first total exchange in A_j finishes exactly when the first total exchange in B_i
finishes. Thus the second total exchange in A_j can start immediately, using the newly acquired
(through the exchange in B i ) messages. Then the story repeats itself: a second total exchange
in B i can be performed simultaneously with the second total exchange in A j . Our goal for this
total exchange in B i remains the same: to distribute messages that can be used for a third total
exchange in A j .
The idea behind selecting a group of messages for this second total exchange in B i is similar
to the one in the first total exchange we saw above. Now we let (v_i, u_j) send to node (v_i, u_{j⊕l}) a message
whose destination lies one position further along the first dimension than in the first exchange.
The situation is repeated continuously. While the rth total exchange within A_j is in progress, the
rth total exchange in B_i is also performed in order to provide nodes with messages for the next
total exchange in A_j. During the rth exchange in B_i a node (v_i, u_j) sends the
following messages:
   { m_{(v_i,u_j)}(v_{i⊕r_l}, u_{j⊕l}) : l = 1, ..., n - 1 },  where r_l = ((l + r - 2) mod (n - 1)) + 1.   (7)
Observe that the destination offsets r_1, r_2, ..., r_{n-1} are in the order given by
the natural sequence 1, 2, ..., n - 1, which we used in the first total exchange in B_i,
left-rotated by r - 1 positions. Based on this observation, it is easy to verify that after the rth exchange in B_i, node (v_i, u_j) has received the messages
   { m_{(v_i,u_{j⊖l})}(v_{i⊕r_l}, u_j) : l = 1, ..., n - 1 },   (8)
which can be used during the (r + 1)th total exchange in A_j.
Let us recapitulate. During the first total exchange in A_j, (v_i, u_j) distributes its own messages m_{(v_i,u_j)}(*, u_j); simultaneously,
total exchanges in B_i start. During the rth exchange in B_i the same node sends the set
of messages given in (7), and receives the set given in (8). This set will be used for the (r + 1)th
exchange in A_j. This will occur for all r = 1, ..., n - 1; total exchanges in B_i are performed
in parallel with the total exchanges in A_j.
The last (nth) total exchange in A_j will involve the messages received during the (n - 1)th
total exchange in B_i. It can be noticed that (v_i, u_j) has by then sent all its messages meant for nodes
in all other copies of A, A_k (k != j), except for the nodes (v_i, u_k). In the example of Fig. 5, we saw
that during the first two exchanges in B_1, node (1,1) sent all its messages with the exception of
messages m_{(1,1)}(1,2) and m_{(1,1)}(1,3), which are destined for nodes (1,2) and (1,3). The situation
is similar for nodes (1,2) and (1,3). In conclusion, messages m_{(v_i,u_j)}(v_i, u_k), k != j, are the
only messages remaining to be sent. Observe that this is a perfect set of messages for a (final)
total exchange in B i . This nth exchange can be performed while the nth exchange in A j occurs.
What we have described up to now is formulated as Algorithm A3 in Fig. 6. The total
exchanges in the copies of A and B are completely parallelized, hence lines 1-3. Lines 4-8
perform the transfers we described above in B i . Lines 9-13 perform the total exchanges in A j .
Notice how simple lines 11-13 are: whatever was sent through the rth exchange in B i is used
during the (r + 1)th exchange in A_j.
As it is, the algorithm works for any two-dimensional homogeneous network. Extension to
more than two dimensions seems rather difficult because the homogeneity will be lost, in the sense
that A could be different than B. For example, if G = H^3 it can be written as A x B only
with A = H and B = H x H (or vice versa), so that A != B.
However, it is easy to see that the algorithm is applicable if the dimensionality is a power
of 2. If G = H^d with d = 2^m, then we let A = B = H^{d/2}. The algorithm can then be applied
recursively for A and B, by e.g. setting A = B = H^{d/4}, and so on.
We proceed now to determine the time requirements of Algorithm A3 and to give optimality
conditions.
5.1. Optimality conditions
Theorem 4 If H has n nodes and total exchange in H requires T_H steps, then Algorithm A3 performs total exchange in G = H x H in T = n T_H steps.
Proof. Procedure TEA() performs n total exchanges in A_j (for all j, 1 <= j <= n, in parallel),
thus requiring n T_H steps. Similarly, TEB() also requires n T_H steps. The two procedures run in parallel, so the algorithm finishes when
both have finished, i.e. at time T = n T_H.
By the recursive application of the algorithm for networks where the dimensionality is a power
of 2 we have the following corollary.
1 Do in parallel
2   TEB();
3   TEA();
procedure TEB()
4 For r := 1 to n - 1
5   Do in parallel for all B_i, 1 <= i <= n
6     Perform total exchange in B_i : node (v_i, u_j) sends the messages given in (7);
7 Do in parallel for all B_i, 1 <= i <= n
8   Perform total exchange in B_i : node (v_i, u_j) sends the messages m_{(v_i,u_j)}(v_i, u_k), k != j;
procedure TEA()
9 Do in parallel for all A_j, 1 <= j <= n
10   Perform total exchange in A_j : node (v_i, u_j) sends its own messages m_{(v_i,u_j)}(*, u_j);
11 For r := 1 to n - 1
12   Do in parallel for all A_j, 1 <= j <= n
13     Perform total exchange in A_j : node (v_i, u_j) sends
       the messages received from the second dimension (B_i ) during the rth exchange there;
Figure 6. Algorithm A3 for multiport homogeneous networks: G = H x H
Corollary 3 Let G = H^d, where d = 2^m and H has n nodes. If total exchange in H requires T_H time units, then
total exchange in G can be performed in
   T = n^{d-1} T_H
steps.
Proof. The proof is by induction on m. The case of two dimensions was covered in Theorem 4. If, as
an induction hypothesis, total exchange in G' = H^{d/2} can be performed in T' = n^{d/2-1} T_H steps, we can
apply Theorem 4 with G' treated as H, T' treated as T_H, and n^{d/2} treated as n. It is then seen
that T = n^{d/2} T' = n^{d-1} T_H, as claimed.
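A small consistency check of Corollary 3 against the bound of (4) (our own arithmetic; it assumes the ring algorithm of [7] meets the ring's own cut bound n^2/8 for even n): for a two-dimensional homogeneous torus G = H^2, with H an n-node ring, Corollary 3 gives T = n T_H = n^3/8, and a bisection of G gives exactly the same value.

    import math

    n = 8                                    # ring size (even), illustrative
    ring_time = n * n // 8                   # assumed multiport ring time = ring cut bound
    torus_time = n * ring_time               # Corollary 3 with d = 2: T = n^(d-1) * T_H
    torus_cut_bound = math.ceil((n * n // 2) ** 2 / (2 * n))   # bound (4) for a torus bisection
    print(torus_time, torus_cut_bound)       # both 64 for n = 8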
Theorem 5 Let G = H^d, d = 2^m. If total exchange in H can be performed in time equal
to the lower bound of (4), then the same is true for G.
Proof. From Corollary 3, total exchange in G requires T = n^{d-1} T_H steps.
If T_H achieves the lower bound in (4) then there exists a partition V_{H1}, V_{H2} of the node set of H
such that
   T_H = |V_{H1}| |V_{H2}| / C_{V_{H1}V_{H2}},
where C_{V_{H1}V_{H2}} is the number of links separating the two parts.
Consider the following partition of V, the node set of G:
   V_1 = { (v_1, v_2, ..., v_d) : v_1 in V_{H1} },   V_2 = { (v_1, v_2, ..., v_d) : v_1 in V_{H2} }.
Then clearly, |V_1| = n^{d-1} |V_{H1}| and |V_2| = n^{d-1} |V_{H2}|. Notice that G contains n^{d-1} copies of H in the first dimension and
that in order to separate the two parts we only need to disconnect each such copy of H, by removing
links only in the first dimension. Since C_{V_{H1}V_{H2}} links are needed to disconnect each copy of H,
we obtain
   C_{V_1V_2} = n^{d-1} C_{V_{H1}V_{H2}}.
Thus, V_1 and V_2 is a partition of G such that
   |V_1||V_2| / C_{V_1V_2} = n^{d-1} |V_{H1}||V_{H2}| / C_{V_{H1}V_{H2}} = n^{d-1} T_H,
which is equal to T, the time needed for total exchange in G. Thus the bound in (4) is tight for
G, too.
Summarizing, Algorithm A3 is a multiport total exchange algorithm for homogeneous networks
whose dimensionality is a power of 2. If total exchange in H can be performed in time equal to
the lower bound of (4) then Algorithm A3 optimally solves the problem in G. For example, in [7]
we have given algorithms that achieve this lower bound in linear arrays and rings. Consequently,
Algorithm A3 leads to an optimal total exchange algorithm for homogeneous meshes and tori with
2^k dimensions.
6. Discussion
We have given a systematic procedure for performing total exchange in multidimensional net-
works. The main contribution is probably the existence of a decomposition of the problem to
simpler subproblems. Given that we have total exchange algorithms for single dimensions, we can
synthesize an algorithm for the multidimensional structure. In contrast with all the other works
on the problem, this approach is not limited to one particular network but to any graph that can
be expressed as a cartesian product.
Except for the structured nature of our method, we also showed that it is optimal with
respect to the number of communication steps for many popular networks. Under the single-port
assumption, Algorithm A2 provides optimal solutions for hypercubes, k-ary n-cubes, general tori
and actually any product of Cayley graphs. For most of these networks, this is the first optimal
algorithm to appear in the literature.
Under the multiport assumption, we reached similar conclusions for homogeneous networks
with 2^k dimensions: Algorithm A3 solves the problem in any such network. Optimality is also
guaranteed if the single-dimension algorithm achieves the bound of (4). In particular, based on
known results for linear arrays and rings, meshes and k-ary n-cubes with 2^k dimensions can optimally
take advantage of our algorithm. We are currently studying the behavior of the algorithm
in the case where the number of dimensions is not a power of two. Some preliminary results
indicate that the algorithm could still be applicable.
--R
"A group-theoretic model for symmetric interconnection networks,"
"Optimal communication algorithms for hypercubes,"
Parallel and Distributed Computation: Numerical Methods
"Generalized hypercube and hyperbus structures for a computer network,"
Distance in Graphs.
"Deadlock-free message routing in multiprocessor interconnection networks,"
"Optimal total exchange in linear arrays and rings,"
"Optimal total exchange in Cayley graphs,"
"Methods and problems of communication in usual networks,"
"On the impact of communication complexity on the design of parallel numerical algorithms,"
"Compiling Fortran D for MIMD distributed-memory machines,"
"Communication efficient basic linear algebra computations on hypercube architectures,"
"Optimum broadcasting and personalized communication in hypercubes,"
Introduction to Parallel Algorithms and Architectures: Arrays
"High Performance Fortran,"
"Collective communication in wormhole-routed massively parallel computers,"
"MPI: A message-passing interface standard,"
"Communication aspects of the star graph interconnection net- work,"
"The performance of multicomputer interconnection net- works,"
"Product-shuffle networks: towards reconciling shuffles and butterflies,"
"Data communications in hypercubes,"
"Communication algorithms for isotropic tasks in hypercubes and wraparound meshes,"
--TR
--CTR
Yu-Chee Tseng , Sze-Yao Ni , Jang-Ping Sheu, Toward Optimal Complete Exchange on Wormhole-Routed Tori, IEEE Transactions on Computers, v.48 n.10, p.1065-1082, October 1999
V. Dimakopoulos, All-port total exchange in cartesian product networks, Journal of Parallel and Distributed Computing, v.64 n.8, p.936-944, August 2004
Shan-Chyun Ku , Biing-Feng Wang , Ting-Kai Hung, Constructing Edge-Disjoint Spanning Trees in Product Networks, IEEE Transactions on Parallel and Distributed Systems, v.14 n.3, p.213-221, March
V. Dimakopoulos , Nikitas J. Dimopoulos, Optimal Total Exchange in Cayley Graphs, IEEE Transactions on Parallel and Distributed Systems, v.12 n.11, p.1162-1168, November 2001 | total exchange;collective communications;interconnection networks;packet-switched networks;multidimensional networks |
285099 | Capabilities-Based Query Rewriting in Mediator Systems. | Users today are struggling to integrate a broad range of information sources providing different levels of query capabilities. Currently, data sources with different and limited capabilities are accessed either by writing rich functional wrappers for the more primitive sources, or by dealing with all sources at a lowest common denominator. This paper explores a third approach, in which a mediator ensures that sources receive queries they can handle, while still taking advantage of all of the query power of the source. We propose an architecture that enables this, and identify a key component of that architecture, the Capabilities-Based Rewriter (CBR). The CBR takes as input a description of the capabilities of a data source, and a query targeted for that data source. From these, the CBR determines component queries to be sent to the sources, commensurate with their abilities, and computes a plan for combining their results using joins, unions, selections, and projections. We provide a language to describe the query capability of data sources and a plan generation algorithm. Our description language and plan generation algorithm are schema independent and handle SPJ queries. We also extend CBR with a cost-based optimizer. The net effect is that we prune without losing completeness. Finally we compare the implementation of a CBR for the Garlic project with the algorithms proposed in this paper. | Introduction
Organizations today must integrate multiple heterogeneous
information sources, many of which are
not conventional SQL database management systems.
Examples of such information sources include bibliographic
databases, object repositories, chemical structure
databases, WAIS servers, etc. Some of these systems
provide powerful query capabilities, while others
are much more limited. A new challenge for the
database community is to allow users to query this
data using a single powerful query language, with location
transparency, despite the diverse capabilities of
the underlying systems.
Figure
(1.a) shows one commonly proposed integration
architecture [1, 2, 3, 4]. Each data source has
a wrapper, which provides a view of the data in that
source in a common data model. Each wrapper can
translate queries expressed in the common language
to the language of its underlying information source.
The mediator provides an integrated view of the data
Research partially supported by Wright Laboratories,
Wright Patterson AFB, ARPA Contract F33615-93-C-1337.
Figure 1: (a) A typical integration architecture. (b) CBR-mediator interaction.
exported by the wrappers. In particular, when the
mediator receives a query from a client, it determines
what data it needs from each underlying wrapper,
sends the wrappers individual queries to collect the
required data, and combines the responses to produce
the query result.
This scenario works well when all wrappers can support
any query over their data. However, in the types
of systems we consider, this assumption is unrealis-
tic. It leads to extremely complex wrappers, needed
to support a powerful query interface against possibly
quite limited data sources. For example, in many systems
the relational data model is taken as the common
data model, and all wrappers must provide a full SQL
interface, even if the underlying data source is a file
system, or a hierarchical DBMS. Alternatively, this
assumption may lead to a "lowest common denomina-
tor" approach in which only simple queries are sent to
the wrappers. In this case, the search capabilities of
more sophisticated data sources are not exploited, and
hence the mediator is forced to do most of the work, resulting
in unnecessarily poor performance. We would
like to have simple wrappers that accurately reflect the
search capabilities of the underlying data source. To
enable this, the mediator must recognize differences
and limitations in capabilities, and ensure that wrappers
receive only queries that they can handle.
For Garlic [1], an integrator of heterogeneous multimedia
data being developed at IBM's Almaden Re-search
Center, such an understanding is essential.
Garlic needs to deal efficiently with the disparate data
types and querying capabilities needed by applications
as diverse as medical, advertising, pharmaceutical
research, and computer-aided design. In our
model, a wrapper is capable of handling some set of
queries, known as the supported queries for that wrap-
per. When the mediator receives a query from a client,
it decomposes it into a set of queries, each of which
references data at a single wrapper. We call these individual
queries target queries for the wrappers. A
target query need not be a supported query; it may
sometimes be necessary to further decompose it into
simpler supported Component SubQueries (CSQs) in
order to execute it. A plan combines the results of the
CSQs to produce the answer to the target query.
To obtain this functionality, we are exploring
a Capabilities-Based Rewriter (CBR) module (Fig-
ure 1.b) as part of the Garlic query engine (media-
tor). The CBR uses a description of each wrapper's
ability, expressed in a special purpose query capabilities
description language, to develop a plan for the
wrapper's target query.
The mediator decomposes a user's query into target
queries q for each wrapper w without considering
whether q is supported by w. It then passes q to the
CBR for "inspection." The CBR compares q against
the description of the queries supported by wrapper
and produces a plan p for q, if either (i) q is directly
supported by w, or (ii) q is computable by the
mediator through a plan that involves selection, projection
and join of CSQs that are supported by w. The
mediator then combines the individual plans p into a
complete plan for the user's query.
The CBR allows a clean separation of wrapper
capabilities from mediator internals. Wrappers are
"thin" modules that translate queries in the common
model into source-specific queries. 2 Hence, wrappers
reflect the actual capabilities of the underlying data
sources, while the mediator has a general mechanism
for interpreting those capabilities and forming execution
strategies for queries. This paper focuses on the
technology needed to enable the CBR approach. We
first present a language for describing wrappers' query
capabilities. The descriptions look like context-free
grammars, modified to describe queries rather than
arbitrary strings. The descriptions may be recursive,
thus allowing the description of infinitely large supported
queries. In addition, they may be schema-
independent. For example, we may describe the capabilities
of a relational database wrapper without re-
In general, there is a one-to-one mapping and no optimization
is involved in this translation. All optimization is done at
the mediator.
ferring to the schema of a specific relational database.
An additional benefit of the grammar-like description
language is that it can be appropriately augmented
with actions to translate a target query to a query of
the underlying information system. This feature has
been described in [5] and we will not discuss it further
in this paper.
The second contribution of this paper is an architecture
for the CBR and an algorithm to build plans for a
target query using the CSQs supported by the relevant
wrapper. This problem is a generalization of the problem
of determining if a query can be answered using a
set of materialized queries/views [6, 7]. However, the
CBR uses a description of potentially infinite queries
as opposed to a finite set of materialized views. The
problem of identifying CSQs that compute the target
query has many sources of exponentiality even for the
restricted case discussed by [6, 7]. The CBR algorithm
uses optimizations and heuristics to eliminate sources
of exponentiality in many common cases.
In the next section, we present the language used to
describe a wrapper's query capabilities. In Section 3
we describe the basic architecture of the CBR, identifying
three modules: Component SubQuery Discov-
ery, Plan Construction, and Plan Refinement. These
components are detailed in Sections 4, 5 and 6, re-
spectively. Section 7 summarizes the run-time performance
of the CBR, while Section 8 compares the CBR
with related work. Finally, Section 9 concludes with
some directions for future work in this area.
2 The Relational Query Description Language (RQDL)
RQDL is the language we use to describe a wrap-
per's supported queries. We discuss only Select-
Project-Join queries in this paper. In section 2.1 we
introduce the basic language features , followed in sections
2.2 and 2.3 by the extensions needed to describe
infinite query sets and to support schema-independent
descriptions. Section 2.4 introduces a normal form for
queries and descriptors that increases the precision of
the language. The complete language specification appears
in [8].
The description language focuses on conjunctive
queries. We have found that it is powerful enough
to express the abilities of many wrappers and sources,
such as lookup catalogs and object databases. Indeed,
we believe that it is more expressive than context-free
grammars (we are currently working on the proof).
2.1 Language Basics
An RQDL specification contains a set of query tem-
plates, each of which is essentially a parameterized
query. Where an actual query might have a con-
stant, the query template has a constant placeholder,
allowing it to represent many queries of the same
form. In addition, we allow the values assumed by
the constant placeholders to be restricted by specifier-
metapredicates. A query is described by a
template (loosely speaking) if (1) each predicate in the
query matches one predicate in the template, and vice
versa, and (2) any metapredicates on the placeholders
of the template evaluate to true for the matching
constants in the query. The order of the predicates in
query and template need not be the same, and different
variable names are of course possible.
For example, consider a "lookup" facility that provides
information - such as name, department, office
address, and so on - about the employees of a
company. The "lookup" facility can either retrieve
all employees, or retrieve employees whose last name
has a specific prefix, or retrieve employees whose
last name and first name have specific prefixes. 3
We integrate "lookup" into our heterogeneous system
by creating a wrapper, called lookup, that
exports a predicate emp(First-Name, Last-Name,
Department, Office, Manager). ( The Manager
field may be 'Y' or 'N'.) The wrapper also exports
a predicate prefix(Full, Prefix) that is successful
when its second argument is a prefix of its first
argument. This second argument must be a string,
consisting of letters only. We may write the following
Datalog query to retrieve emp tuples for persons
whose first name starts with 'Rak' and whose last
name starts with 'Aggr':
(Q1) answer(FN,LN,D,O,M) :- emp(FN,LN,D,O,M),
     prefix(FN,'Rak'), prefix(LN,'Aggr')
In this paper we use Datalog [9] as our query language
because it is well-suited to handling SPJ queries
and facilitates the discussion of our algorithms. 4 We
use the following Datalog terms in this paper: Distinguished
variables are the variables that appear in the
target query head. A join variable is any variable that
appears twice or more in the target query tail. In the
query (Q1) the distinguished variables are FN, LN, D,
O and M and the join variables are FN and LN.
Description (D2) is an RQDL specification of
lookup's query capabilities. The identifiers starting
with a dollar sign ($FP and $LP) are constant placeholders.
isalpha() is a metapredicate that returns true if
its argument is a string that contains letters only.
Metapredicates start with an underscore and a lowercase
letter. Intuitively, template (QT2.3) describes
query (Q1) because the predicates of the query match
those of the template (despite differences in order and
in variable names), and the metapredicates evaluate
to true when $FP is mapped to 'Rak' and $LP to
'Aggr'.
(D2) answer(F,L,D,O,M) :- (QT2.1)
emp(F,L,D,O,M)
answer(F,L,D,O,M) :- (QT2.2)
emp(F,L,D,O,M),
prefix(L, $LP), isalpha($LP)
answer(F,L,D,O,M) :- (QT2.3)
emp(F,L,D,O,M),
prefix(L, $LP), prefix(F,$FP),
isalpha($LP), isalpha($FP)
3 The "lookup" facility is very similar to a Stanford University
facility.
4 We could have used SPJ SQL queries instead of Datalog.
Then, we would use a description language that looks like SQL
and not Datalog. The same notions, i.e., placeholders, nonter-
minals, and so on, hold. The CBR algorithm is also the same.
In general, a template describes any query that can
be produced by the following steps:
1. Map each placeholder to a constant, e.g., map $LP
to 'Aggr'.
2. Map each template variable to a query variable,
e.g., map F to FN.
3. Evaluate the metapredicates and discard any template
that contains at least one metapredicate
that evaluates to false.
4. Permute the template's subgoals.
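The four steps above can be sketched in a few lines of Python (our own illustration, not the Garlic/CBR implementation; it brute-forces subgoal orderings, treats variables and placeholders uniformly, and implements only the isalpha metapredicate):

    from itertools import permutations

    def matches(template, metas, query):
        # template/query: lists of (predicate_name, argument_tuple); placeholders start with '$'
        if len(template) != len(query):
            return False
        for perm in permutations(query):                      # step 4: try every subgoal order
            binding = {}
            if all(unify(t, q, binding) for t, q in zip(template, perm)):
                if all(meta(binding) for meta in metas):      # step 3: evaluate metapredicates
                    return True
        return False

    def unify(tmpl_goal, query_goal, binding):
        (tp, targs), (qp, qargs) = tmpl_goal, query_goal
        if tp != qp or len(targs) != len(qargs):
            return False
        for t, q in zip(targs, qargs):
            if binding.setdefault(t, q) != q:                 # steps 1-2: one consistent mapping
                return False
        return True

    template = [('emp', ('F', 'L', 'D', 'O', 'M')),
                ('prefix', ('L', '$LP')), ('prefix', ('F', '$FP'))]
    metas = [lambda b: b['$LP'].isalpha(), lambda b: b['$FP'].isalpha()]
    query = [('emp', ('FN', 'LN', 'D', 'O', 'M')),
             ('prefix', ('FN', 'Rak')), ('prefix', ('LN', 'Aggr'))]
    print(matches(template, metas, query))                    # True: (QT2.3) describes (Q1)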
2.2 Descriptions of Large and Infinite
Sets of Supported Queries
RQDL can describe arbitrarily large sets of templates
(and hence queries) when extended with non-terminals
as in context-free grammars. Nonterminals
are represented by identifiers that start with an underscore
capital letter. They have zero or
more parameters and they are associated with nonterminal
templates. A query template t containing non-terminals
describes a query q if there is an expansion
of t that describes q. An expansion of t is obtained
by replacing each nonterminal N of t with one of the
nonterminal templates that define N until there is no
nonterminal in t.
For example, assume that lookup allows us to pose
one or more substring conditions on one or more fields
of emp. For example, we may pose query (Q3), which
retrieves the data for employees whose office contains
the strings 'alma' and 'B'.
(Q3) answer(F,L,D,O,M) :- emp(F,L,D,O,M),
     substring(O,'alma'), substring(O,'B')
(D4) uses the nonterminal Cond to describe the
supported queries. In this description the query template
(QT4.1) is supported by nonterminal templates
such as (NT4.1).
(D4) answer(F,L,D,O,M) :- (QT4.1)
     emp(F,L,D,O,M), Cond(F,L,D,O,M)
     Cond(F,L,D,O,M) :- (NT4.1)
     substring(F, $FS), Cond(F,L,D,O,M)
     Cond(F,L,D,O,M) :- (NT4.2)
     substring(L, $LS), Cond(F,L,D,O,M)
     Cond(F,L,D,O,M) :- (NT4.3)
     substring(D, $DS), Cond(F,L,D,O,M)
     Cond(F,L,D,O,M) :- (NT4.4)
     substring(O, $OS), Cond(F,L,D,O,M)
     Cond(F,L,D,O,M) :- (NT4.5)
     substring(M, $MS), Cond(F,L,D,O,M)
     Cond(F,L,D,O,M) :- (NT4.6)
To see that description (D4) describes query (Q3),
we expand Cond(F,L,D,O,M) in (QT4.1) with the
nonterminal template (NT4.4) and then again expand
Cond with the same template. The Cond subgoal
in the resulting expansion is expanded by the empty
template (NT4.6) to obtain expansion (E5).
(E5) answer(F,L,D,O,M) :- emp(F,L,D,O,M),
     substring(O,$OS), substring(O,$OS1)
Before a template is used for expansion, all of its
variables are renamed to be unique. Hence, the second
occurrence of placeholder $OS of template (NT4.4)
is renamed to $OS1 in (E5). (E5) describes query
(Q3), i.e., the placeholders and variables of (E5) can
be mapped to the constants and variables of (Q3).
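The expansion process itself can be sketched as a small recursive generator (our own illustration, not the CBR implementation; the fresh-name scheme simply appends a counter, so it produces $OS1, $OS2 rather than the paper's $OS, $OS1, and only the office rule of (D4) is encoded):

    from itertools import count

    # defining bodies for the nonterminal Cond: an (NT4.4)-style rule and the empty rule
    COND_BODIES = [[('substring', ('O', '$OS')), ('Cond', ())],
                   []]

    def expansions(goals, fresh=None, depth=3):
        # yield every expansion of `goals`, replacing at most `depth` nonterminals
        fresh = fresh if fresh is not None else count(1)
        for i, (pred, args) in enumerate(goals):
            if pred == 'Cond':                                 # first nonterminal found
                if depth == 0:
                    return
                for body in COND_BODIES:
                    suffix = str(next(fresh))                  # rename placeholders uniquely
                    renamed = [(p, tuple(a + suffix if a.startswith('$') else a for a in ar))
                               for p, ar in body]
                    yield from expansions(goals[:i] + renamed + goals[i+1:], fresh, depth - 1)
                return
        yield goals                                            # no nonterminals left

    start = [('emp', ('F', 'L', 'D', 'O', 'M')), ('Cond', ())]  # (QT4.1)
    for e in expansions(start):
        print(e)      # one of the printed expansions corresponds to (E5)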
2.3 Schema Independent Descriptions of
Supported Queries
Description (D4) assumes that the wrapper exports
a fixed schema. However, the query capabilities of
many sources (and thus wrappers) are independent of
the schemas of the data that reside in them. For exam-
ple, a relational database allows SPJ queries on all of
its relations. To support schema independent descriptions
RQDL allows the use of placeholders in place
of the relation name. Furthermore, to allow tables
of arbitrary arity and column names, RQDL provides
special variables called vector variables, or simply vec-
tors, that match lists of variables that appear in a
query. We represent vectors in our examples by identifiers
starting with an underscore ( ). In addition, we
provide two built-in metapredicates to relate vectors
and attributes: subset and in. subset(R, A)
succeeds if each variable in the list that matches R
appears in the list that matches A. in($Position,
X, A) succeeds if A matches a variable list, and
there is a query variable that matches X and appears in it
at the position number that matches $Position. (For
readability we will use italics for vectors and bold for
metapredicates).
For example, consider a wrapper called file-wrap
that accesses tables residing in plain UNIX files. It
may output any subset of any table's fields and may
impose one or more substring conditions on any field.
Such a wrapper may be easily implemented using the
UNIX utility AWK. (D6) uses vectors and the built-in
metapredicates to describe the queries supported by
file-wrap.
(D6) answer(R) :- (QT6.1)
     $Table(A), Cond(A), subset(R, A)
     Cond(A) :- (NT6.1)
     in($Position, X, A), substring(X, $S), Cond(A)
     Cond(A) :- (NT6.2)
In general, to decide whether a query is described
by a template containing vectors we must expand the
nonterminals, map the variables, placeholders, and
vectors, and finally, evaluate any metapredicates. To
illustrate this, we show how to verify that query (Q7)
is described by (D6).
(Q7) answer(L,D) :- emp(F,L,D,O,M),
     substring(O,'alma'), substring(O,'B')
First, we expand (QT6.1) by replacing the non-terminal
Cond with (NT6.1) twice, and then with
(NT6.2), thus obtaining expansion (E8).
(E8) answer(R) :- $Table(A),
     in($Position, X, A), substring(X, $S),
     in($Position1, X1, A), substring(X1, $S1),
     subset(R, A)
Expansion (E8) describes query (Q7) because there
is a mapping of variables, vectors, and placeholders
of (E8) that makes the metapredicates succeed and
makes every predicate of the expansion identical to a
predicate of the query. Namely, vector A is mapped
to [F,L,D,O,M], vector R to [L,D], placeholders
$Position and $Position1 to 4, $S to 'alma', $S1 to
'B', and the variables X and X1 to O. We must be careful
with vector mappings; if a vector V that maps
to the list [X1, ..., Xn] appears in a metapredicate, we replace
V by the list [X1, ..., Xn]. However, if the vector V appears
in a predicate as p(V), the mapping results in p(X1, ..., Xn).
Finally, the metapredicate in(4, O,
[F,L,D,O,M]) succeeds because O is the fourth variable
of the list, and subset([L,D], [F,L,D,O,M])
succeeds because [L,D] is a "subset" of [F,L,D,O,M].
Vectors are useful even when the schema is known
as the specification may otherwise be repetitious, as
in description (D4). In our running example, even
though we know the attributes of emp, we save effort
by not having to explicitly mention all of the column
names to say that a substring condition can be placed
on any column.
2.4 Query and Description Normal Form
If we allow templates' variables and vectors to map
to arbitrary lists of constants and variables, descriptions
may appear to support queries that the underlying
wrapper does not support. This is because using
the same variable name in different places in the query
or description can cause an implicit join or selection
that does not explicitly appear in the description. For
example, consider query (Q9), which retrieves employees
where the manager field is 'Y' and the first and
last names are equal, as denoted by the double appearance
of FL in emp.
(D6) should not describe query (Q9). Nevertheless,
we can construct expansion (E10), which erroneously
matches query (Q9) if we map A to [FL,FL,D,O,'Y']
and R to [FL,D]:
This section introduces a query and description
normal form that avoids inadvertently describing joins
and selections that were not intended. In the normal
form both queries and descriptions have only explicit
equalities. A query is normalized by replacing
every constant c with a unique variable V and
then by introducing the subgoal
more, for every join variable V that appears n ? 1
times in the query we replace its instances with the
Vn and introduce the subgoals
We replace any
appearance of V in the head with V 1 . For example,
query (Q11) is the normal form of (Q9).
FL1=FL2, M='Y'
Description (D6) does not describe (Q11) because
(D6) does not support the equality conditions that
RQDL
Plan Refinement
Target Query
Specification
Plans (not fully optimized)
Component SubQueries
Component SubQuery Discovery
Algebraically Optimal Plans
Plan Construction
Figure
2: The CBR's components
appear in (Q11). Description (D12) supports equality
conditions on any column and equalities between any
two columns: (NT12.2) describes equalities with constants
and (NT12.3) describes equalities between the
columns of our table.
(D12) answer( R) :- (QT12.1)
Table
in($Position,X, A), substring(X, $S),
in($Position1,X, A), X=$C, Cond(
in($Pos1,X, A), in($Pos2,Y, A),
X=Y, Cond(
For presentation purposes we use the more compact
unnormalized form of queries and descriptions when
there is no danger of introducing inadvertent selections
and joins. However, the algorithms rely on the normal
form.
3 The Capabilities-Based Rewriter
The Capabilities-Based Rewriter (CBR) determines
whether a target query q is directly supported
by the appropriate wrapper, i.e., whether it matches
the description d of the wrapper's capabilities. If not,
the CBR determines whether q can be computed by
combining a set of supported queries (using selections,
projections and joins). In this case, the CBR will produce
a set of plans for evaluating the query. The CBR
consists of three modules, which are invoked serially
(see
Figure
Component SubQuery (CSQ) Discovery:
finds supported queries that involve one or more
subgoals of q. The CSQs that are returned contain
the largest possible number of selections and
joins, and do no projection. All other CSQs are
pruned. This prevents an exponential explosion
in the number of CSQs.
ffl plan construction: produces one or more plans
that compute q by combining the CSQs exported
by CSQ Discovery. The plan construction algorithm
is based on query subsumption and has
been tuned to perform efficiently in the cases typically
arising in capabilities-based rewriting.
ffl plan refinement: refines the plans constructed
by the previous phase by pushing as many projections
as possible to the wrapper.
3.1 Consider query (Q13), which retrieves
the names of all managers that manage departments
that have employees with offices in the 'B'
wing, and the employees' office numbers. This query
is not directly supported by the wrapper described in
(D12).
emp(F1,L1,D,O1,M1), substring(O1,'B')
The CSQ detection module identifies and outputs
the following CSQs:
answer 14
answer 15
emp(F1,L1,D,O1,M1), substring(O1, 'B')
Note, the CSQ discovery module does not output
the 2 4 CSQs that have the tail of (Q14) but export a
different subset of the variables F0, L0, D, and O0 (like-
wise for (Q15). The CSQs that export fewer variables
are pruned.
The plan construction module detects that a join
on D of answer 14 and answer 15 produces the required
answer of (Q13). Consequently, it derives the plan
(P16).
answer 14
answer 15
Finally, the plan refinement module detects that
variables O0, F1, L1, and M1 in answer 14 and answer 15
are unnecessary. Consequently, it generates the more
efficient plan (P19).
answer 17
answer
emp(F1,L1,D,O1,M1), substring(O1, 'B')
answer 17 (F0,L0,D), answer (D,O1)The CBR's goal is to produce all algebraically optimal
plans for evaluating the query. An algebraically
optimal plan is one in which any selection, projection
or join that can be done in the wrapper is done there,
and in which there are no unnecessary queries. More
Definition 3.1 (Algebraically Optimal Plan P )
plan P is algebraically optimal if there is no other
plan P 0 such that for every CSQ s of P there is a
corresponding CSQ s 0 of P 0 such that the set of sub-goals
of s 0 is a superset of the set of subgoals of s (i.e.,
s 0 has more selections and joins than s) and the set
of exported variables of s is a superset of the set of
exported variables of s 0 (i.e., s 0 has more projections
than
In the next three sections we describe each of the modules
of the CBR in turn.
The CSQ discovery module takes as input a target
query and a description. It operates as a rule production
system where the templates of the description
are the production rules and the subgoals of the target
query are the base facts. The CSQ discovery module
uses bottom-up evaluation because it is guaranteed to
terminate even for recursive descriptions [10]. How-
ever, bottom-up derivation often derives unnecessary
facts, unlike top-down. We use a variant of magic sets
rewriting [10] to "focus" the bottom-up derivation. To
further reduce the set of derived CSQs we develop two
pruning techniques as decsribed in Sections 4.2
and 4.3. Reducing the number of derived CSQs makes
the CSQ discovery more efficient and also reduces the
size of the input to the plan construction module.
The query templates derive answer facts that correspond
to CSQs. In particular, a derived answer fact
is the head of a produced CSQ whereas the underlying
base facts, i.e., the facts that were used for deriving
answer, are the subgoals of the CSQ. Nonterminal
templates derive intermediate facts that may be
used by other query or nonterminal templates. We
keep track of the sets of facts underlying derived facts
for pruning CSQs. The following example illustrates
the bottom-up derivation of CSQs and the gains that
we realize from the use of the magic-sets rewriting.
The next subsection discusses issues pertaining to the
derivation of facts containing vectors.
EXAMPLE 4.1 Consider query (Q3) and description
(D4) from page 3. The subgoals emp(F,L,D,O,M),
substring(O, 'alma'), and substring(O,'B') are
treated by the CSQ discovery module as base facts.
To distinguish the variables in target query subgoals
from the templates' variables we ``freeze'' the vari-
ables, e.g. F,L,D,O, into similarly named constants,
e.g. f,l,d,o. Actual constants like 'B' are in single
quotes.
In the first round of derivations template (NT4.6)
derives fact Cond(F,L,D,O,M) without using any base
fact (since the template has an empty body). Hence,
the set of facts underlying the derived fact is empty.
Variables are allowed in derived facts for nontermi-
nals. The semantics is that the derived fact holds for
any assignment of frozen constants to variables of the
derived fact.
In the second round many templates can fire. For
example, derives the fact Cond(F,L,D,o,M)
using Cond(F,L,D,O,M) and substring(o,'alma'),
or using Cond(F,L,D,o,M) and substring(o,'B').
Thus, we generate two facts that, though identical,
they have different underlying sets and hence we must
retain both since they may generate different CSQs.
In the second round we may also fire (NT4.6) again
and produce Cond(F,L,D,O,M) but we do not retain
it since its set of underlying facts is equal to the version
of Cond(F,L,D,O,M) that we have already produced.
Eventually, we generate answer(f,l,d,o,m) with
set of
underlying facts femp(f,l,d,o,m), substring(o,
'alma'), substring(o,'B')g. Hence we output the
CSQ (Q3), which, incidentally, is the target query.
The above process can produce an exponential
number of facts. For example, we could
have proved Cond(o,L,D,O,M), Cond(F,o,D,O,M),
Cond(o,o,D,O,M), and so on. In general, assuming
that emp has n columns and we apply m substrings on
it we may derive n m facts. Magic-sets can remove this
source of exponentiality by "focusing" the nontermi-
nals. Applying magic-sets rewriting and the simplifications
described in Chapter 13.4 of [10] we obtain
the following equivalent description. We show only
the rewriting of templates (NT4.4) and (NT4.6). The
others are rewritten similarly.
emp(F,L,D,O,M), Cond(F,L,D,O,M)
mg Cond(F,L,D,Office,M),
substring(Office, $OS),
Cond(F,L,D,Office,M)
mg Cond(F,L,D,O,M)
emp(F,L,D,O,M)
only Cond(f,l,d,o,m) facts (with different
underlying sets) are produced. Note, the magic-sets
rewritten program uses the available information in a
way similar to a top-down strategy and thus derives
only relevant facts. 2
4.1 Derivations Involving Vectors
When the head of a nonterminal template contains
a vector variable it may be possible that a derivation
using this nonterminal may not be able either to bind
the vector to a specific list of frozen variables or to allow
the variable as is in the derived fact. The CSQ discovery
module can not handle this situation. For most
descriptions, magic-sets rewriting solves the problem.
We demonstrate how and we formally define the set
of non-problematic descriptions.
For example, let us fire template (NT6.1) of (D6)
on the base facts produced by query (Q3). Assume
also that (NT6.2) already derived Cond( A). Then
we derive that Cond( holds, with set of underlying
facts fsubstring(o, 'alma')g, provided that
the constraint " A contains o" holds. The constraint
should follow the fact until A binds to some list of
frozen variables. We avoid the mess of constraints using
the following magic-sets rewriting of (D6).
(D21) answer( R) :- (QT21.1)
Table
subset( R,
mg Cond( A), in($Position,X, A),
substring(X,$S), Cond(
When rules (NT21.1) and (NT21.2) fire the first
subgoal instantiates variable A to [f,l,d,o,m] and
they derive only Cond([f,l,d,o,m]). Thus, magic-
sets caused A to be bound to the only vector of inter-
est, namely [f,l,d,o,m]. Note a program that derives
facts with unbound vectors may not be problematic
because no metapredicate may use the unbound
vector variable. However we take a conservative approach
and consider only those programs that produce
facts with only bound vector variables. Magic-sets
rewriting does not always ensure that derived facts
have bound vectors. In the rest of this section we describe
sufficient conditions for guaranteeing the derivation
of facts with bound vectors only. First we provide
a condition (Theorem 4.1) that guarantees that
a program (that may be the result of magic rewriting)
does not derive facts with unbound vectors. Then we
describe a class of programs that after being magic
rewriteen satisfy the condition of Theorem 4.1.
Theorem 4.1 A program will always produce facts
with bound vector variables if in all rules "
\Gammatail" tail has a non-metapredicate subgoal that
refers to V , or in general V can be assigned a binding
if all non-metapredicate subgoals in tail are bound. 2
Intuitively, after we magic-rewrite a program it will
deriving facts with unbound vectors only if a
nonterminal of the initial program derives uninstan-
tianted vectors and in the rules that is used it does
not share variables with predicates or nonterminals s
that bind their arguments (otherwise, the magic predicate
will force the the rules that produce uninstan-
tianted vectors to focus on bindings of s.) For ex-
ample, specification (MS6) does not derive uninstan-
tianted vectors because the nonterminal Cond, that
may derive uninstantianted variables, shares variables
with
Table
A). [8] provides a formal criterion for deciding
whether the bottom-up evaluation derives facts
that have vector variables. This criterion is used by
the following algorithm that derives CSQs given a target
query and a description.
Algorithm 1
Input: Target query Q and Description D
Output: A set of CSQs s
Check if the program derives
facts with vector variables (see [8])
Reorder each template R in D such that
All predicate subgoals occur in
the front of the rule
A nonterminal N appears after M if N
depends on M for grounding.
Metapredicates appear at the end of the rule
Rewrite D using Magic-sets
Evaluate bottom-up the rewritten description D
as described in [8]
Note, template R can always be reordered. The proof
appears in [8].
4.2 Retaining Only "Representative"
CSQs
A large number of unneeded CSQs are generated by
templates that use vectors and the subset metapred-
icate. For example, template (QT12.1) describes for
a particular A all CSQs that have in their head any
subset of variables in A. It is not necessary to generate
all possible CSQs. Instead, for all CSQs that are
derived from the same expansion e, of some template
t, where e has the form
metapredicate listi,
and V does not appear
in the hpredicate and metapredicate listi we generate
only the representative CSQ that is derived by mapping
V to the same variable list as W . 5 All represented
CSQs, i.e., CSQs that are derived from e by
mapping V to a proper subset of W are not gener-
ated. For example, the representative CSQ (Q15) and
the represented CSQ (Q18) both are derived from the
expansion (E22) of template (QT12.1).
in($Position,X, A), substring(X,'B'),
subset( R,
The CSQ discovery module generates only (Q15) and
not (Q18) because (Q15) has fewer attributes than
(Q18) and is derived by by mapping the vector R to
the same vector with A, i.e., to [F1,L1,D,O1,M1].
Representative CSQs often retain unneeded attributes
and consequently Representative plans, i.e., plans containing
representative CSQs, retrieve unneeded at-
tributes. The unneeded attributes are projected out
by the plan refinement module.
Theorem 4.2 Retaining only representative CSQs
does not lose any plan, i.e., if there is an algebraically
optimal plan p s that involves a represented query s
then p s will be discovered by the CBR. 2
The intuitive proof of this claim is that for every
plan p s there is a corresponding representative plan
r derived by replacing all CSQs of p s with their rep-
resentatives. Then, given that the plan refinement
component considers all plans represented by a representative
plan, we can be sure that the CBR algorithm
does not lose any plan. The complete proof appears
in [8].
Evaluation: Retaining only a representative CSQ of
head arity a eliminates 2 a \Gamma 1 represented CSQs thus
5 In general, the hlist of predicates and metapredicates i may
contain metapredicates of the form in(hpositioni,hvariable i i,
m. In this case, the template describes all
CSQs that output a subset of W and a superset of
hvariableimg. The CSQ discovery module out-
puts, as usual, the representative CSQ and annotates it with the
set S that provides the "minimum" set of variables that represented
CSQs must export. In this paper we will not describe
any further the extensions needed for the handling of this case.
eliminating an exponential factor from the execution
time and from the size of the output of the CSQ discovery
module. Still, one might ask why the CSQ discovery
phase does not remove the variables that can
be projected out. The reason is that the "projection"
step is better done after plans are formed because
at that time information is available about the other
CSQs in the plan and the way they interact (see Section
6). Thus, though postponing projection pushes
part of the complexity to a later stage, it eliminates
some complexity altogether. The eliminated complexity
corresponds to those represented CSQs that in the
end do not participate in any plan because they retain
too few variables.
4.3 Pruning Non-Maximal CSQs
Further efficiency can be gained by eliminating any
CSQ Q that has fewer subgoals than some other CSQ
checks fewer conditions than Q 0 . A CSQ
is maximal if there is no CSQ with more subgoals and
the same set of exported variables, modulo variable
renaming. We formalize maximality in terms of subsumption
[10]:
Definition 4.1 (Maximal CSQs) A CSQ s m is a
maximal CSQ if there is no other CSQ s that is subsumed
by s m . 2
Evaluation: In general, the CSQ discovery module
generates only maximal CSQs and prunes all others.
This pruning technique is particularly effective when
the CSQs contain a large number of conditions. For
example, assume that g conditions are applied to the
variables of a predicate. Consequently, there are 2
CSQs where each one of them contains a different
proper subset of the conditions. By keeping "maximal
CSQs only" we eliminate an exponential factor of 2 g
from the output size of the CSQ discovery module.
Theorem 4.3 Pruning non-maximal CSQs does not
lose any algebraically optimal plan. 2
The reason is that for every plan p s involving a
non-maximal CSQ s there is also a plan pm that involves
the corresponding maximal CSQ s m such that
pm pushes more selections and/or joins to the wrapper
than p s , since s m by definition involves more selections
and/or joins than s.
5 Plan Construction
In this section we present the plan construction
module (see Figure 2.) In order to generate a (rep-
resentative) plan we have to select a subset S of the
CSQs that provides all the information needed by the
target query, i.e., (i) the CSQs in S check all the sub-goals
of the target query, (ii) the results in S can be
joined correctly, and (iii) each CSQ in S receives the
constants necessary for its evaluation. Section 5.1 addresses
(i) with the notion of "subgoal consumption."
Section 5.2 checks (ii), i.e., checks join variables. Section
5.3 checks (iii) by ensuring bindings are avail-
able. Finally, Section 5.4 summarizes the conditions
required for constructing a plan and provides an efficient
plan construction algorithm.
5.1 Set of Consumed Subgoals
We associate with each CSQ a set of consumed sub-goals
that describes the CSQs contribution to a plan.
Loosely speaking, a component query consumes a sub-goal
if it extracts all the required information from
that subgoal. A CSQ does not necessarily consume
all its subgoals. For example, consider a CSQ s e that
semijoins the emp relation with the dept relation to
output each emp tuple that is in some department in
relation dept. Even though this CSQ has a subgoal
that refers to the dept relation it may not always consume
the dept subgoal. In particular, consider a target
query Q that requires the names of all employees
and the location of their departments. CSQ s e does
not output the location attribute of table dept and
thus does not consume the dept subgoal with respect
to query Q. We formalize the above intuition by the
following definition:
Definition 5.1 (Set of Consumed Subgoals for
a CSQ) A set S s of subgoals of a CSQ s constitutes
a set of consumed subgoals of s if and only if
1. s exports every distinguished variable of the target
query that appears in S s , and
2. s exports every join variable that appears in S s
and also appears in a subgoal of the target query
that is not in S s .Theorem 5.1 Each CSQ has a unique maximal set
of consumed subgoals that is a superset of every other
set of consumed subgoals. 2
The proof of the uniqueness of the maximal consumed
set appears in [8]. Intuitively the maximal set describes
the "largest" contribution that a CSQ may
have in a plan. The following algorithm states how
to compute the set of maximal consumed subgoals of
a CSQ. We annotate every CSQ s with its set of maximal
consumed subgoals, C s .
Algorithm 2
Input: CSQ s and target query Q
Output: CSQ s with computed annotation C s
Insert in C s all subgoals of s
Remove from C s subgoals that have a
distinguished attribute of Q not exported by s
Repeat until size of C s is unchanged
Remove from C s subgoals that:
Join on variable V with subgoal g
of Q where g is not in C s , and
Join variable V is not exported by s
Discard CSQ s if C s is empty.
This algorithm is polynomial in the number of the
subgoals and variables of the CSQ. Also, the algorithm
discards all CSQs that are not relevant to the target
query:
Definition 5.2 (Relevant CSQ) A CSQ s is called
relevant if C s is non-empty. 2
Intuitively, irrelevant CSQs are pruned out because in
most cases they do not contribute to a plan, since they
do not consume any subgoal. Note, we decide the relevance
of a CSQ "locally," i.e., without considering
other CSQs that it may have to join with. By pruning
non-relevant CSQs we can build an efficient plan construction
algorithm that in most cases (Section 5.2)
produces each plan in time polynomial in the number
of CSQs produced by the CSQ discovery module.
However, there are scenarios (see the extended version
[8]) where the relevance criteria may erroneously
prune out a CSQ that could be part of a plan. We
may avoid the loss of such plans by not pruning irrelevant
CSQs and thus sacrificing the polynomiality of
the plan construction algorithm. In this paper we will
not consider this option.
5.2 Join Variables Condition
It is not always the case that if the union of consumed
subgoals of some CSQs is equal to the set of
the target query's subgoals then the CSQs together
form a plan. In particular, it is possible that the join
of the CSQs may not constitute a plan. For exam-
ple, consider an online employee database that can be
queries for the names of all employees in a given divi-
sion. The database can also be queried for the names
of all employees in a given location. Further, the name
of an employee is not uniquely determined by their location
and division. The employee database cannot
be used to find employees in a given division and in a
given location by joining the results of two queries -
one on division and the other on location. To see this,
consider a query that looks for employees in "CS" in
"New York". Joining the results of two independent
queries on division and location will incorectly return
as answer a person named "John Smith" if there is a
"John Smith" in "CS" in "San Jose" and a different
"John Smith" in "Electrical" in "New York".
Intuitively, the problem arises because the two independent
queries do not export the information necessary
to correctly join their results. We can avoid this
problem by checking that CSQs are combined only if
they export the join variables necessary for their correct
combination. The theorem of Section 5.4 formally
describes the conditions on join variables that guarantee
the correct combination of CSQs.
5.3 Passing Required Bindings via
Nested Loops Joins
The CBR's plans may emulate joins that could not
be pushed to the wrapper, with nested loops joins
where one CSQ passes join variable bindings to the
other. For example, we may compute (Q13) by the
following steps: first we execute (Q23); then we collect
the department names (i.e., the D bindings) and
for each binding d of D, we replace the $D in (Q24)
with d and send the instantianted query to the wrap-
per. We use the notation /$D in the nested loops plan
(P25) to denote that (Q24) receives values for the $D
placeholder from D bindings of the other CSQs - (Q23)
in this example.
answer 23
answer 24
The introduction of nested loops and binding passing
poses the following requirements on the CSQ discovery
ffl CSQ discovery: A subgoal of a CSQ s may contain
placeholders /$hvari, such as $D, in place of
corresponding join variables (D in our example.)
Whenever this is the case, we introduce the structure
/$hvari next to the answer s that appears
in the plan. All the variables of s that appear
in such a structure are included in the set B s ,
called the set of bindings needed by s. For ex-
ample, fg. CSQ discovery
previously did not use bindings information while
deriving facts. Thus, the algorithm derives useless
CSQs that need bindings not exported by any
other CSQ.
The optimized derivation process uses two sets
of attributes and proceeds iteratively. Each iteration
derives only those facts that use bindings
provided by existing facts. In addition, a
fact is derived if it uses at least one binding that
was made available only in the very last itera-
tion. Thus, the first iteration derives facts that
need no bindings, that is, for which B s is empty.
The next iteration derives facts that use at least
one binding provided by facts derived in iteration
one. Thus, the second iteration does not derive
any subgoal derived in the first iteration, and so
on. The complete algorithm that appears in [8]
formalizes this intuition.
The bindings needed by each CSQ of a plan impose
order constraints on the plan. For example, the existence
of D in B 24 requires that a CSQ that exports D is
executed before (Q24). It is the responsibility of the
plan construction module to ensure that the produced
plans satisfy the order constraints.
Evaluation The pruning of CSQs with inappropriate
bindings prunes an exponential number of CSQs
in the following common scenario: Assume we can put
an equality condition on any variable of a subgoal p.
Consider a CSQ s that contains p and assume that n
variables of p appear in subgoals of the target query
that are not contained in s. Then we have to generate
versions of s that describe different binding pat-
terns. Assuming that no CSQ may provide any of the
variables it is only one (out the 2 n ) CSQs useful.
5.4 A Plan Construction Algorithm
In this section we summarize the conditions that are
sufficient for construction of a plan. Then, we present
an efficient algorithm that finds plans that satisfy the
theorem's conditions. Finally, we evaluate the algo-
rithm's performance.
Theorem 5.2 Given CSQs s corresponding
heads answer i (V i
sets of maximal
consumed subgoals C i and sets of needed bindings
, the plan
is correct if
ffl consumed sets condition: The union of maximal
consumed sets [ i=1;:::;n C i is equal to the target
query's subgoal set.
ffl join variables condition: If the set of maximal
consumed subgoals of CSQ s i has a join variable
V then every CSQ s j that contains V in its set of
maximal consumed subgoals C j exports V .
ffl bindings passing condition: If
there must be a CSQ s exports V . 2
The proof is based on the theory of containment mappings
appropriately extended to take into consideration
nested loops [8].
The plan construction algorithm in the extended
version of the paper [8] is based on Theorem 5.2. The
algorithm takes as input a set of CSQs derived by the
discovery process described later, and the target
query Q. At each step the algorithm selects a CSQ s
that consumes at least one subgoal that has not been
consumed by any CSQ s 0 considered so far and for
which all variables of B s have been exported by at
least one s 0 . Assuming that the algorithm is given m
CSQs (by the CSQ discovery module) it can construct
a set that satisfies the consumed sets and the bindings
passing conditions in time polynomial in m. Neverthe-
less, if the join variables condition does not hold the
algorithm takes time exponential in m because we may
have to create exponentially many sets until we find
one that satisfies the join variables condition. How-
ever, the join variables condition evaluates to true for
most wrappers we find in practice (see following dis-
cussion) and thus we usually construct a plan in time
polynomial in m.
For every plan p there may be plans p 0 that are
identical to p modulo a permutation of the CSQs of p.
In the worst case there are
is the number of CSQs in p. Since it is useless to generate
permutations of the same plan, The algorithm
creates a total order OE of the input CSQs and generates
plans by considering CSQ s 1 before CSQ s 2 only
the CSQs are considered in order by
OE. Note, a query s 2 must always be considered after
a query s 1 if s 1 provides bindings for s 2 . Hence, OE
must respect the partial order OE
OE
provides bindings to s 2 .
The plan construction algorithm first sorts the input
CSQs in a total order that respects the PO b
OE.
Then it procedes by picking CSQs and testing the conditions
of Theorem 5.2 until it consumes all subgoals
of the target query. The algorithm capitalizes on the
assumption that in most practical cases every CSQ
consumes at least one subgoal and the join variables
condition holds. In this case, one plan is developed in
time polynomial in the number of input CSQs. The
following lemma describes an important case where
the join variables condition always holds.
Lemma 5.1 The join variables condition holds for
any set of CSQs such that
1. no two CSQs of the set have intersecting sets of
maximal consumed subgoals, or
2. if two CSQs contain the subgoal g(V
their sets of maximal consumed subgoals then they
both export variables
Condition (1) of Lemma 5.1 holds for typical wrappers
of bibliographic information systems and lookup
services (wrappers that have the structure of (D12)),
relational databases and object oriented databases -
wrapped in a relational model. In such systems it is
typical that if two CSQs have common subgoals then
they can be combined to form a single CSQ. Thus, we
end up with a set of maximal CSQs that have non-intersecting
consumed sets. Condition (2) further relaxes
the condition (1). Condition (2) holds for all
wrappers that can export all variables that appear in
a CSQ. The two conditions of Lemma 5.1 cover essentially
any wrapper of practical importance.
6 Plan Refinement
The plan refinement module filters and refines constructed
plans in two ways. First, it eliminates plans
that are not algebraically optimal. The fact that CSQs
of the representative plans have the maximum number
of selections and joins and that plan refinement
pushes the maximum number of projections down is
not enough to guarantee that the plans produced are
algebraically optimal. For example, assume that CSQs
are interchangeable in all plans, and the set
of subgoals of s 1 is a superset of the set of subgoals of
exports a subset of the variables exported
by s 2 . The plans in which s 2 participates are algebraically
worse than the corresponding plans with s 1 .
Nevertheless, they are produced by the plan construction
module because s 1 and s 2 may both be maximal,
and do not represent each other because they are produced
by different template expansions. Plan refinement
must therefore eliminate plans that include s 2 .
Plan refinement must also project out unnecessary
variables from representative CSQs. Intuitively, the
necessary variables of a representative CSQ are those
variables that allow the consumed set of the CSQ to
"interface" with the consumed sets of other CSQs in
the plan. We formalize this notion and its significance
by the following definition (note, the definition is not
restricted to maximal consumed sets):
Definition 6.1 (Necessary Variables of a Set of
Consumed Subgoals:) A variable V is a necessary
variable of the consumed subgoals set S s of some CSQ
s if, by not exporting V , S s is no longer a consumed
set. 2
The set of necessary variables is easily computed:
Given a set of consumed subgoals S, a variable V of S
is a necessary variable if it is a distinguished variable,
or if it is a join variable that appears in at least one
subgoal that is not in S.
Due to space limitations the complete plan refinement
algorithm and its evaluation appear in [8]. Its
main complication is due to the fact that unecessary
variables cannot always be projected out when the
maximal consumed sets of the CSQs intersect.
7 Evaluation
The CBR algorithm employs many techniques to
eliminate sources of exponentiality that would otherwise
arise in many practical cases. The evaluation
paragraphs of many sections in this paper describe the
benefit we derive from using these techniques. Remember
that our assumption that every CSQ consumes
at least one subgoal led to a plan construction
module that develops a plan in time polynomial to the
number of CSQs produced by the CSQ detection mod-
ule, provided that the join variables condition holds.
This is an important result because the join variables
condition holds for most wrappers in practice, as argued
in Subsection 5.4.
The CBR deals only with Select-Project-Join
queries and their corresponding descriptions. It produces
algebraically optimal plans involving CSQs, i.e.,
plans that push the maximum number of selections,
projections and joins to the source. However, the CBR
is not complete because it misses plans that contain
irrelevant CSQs (see Definition 5.2 and the discussion
of Section 5.1.) On the other hand, the techniques
for eliminating exponentiality preserve completeness,
in that we do not miss any plan through applying one
of these techniques (see justifications in Sections 4.2,
4.3.)
8 Related Work
Significant results have been developed for the resolution
of semantic and schematic discrepancies while
integrating heterogeneous information sources. How-
ever, most of these systems [11, 12, 4, 13] do not
address the problem of different and limited query
capabilities in the underlying sources because they
assume that those sources are full-fledged databases
that can answer any query over their schema. 6 The
recent interest in the integration of arbitrary information
sources, including databases, file systems, the
Web, and many legacy systems, invalidates the assumption
that all underlying sources can answer any
query over the data they export and forces us to re-solve
the mismatch between the query capabilities provided
by these sources. Only a few systems have addressed
this problem.
HERMES [11] proposes a rule language for the
specification of mediators in which an explicit set of
parameterized calls can be made to the sources. At
run-time the parameters are instantiated by specific
values and the corresponding calls are made. Thus,
HERMES guarantees that all queries sent to the wrappers
are supported. Unfortunately, this solution reduces
the interface between wrappers and mediators
to a very simple form (the particular parameterized
6 The work in query decomposition in distributed databases
has also assumed that all underlying systems are relational and
equally able to perform any SQL query.
calls), and does not fully utilize the sources' query
power.
DISCO [14]describes the set of supported queries
using context-free grammars. This technique reduces
the efficiency of capabilities-based rewriting because
it treats queries as "strings."
The Information Manifold [15] develops a query capabilities
description that is attached to the schema
exported by the wrapper. The description states
which and how many conditions may be applied on
each attribute. RQDL provides greater expressive
power by being able to express schema-independent
descriptions and descriptions such as "exactly one condition
is allowed."
TSIMMIS suggests an explicit description of the
wrapper's query capabilities [5], using the context-free
grammar approach of the current paper. (The
description is also used for query translation from the
common query language to the language of the underlying
source.) However, TSIMMIS considers a restricted
form of the problem wherein descriptions consider
relations of prespecified arities and the mediator
can only select or project the results of a single CSQ.
This paper enhances the query capability description
language of [5] to describe queries over arbitrary
schemas, namely, relations with unspecified arities and
names, as well as capabilities such as "selections on
the first attribute of any relation." The language also
allows specification of required bindings, e.g., a bibliography
database that returns "titles of books given
author names." We provide algorithms for identifying
for a target query Q the algebraically optimal CSQs
from the given descriptions. Also, we provide algorithms
for generating plans for Q by combining the
results of these CSQs using selections, projections, and
joins.
The CBR problem is related to the problem of determining
how to answer a query using a set of materialized
views [16, 6, 7, 17]. However, there are significant
differences. These papers consider a specification
language that uses SPJ expressions over given relations
specifying a finite number of views. They cannot
express arbitrary relations, arbitrary arities, binding
requirements (with the exception of [7]), or infinitely
large queries/views. Also, they do not consider generating
plans that require a particular evaluation order
due to binding requirements.
[6] shows that rewriting a conjunctive query is in
general exponential in the total size of the query and
views. [17] shows that if the query is acyclic we can
rewrite it in time polynomial to the total size of the
query and views. [6, 7] generate necessary and sufficient
conditions for when a query can be answered
by the available views. By contrast, our algorithms
check only sufficient conditions and might miss a plan
because of the heuristics used. Our algorithm can be
viewed as a generalization of algorithms that decide
the subsumption of a datalog query by a datalog program
(i.e., the description). Recently [18] proposed
Datalog for the description of supported queries. It
also suggested an algorithm that essentially finds what
we call maximal CSQs.
9 Conclusions and Future Work
In this paper, we presented the Relational Query
Description Language, RQDL, which provides powerful
features for the description of wrappers' query
capabilities. RQDL allows the description of infinite
sets of arbitrarily large queries over arbitrary schemas.
We also introduced the Capabilities-Based Rewriter,
CBR, and presented an algorithm that discovers plans
for computing a wrapper's target query using only
queries supported by the wrapper. Despite the inherent
exponentiality of the problem, the CBR uses
optimizations and heuristics to produce plans in reasonable
time in most practical situations.
The output of the CBR algorithm, in terms of the
number of derived plans, remains a major source of
exponentiality. Though the CBR prunes the output
plans by deriving a plan only if no other plan pushes
more selections, projections or joins to the source, it
may still be the case that the number of plans is exponential
in the number of subgoals and/or join vari-
ables. For example, consider the case where our query
involves a chain of n joins and each one of them can
be accomplished either by a left-to-right nested loops
join, or a right-to-left nested loops join, or a local join.
In this case, CBR has to output 3 n plans where each
of the plans employs one of the three join methods.
Then, the mediator's cost-based optimizer would have
to estimate the cost of each one of the plans and choose
the most efficient. We could modify the CBR to generate
all of these plans or only some of them, depending
on the time to be spent on optimization.
Currently, we are looking at implementing a CBR
for IBM's Garlic system [1]. We are also investigating
tighter couplings between the mediator's cost-based
optimizer and the CBR. Finally, we are investigating
more powerful rewriting techniques that may replace
a target query's subgoals with combinations of semantically
equivalent subgoals that are supported by the
wrapper.
Acknowledgements
We are grateful to Mike Carey, Hector Garcia-
Molina, Anand Rajaraman, Anthony Tomasic, Jeff
Ullman, Ed Wimmers, and Jennifer Widom for many
fruitful discussions and comments.
--R
Towards heterogeneous multimedia information systems: The Garlic approach.
Object exchange across heterogeneous information sources.
Amalgame: a tool for creating interoperating persistent
The Pegasus heterogeneous multi-database system
A query translation scheme for the rapid implementation of wrappers.
Answering queries using views.
Answering queries using templates with binding patterns.
Principles of Database and Knowledge-Base Systems
Principles of Database and Knowledge-Base Systems
HERMES: A heterogeneous reasoning and mediator system.
An approach to resolving semantic heterogeneity in a federation of autonomous
Integration of Information Systems: Bridging Heterogeneous Databases.
Scaling heterogeneous databases and the design of DISCO.
Query processing in the information manifold.
Computing queries from derived relations.
Query folding.
Answering queries using limited external processors.
--TR
--CTR
Stefan Huesemann, Information sharing across multiple humanitarian organizations--a web-based information exchange platform for project reporting, Information Technology and Management, v.7 n.4, p.277-291, December 2006
Yannis Papakonstantinou , Vinayak Borkar , Maxim Orgiyan , Kostas Stathatos , Lucian Suta , Vasilis Vassalos , Pavel Velikhov, XML queries and algebra in the Enosys integration platform, Data & Knowledge Engineering, v.44 n.3, p.299-322, March
Alin Deutsch , Lucian Popa , Val Tannen, Physical Data Independence, Constraints, and Optimization with Universal Plans, Proceedings of the 25th International Conference on Very Large Data Bases, p.459-470, September 07-10, 1999
Andreas Koeller , Elke A. Rundensteiner, A history-driven approach at evolving views under meta data changes, Knowledge and Information Systems, v.8 n.1, p.34-67, July 2005 | query containment;heterogeneous sources;query rewriting;cost optimization;mediator systems |
285120 | A Parallel Algorithm to Reconstruct Bounding Surfaces in 3D Images. | The growing size of 3D digital images causes sequential algorithms to be less and less usable on whole images and a parallelization of these algorithm is often required. We have developed an algorithm named Sewing Faces which synthesizes both geometrical and topological information on bounding surface of 6-connected3D objects. We call such combined information a skin. In this paper we present a parallelization of Sewing Faces. It is based on a splitting of 3D images into several sub-blocks. When all the sub-blocks are processed a gluing step consists of merging all the sub-skins to get the final skin. Moreover we propose a fine-grain approach where each sub-block is processed by several parallel processors. | Introduction
Over the past decade, 3D digitalization techniques such as the Magnetic Resonance Imaging
have been extensively developed. They have opened new research topics in 3D digital image
processing and are of primary importance in many application domains such as medical imaging.
The classical notions of 2D image processing have been extended to 3D (pixels into voxels, 4-
connectivity into 6-connectivity, etc) and the 2D algorithms have to be adapted to 3D problems
([4], [2]). In this process the amount of data is increased by an order of magnitude (from n 2
pixels in a 2D image to n 3 voxels in a 3D image, where n is the size of the image edges) and in
consequence the time complexity of 3D algorithms is also increased by an order of magnitude.
In order to still get efficient algorithms in terms of running time and to deal with growing size
images, these algorithms have to be parallelized.
Among the many problems in 3D image processing, we focus in this paper on the problem
of the reconstruction of bounding surfaces of 6-connected objects in 3D digital images.
A 3D digital image is characterized by a 3D integer matrix called block; each integer I(v)
of the block defines a value associated with a volume element or voxel v of the image. An image
describes a set of objects such as organs in medical images. The contour of an object is composed
of all the voxels which belong to the object but which have at least one of their adjacent voxels
in the background. From this set of voxels we compute the bounding surface of the object. It is
a set of closed surfaces enclosing the object.
We have developed in [6] a sequential algorithm for bounding surfaces reconstruction. The
objective of this paper is to present a parallel version of this algorithm. This parallelization is
based on a decomposition of the 3D block into sub-blocks. On each sub-block a fragment of
the bounding surface is computed. Once all the fragments have been determined a final step
consists in merging them together in order to retrieve the complete bounding surface.
The paper is organized as follows. Section 2 recalls the principles of bounding surfaces
reconstruction. Section 3 presents some basic notions of 3D digital images while section 4
briefly recalls our sequential algorithm for bounding surfaces reconstruction. Section 5 discusses
the sub-blocks decomposition, then sections 6 and 7 respectively present the coarse-grain and
fine-grain parallelizations of the algorithm. In section 8 we briefly show how to transform
a reconstructed surface into a 2D mesh. Finally section 9 presents experimental results and
compares our approach with related works. Some of the notions introduced in this paper are
illustrated with figures. Since they are not always easy to visualize in 3D they will be presented
using the 2D analogy.
Surface reconstruction
The closed surfaces that bound an object can be determined in two different ways :
ffl using a method by approximation, the surface is reconstructed by interpolating the discretized
data. The Marching Cubes [5] developed by Lorensen and Cline is such a method
it builds a triangulation of the surface. Various extensions of the method have been
proposed, either by defining a heuristic to solve ambiguous cases [9] or by reducing the
number of generated triangles. Faster reconstructions have been developed. Some are
based on parallelized versions of the algorithm [7]. Others use the octree abstract data
type [3] which reduces the number of scanned voxels.
ffl using an exact method, the surface is composed of faces shared by a voxel of the object
and a voxel of the background. Such a method has been proposed by Artzy et al. [1].
The efficiency of the various reconstruction algorithms is strongly related to the type of scan
used to determine the surface. Hence the surface reconstruction can be realized either by a
complete search among all the voxels of the block or by a contour following for which only the
voxels of the object contour are scanned. The contour following approach yields more efficient
algorithms whose time complexity is proportional to the number of voxels of the contour instead
of the number of voxels of the whole block. The Marching Cubes algorithm is based on a
whole-block scanning while the method proposed in [1] relies on a contour following.
The determination of the bounding surface of an object is useful to visualize the object but
also to manipulate it, using techniques such as a distortion of a surface, a transformation of
a surface into a surface mesh, a derefinement of a surface by merging adjacent coplanar faces,
a reversible polyhedrization of discretized volumes. In the former case, the surface needs only
to be defined by geometrical information, i.e. the list of its triangles in case of approximation
methods or the list of its faces in case of an exact method. In the latter case however, the surface
must be defined not only by geometrical information but also by topological information, i.e.
information stating how the faces are connected together. Note that it is of course possible to
recover the topological information from the geometrical one. For each face, one have to scan all
the other faces defining the surface in order to find its adjacent faces, i.e. the ones which share
one edge with it. If the surface contains n faces then this topological reconstruction is O(n 2 ).
To avoid this quadratic operation the topological information must be collected together with
the geometrical information.
Figure
1: A block
The algorithm we have developed in [6] reconstructs the bounding surface of any 6-connected
object of a 3D digital image. It is called Sewing Faces and its characteristics are the following
ffl it is an exact method. It extracts faces belonging to a voxel of the object and a voxel of
the background.
ffl it is based on a contour following. Its time complexity is therefore proportional to the
number of voxels of the contour.
ffl it synthesizes both geometrical and topological information. In this case the reconstructed
surface is named a skin. The topological information is synthesized using sews stating how
two adjacent faces of the bounding surface are connected together.
ffl its time and space complexity are both linear according to the number of faces of the skin,
as proved in [6].
3 Notions of 3D digital images
A 3D block (see figure 1) can be seen as a stack of adjacent voxel slices pushed together according
to any one of the three axes x, y or z. A voxel is made of six faces (whose types can be numbered
as shown in figure 2), and twelve edges. Each face has an opposite face in a voxel; for instance
face of type 2 is opposite to face of type 5 (cf. figure 2). In the following we call face i a face of
type i.
Two faces that share one edge are adjacent. Two voxels that share one face are 6-adjacent;
if they share only one edge they are 18-adjacent (see figure 3). In the following we call object
in a block a set of 6-connected voxels.
be any two voxels of set '. If there exists a path x
are 6-adjacent, then ' is 6-connected.
If block B contains more than one object, that is to say if B is made of several 6-connected
components ' i , we call this set of objects a composed object and we denote it by \Theta : \Theta = S
The voxels which are not in object \Theta are in the background.
A boolean function is defined on block B. It is denoted by \Theta B (v) and states whether or not
voxel v belongs to object \Theta. There are several ways to define function \Theta B (v), depending on the
z z
y
Figure
2: Type of voxel faces
(b)
face shared by u and v
edge shared
by u and v
(a)
Figure
3: (a) : u and v are 6-adjacent, (b) : u and v are 18-adjacent
type of digitalized data. If the block is already thresholded, we may define \Theta B
user-defined. If the block is not segmented, we may use a function \Theta B
true . On the contrary, if we want to define the object as the complement of
the background, we may define \Theta B Other more sophisticated definitions
are possible.
If an object contains n holes, its bounding surface is made of (1 n) borders that are not
connected together. Each border is a closed surface made of adjacent faces sewed together.
Figure
4 illustrates this notion with a 1-hole object using the 2D analogy.
Since objects are 6-connected sets of voxels, there exist three different types of sews between
two adjacent faces of a border. These three types of relations have been presented by Rosenfeld
[8] and are named 1-sew, 2-sew and 3-sew (see figure 5). They depend on the adjacency relation
between the voxel(s) supporting the two faces.
Figure
4: A 2D object with one hole and its two borders
3-sew
2-sew
1-sew
Figure
5: Three types of sews
be a 6-connected object with n holes. Each border \Upsilon ' i
of ' i is a pair
ffl F is the set of faces which separate 6-connected component ' i from the background by a
closed
ffl R is the sewing relation. It is a set of 4-tuples (f expressing that faces f 1 and f 2
are sewed together through their common edge e using a sew of type s or 3).
Using this definition the notions of skin are introduced as follows.
Definition 3 The skin of object ' i characterized by n holes is the union of all its borders and
is denoted by S ' i
;j .
The skin of composed object
' i is the union of the skins S ' i
and is denoted by S \Theta :
S \Theta
Since the skin of a composed object is a union of borders, it is also defined as a pair (F; R)
as introduced by definition 2.
4 Sewing Faces : an algorithm for skin reconstruction
For each border \Upsilon to be reconstructed, the starting point of Sewing Faces is a pair (v; i) where
v is a voxel of the object contour and i is a face of v belonging to border \Upsilon. Such a pair is called
a starting-voxel and can be either given by the user or determined by a dichotomous search
algorithm that depends on the type of the 3D image.
The principles of the sequential version of Sewing Faces are the following. From the starting
voxel, the algorithm first computes its faces that belong to the bounding surface and then detects
among its adjacent voxels (either 6 or 18-adjacent) the ones that belong to the contour. For each
of these voxels it determines the faces that are also included in the bounding surface and realizes
their sews with adjacent faces of the skin. The process is then iterated for all the adjacent voxels
that also belong to the contour. Step-by-step the bounding surface is reconstructed based on a
contour following.
The algorithm uses a hash table to memorize the faces that have already been added to the
skin, and a stack to store the voxels to be examined. It uses two main functions GetF aces (v; i)
and T reat Seq: (v) that are now described. They rely on the notion of neighbor of a voxel
according to a given face defined by its type (cf. figure 2).
Definition 4 Let v be a voxel whose face of type i is shared with voxel u. Voxel u is called the
neighbor of v by its face i and is denoted by n(v;
Function GetF aces (v; i) determines the faces of v belonging to the skin when its face of
does. It runs as follows :
(1) add 1 face of type i of v to F .
(2) add to F any of the four faces of type j of v adjacent to face of type i such that n(v;
2 \Theta
or n(v;
B, i.e. such that the neighbor of v by face j is not in the object.
(3) if one face has been added to the border during steps (1) or (2) then :
(a) add to F face of type k of v which is opposite to i if n(v;
2 \Theta or n(v;
(b) add to R the 1-sews (i; e that have been added to F
during steps (1), (2) or (3)(a).
(c) push v in the stack of voxels to be treated. Hence voxel v is not entirely treated
at this step. In particular its 2-sews ans 3-sews with adjacent voxels have not been
detected yet.
(d) add to the hash table the faces of v added to the skin.
Function T reat Seq: (v) determines all the 6-adjacent or 18-adjacent voxels of v such that one
or more of their faces belong to the skin, and sews these faces to those of v when they share
one edge, i.e. realizes the 2 and 3\Gammasews. The data structure used by T reat Seq: (v) is an array
which indicates for each face type, the reference in F of the corresponding face of voxel
v if this face has already been added to the skin. The corresponding entry is empty if the face is
not in F or if the face has not already been added to the skin. T reat Seq: (v) is defined as follows
operation add does not add a face twice in F , i.e. it checks whether the given face already belongs to F and
actually adds it only if it does not belong to F .
and one of its adjacent faces in v (call it j) belong to the border, face k opposite to i and adjacent
to j also belongs to the border if the neighbor of v by face k is not in the object.
reat Seq: (v)
get from the hash table
for each edge e of v shared by two faces
such that faces v
do
/* w is not in object \Theta B */
/* in this case face j of voxel u */
/* belongs to the skin */
else
/* w is in object \Theta B */
/* in this case the opposite face to face */
/* i in voxel w belongs to the skin */
endif
endfor
Using these two functions the sequential version of Sewing Faces, denoted by SF Seq: ( ) is
defined by:
for each starting voxel (v; i) do
GetF aces (v; i)
while not empty (stack) do
reat Seq: (top (stack))
5 Sub-blocks decomposition
Let us suppose that the blocks we deal with contain one byte long integers. The memory size
required for blocks of size 128 3 , 512 3 or 1024 3 is respectively 2MB, 128MB or 1GB. To allow any
computer to run Sewing Faces on such large-size data, the whole block must be decomposed into
sub-blocks and the algorithm must be processed on these sub-blocks. The size of the sub-blocks
depends on the available amount of memory. In the general case, each voxel intensity being g
bytes long, a c bytes memory space available for the sub-block storage can hold sub-blocks up
to size (l; l; l) with l = (g=c) 1=3 .
Using a parallel machine, such a decomposition into sub-blocks is also very useful since the
bounding surfaces reconstruction algorithm may be run simultaneously on the different sub-blocks
assigned to different processors of the architecture.
5.1 Sub-blocking
Splitting the block into sub-blocks is the sub-blocking operation. A sub-block has six faces, each
face is a voxel slice. It is fully defined by the coordinates of its origin in the block and by its
size according to axes x, y and z.
Figure 6: The gluing faces (GF) and forbidden faces of a sub-block inside the block (2D analogy)
On each sub-block, Sewing Faces builds a sub-skin. All the sub-skins must finally be merged
to get the whole skin. In order to be able to glue all the sub-skins together, the overlap between
two adjacent sub-blocks must be two-slices wide.
Definition 5 Let b_1 and b_2 be two sub-blocks whose origins in block B are (B.x_1, B.y_1, B.z_1) and
(B.x_2, B.y_2, B.z_2) and whose sizes are (b_1.x, b_1.y, b_1.z) and (b_2.x, b_2.y, b_2.z). If there exist two
axes alpha, beta in {x, y, z} along which the two sub-blocks coincide, and if along the third axis gamma
they overlap by exactly two slices (B.gamma_2 = B.gamma_1 + b_1.gamma - 2 or B.gamma_1 = B.gamma_2 + b_2.gamma - 2), then b_1 and b_2 are said
to be adjacent.
On any sub-block two particular types of faces are emphasized: the gluing faces and the
forbidden faces, as illustrated in figure 6 using the 2D analogy. They correspond to the overlap
between sub-blocks and are therefore defined only "inside" the whole block and not on its own
faces.
Definition 6 Let b be a sub-block and f_b be one of its faces such that f_b is not included in a
face of block B. Face f_b is called a forbidden face. A slice which is adjacent to a forbidden face
is called a gluing face.
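The classification of the six faces of a sub-block follows directly from its position inside block B. The sketch below marks the forbidden faces (each of which has a gluing slice next to it); the conventions (block B starting at the origin, origins and sizes as (x, y, z) triples) are assumptions made for the sketch only.

    def forbidden_faces(origin, size, block_size):
        flags = {}
        for axis in range(3):
            flags[(axis, 'low')] = origin[axis] > 0
            flags[(axis, 'high')] = origin[axis] + size[axis] < block_size[axis]
        return flags   # True: forbidden face (the slice next to it is a gluing face)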
The execution of Sewing Faces on sub-block b reconstructs the sub-skin associated with b.
This sub-skin is similar to a skin except on the "border" of sub-block b where some sews are
missing. The missing sews connect a face of a voxel lying on one of its gluing faces with a face
of a voxel belonging to a sub-block adjacent to b. Notice that since the sews of type 1 involve
only one voxel, they can always be detected even if they are on a gluing face. The missing sews
are therefore only of type 2 or 3. They must be detected and memorized to be treated during
the final gluing phase. To do so the complete neighborhood of the gluing-face voxels must be
examined. This explains the presence of the forbidden faces adjacent to the gluing faces.
Since the voxels of the forbidden faces are not treated in sub-block b, any forbidden face must
be shared by an adjacent sub-block where it is considered as a gluing face. These adjacency
relations between sub-blocks guarantee that the gluing process of the sub-skins will be possible.
They are characterized as follows.
Figure 7: A valid sub-blocking (2D analogy; GF marks the gluing faces of the sub-blocks)
Definition 7 A set {b_i} of sub-blocks such that:
- every voxel v of the object belongs to at least one sub-block b_i in which v does not belong to a forbidden face of b_i, and
- whenever a voxel v of the object belongs to two sub-blocks b_i and b_j, these sub-blocks are adjacent,
is a valid sub-blocking (with respect to the object).
Notice that a valid set of sub-blocks need not cover the whole block. If part of the image
contains only background voxels, it is unnecessary to process it. Figure 7 shows a valid sub-
blocking using the 2D analogy. In the general case it is easy to automatically split a block into
a valid sub-blocking whose sub-blocks can be separately processed.
5.2 Gluing phase
Once the sub-skins on the different sub-blocks have been reconstructed, they must be merged
together to get the final skin. This process requires the realization of the missing sews. We call
the whole process (merging and sewing) the gluing phase.
For any sub-block b_i the sub-skin is characterized by the pair (F_{b_i}, R_{b_i}), where F_{b_i} is the set of
faces belonging to the sub-skin of b_i and R_{b_i} is the corresponding set of fully realized sews.
During the sub-skin computation the missing sews, called half-sews, are memorized. They are
defined as follows.
Definition 8 A half-sew is a quintuple (e, s, f, coord, t) where
- e is the edge to be sewed;
- s is the type of sew to realize;
- f is the face of set F sharing edge e;
- coord are the coordinates of the voxel v that owns f;
- t is the type of the unknown face that shares edge e with face f.
For each sub-block this information is memorized using either a hash table or a list ordered
according to field coord.
The gluing process is realized in two steps. The first one consists in merging together the
different sets F_{b_i} computed on the sub-blocks b_i into the global set F_B. The second step concerns the set R_B
defining the sews between all the faces of F_B. This step is realized by building full-sews from
pairs of half-sews describing the two parts of a sew.
Definition 9 A full-sew is composed of two half-sews (e, s, f_1, coord_1, t_1) and (e, s, f_2, coord_2, t_2)
characterized by:
- the same edge e and the same sew type s;
- coord_1 and coord_2 define the coordinates of the two adjacent voxels involved in the sew;
- t_1 is the type of face f_2 and t_2 is the type of face f_1.
At the end of the gluing process, the skin describes the geometry and the topology
of all the borders that were pointed out by the starting voxels given at the beginning.
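One possible way to organise this second step is sketched below: half-sews (definition 8) from all sub-blocks are pooled and two half-sews sharing the same edge and sew type are paired into a full-sew. The field names follow definition 8; exact duplicates, which the gluing process must tolerate, would need an extra check that is omitted here.

    from collections import namedtuple

    HalfSew = namedtuple('HalfSew', 'edge sew_type face coord missing_face_type')

    def glue(half_sew_lists):
        pending, full_sews = {}, []
        all_half_sews = sorted((h for lst in half_sew_lists for h in lst),
                               key=lambda h: h.coord)
        for hs in all_half_sews:
            key = (hs.edge, hs.sew_type)
            other = pending.pop(key, None)
            if other is None:
                pending[key] = hs
            else:
                full_sews.append((other, hs))   # the two parts of one sew
        return full_sews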
5.3 Fragmentation of the objects
The following problem occurs when decomposing a block into sub-blocks. A 6-connected object
contained in the block is fragmented on the different sub-blocks and on a given sub-block the
fragment of the object may not be 6-connected. If this problem is not carefully taken into account
the skin reconstruction will be incorrect.
Figure 8 illustrates such a situation using the 2D analogy. The initial block is split into
two sub-blocks with an overlap of two slices : the gluing and forbidden faces. The 6-connected
object is distributed on the two sub-blocks in such a way that on sub-block 2 there are two
non-connected fragments of the object. Therefore if the algorithm on sub-block 2 runs with
only one starting voxel, then only one of the two fragments will be reconstructed. Since it is
not realistic to expect the user to give as many starting voxels as there are fragments of the
objects on the different sub-blocks, this problem must be automatically solved by the method.
This is realized on the gluing faces. When a voxel of the contour lies on a gluing face, its
adjacent voxels belonging to the forbidden face may also belong to the contour and are therefore
considered as new starting voxels for the adjacent sub-block.

Figure 8: Fragmentation of a 6-connected object into non-6-connected fragments

In figure 8 let the two voxels drawn
in dark grey be the starting voxels on each sub-block. During its processing, sub-block 1 detects
that the sub-skin it is reconstructing gets out in sub-block 2. A new starting voxel (drawn in
black) is then pushed by sub-block 1 on the stack of sub-block 2 and no part of the skin is lost.
Sub-block 1 will also send to sub-block 2, as another starting voxel, the voxel mentioned in the
figure. Sub-block 2 will discard this voxel since it has already been treated while constructing
the small fragment on top of figure 8 from its own starting voxel. Vice versa, sub-block 2 will
send two new starting voxels to sub-block 1. They will also be discarded by sub-block 1 since
they have already been treated. This mechanism guarantees that the whole skin will finally be
reconstructed. As a consequence only one starting voxel (v, i) (as described in section 4) per
sub-block is required at the beginning of the algorithm.
6 Coarse-grain parallelization
The coarse-grain parallelization of Sewing Faces is based on the notions of sub-blocking and
gluing phase. The idea is to decompose the 3D block of data into sub-blocks as defined in
section 5. The sub-skins on each sub-block are computed simultaneously on different processors.
Once all the sub-skins have been determined the final gluing phase is realized.
The computation of a sub-skin is similar to the sequential version, except for the voxels of
the gluing faces that may induce missing sews. For any such voxel the following steps must be
realized
(1) store the corresponding half-sew (as defined in definition 8) in order to later compute the
full-sew;
(2) determine which voxel v' of the forbidden face should be treated to realize this sew;
(3) determine which sub-block b_j owns v' in a slice that is not a forbidden face;
(4) push v' on the stack of voxels of sub-block b_j as a new starting voxel. This makes it possible to
solve the fragmentation problem introduced in section 5.3.
These four points are realized by function HalfSews( ); a sketch is given below. Notice that in order to realize point
(3), each sub-block has to know the origin and the size of all the other sub-blocks. Point (4)
can be realized either with a message-passing mechanism on a distributed-memory machine or
using shared memory on a shared-memory machine.
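A sketch of these four steps follows, with hypothetical helpers: owns_strictly(v, origin, size) is assumed to test whether a voxel lies in a sub-block outside its forbidden faces, and the stacks dictionary stands for the per-sub-block stacks of starting voxels (shared memory or message passing in the real implementation).

    def half_sews(hs, v_prime, sub_blocks, owns_strictly, half_sew_store, stacks):
        half_sew_store.append(hs)                        # (1) remember the half-sew
        # (2) v_prime is the forbidden-face voxel completing the sew (computed by the caller)
        for b_id, (origin, size) in sub_blocks.items():  # (3) find the sub-block owning v_prime
            if owns_strictly(v_prime, origin, size):     #     in a slice that is not forbidden
                stacks[b_id].append(v_prime)             # (4) new starting voxel for that sub-block
                break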
The parallel version of the algorithm, denoted by SF_Par( ), is obtained by changing function
Treat_Seq(v) into Treat_Par(v) as follows.

Treat_Par(v)
  get faces_v[ ] from the hash table
  for each edge e of v shared by two faces i and j of v
      such that faces_v[i] belongs to the skin
  do
      if e defines a missing sew then
          HalfSews( )
      else
          (let u and w denote the voxels, other than v, sharing edge e)
          if w is not in the object
          then
              /* in this case face j of voxel u */
              /* belongs to the skin */
          else
              /* w is in the object */
              /* in this case the opposite face to face */
              /* i in voxel w belongs to the skin */
          endif
      endif
  endfor
When all the sub-blocks have terminated their execution, each sub-skin is characterized by a
pair (F_{b_i}, R_{b_i}) and the gluing process can begin. As explained in section 5.2 the half-sews associated
with each sub-block are memorized into ordered lists. The gluing phase consists therefore in
scanning these lists in order to complete the sews. Due to the ordering of the lists according to
the coordinates of the supporting voxels, this process is linear in the length of these
lists.
The number of voxels to be treated in each sub-block obviously depends on the objects that
are considered and also on the sub-blocking. Sub-blocking may be data-driven : the whole
image is scanned in order to detect the objects before splitting the block. This method is used
by [7] to achieve load-balancing in Marching Cubes parallelization. Such a whole-block scanning
is far too costly compared to the contour following used by Sewing Faces to get good
time performance. Moreover we believe that a good load balancing is strongly related to the
domain of application. For a given type of images, for instance brain images obtained from
magnetic resonance imaging techniques, all the images to be treated are rather similar with respect
to the question of load-balancing. Therefore we think that before using Sewing Faces in a given
field, a preliminary study must be conducted on a set of standard images in order to detect
the appropriate sub-block decomposition that will result in efficient load-balancing. Such a
decomposition may be for example to split the block either into cubic sub-blocks or into slices.
7 Fine-grain parallelization
The fine-grain parallelization of Sewing Faces consists in allowing several processors to deal
with the same sub-block. Obviously input data and data structures must be shared by all the
processors running Sewing Faces on the same sub-block. In consequence the main difficulty of
such a parallelization is to protect the shared data from corruption.
Let us see in detail where corruption problems may arise :
- 3D block of input data: no possible data-corruption because it is read-only;
- hash table: when an array faces_v^new is written into the hash table, if its previous version
faces_v^old currently stored in the hash table contains values that are not stored in
faces_v^new, then the two arrays must be merged. Moreover, the 1-sews between pairs of faces
such that one belongs to faces_v^new and the other to faces_v^old (and that share an edge of v)
must be added to R. During this merging
step, access in the hash table to array faces_v[ ] by other processors must be forbidden using a
semaphore-like method (see the sketch after this list);
- stack of voxels: no possible data-corruption since the voxels are just pushed to or popped
from the common stack. Depending on the stack implementation, semaphore-like operations
may be used to guarantee the correctness of the stack information (such as its size);
- set F of faces: it is simply implemented using an array. When a new skin face f is
detected during one GetFaces( ) execution, the next available face number in F must be
read and incremented using an indivisible instruction or any other semaphore-like operation;
- set R of sews: no data corruption is possible because R is a write-only file;
- set of half-sews: it is implemented as a write-only disk file, therefore no data corruption
is possible. Moreover, the gluing process can easily deal with duplicated half-sews by
omitting them when the case arises.
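The protected hash-table merge mentioned above could look as follows. This is a simplified sketch in which a single global lock plays the role of the semaphore; the check that two faces of v actually share an edge before creating a 1-sew is omitted, and the names are illustrative only.

    import threading

    table_lock = threading.Lock()

    def merge_faces(table, v, new_faces):
        """table[v] maps a face type of voxel v to its reference in F."""
        created_1_sews = []
        with table_lock:                       # semaphore-like protection of faces_v[ ]
            old_faces = table.get(v, {})
            for t, ref in new_faces.items():
                if t not in old_faces:
                    for ref_old in old_faces.values():
                        created_1_sews.append((ref_old, ref))   # 1-sews inside voxel v
                    old_faces[t] = ref
            table[v] = old_faces
        return created_1_sews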
The fine-grain parallelization of Sewing Faces solves the load balancing problem in the general
case. All the processors executing the algorithm on the same sub-block share one stack of voxels,
one hash table, etc. As a result they all finish their execution at the same time when the
stack of voxels is empty. When the fine-grain approach is used alone without any coarse-grain
parallelization, the load-balancing is always optimal.
The fine-grain parallelization may be combined with a coarse-grain decomposition using
a cluster of shared-memory machines. The different sub-blocks are assigned to the different
machines of the cluster. On each shared-memory machine the different processors may compute
the sub-skin associated with one sub-block through the shared stack of voxels.
8 Embedding
So far we have only realized topological operations (adding faces into set F , adding sews into
set R), without considering real coordinates of the vertices. Therefore the extracted surface is a
topological surface. In order to visualize it, it must be transformed into a geometrical surface,
i.e. into a 2D mesh. Such a process is called an embedding. A face embedded in the 3D space
becomes a facet. To convert all the faces into facets, a starting point is required: the real
coordinates of the four vertices of a given face f of each border.

Table 1: Execution time related to the sequential version SF_Seq( )

  Size of the 3D block   Number of faces   Time (s.)
  260                    316656            6.0
  300                    421968            8.0

From the type of face f and
from its sews types, it is easy to deduce from which face arises each of the four faces sewed with
f . We can compute the coordinates of the four faces adjacent to f . And so on. We thus obtain
the real coordinates of all the faces of FB and we get all the facets. If the three dimensions of
the basic parallelepiped representing one voxel are integer values, embedding of the skin does
not require any computation with real values. The embedding process is O(f) where f is the
number of faces of the skin.
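The propagation can be written as a breadth-first traversal of the sews, as in the following sketch; offset(f, g, sew_type) is an assumed helper returning the integer displacement of face g relative to face f, so the embedding needs no real-valued arithmetic and visits each face once.

    from collections import deque

    def embed(start_face, start_position, sews, offset):
        """sews maps a face to the list of (face, sew type) pairs it is sewed to."""
        position = {start_face: start_position}
        queue = deque([start_face])
        while queue:
            f = queue.popleft()
            for (g, sew_type) in sews.get(f, []):
                if g not in position:
                    d = offset(f, g, sew_type)
                    position[g] = tuple(p + e for p, e in zip(position[f], d))
                    queue.append(g)
        return position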
9 Results
We have already proved [6] that the sequential version of Sewing Faces is linear in time and
space according to the number of faces in the skin. Linearity is still achieved by the coarse-grain
and the fine-grain approach. In tables 1, 2 and 3 we focus on the execution time of the coarse-grain
version versus the sequential version. The input data consist of digital balls of growing size
(from 230 3 to 300 3 ). Each image contains only one object whose skin is made of only one border.
The starting voxels are automatically detected by a dichotomous algorithm. Sub-blockings are
obtained by splitting each block in 8 sub-blocks b i=0;7 with identical sizes. Tests summarized
in tables 1, 2, and 3 were realized on an Intel Pentium 133 MHz computer running under Linux.
Indicated times are user + system times.
Table 1 gives the results obtained with the sequential version of Sewing Faces. The
number of faces and the elapsed time (in seconds) are indicated. Indicated times include the
block loading, the research of the starting voxel and the skin building.
Table 2 shows the elapsed time of the coarse-grain version of Sewing Faces on any elementary
sub-block. The measured time includes the sub-block loading, the research of the starting voxel,
the sub-skin building and the half-sews detection. The third column indicates the elapsed time
related to the gluing process. Finally the last column shows the theoretical time obtained on a
multi-processor architecture where all sub-blocks are processed altogether at the same time. It
is obtained by adding the gluing time and the elementary sub-block time.
Table 3 points out the time saved by the coarse-grain approach and shows it as a percentage.
The last column indicates the speed-up factor due to the coarse-grain approach and underlines
the fact that using 8 processors we get a speed-up factor of about 5.
Let us now study the influence of the sub-blocking on the parallel computation time. The
efficiency obviously depends on the number and the structure of the sub-blocks.

Table 2: Execution time related to the coarse-grain parallel version SF_Par( )

  Size of the 3D block   Time to compute on a sub-block (s.)   Time of the gluing phase (s.)   Total time (s.)
  300                    1.1                                   0.5                             1.6

Table 3: Speed-up obtained with the coarse-grain parallelization

  Size of the 3D block   Time saved (%)   Speed-up
  260                    80.00            5.0
  300                    80.00            5.0

If the number of sub-blocks is too large, the gluing phase will become more time-consuming due to the increasing
number of missing sews. This problem has been studied with a synthetic block of size 225 × 300 × 225,
representing a bar of size 210 × 300 × 210. The block is decomposed into vertical
sub-blocks of equal width. In order to analyze the influence of the number of sub-blocks on the
efficiency of the parallel version of Sewing Faces, we have executed it on the above-mentioned
block, increasing the number of sub-blocks. This experiment has been realized on a SGI R4400.
Figure
9 shows the experimental results. It indicates that the computation time decreases with
the number of sub-blocks until we reach 12 sub-blocks. With more than 12 sub-blocks the
computation time remains almost the same. We may take advantage of the lack of penalty
in terms of computation time when increasing the number of sub-blocks, to execute Sewing
Faces on more than 12 sub-blocks in case of memory limitation. The experiment has also been
conducted on brain images (cf. figure 11) using cubic sub-blocks of equal size. The previous
results are confirmed : above a certain number of sub-blocks, 27 in this case as indicated in
figure 10, the computation time does not decrease anymore due to the increasing number of
missing sews to glue. Figure 12 shows the result for a decomposition into two sub-blocks.
10 Conclusions
We have proposed in this paper a parallel version of a surface reconstruction algorithm. Our
goal is not only to increase time performance but also to deal with the large 3D images that are now
available in practical fields such as medical imaging.

Figure 9: Parallel computation time for the bar (computation time versus number of sub-blocks)

This parallelization is based on
a sub-block decomposition. In order to compute the skin using a contour following approach, a
two-slices wide overlapping between adjacent sub-blocks is required. Moreover sub-blocks must
communicate with their neighbors to guarantee the computation of the whole skin. The initial
choice of the data structures used by the sequential version of Sewing Faces (one stack of voxels
and one hash table for voxel faces) allows to easily and fully parallelize it, using a coarse-grain
and/or a fine-grain approach. Moreover the notions of border, composed object and sub-block
overlapping cause the parallel version of Sewing Faces to be very flexible. Notice moreover that
the sub-block based algorithm may also be useful on a mono processor machine to deal with
blocks of data that are too large to fit in memory. Experimental results have shown the efficiency
of the parallel version of the algorithm, both on synthetic balls data and on brain images.
--R
Octrees for Faster Isosurface Generation.
Digital Topology
Marching Cubes
Sewing Faces
Wu Digital Surfaces.
Allen Van Gelder and Jane Wilhelms Topological Considerations in Isosurface Generation.
--TR | computer graphics;parallel applications;3D digital images;coarse and fine-grain parallelization;bounding surfaces reconstruction |
285191 | Linearly Derived Steiner Triple Systems. | We attach a graph to every Steiner triple system. The chromatic number of this graph is related to the possibility of extending the triple system to a quadruple system. For example, the triple systems with chromatic number one are precisely the classical systems of points and lines of a projective geometry over the two-element field, the Hall triple systems have chromatic number three (and, as is well-known, are extendable) and all Steiner triple systems whose graph has chromatic number two are extendable. We also give a configurational characterization of the Hall triple systems in terms of mitres. | Introduction
In November of 1852 Steiner [32] posed an infinite series of questions concerning
what are now known as Steiner triple systems. The second of these
questions asked whether or not it was always possible, given a Steiner triple
system on n points, to introduce n(n − 1)(n − 3)/24 4-subsets of the underlying
set of the given triple system with the property that no one of these
4-subsets contained a triple and, on the other hand, each of the 3-subsets of
the underlying set not a triple were in one of the chosen 4-subsets (necessarily
unique). In other words he asked whether or not every Steiner triple system
extends to a Steiner quadruple system. Triple systems that do extend are
now known as derived triple systems. 1
Despite an enormous effort, especially in the last twenty years, not a great
deal of progress has been made in answering this second question, but it is
known that all Steiner triple systems on fifteen or fewer points are derived
[10, 25] - a result obtained by computer attack.
We here introduce a rather restricted notion of "derived" for Steiner triple
systems. The ideas were inspired, indeed flowed naturally from, the binary
view of Steiner triple systems taken in [2]. We call a triple system possessing
such an extension to a Steiner quadruple system linearly derived . Although
The author wishes especially to thank Paul Camion and Pascale Charpin. The
research atmosphere that they have created at Projet Codes, INRIA surely contributed to
this investigation, the bulk of which took place during the Spring of 1995 while the author
was a visitor there.
Apparently Pl-ucker in 1835 had not only defined triple systems but had even asked
about extending them to quadruple systems and Pl-ucker's work may have been the inspiration
for Steiner's paper. See [37].
not all Steiner triple systems are linearly derived, all the geometric 2 systems
are - and many, many others too.
If a Steiner triple system is linearly derived so are any subsystems it may
possess. Hence whenever one finds a Steiner triple system that is not linearly
derived one knows that any system of which it is a subsystem cannot be
linearly derived.
To find a "linear extension" of a Steiner triple system one restricts one's
attention to those 4-subsets of the underlying set of the system that are
symmetric differences of two triples - that is, those 4-subsets that are the
support of the binary sum of the incidence vectors of two triples. We call
such 4-subsets linear 4-subsets.
Being linearly derived is related to the existence of quadrilaterals (the
unique four-line configurations on six points) and non-Fano planes (the
unique six-line configurations on seven points) in the triple system. In par-
ticular, the non-Fano planes of the system are in one-to-one correspondence
with those linear 4-subsets of the system that can be seen in three distinct
ways as the symmetric difference of two triples.
In order to more easily express and understand the results we present,
we attach a graph to each Steiner triple system. The vertices of the graph
are the linear 4-subsets of the system with two joined by an edge if the two
4-subsets intersect in three points. The geometric systems of points and lines
of projective spaces over F 2
are characterized as those Steiner triple systems
whose graph is monochromatic (that is, there are no edges).
Among the quadrilateral-free Steiner triple systems one finds the Hall
triple systems (which can be viewed as generalizations of the geometric systems
of points and lines of the affine spaces over F 3
We show that these
systems have graphs with chromatic number three and that they are linearly
derived. In fact, any quadrilateral-free Steiner triple system whose graph
has chromatic number three will be linearly derived, but it may be that the
Hall triple systems are characterized among the quadrilateral-free systems as
those whose graph has chromatic number three.
There are Steiner triple systems whose graphs have chromatic number
two; they are linearly derived and seem to form an interesting class of systems,
which, as far as we know, have never been considered. A linear extension
Following [5] we call a Steiner triple system geometric if it is the system of lines of
a projective geometry over the binary field or an affine geometry over the ternary field.
of such a system can be constructed from the quadrilaterals and non-Fano
planes of the systems - in fact, from the quadrilaterals alone. Indeed, one
of the most surprising outcomes - to the author at least - of this investigation
was the realization that if a Steiner triple system contains enough
Fano planes and proper non-Fano planes, then it is linearly derived. This
result, Theorem 4.3, may be viewed as a generalization of the Stinson-Wei
[33] result characterizing the geometric binary Steiner triple systems in terms
of the number of quadrilaterals.
Probably the vast majority of systems have graphs with chromatic number
greater than three. Since the maximum possible degree of the graph of
a Steiner triply system is eight, it follows from Brooks's theorem [6, Theorem
8.4] that the chromatic number of the graph is at most eight. We do not
have an example of a linearly derived Steiner triple system whose graph has
chromatic number greater than three. The cyclic system of order 5 (on 13
points) has chromatic number greater than three and is not linearly derived;
the quadrilateral-free system of order 6 (on 15 points) also has chromatic
number greater than three.
Mitres, the unique five-line configurations on seven points with two "par-
allel" lines, play a role in the discussion to follow. We also give a configurational
characterization of the Hall triple systems couched in terms of mitres.
Just as the geometric binary systems are characterized as those systems with
the maximum number of quadrilaterals, the Hall triple systems are characterized
as those systems with the maximum number of mitres.
Only an elementary knowledge of design theory (such as can be found,
for example, in Chapter 1 of [4]) and graph theory are necessary in order to
read this paper. Any of the standard references on design theory and graph
theory will contain the necessary background. Moreover, we define most
notions as they are introduced. The only departure from the nomenclature
of the vast existing literature on Steiner triple systems is that we use "order"
in the design-theory sense; it does not denote the number of points n of the
triple system, but rather (n \Gamma 3)=2.
2 Linear 4-subsets and their graph
Recall that a 4-subset of the underlying set of a Steiner triple system is
linear if it is the symmetric difference of two triples. The name comes from
the binary view of Steiner triple systems: a linear 4-subset is the support of
a weight-four vector in the binary code of the triple system that is the sum
of the incidence vectors of two triples.
Linear 4-subsets come in three flavors according to whether they arise
just one way, two ways, or three ways as a symmetric difference (or sum).
Clearly three is the maximum number, since a 4-set splits in exactly three
ways as the disjoint union of two 2-subsets. We will distinguish these linear
4-subsets by calling them singly-linear, doubly-linear and triply-linear,
respectively.
A triangle in a Steiner triple system is simply a 3-subset of the underlying
set which is not a triple of the system. A Steiner triple system on n points has exactly n(n − 1)(n − 3)/6
triangles. A triangle can be expanded to a linear 4-subset in three ways,
clearly, but the resulting linear 4-subsets might not differ.
In fact, a triangle is a subset of one, two, or three linear 4-subsets so
triangles, too, come in three flavors: a Type I triangle is contained in exactly
one linear 4-subset; a Type II triangle is in exactly two linear 4-subsets; a
Type III triangle is in exactly three linear 4-subsets.
It is a simple matter to check that all four 3-subsets of a linear 4-subset
are triangles, that all four 3-subsets of a triply-linear 4-subset are of Type I,
and that all four 3-subsets of a doubly-linear 4-subset are of Type II. But
a singly-linear 4-subset could contain both Type II and Type III triangles.
Sometimes a singly-linear 4-subset contains only Type II triangles. In fact a
singly-linear 4-subset contains only Type II triangles if and only if its valency
in the graph is four. Of course, in a quadrilateral-free Steiner triple system
all linear 4-subsets are singly-linear and contain only Type III triangles. In
such systems, the valency of every vertex is eight.
denote the set of singly-linear 4-subsets, L 2
denote the set of
doubly-linear 4-subsets, and L 3
denote the set of triply-linear 4-subsets. Sim-
ilarly, \Delta I
will denote the set of Type I triangles, \Delta II
will denote the set of
Type II triangles, and \Delta III
will denote the set of Type III triangles.
The number of linear 4-subsets of a Steiner triple system on n points
depends, in general, not only on n but on the system. There are, however, precisely n(n − 1)(n − 3)/8
ways to form the symmetric difference of two triples. The following lemma
summarizes the easily proven relationships among the cardinalities of the sets
just described and some simple consequences of these relationships.
Lemma 2.1 1. |L_1| + 2|L_2| + 3|L_3| = n(n − 1)(n − 3)/8.
2. |Δ_I| = 4|L_3|.
3. |Δ_II| = 4|L_2|.
4. |Δ_I| + |Δ_II| + |Δ_III| = n(n − 1)(n − 3)/6.
5. 4(|L_1| + |L_2| + |L_3|) = |Δ_I| + 2|Δ_II| + 3|Δ_III|; in particular, |L_2| ≤ |L_1|.
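These quantities are easy to compute directly from the definitions; the following small Python routine (written for this exposition, and for clarity rather than speed) classifies the linear 4-subsets of a system given as a list of triples and returns |L_1|, |L_2|, |L_3|.

    from itertools import combinations

    def linear_classes(triples):
        triples = [frozenset(t) for t in triples]
        splittings = {}                              # linear 4-subset -> number of splittings
        for a, b in combinations(triples, 2):
            if len(a & b) == 1:                      # two triples meeting in a single point
                s = frozenset(a ^ b)                 # their symmetric difference
                splittings[s] = splittings.get(s, 0) + 1
        counts = [0, 0, 0, 0]
        for k in splittings.values():
            counts[k] += 1
        return counts[1], counts[2], counts[3]       # |L_1|, |L_2|, |L_3|

    # The Fano plane has no singly- or doubly-linear 4-subsets and seven triply-linear ones.
    fano = [(1,2,3), (1,4,5), (1,6,7), (2,4,6), (2,5,7), (3,4,7), (3,5,6)]
    assert linear_classes(fano) == (0, 0, 7)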
A quadrilateral of a Steiner triple system is the four-line configuration 3
containing exactly six points. Here is the configuration:
s
s
s
s
Doubly-even 4-subsets are obviously related to quadrilaterals but the notions
are different: a quadrilateral gives rise to three doubly-linear 4-subsets
and the triply-linear 4-subsets complicate the relationship still further. On
the other hand a quadrilateral-free Steiner triple system is the same as a
Steiner triple system all of whose linear 4-subsets are singly-linear. In fact,
"linearity" is a somewhat finer notion. Here is the easily proven relationship
between quadrilaterals and linearity:
Proposition 2.2 The number of quadrilaterals in a Steiner triple system is (1/3)|L_2| + |L_3|
and, in particular, the number of doubly-linear 4-subsets of a Steiner triple
system is divisible by 3.
3 We are using "line" rather than "triple" here to conform with common usage. The
four-line configurations of a Steiner triple system have been catalogued and counted by
Grannell, Griggs and Mendelsohn [13] and the number of quadrilaterals determines the
number of all other four-line configurations.
Corollary 2.3 The number of singly-linear 4-subsets of a Steiner triple system
is divisible by 3.
Proof: By Lemma 2.1 we have that |L_1| = n(n − 1)(n − 3)/8 − 2|L_2| − 3|L_3|; since both
n(n − 1)(n − 3)/8 and, by Proposition 2.2, |L_2| are divisible by 3, the result is obvious. □
A triply-linear 4-subset of a Steiner triple system corresponds precisely to
a configuration isomorphic to a non-Fano plane. Such a configuration is the
six-line configuration on exactly seven points and can be viewed as the usual
Fano plane with a line removed; as a linear space, it plays an important role
in coordinatization problems - see Delandtsheer, [9, Page 197]. Here is
the picture, the triply-linear 4-subset being the complement of the missing
line (the vertices of the large triangle plus the central point):
s
s
s
s
Observe that the triply-linear 4-subset given by the non-Fano plane is the
support of the binary sum of the incidence vectors of the six lines of the
configuration.
In order to easily state some of the results we wish to discuss we attach a
graph 4 to each Steiner triple system. Since no other graphs will appear in the
paper we simply call it the graph of the Steiner triple system. The vertices
of the graph are the linear 4-subsets and two vertices will be connected by
an edge if the two linear 4-subsets intersect in three points. The vertices of
the graph corresponding to the elements of L 3
are clearly isolated vertices
and there are no edges between vertices corresponding to elements of L 2
There will be edges between elements of L 2
and elements of L 1
, however.
4 There are at least two other ways to attach a graph to a Steiner triple system. Perhaps
the most interesting of these other graphs is the "block graph", which is a strongly-
regular graph. For an account of this graph the reader may wish to consult Peeters, [28,
Section 3.1]. The graph we are introducing is quite different and should not be confused
with the others.
In fact, the valency of an element of L 2
is clearly four. Because every Type
III triangle of a Steiner triple system gives rise to a 3-clique of the graph,
implies, by Lemma 2.1, that the graph is at least 3-chromatic; in
particular, the graph of any quadrilateral-free Steiner triple system is at least
3-chromatic. As we shall see, the Hall triple systems do have 3-chromatic
graphs.
Since every triangle is contained in a linear 4-subset and no linear 4-
subset contains a triple of the system, it is at least conceivable that one
might find among the linear 4-subsets the means to extend the triple system
to a quadruple system. In terms of the graph of the system, an extension of
a given Steiner triple system found among the linear 4-subsets, is the same
thing as a stable 5 subset of the vertices of its graph with cardinality precisely24
Because a collection of linear 4-subsets will form an extension provided there
are enough with no two intersecting in three points, the graph of a Steiner
triple system can have a stable subset of vertices of cardinality at most 624
We call a Steiner triple system that posesses such an
extension linearly derived and the collection of 1n(n linear
4-subsets a linear extension. Since L 2 [L 3
is always a stable subset of the
graph of a Steiner triple system, we have the following easy
Proposition 2.4 For a Steiner triple system on n points, |L_2| + |L_3| ≤ n(n − 1)(n − 3)/24.
In the case of equality L_2 ∪ L_3 is a linear extension and the system is linearly
derived.
Remarks: 1) In fact, equality does occur for some Steiner triple systems. As
we shall see, Corollary 4.2, they are precisely those systems whose graph has
5 A stable subset of a graph is a set of vertices with no two connected by an edge; such
a subset is also sometimes called an independent set of vertices. A stable subset of a graph
is the same as a clique of the complementary graph.
6 In general it is "hard" - in the technical, computer-science sense - to compute
the maximum number of vertices in a stable subset of a graph and hard to compute the
chromatic number also. For an expository discussion of these matters see Knuth [18].
chromatic number one or two. 2) Since the triangles of Type I are in a unique
linear 4-subset - which is, in fact, triply-linear - any linear extension of a
Steiner triple system must contain all triply-linear subsets; that is, L 3
must
be a subset of any linear extension.
Observe that a linear 4-subset is always among the points of the subsystem
generated by any one of its triangles. It follows that a linear extension of
any Steiner triple system automatically yields linear extensions of all of its
subsystems. Since the cyclic Steiner triple system of order 5 (on 13 points)
is, as we shall see, not linearly derived, there will be many triple systems
without linear extensions.
In the next section we investigate the geometric Steiner triple systems
and show not only that they are linearly derived but that their graphs have
the smallest possible chromatic numbers.
3 Linear extensions of geometric systems
The Steiner triple systems we will deal with in this section are either the
systems of points and lines of a projective geometry over the binary field
or Hall triple systems - i.e., Steiner triple systems each of whose triangles
is contained in a subsystem isomorphic to the affine plane of order 3. We
begin with a characterization, in terms of the graph, of the binary geometric
systems. This result, although different, is equivalent to Theorems 3.1 and
3.2 in [33].
Proposition 3.1 If a Steiner triple system has no singly-linear 4-subsets
then it has precisely n(n − 1)(n − 3)/24 linear 4-subsets and these 4-subsets form
a linear extension. Moreover, the triple system is the system of points and
lines of a projective geometry over the binary field and its graph is monochromatic
Proof: By Lemma 2.1 (since |L_2| ≤ |L_1|), if L_1
is empty so is L_2. It
. It
follows that all linear subsets are triply-linear and that all triangles are of
Type I. Hence each triangle is contained in a unique linear 4-subset and we
have the necessary linear extension. The extension is simply given by L 3
and
the graph has no edges, i.e., is monochromatic. That the system must be
a geometric binary system follows rather easily. One can simply apply the
Theorem 3.2] or see below - since the number of
quardilaterals is maximal by Proposition 2.2, or use [15, 16].2
Corollary 3.2 The graph of a Steiner triple system is monochromatic if and
only if it is a geometric binary system.
Proof: It follows easily from the discussion presented with the description
of the graph that, if the graph is monochromatic, then jL and we have
the geometric system. On the other hand, in a geometric binary system of
points and lines of a projective geometry every triangle generates a Fano
plane and every linear 4-subset is triply-linear. In particular the graph is
monochromatic.2
The characterization of the binary geometric systems due to Stinson and
Wei [33, Theorems 3.1 and 3.2] is the following:
A Steiner triple system on n points has at most n(n − 1)(n − 3)/24
quadrilaterals with equality if and only if it is the geometric system of points
and lines of a projective geometry over the binary field.
From the point of view of [2], such a characterization should have a
"ternary" analogue - and indeed it does. Since the characterization is via
local properties, however, it is not the systems of points and lines of affine
geometries over the ternary field that are characterized but, instead, the Hall
triple systems - that is, systems that are locally like the affine plane of order
3. In the ternary analogue, quadrilaterals are replaced by mitres. 7 A mitre is
the five-line configuration with exactly seven points and two disjoint triples:
s
s
Here is the characterization:
7 The terminology is due to Andries Brouwer.
Theorem 3.3 A Steiner triple system on n points has at most n(n − 1)(n − 3)/12
mitres with equality if and only if it is a Hall triple system.
Proof: We first describe a pre-mitre - which can be viewed as a mitre
with one of the disjoint triples removed. Here is the picture:
s
s
s
s
Now, the four-line configuration above may or may not complete to a mitre.
The important point is that every mitre gives rise to exactly two such configurations
and that the number of such configurations is easily counted (see
[13] where pre-mitres are simply the four-line configurations of type C 15
There are precisely n(n − 1)(n − 3)/6 pre-mitres in every Steiner triple system
on n points. This immediately gives the inequality with equality if and only
if every pre-mitre completes to a mitre. Since every triangle embeds in a
pre-mitre - in at least three ways, in fact - starting with a triangle and
using the fact that all pre-mitres complete to a mitre, it is an easy exercise
to show that every triangle is contained in a subsystem isomorphic to an
affine plane of order 3, that is, that the triple system is a Hall triple system.
On the other hand, given a Hall triple system every pre-mitre must be in an
affine plane of order 3 and hence must complete to a mitre, since the three
points that should be collinear must be on one of the two lines parallel to
the base of the pre-mitre.2
The proof above also gives the following
Corollary 3.4 A Steiner triple system in which every pre-mitre completes
to a mitre must be a Hall triple system.
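For small systems this count can be checked mechanically. In the sketch below (again written for this exposition, not taken from the paper) a pre-mitre is generated as a line L together with a point c off L, and it completes to a mitre exactly when the three points third(c, a), a in L, form a line; each mitre arises from exactly two pre-mitres.

    from itertools import combinations

    def mitre_counts(triples, points):
        triples = [frozenset(t) for t in triples]
        lines = set(triples)
        third = {}
        for t in triples:
            for a, b in combinations(t, 2):
                (c,) = t - {a, b}
                third[frozenset((a, b))] = c
        pre_mitres = completed = 0
        for L in triples:
            for c in points:
                if c not in L:
                    pre_mitres += 1
                    tips = frozenset(third[frozenset((c, a))] for a in L)
                    if tips in lines:
                        completed += 1
        return pre_mitres, completed // 2   # every mitre arises from exactly two pre-mitres

On the affine plane of order 3 this returns 72 pre-mitres and 36 mitres, the extremal values of Theorem 3.3.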
We next want to show that the geometric systems of points and lines of
an affine geometry over the ternary field are linearly derived. We show more:
that Hall triple systems are linearly derived and, in fact, we show that the
graph of a Hall triple system is 3-chromatic, which is an apparently stronger
property.
First note that a Hall triple system cannot contain a quadrilateral since
affine planes of order 3 do not contain quadrilaterals. Thus, in this case we
have that both L 2
and L 3
are empty, with all linear 4-subsets singly-linear
- in contrast to the binary geometric systems where all linear 4-subsets
were triply-linear. That is, Hall triple systems are quadrilateral-free. There
are certainly many, many other quadrilateral-free Steiner triple systems, the
first non-Hall system occuring at order system #80 of the eighty triple
systems on 15 points. Andries Brouwer proved that quadrilateral-free
Steiner triple systems existed for all n j 3 (mod 6); this was independently
shown by Griggs, Murphy and Phelan in [14], where a brief history of the
problem is given. Griggs has constructed systems for many
and, moreover, conjectures that quadrilateral-free Steiner triple systems exist
for all admissible orders except 2 and 5.
Observe that the graph of a quadrilateral-free Steiner triple system is
regular of valency 8, contains 3-cliques, but does not contain a 4-clique. Its
chromatic number is, of course, at least 3. We first show that, if it is 3, then
it is linearly derived.
Proposition 3.5 In a quadrilateral-free Steiner triple system whose graph
has chromatic number 3, the linear 4-subsets split into three disjoint subsets,
each of which yields a linear extension of the triple system.
Proof: It follows from Lemma 2.1 that a quadrilateral-free Steiner triple
system on n points has precisely n(n − 1)(n − 3)/8
linear 4-subsets. Since the cardinality of a stable subset of its graph is at
most n(n − 1)(n − 3)/24, whenever the graph is 3-chromatic the linear 4-subsets must split into three
stable subsets of cardinality n(n − 1)(n − 3)/24, each corresponding to one of
the three colors and each yielding a linear extension. □
We next show that the graphs of Hall triple systems have chromatic number
three by splitting their linear 4-subsets into three disjoint linear exten-
sions. We do not have an example of a quadrilateral-free Steiner triple system
whose graph is 3-chromatic that is not a Hall triple system.
Theorem 3.6 The linear 4-subsets of a Hall triple system on n points split
into three disjoint subsets, each of cardinality n(n − 1)(n − 3)/24, with each
yielding a linear extension. In particular, the graph of a Hall triple system
is 3-chromatic.
Proof: Simple counting shows that there are precisely fifty-four 4-subsets of
an affine plane of order 3 that do not contain a line. All of these must be
linear 4-subsets since an affine plane of order 3 is quadrilateral-free and hence
contains fifty-four linear 4-subsets. It is well-known (see, for example, [22,
Section 7] for a direct proof from first principles or use the small Mathieu
design to get the desired splitting) that these fifty-four 4-subsets split into
three extensions - which are therefore necessarily linear extensions. We now
apply an idea first used in [3]. It can be applied to our present situation either
through [1] or [17] to split the linear 4-subsets of any Hall triple system into
the required three disjoint pieces. Briefly stated, the idea is that the linear
4-subsets all occur in subsystems of affine planes of order 3 and that one can
piece together the extensions of these planes to get linear extensions of the
Hall triple system.2
Thus we have shown that the geometric binary system of points and lines
of a projective geometry over the binary field has a linear extension as does
the geometric ternary system of points and lines of an affine space over the
ternary field. Moreover, we have characterized the binary geometric system
in terms of the graph and shown that, in this context, the Hall systems
are a natural generalization of the system of points and lines of the affine
geometry in the sense that they have linear extensions also and that their
graphs have chromatic number three. We do not have any other examples of
quadrilateral-free Steiner triple system whose graph has chromatic number 3.
We are indebted to Wendy Myrvold for the following computer-generated,
but easily hand-checked, proof that the graph of the unique quadrilateral-free
Steiner triple system of order 6 (on 15 points) has chromatic number greater
than 3.
Example: The unique quadrilateral-free Steiner triple system on 15 points
can be taken to be the cycles of the following three triples:
There are twenty-one cycle classes of (necessarily singly-linear) 4-subsets.
The reader can check, step-by-step, that at least four colors are necessary
by examining the vertices in the following order: {1, 2, 8, 12}, {2, 4, 8, 12}, ...
Of course, one must first check that the above 4-subsets are linear, but that
too is easily done by hand.
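Such hand checks can also be automated. The sketch below (illustrative code, not from the paper) builds the graph of a system from its triples and decides k-colourability by plain backtracking, which is usually adequate for graphs of this size although it lacks the orderings and pruning one would use on larger instances.

    from itertools import combinations

    def sts_graph(triples):
        triples = [frozenset(t) for t in triples]
        verts = sorted({frozenset(a ^ b) for a, b in combinations(triples, 2)
                        if len(a & b) == 1}, key=sorted)
        edges = [(i, j) for i, j in combinations(range(len(verts)), 2)
                 if len(verts[i] & verts[j]) == 3]
        return verts, edges

    def k_colourable(n_verts, edges, k):
        adj = [[] for _ in range(n_verts)]
        for i, j in edges:
            adj[i].append(j)
            adj[j].append(i)
        colour = [-1] * n_verts
        def place(v):
            if v == n_verts:
                return True
            for c in range(k):
                if all(colour[u] != c for u in adj[v]):
                    colour[v] = c
                    if place(v + 1):
                        return True
            colour[v] = -1
            return False
        return place(0)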
4 Chromatic number two
There are Steiner triple systems whose graphs have chromatic number 2. We
characterize these in the following result. It is a case in which the inequality
of Lemma 2.1 is an equality - thus also an extremal case.
Theorem 4.1 The graph of a Steiner triple system has chromatic number 2
if and only if it has singly-linear 4-subsets and the number of singly-linear 4-
subsets equals the number of doubly-linear 4-subsets. Moreover, in this case,
the graph, when its isolated points are removed, is a regular bipartite graph
of valency 4 and the Steiner triple system has at least two linear extensions,
one consisting of the singly-linear and triply-linear 4-subsets and the other of
the doubly-linear and triply-linear 4-subsets.
Proof: Since the presence of Type III triangles implies the existence of 3-cliques in the graph of a
Steiner triple system, for the graph to be 2-chromatic we must have Δ_III = ∅. Putting L_3
aside, it is a simple matter to check directly from the definitions that the graph on L_1 ∪ L_2
is bipartite with parts L_1 and L_2 and is regular of valency 4. It follows immediately from
Lemma 2.1 that |L_1| = |L_2|, and since there are no edges among the vertices in L_2, the set
L_2 ∪ L_3 forms a linear extension. Since there are no edges among the vertices in L_1 either,
L_1 ∪ L_3 also yields a linear extension. It is also clear that when |L_1| = |L_2| ≠ 0, the set Δ_III
is empty and all the above obtains. □
Remark: If the bipartite subgraph of the graph of the Steiner triple system
is disconnected, then there will definitely be more than two linear extensions
since, for each connected component, one can choose either of the parts.
Examples: 1) The Steiner triple systems #2, #3 and #16 in the usual ordering
([23]) of the 80 on 15 points satisfy the conditions of the Theorem and
hence have the announced linear extensions. 8 We are indebted to Vladimir
Tonchev for making the necessary computations for both this example and
the one that follows (which is easily done by hand). In fact, he computed
the data listed in Section 5 giving, for each of the 80 systems, not only the
number of linear 4-subsets of each type but also, by Proposition 2.2, the number
of quadrilaterals. The number of triply-linear 4-subsets is the number
of non-Fano planes contained in the Steiner triple system as six-line configurations
on seven points - as we have already remarked. For the systems
of full 2-rank (#23 9 through #80 except for #61) none of these non-Fano
planes complete to a Fano plane. For the systems #1 through #22 and #61
one can use the 2-rank to compute easily the number of non-Fano planes
that do not complete to a Fano plane - since the number of Fano planes is
is the corank, a Fano plane being a maximal subsystem in
this case. For example, #61 has no such non-Fano planes while #2, which
is of 2-rank 12 and hence has seven Fano planes, has eight non-Fano planes
which do not complete to a Fano plane. The general results presented above
show that four of the eighty Steiner triple systems of order 6 are linearly
derived and explicitly give the linear extensions. But all of them are known
to be derived and the history of the proof of that fact is interesting. As recently
as 1980 the possibility of a computer attack on the eighty systems was
8 These are precisely those systems which are both mitre-free and possess singly-linear
4-subsets, [7, page 215].
9 In [23, Page 42] it is incorrectly reported (see [24], where this error and several misprints
are recorded) that system #23 has a subsystem; in fact, it has none, but it does
have four non-Fano planes. The same reference missed the occurrence - apparently by
a misreading of the beautiful classification done by hand over seventy-five years ago by
White, Cole and Cummings [38] - of non-Fano planes in nine of the systems, each of
which has a single non-Fano plane. This omission was also observed by Grannell and
Griggs, [12, Page 187], who give the nine systems and the single non-Fano plane in each
one.
deemed "not feasible" (see [29, Page 113]) but Grannell and Griggs [12]
rather quickly made progress via computer and then the remaining fourteen
cases were despatched with a computer attack by Diener, Schmitt and de
Vries [10]. See the Introduction to [10] for a slightly more detailed history
and the body of that paper for a short description of the methods used. 2)
The two Steiner triple systems of order 5 (on 13 points) have the following
"linear" structure:
  System     Singly-linear   Doubly-linear   Triply-linear   Quadrilaterals
The cyclic system is mitre-free [7, page 221]. Wendy Myrvold generated,
by computer, the following easily hand-checked proof that the chromatic
number of the cyclic system is at least 4. The system can be taken to be the
cycles of
and, by examining the vertices of the graph in the following order, one
sees that at least four colors are needed: {2, 3, 4, 8}, {2, 3, 4, 6}, {3, 4, 6, 8}, ...
These systems
are known to be derived (see [25]) but not linearly. In fact, Myrvold
ran her clique program on the cyclic system which gave 52 as the cardinality
of the maximal stable subset. It follows from this computer result that no
Steiner triple system containing a subsystem isomorphic to the cyclic system
of order 5 can be linearly derived.
Corollary 4.2 For a Steiner triple system on n points we have that |L_2| + |L_3| ≤ n(n − 1)(n − 3)/24,
with equality if and only if its graph has chromatic number 1 or 2. Moreover,
in the case of equality both L_1 ∪ L_3 and L_2 ∪ L_3 yield linear extensions.
Remark: It is at least theoretically possible for a Steiner triple system to
have lots of quadrilaterals, but be non-Fano-free - that is, have no non-
Fano planes. It seems unlikely that such systems exist since the non-Fano
planes of a system produce many of its quadrilaterals in known examples.
We do, however, have the following, perhaps empty, result, which, in any
case, does give a sharper upper bound on the number of quadrilaterals in the
non-Fano-free case than that given, in general, by the Stinson-Wei result.
Suppose a Steiner triple system on n points contains no non-Fano planes.
Then the number of quadrilaterals is at most n(n − 1)(n − 3)/72,
with equality if and only if the chromatic number of its graph is 2. Moreover,
in the case of equality, both L 1
and L 2
yield linear extensions.
The proof is simple: Since L 3
is empty and jL
Proposition 2.2 implies the inequality. We have equality if and only if
Proposition 2.2 and Lemma 2.1. The
graph is bipartite, the parts being L 1
and L 2
and either part yields a linear
extension. One can phrase this characterization entirely in terms of quadri-
laterals. Observe first that in any Steiner triple system each quadrilateral has
three 4-subsets of its six points that do not contain a line of the quadrilateral.
These subsets will be either doubly-linear or triply-linear. In the following
result we call these subsets the 4-subsets given by the quadrilateral.
A Steiner triple system without non-Fano planes has a graph with chromatic
number 2 if and only if the collection of those 4-subsets given by the
quadrilaterals of the system forms an extension.
We now suppose given an arbitrary Steiner triple system on n points.
Let F denote the number of Fano planes contained in the system and N the
number of proper non-Fano planes, that is, non-Fano planes which do not
complete to a Fano plane of the system. Then the number of triply-linear
4-subsets is 7F + N . Since every non-Fano plane which does not complete
to a Fano plane gives rise to 6 doubly-linear 4-subsets, we have the following
consequence of Lemma 2.1, Proposition 2.2 and Proposition 2.4:
Theorem 4.3 Suppose given a Steiner triple system on n points containing
F Fano planes and N proper non-Fano planes. Then F + N ≤ n(n − 1)(n − 3)/168,
and the number of quadrilaterals of the system is at least 7F + 3N. Moreover,
equality implies that the Steiner triple system has a graph with chromatic
number 1 or 2 and precisely 7F + 3N quadrilaterals: either N = 0 and it is
the geometric system of points and lines of a projective geometry over the
binary field, or else N ≠ 0, |L_1| = |L_2| = 6N, with L_2 ∪ L_3
and L_1 ∪ L_3
providing
linear extensions.
Proof: Since |L_2| ≥ 6N, |L_3| = 7F + N and, by Proposition 2.4, |L_2| + |L_3| ≤ n(n − 1)(n − 3)/24,
we have 7F + 7N ≤ n(n − 1)(n − 3)/24, and hence the inequality. Proposition 2.2 underbounds the number of quadri-
laterals. Equality forces |L_2| = 6N and |L_2| + |L_3| = n(n − 1)(n − 3)/24, and the rest of the Theorem follows from Corollary 4.2. □
Remarks: 1) Observe that each of the three systems on 15 points that have
graphs with chromatic number two - #2, #3 and #16 - satisfy the bound
of the Theorem, that is F system #1 does also - as do all the
geometric binary systems. 2) Systems of of order 8 (on 19 points) cannot
furnish further examples of equality since the right-hand-side of the inequality
is not an integer, but, for systems of order 9 (systems on 21 points), the
right-hand-side is an integer presumably further examples
do exist in this case. It would be very interesting to have a suitably easy
construction of Steiner triple systems on n points, where n j 0;
with F
in any Steiner triple system on n points, it is, at least in principle, possible
that its graph might be highly disconnected consisting of the isolated vertices
comprising L 3
, a bipartite subgraph with one part L 2
and the other part jL 2 j
vertices from L 1
, with the other components of the graph, in the case in
which of chromatic number 3. Should that happen, then, as
in the Proposition 3.5 and the result above, we could extract at least six
linear extensions consisting of one-third of the vertices from the 3-chromatic
components, either one of the two parts of the bipartite subgraph and the
isolated vertices. The cyclic system of order 5 shows that this is not always
possible since, as we have seen, the chromatic number of its graph is at least
4. In fact, for such a highly disconnected graph to arise one must have
singly-linear 4-subsets all of whose 3-subsets are of Type II since each vertex
corresponding to elements of L 2
is of valency four and therefore those vertices
from L 1
making up the other part of the bipartite subgraph must also have
valency four. Moreover, it is impossible to have more than jL 2 j such singly-
linear 4-subsets and, when there is such a set of singly-linear 4-subsets, all
remaining singly-linear 4-subsets must have all their 3-subsets of Type III.
Usually, this makes it easy to check that such a decomposition of the graph
cannot occur. Neither of the two triple systems of order 5 have graphs with
such a decomposition for just this reason. We have no examples of such a
decomposition with jL it is possible that none exist.
5 Linear structure of systems of order six
We record below results of computations made by Vladimir Tonchev and
Robert Weishaar; for further information see [36]. The corank is simply
15 − r, where r is the 2-rank of the given system (the rank over the binary
field of the incidence matrix of the system). The number of Fano planes
contained in a system is simply 2^c − 1, where c is the corank. Since the
number of non-Fano planes is the same as the number of triply-linear 4-
subsets, one has that the number of proper non-Fano planes is T − 7(2^c − 1),
where T is the number of triply-linear 4-subsets and c is the corank. The
number of quadrilaterals - which we have recorded - is simply T + D/3, by
Proposition 2.2, where D is the number of doubly-linear 4-subsets.
  Number     Singly-linear   Doubly-linear   Triply-linear
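The derived columns just described can be recomputed from the corank c, the number T of triply-linear 4-subsets and the number D of doubly-linear 4-subsets, as in this small helper (illustrative only, following the formulas above):

    def derived_counts(corank, T, D):
        fano_planes = 2 ** corank - 1        # number of Fano planes
        proper_non_fano = T - 7 * fano_planes
        quadrilaterals = T + D // 3          # Proposition 2.2
        return fano_planes, proper_non_fano, quadrilaterals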
6 Quasi-linearly derived systems?
It is widely believed that every Steiner triple system is derived. Since the
problem can be reduced to those systems of full 2-rank (see [2, Corollary 6.2])
it follows from Corollary 6.3 of [2] that, if all Steiner triple systems are
derived, then a suitable set of quadruples always can be found among the
binary linear span of incidence vectors of the triples.
In this work we have demanded that the quadruples be found among the
binary sums of the incidence vectors of two triples. One could investigate
what further systems can be shown to be derived by looking at the binary
sums of the incidence vectors of either two or four triples (using only those
yielding, of course, vectors of weight four). In fact, one could envisage a
measure of how difficult it is to find a derivation - using six or fewer triples,
eight or fewer triples, etc. We have not tried to do this.
In the case of four triples we do have the study of Grannell, Griggs
and Mendelsohn [13] of four-line configurations to aid us. Of the sixteen
possible four-line configurations only three, among them C_15, give weight-four vectors. 10 One of these appears to be the most interesting candidate, being a pre-
mitre. The weight-four vector produced by such a pre-mitre will have all
four of its 3-subsets triangles if and only if it does not complete to a mitre.
Only those, of course, could be used in trying to find what might be called
a quasi-linear extension of the triple system. Since we know, by Corollary 3.4, that whenever each pre-mitre completes to a mitre we have a Hall triple
system - which has a linear extension - in any other case there will be
4-subsets coming from pre-mitres that could be used to find a quasi-linear
extension. We do not have an example of a Steiner triple system that has
no linear extension but does have a quasi-linear extension, but one presumes
that such systems exist.
Although it is believed that all Steiner triple systems are derived, one still
does not have a suitably easy construction of derived triple systems. It might
conceivably be true that for all admissible n > 13 there is a linearly derived Steiner triple system and, if one could find an easy construction, one would have an alternative to Hanani's recursive techniques for providing derived Steiner triple systems for all admissible orders.
10 The support of such a weight-four vector might, in fact, be linear. We have already noted that a triply-linear 4-subset is the support of the binary sum of the incidence vectors of the six lines of the non-Fano plane to which it corresponds.
7 An historical aside
In [13, Page 52] Grannell, Griggs and Mendelsohn wonder about the
fact that certain results on four-line configurations had been neglected in the
literature of Steiner triple systems. The same question could be asked about
the results we have here described since the ideas seem so natural, flowing
easily from either a geometric or a coding-theoretic view of the subject, and the results so easily discovered. We would like to suggest a possible explanation different from that proposed by Grannell, Griggs and Mendelsohn: an
insularity - and unreceptiveness to ideas coming from disparate disciplines
- that seems to be particularly acute among some workers investigating
Steiner triple systems.
Thoralf Skolem complained [31, Page 274] - perhaps because of the
announcement by Hanani at the 1958 International Congress of Mathematicians
- about the inattention paid by workers in the field to his appendices
to Netto's book. Indeed, his ideas on constructing triple systems 11 seem
to have been ignored for over a half century - except by members of the
Belgian school. When Skolem, probably best known for his work during the
20s and 30s on the foundations of set theory, complained in 1958, his ideas
had been in print for well over a quarter of a century in the Second Edition,
published in 1927, of the Lehrbuch der Combinatorik by Eugen Netto,
[26, Note 16]. It was nearly a quarter of a century after his complaint that
Lindner, [21, Page 184], observed that Skolem's ideas led to an "incredibly
simple" proof of the existence of Steiner triple systems on 6n
in 1987 that proof entered the English monograph literature: Steeet and
Street, [34, Chapter 5]. 12
Ever since its very infancy - in the work of Plücker on cubic curves (see de Vries [37]) - Steiner systems have been intimately related to
11 A direct and easy proof of existence of Steiner triple systems of all admissible orders
can be constructed from [27, 30, 31]. See, for example, [34, Chapter 5] for an efficient and
easily understood direct construction.
12 I am indebted to D. R. Stinson for pointing out these two sources to me. Neither of
them took account of the work of O'Keefe; the Streets became aware of O'Keefe's work
only after the publication of their book.
geometry and its configurational aspects. 13 Yet, in [29, Pages 105-106] -
the same paper that asserted the infeasibility of the use of computers in determining
whether or not the eighty systems of order six were derived - the
contributions made by finite geometry to the study of Steiner triple systems
are summarily dismissed as "of minor consequence" for the question of derivability. This despite not only the history of the subject but also the fact that
the fundamental work of Teirlinck [35] - and its elaboration by Doyen,
Hubaut and Vandensavel [11] - had already shown the importance the
geometric systems had for Steiner systems of orders other than the geometric
ones.
Insularity may be more prevalent in combinatorial mathematics (see, for
example, the historical essay by Crapo, [8]) but it clearly can also appear
in the most developed fields with even world-class mathematicians being
unreceptive to new ideas (see Lang, [19, 20]).
--R
On the binary codes of Steiner triple systems.
Graph Theory with Applications.
Dimensional linear spaces.
Hans Ludwig de Vries.
Ranks of incidence matrices of Steiner triple systems.
Derived Steiner triple systems of order 15.
A small basis for four- line configurations in Steiner triple systems
On an infinite class of Steiner systems constructed from affine spaces.
The sandwich theorem.
Mordell's review
Mordell's review
A survey of embedding theorems for Steiner systems.
Transitive Erweiterungen endlicher Permutationsgrup- pen
Small Steiner triple systems and their properties.
Small Steiner triple systems and their properties - Errata
On the Steiner systems S(3
Lehrbuch der Combinatorik.
Verification of a conjecture of
Ranks and structure of graphs.
A survey of derived triple systems.
On certain distributions of integers in pairs with given differences.
Some remarks on the triple systems of Steiner.
Combinatorische Aufgabe.
Some results on quadrilaterals in Steiner triple systems.
Combinatorics of Experimental Design.
On projective and affine hyperplanes.
Steiner triple systems of order 15 and their codes.
Hans Ludwig de Vries.
Complete classification of the triad systems on fifteen elements.
--TR
--CTR
J. D. Key , H. F. Mattson, Jr., Edward F. Assmus, Jr. (1931-1998), Designs, Codes and Cryptography, v.17 n.1-3, p.7-11, Sept. 1999 | graphs;chromatic number;derived triple systems;steiner triple systems |
285263 | Achieving bounded fairness for multicast and TCP traffic in the Internet. | There is an urgent need for effective multicast congestion control algorithms which enable reasonably fair share of network resources between multicast and unicast TCP traffic under the current Internet infrastructure. In this paper, we propose a quantitative definition of a type of bounded fairness between multicast and unicast best-effort traffic, termed "essentially fair". We also propose a window-based Random Listening Algorithm (RLA) for multicast congestion control. The algorithm is proven to be essentially fair to TCP connections under a restricted topology with equal round-trip times and with phase effects eliminated. The algorithm is also fair to multiple multicast sessions. This paper provides the theoretical proofs and some simulation results to demonstrate that the RLA achieves good performance under various network topologies. These include the performance of a generalization of the RLA algorithm for topologies with different round-trip times. | Introduction
Given the ubiquitous presence of TCP traffic in the Internet,
one of the major barriers for the wide-range deployment
of reliable multicast is the lack of an effective congestion
control mechanism which enables multicast traffic to share
network resources reasonably fairly with TCP. Because it is
crucial for the success of providing multicast services over
the Internet, this problem has drawn great attention in the
reliable multicast and Internet community. It was a central
topic in the recent Reliable Multicast meetings [8], and many
proposals have emerged recently [7, 15, 13, 1, 18, 3, 19]. In
this introductory section, we first give an overview of the
previous work and then we discuss our experience with the
problem and introduce our approach.
The basic problem can be described as the following.
Consider the transport layer of a multicast connection with
This material is based upon work supported by the U.S. Army
Research Office under grant number DAAH04-95-1-0188.
one sender and multiple receivers over the Internet. The
sender has to control its transmission rate based on the loss
information obtained from all the receivers. We assume that
there is TCP background traffic; the end-to-end loss information
is the only mechanism to indicate congestion; the
participating receivers have time-varying capacities; and different
receivers can be losing different information at different
times. The objective of the control algorithm is to avoid
congestion and to be able to share network resources reasonably
fairly with the competing TCP connections. The
previously proposed multicast flow/congestion control algorithms
for the Internet can be broadly classified into two
categories: 1) Use multiple multicast groups. 2) A single
group with rate-based feedback control algorithms.
The first category includes those proposals using forward
error correction or layered coding [18, 19]. They require
setting up multiple multicast groups and require co-ordination
between receivers, which are not always possible.
Some limitations of this type of control are identified in [13].
Many of the proposed rate-based schemes share a common
framework: The sender updates its rate from time to
time (normally at a relatively large interval on the order of
a second) based on the loss information obtained. It reduces
its rate multiplicatively (usually by half, the same as TCP
does) if the loss information indicates congestion, otherwise
it increases its rate linearly. Different proposals differ in
their ways of determining the length of the update interval
and acquisition of loss information; they have different criteria
to determine congestion, etc. It is largely agreed that,
with no congestion, the rate should be increased linearly
with approximately one packet per round-trip time, which
is the same as TCP does. A critical aspect of these rate-based
algorithms is when to reduce the rate to half, or how
congestion is determined from the loss information from all
the receivers. Most of the proposed algorithms are designed
with the objective of identifying the bottleneck branches 1
of the multicast session and reacting only to the losses on
the bottleneck links [13, 7]. Some also try to be fair to TCP
[1, 15]. The algorithms have to be adaptive as well, i.e.,
be able to migrate to new bottlenecks once they come up
and persist for a long time. In the following, we discuss two
examples in detail.
The loss-tolerant rate controller (LTRC) proposal is based
on checking an average loss rate against a threshold [13].
The algorithm tries to react only to the most congested
paths and ignore other loss information. The algorithm
1 By bottleneck branches, we refer to the branches where the band-width
share of multicast traffic is the smallest among all multicast
paths, assuming equal share of all connections going through the path.
identifies congestion and reduces the sender rate if the reported
loss rate (an exponentially-weighted moving average)
from some receiver is larger than a certain threshold. The
rate is not reduced further within a certain period of time after
the last reduction. It is not clear how to choose the loss
threshold values for an arbitrary topology with any number
of receivers to drive the system to the desired operating
region.
The monitor-based flow control (MBFC) is a double-
threshold-check scheme [15]. That is, a receiver is considered
congested if its average loss rate during a monitor period is
larger than a certain threshold (loss-rate threshold), and the
sender recognizes congestion only if the fraction of the receiver
population considered congested is larger than a certain
threshold (loss-population threshold). Using the loss-
population threshold to determine whether to reduce the
rate or not is a means to average the QoS over all receivers,
and is not aimed to work with the slowest receiver. As a
special case, with the loss-population threshold set to minimum
(one congested receiver is counted as congestion), the
MBFC reduces to the case of tracing the slowest receiver,
but, again, it is difficult to derive a meaningful threshold
value to be able to single out the most congested receiver.
If the threshold value is too small, there could be excessive
congestion signals because different receivers could experience
congestion at different times.
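As an illustration of the double-threshold check just described, the following minimal sketch (our own; the function and parameter names are not taken from [15]) applies the MBFC test to a list of per-receiver loss rates collected over one monitor period:

def mbfc_congestion(loss_rates, loss_rate_threshold, loss_population_threshold):
    """Return True if the MBFC double-threshold check signals congestion.

    A receiver is 'congested' if its average loss rate over the monitor
    period exceeds the loss-rate threshold; the sender declares congestion
    only if the fraction of congested receivers exceeds the
    loss-population threshold.
    """
    if not loss_rates:
        return False
    congested = sum(1 for r in loss_rates if r > loss_rate_threshold)
    return congested / len(loss_rates) > loss_population_threshold

# Example: 3 of 10 receivers above a 2% loss-rate threshold,
# with a population threshold of 25% -> congestion is declared.
print(mbfc_congestion([0.001] * 7 + [0.03, 0.04, 0.05], 0.02, 0.25))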
There are many other proposals which we cannot cover
in this introduction. Although many of the proposals are
claimed to be TCP-friendly based on the simulation results
for certain network topologies, none have provided a quantitative
description of fairness of their algorithms to TCP and
a proof of their algorithms' ability to guarantee fairness.
We have carried out extensive simulations to study the
interaction of TCP traffic with other forms of rate-controlled
traffic in both unicast and multicast settings, with both
drop-tail and RED (random early drop) gateways 2 . We
summarize our major observations here and the details are
discussed in the rest of this paper.
First of all, there is no consensus on the fairness issue
between reliable multicast and unicast traffic, let alone a
useful quantitative definition. Should a multicast session be
treated as a single session which deserves no more band-width
than a single TCP session when they share network resources? Or should the multicast session be given
more bandwidth than TCP connections because it is intended
to serve more receivers? If the latter argument is
creditable, how much more bandwidth should be given to
the multicast session and how do we define "fairness" in
this case? This paper addresses this problem and proposes
an algorithm which allows a multicast session to obtain a
larger share of resources when only a few of the multicast
receivers are much more congested than others.
However, we believe that a consensus on the definition
of relative fairness between multicast and unicast traffic is
achievable once an algorithm shown to be "reasonably fair"
to TCP is accepted by the Internet community. The toughest
barrier to designing a fair multicast congestion control
algorithm is that most of the current Internet routers are still
of drop-tail type. A drop-tail router uses a first-in-first-out (FIFO)
buffer to store arriving packets when the outgoing
link is busy. The FIFO buffer has a finite size and the arriving
packet is dropped if the buffer is already full. Since drop-tail
routers do not distinguish packets from different traffic
flows, they do not enforce any fairness for the connections sharing resources through them.
2 Routers and gateways are used interchangeably in this paper. See [5] for definitions of drop-tail and RED gateways.
Also, with drop-tail routers,
the packet loss pattern is very sensitive to the way packets
arrive at the router and is difficult to control in general.
Since TCP packets tend to arrive at the router in clusters
[21], any rate-based algorithm with an evenly-spaced packet
arrival pattern may experience a very different loss rate from
that of the competing TCP connections through a drop-tail
gateway. Therefore, rate-based algorithms adjusting source
transmission rate based on average loss rate cannot be fair to
TCP in general. But, it has been pointed out that rate-based
schemes are better suited for multicast flow/congestion control
than window-based schemes [15]. This is true in general
in terms of scalability and ease of design. However, if our design
objective is to be fair to window-based TCP, rate-based
schemes will find it difficult, if not impossible, to achieve the goal without help from the networks.
For the algorithms assuming the same loss rate for the
competing connections, RED gateways can be used to achieve
the goal. The RED gateway is proposed as an active router
management scheme which enables the routers to protect
themselves from congestion collapse [5]. A RED gateway
detects incipient congestion by computing the average queue
size at the gateway buffer. If the average queue size exceeds
a preset minimum threshold but remains below a maximum threshold, the gateway drops each incoming packet with a certain probability; if the maximum threshold is exceeded, all arriving
packets are dropped. RED gateways are designed so
that, during congestion, the probability that the gateway
notifies a particular connection to reduce its window (or
rate) is roughly proportional to that connection's share of
the bandwidth through the gateway. Therefore, RED gateways
not only keep the average queue length low but ensure
fairness and avoid synchronization effects [6]. For our work,
the most important fact about the RED gateway is that all
connections going through it see the same loss probability.
RED gateways also make fair allocation of network resources
for connections using different forms of congestion control
possible. Adoption of RED gateways will greatly ease the
multicast congestion control problem, but the current Internet
still uses mostly drop-tail gateways. Therefore, it is
important to design an algorithm which works for drop-tail
gateways and might work better for RED gateways.
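For concreteness, the following is a highly simplified sketch of the RED decision described above (an EWMA of the queue size and a drop probability that grows linearly between the two thresholds); the class name, parameter names and values are illustrative only and are not taken from [5]:

import random

class REDQueue:
    """Simplified RED drop decision: average queue size via an EWMA, and a
    drop probability that grows linearly between min_th and max_th."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0

    def on_arrival(self, current_queue_len):
        # exponentially weighted moving average of the instantaneous queue size
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            return False                  # never drop below the minimum threshold
        if self.avg >= self.max_th:
            return True                   # always drop above the maximum threshold
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p        # probabilistic early drop

Because the drop decision is probabilistic and independent of which flow the packet belongs to, each connection is marked roughly in proportion to its share of the traffic, which is the property used in the rest of this paper.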
However, even with RED gateways, it is still very difficult
to locate the bottlenecks of a multicast session based
on loss information alone (refer to section 3.2 for details).
Many proposals for reliable multicast flow control do try
to locate the bottleneck links using some threshold-based
mechanism, such as the LTRC (loss-tolerant rate controller)
discussed above, but it is very difficult to choose a universal
threshold which works for all kinds of network topolo-
gies. [16] has shown that a loss-threshold-based additive-
increase-multiplicative-decrease multicast control algorithm
is not fair to TCP with RED gateways.
Based on the above observations, we decided to choose
a window-based approach to design a usable mechanism to
do multicast congestion control in the current Internet in-
frastructure. Specifically, we propose a random listening
algorithm which does not require locating the bottleneck
link. The algorithm is simple and possesses great similarity
to TCP; it ensures some reasonable fairness, defined later as
"essential fairness", to TCP with RED gateways or drop-tail
gateways in a restricted topology to be defined in the next
section. Although the scheme inherits many of the identified
drawbacks of TCP (some of them are alleviated in our multicast
scheme), it might be the only way that the multicast
sessions can potentially share bandwidth reasonably fairly
with TCP connections with drop-tail routers.
The rest of the paper is organized as follows: We propose
a quantitative definition for fairness between multicast
and unicast traffic in section 2. Our algorithm is presented
in section 3, and we prove that it is essentially fair to TCP
in section 4. In section 5 we present some simulation results
indicating the performance of our algorithm sharing
resources with TCP. We also briefly discuss a generalization
of the algorithm which works for topologies with different
round-trip times and its performance. Section 6 concludes
the paper by addressing some possible future work.
Our design of the multicast congestion control algorithm
is motivated by the design of the TCP congestion control
scheme [9, 12]. We summarize the basic properties of the
TCP scheme in the following:
• Probing extra bandwidth: increase the congestion window by one packet per round-trip time until a loss is seen.
• Responsive to congestion: reduce the congestion window to half upon a detection of congestion (i.e., a packet loss).
• Fair: by using the same protocol, the TCP connections between the same source and destination pair (along the same route) share the bottleneck bandwidth equally in the steady state. 3
Similarly we list our design objectives for the multicast congestion control algorithm to be:
• Able to probe and grab extra bandwidth.
• Responsive to congestion.
• Multicast Fairness: multiple multicast sessions between the same sender and receiver groups should share the bandwidth equally on average over the long run.
• Fair to TCP: the multicast traffic has to be able to share the bandwidth reasonably fairly with TCP in order to be accepted by the Internet community. This is a complex issue to be addressed in the rest of this section.
Note that our performance goals, including definitions of
fairness, are focused on the average behavior in the steady
state, assuming connections last for a long time. Our work
in this paper is based on this assumption. We do not try
to guarantee fairness to short-lived connections, but our algorithm
does provide opportunities for them to be set up
and to transmit data. This is a reasonable decision because
the multicast session is presumably cumbersome with many
links involved and thus it is impossible to react optimally to
every disturbance, especially short-lived ones.
2.1 Observations
We observe that TCP fairness is defined and achieved only
for the connections between the same sender and receiver,
that is, the paths have to have the same round-trip times
and the same number of congested gateways.
3 Generally speaking, the TCP connections share bandwidth equally as long as they have equal round-trip times and the same number of congested gateways on their path. But a slight difference in the round-trip times could result in a very different outcome in bandwidth share due to the phase effect discussed in [5]; therefore, we restrict the fairness definition to the connections between the same source and destination pair along the same route, which is the best way of ensuring equal round-trip times.
It is well-recognized
that the unfairness of the TCP congestion algo-
rithm, such as biases against connections with multiple congested
gateways, or against connections with longer round-trip
times and against bursty traffic, is exhibited in networks
with drop-tail gateways [5]. This observation leads us to define
the relative fairness between multicast and TCP traffic
on a restricted topology only, where the sender has the same
round-trip time to all the receivers in the multicast group.
As we mentioned in the introductory section, there is
no consensus on the issue of fairness between multicast and
unicast traffic. But the following is obvious: An ideal situation
is to be able to design a multicast algorithm which
can control and adjust the bandwidth share of the multi-cast
connection to be equal to some constant c times that
of the competing TCP connection, with c being controllable
by tuning some parameters of the algorithm. On the other
extreme, the minimum requirements of reasonable fairness
should include the following: 1) Do not shut out TCP com-
pletely. 2) The throughput of the multicast session does
not diminish to zero as the number of receivers increases.
Anything in between the ideal and the minimum could be
reasonable provided that the cost to achieve it is justifiable.
Loosely speaking, a useful definition for "essentially fair"
could be the following: when sharing a link with TCP, the
multicast session should get neither too much nor too little
bandwidth; that is, some kind of bounded fairness. We
quantify this definition next.
2.2 Concepts
First we introduce a restricted topology, referred to through-out
this paper, on which the fairness concepts are defined.
We also introduce the notation used throughout the paper.
Consider a multicast session {S; R_1, …, R_N} with one sender S and N receivers R_1, …, R_N. The sender also has m_i TCP connections to each R_i along the same path (a branch of the multicast tree) [see figure 1], where m_i could be zero (no competing TCP connection). Imagine
a virtual link (or a logical connection), L i , between S and
R i . Note that the virtual links might share common physical
paths. We assume that the round-trip times, RTT_i, of the L_i are equal on the average. 4 Denote the minimum link capacity (or available bandwidth) along L_i by μ_i (pkt/sec). We define the "soft bottleneck" of a multicast session, denoted by L_sb, as the branch with the smallest μ_i/(m_i + 1). 5 That is, μ_sb/(m_sb + 1) = min_i {μ_i/(m_i + 1)}. We say that the multicast is "absolutely fair" to TCP if the multicast session operates with an average throughput equal to min_i {μ_i/(m_i + 1)} in the steady state. In other words, "absolute fairness" requires that the multicast session be treated as a single session and equally share the bottleneck bandwidth with competing TCP connections on its soft bottleneck paths.
As we mentioned before, absolute fairness is difficult to
achieve based on loss information alone. By relaxing the definition
somewhat, we introduce an important concept called
"essential fairness ". We say that a multicast session is "es-
sentially fair " to TCP if its average throughput, denoted by
4 Notice that the round-trip time includes both queueing delay and
propagation delay. Therefore, it is time varying. In our analysis in
this paper, we assume a nice property of round-trip time: it is uniformly
distributed between pure propagation delay and propagation
delay plus maximum queueing delay. It is based on the single bottle-neck
queue model.
5 In contrast, a "hard bottleneck " would be the link with minimum
. Also, there could be multiple soft bottlenecks
with equal
Networks
R R R R
between S and R
L has bottleneck link capacity - ,
connections.
Dashed line : virtual link L
L 's might share common physical path in the network.
multicast connection, and m TCP connections.
Figure
1: A restricted topology.
- RLA , in the long run is bounded by a -
b - TCP , where - TCP is the average throughput of the competing
TCP connections on the soft bottleneck path, and a; b
are functions of N such that a - b ! N . Absolute fairness
is a special case of essential fairness with
g. b=a can serve as an indication
of the tightness of the fairness measure. The flexibility
of allowing an interval of fairness is necessary because absolute
fairness might not be achievable in some networks
whereas a fairness measure is needed. It is a reasonable representation
of the vague term "reasonably fair", and would
appear to be acceptable by many applications. The merit
of the essential fairness concept lies in its boundedness, so
the networks and applications can have some idea of what
they can expect. Our definition can be used to measure and
compare the fairness of existing multicast algorithms. We
will prove later that the random listening algorithm we propose
in this paper is essentially fair to TCP and it achieves
more tightly bounded fairness with RED gateways than with
drop-tail gateways.
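In the notation just introduced, the three definitions can be restated compactly (a summary only, no new content):

\mu_{sb}/(m_{sb}+1) = \min_i \mu_i/(m_i+1)   \qquad \text{(soft bottleneck } L_{sb})

\bar{\lambda}_{RLA} = \min_i \mu_i/(m_i+1)   \qquad \text{(absolute fairness)}

a\,\bar{\lambda}_{TCP} \le \bar{\lambda}_{RLA} \le b\,\bar{\lambda}_{TCP}, \quad a \le b < N   \qquad \text{(essential fairness)}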
In summary, we have defined three key concepts for multicast
sharing with unicast traffic on a restricted topology:
soft bottleneck, absolute fairness and essential fairness. The
definitions can be easily extended to the case with multiple
multicast sessions between the same sender and receiver
group. In the next section, we present a random listening
algorithm which achieves the design objectives described in
the beginning of this section.
3 Random Listening Algorithm (RLA)
In this paper, we focus on the congestion control problem.
We assume that the sender has infinite data to send and the
receivers are infinitely fast, so that the network is always
the bottleneck. Hereafter, we refer to a congested receiver,
meaning that the path between the sender and the receiver is congested, i.e., experiences packet drops. We also define a
congestion signal as an indication of congestion according to
the algorithm; congestion probability as the ratio of the number
of congestion signals the sender detected to the number
of packets the sender sent; congestion frequency as the average
number of congestion signals the sender detected per
time unit. TCP considers packet losses as indications for
congestion. In particular, one or multiple packet drops from
one window of packets in TCP are considered as one congestion
signal since they usually cause one window cut (or
cause retransmission timeout) [4]. The number of window
cuts is equal to the number of congestion signals in TCP in
the ideal case without timeout event.
Our simulation experience convinced us that, with drop-tail
gateways, algorithms might have to be "TCP-like" in
order to be TCP-friendly. By TCP-like, we refer to the essential
feature of the congestion window adjustment policy:
increasing by one every round-trip time with no congestion
and reducing by half upon congestion. But TCP-like alone is
not enough to ensure TCP-friendliness. More importantly,
to be fair to TCP, we have to make sure that the multi-cast
sender and the competing TCP senders are consistent
in their way of measuring congestion. RED gateways ensure
that the competing connections sharing the same bottleneck
link experience the same loss probability no matter what
type of congestion control algorithms are used. However,
the situation with drop-tail gateways is more complicated.
We will show that, with some added random processing time
to eliminate phase effects, we can design our algorithm to
ensure that the competing connections see the same congestion
frequency. This problem is examined in detail next.
3.1 TCP's Macro-effect with Drop-tail Gateways
With drop-tail gateways, a basic feature of TCP traffic is
that the sender increases its transmission rate to fill up
the bottleneck buffer until it sees a packet loss. Then the
sender's transmission rate is sharply reduced to allow the
buffer to be drained. The TCP congestion control policy
results in a typical behavior of the buffer in front of the bottleneck
router: the buffer occupancy periodically oscillates
between empty (or almost empty) and full. Although this
periodicity is neither necessarily of equal interval nor deter-
ministic, depending upon the behavior of the cross traffic
other than TCP, it is certainly the macroscopic behavior of
the network routers carrying TCP traffic. We call the period
starting from a low occupancy to a full buffer and then
dropping back to a low occupancy, a "buffer period ".
Through simulation, we find that the buffer period normally
lasts much longer than two round-trip times, 2RTT ,
and the buffer-full period 6 , during which the buffer is full or
nearly full, normally lasts around 2RTT or less in the steady
state. During the buffer-full period within each buffer pe-
riod, a sender could lose more than one packet. In our algorithm
to be presented, we group the losses within 2RTT
as one congestion signal. This way we approximately make
sure of one congestion signal per buffer period if any packet
is dropped. The reason for doing so is that it is not desirable
to reduce a window multiple times due to closely spaced
packet drops. TCP actually considers multiple packet drops
within one window as one congestion signal.
Another phenomenon is that, with drop-tail gateways,
the packet drop pattern is very sensitive to the packet arrival
pattern. In particular, we find that the packet drop
pattern is very sensitive to the interval between two consecutive
packet arrivals at the bottleneck buffer. If the interval
is slightly smaller than the service time of the bottleneck
server, the next packet is more likely to be dropped when the
buffer is nearly full. Otherwise, if it is slightly larger than
the bottleneck server service time, the next packet is less
likely to be dropped because one packet will leave the buffer
in between. This is one type of phase effect identified in [5].
Phase effects do not take place in competing TCP connections
when the round-trip times are exactly the same. But
in a multicast session which consists of multiple links with
different instantaneous round-trip times, adding a random
amount of processing time is necessary to avoid the phase effect.
6 This time interval roughly corresponds to the "drop period" defined in [5].
Therefore, a uniformly distributed random processing
time up to the bottleneck server service time is added
in our simulation with drop-tail gateways. The phase effect
might not be significant in the real Internet because of mixing
of different packet sizes, in which case our algorithm is
expected to work well, sharing bandwidth reasonably fairly
with TCP traffic.
With drop-tail gateways and added randomness to eliminate
the phase effect, our TCP-like multicast algorithm is
designed to make sure that the multicast sender sends packets
in a fashion similar to the TCP senders. Then all senders
have a similar chance to encounter packet drops in a restricted
topology with equal round-trip times, provided that
the congestion window sizes are large enough. That is, both
the multicast and TCP senders see roughly the same number
of congestion signals over a long period of time, or the congestion
frequencies should be the same on the average over
a large number of simulations. For the connections with
smaller window sizes, they might experience fewer packet
drops, which results in a desirable situation for our problem
of bandwidth allocation between multicast and unicast
traffic. The reason is explained in section 5.
3.2 Rationale for Random Listening
If the objective is to achieve absolute fairness between multi-cast
and TCP traffic, we have to locate the soft bottlenecks
of the multicast session and react only to the congestion
signals from the soft bottleneck paths. However, it is diffi-
cult, if not impossible, to locate the soft bottlenecks based
on the loss information alone. For the TCP connections to
achieve the same average throughput, the larger the round-trip
time is, the larger the window and hence smaller loss
probability required. Therefore, for a multicast session with
different round-trip times between the sender and receivers,
it is not reasonable to expect the bottleneck would be the
branch with the largest loss probability. Although, for the
restricted topology with equal round-trip times, the soft bottlenecks
are the branches with the largest loss probability,
it is still difficult to locate them based on loss information
alone. This is because either the sender or the receiver has
to calculate a moving average of the loss probability for each
receiver and the sender has to react to only the loss reports
from the bottlenecks which have the largest loss probability.
But, since losses are rare and stochastic events, a certain
interval of time and enough samples are needed to make the
loss probability estimate significant. It would take too long
to locate the soft bottlenecks correctly; the wrong action
based on the non-bottleneck branches could cause undesirable
performance results. Based on these observations, we
decided to trade off the absolute fairness (requiring locating
the soft bottlenecks) with fast response.
Now examine figure 1. The multicast sender is receiving
congestion signals from all congested receivers. Obviously
the sender does not want to reduce its window upon each
congestion signal. Otherwise, as the number of receivers
increases, the number of congestion signals will increase, and
the throughput of the multicast session will decrease as the
number of receivers increases.
Suppose that the sender knows how many receivers, say
n, are reporting congestion. An appealing solution would
be to reduce the window every n congestion signals. To
see why, consider a simple flat tree topology as in figure
1 with all the receivers, links and background TCP connections
identical and independent, and all connections starting
at the same time. Then the buffer periods are synchronized
and the sender receives n congestion signals in each buffer
period. Obviously it is desired that the sender only reduce
its window once every buffer period. This deterministic approach
is certainly a possible solution here. But in a more
realistic network with not everything identical, where buffer
periods are asynchronous and congestion signals come at
different times, the sequence of congestion signals arriving
at the sender could be very irregular, and thus the deterministic
approach would not work well. On the other hand,
in such a statistical environment, a random approach could
be a good candidate to produce good average performance.
This is the rationale we use to propose a random listening
scheme to handle a complex stochastic stream of congestion
feedback signals.
The basic idea is that, upon receiving a congestion signal,
the sender reduces its window with probability 1/n, where n
is the number of receivers reporting frequent losses. There-
fore, on the average, the sender reduces its window every n
congestion signals. If all the receivers experience the same
average congestion, the sender reacts as if listening to one
representative of them. If the sender detects one receiver
experiencing the worst congestion (on the soft bottleneck)
and the others in better condition with less frequent congestion
signals, it reduces the window less frequently than the
TCPs on the soft bottleneck branch, resulting in a larger
average window size of the multicast sender than that of the
TCP connections on this branch. But we can prove that
the multicast bandwidth share is bounded in terms of the TCP bandwidth share. Based on this idea, we propose a
random listening algorithm to be presented next, with its
performance to be discussed in the rest of this paper.
3.3 The Algorithm
The design closely follows the TCP selective acknowledgment
procedure (SACK) [12]. We focus on the congestion
control part of the algorithm. Here we only outline the essential
part of the algorithm. The complete algorithm is
implemented using Network Simulator (NS2) [17], and more
information is available at [20].
The important variables are summarized below. Their
meaning and maintenance are the same as in TCP unless
specified differently here. The items preceded by a bullet
are new to our algorithm.
srtt_i: smoothed round-trip time between the sender and receiver i.
awnd: moving average of the window size.
• num_trouble_rcvr: a dynamic count of the number of receivers which are reporting losses frequently.
• a dynamically adjusted threshold that determines the probability of reducing the window upon a congestion signal; for a restricted topology, it equals 1/num_trouble_rcvr.
• a random number uniformly distributed in (0, 1), generated when a decision as to whether to reduce the window or not is needed.
• last_window_cut: the time when the cwnd was reduced to half the last time.
• cperiod_start_i: the starting time of a congestion period (i.e., the period in which packets are dropped) at receiver i. This is used to group the losses within two round-trip times into one congestion signal.
• min_last_ack: the minimum value of the cumulative ACK sequence numbers from all receivers. All packets up to this sequence number are received by all receivers.
• max_reach_all: the maximum packet number which is correctly received by all receivers. It could be different from min_last_ack because selective acknowledgment is used.
• rexmit_thresh: if the number of receivers requesting a retransmission of a lost packet is larger than rexmit_thresh, the retransmission is multicasted. Otherwise the retransmission is unicasted.
The skeleton of the RLA is the following:
1. Loss detection method. Our multicast receivers use selective acknowledgments in the same format as TCP SACK receivers [12]. A loss is detected by the sender via identifying discontinuous ACK sequence numbers or via timeout. To accommodate out-of-order delivery of data, the sender considers a packet P lost if a packet with a sequence number at least three higher than P is selectively ACKed.
2. Congestion detection method. A congestion period for receiver i starts when a loss is detected and cperiod_start_i is more than 2·srtt_i in the past; cperiod_start_i is then reset to the current time. The losses within 2·srtt_i of cperiod_start_i are ignored.
3. Window adjustment upon a congestion detection. Upon a congestion detected from receiver i by the above method:
• update num_trouble_rcvr. If it is a rare loss from a receiver not considered as a troubled receiver (see rule 6 below), skip the following steps.
• if last_window_cut is more than 2·awnd·srtt_i in the past, 7 cwnd ← cwnd/2 (forced-cut).
• else, generate a uniform random number in (0, 1); if it is larger than the threshold, ignore the congestion signal; else cwnd ← cwnd/2 (randomized-cut).
4. Window adjustment upon ACKs. Once a packet is ACKed by all the receivers, cwnd ← cwnd + 1/cwnd.
5. The window lower bound moves when max_reach_all increases, but the window upper bound should never exceed min_last_ack plus the available receiver buffer size.
6. Update of num_trouble_rcvr: a congested receiver is considered as a troubled receiver only if the receiver's congestion probability is larger than a certain threshold, which is set to 1/(j·min_congestion_interval). Here j is a constant, and is recommended to be set to 20. min_congestion_interval is the smallest of the exponentially-weighted moving averages of the interval lengths between congestion signals from all receivers. This setting is justified in the proof in the next section. num_trouble_rcvr is the dynamic count of the number of troubled receivers. A detailed implementation instruction is available at [20].
7 In ideal TCP with deterministic losses, cwnd has a maximum size of W and a minimum of W/2; cwnd is halved every W/2 round trips, or RTT·W/2 seconds. Here, for the multicast connection using the random listening approach, to avoid ignoring too many consecutive congestion signals due to the randomness of the algorithm, we choose to force the reduction of the congestion window if the previous window cut happens at least 2·awnd round trips ago. The threshold value is ad hoc but works well from our simulation experience: basically we don't want cwnd to grow too large or the forced-cut to happen too often.
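To make rules 2-4 concrete, here is a minimal sketch in Python of the sender-side window adjustment. This is not the NS2 implementation of [20]; the class, method and attribute names (e.g. x_thresh for the randomized-cut threshold) and the EWMA weight for awnd are our own simplifications, and SACK processing, retransmission and fast recovery are omitted.

import random

class RLASender:
    """Simplified sketch of the RLA congestion-window adjustment."""
    def __init__(self, num_receivers):
        self.cwnd = 1.0
        self.awnd = 1.0                      # moving average of the window size
        self.srtt = [0.1] * num_receivers    # smoothed RTT per receiver (seconds)
        self.cperiod_start = [-1e9] * num_receivers
        self.last_window_cut = -1e9
        self.num_trouble_rcvr = 1            # maintained as in rule 6 (not shown)

    def x_thresh(self):
        # for a restricted topology, the cut probability is 1/num_trouble_rcvr
        return 1.0 / max(1, self.num_trouble_rcvr)

    def on_loss_detected(self, rcvr, now):
        # rule 2: group losses within 2*srtt_i into one congestion signal
        if now - self.cperiod_start[rcvr] <= 2 * self.srtt[rcvr]:
            return
        self.cperiod_start[rcvr] = now
        # rule 3: forced-cut if the last cut is too far in the past, else randomized-cut
        if now - self.last_window_cut > 2 * self.awnd * self.srtt[rcvr]:
            self._cut(now)                   # forced-cut
        elif random.random() <= self.x_thresh():
            self._cut(now)                   # randomized-cut
        # otherwise the congestion signal is ignored

    def on_ack_by_all(self):
        # rule 4: linear increase, about one packet per round-trip time
        self.cwnd += 1.0 / self.cwnd
        self._update_awnd()

    def _cut(self, now):
        self.cwnd = max(1.0, self.cwnd / 2.0)
        self.last_window_cut = now
        self._update_awnd()

    def _update_awnd(self):
        self.awnd = 0.9 * self.awnd + 0.1 * self.cwnd   # illustrative EWMA weight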
Note that there are two different treatments to a congestion
signal: forced-cut and randomized-cut. The forced
actions are intended to protect the system by damping the
randomness. Without the forced-cut step, the algorithm
can possibly result in too long of a continuous increment of
cwnd, which is not desirable.
We also implemented a retransmission scheme 8 to recover
loss packets and a fast-recovery mechanism to prevent
a suddenly widely-open window which is undesirable
because it can cause congestion and a burst of packet losses.
Many details in the implementation are not described here
and can be found in [20]. Many of them are just straight-forward
extensions from the TCP algorithm. We believe it
is beneficial to keep it as similar to TCP as possible. Then
any changes to TCP or in networks to improve TCP performance
can be easily incorporated and are likely to improve
the performance of our algorithm as well.
4 Fairness of the RLA
In this section, we prove that our RLA is essentially fair to
TCP. That is, with the restricted topology where a multicast
session is sharing resources with unicast TCP connections,
the multicast session gets a bandwidth share which is c times
the share of a competing TCP connection on the soft bottleneck
branch, where c is a bounded constant. We present
a simple proof based on some gross simplifications of the
system and the algorithm. Although a sophisticated proof
based on advanced stochastic processes, similar to the proof
for the TCP case [14], is possible, we choose a simple approach
which is easier to understand and better illustrates
our idea. We also prove the multicast fairness property of
the RLA, one of the design objectives mentioned in section
2, using a simple two-session model.
We first present a simple estimation of TCP's performance
adopted from [14], then we use the same idea and
result to prove our theorems. The key part of the proofs
is to show that the RLA results in an average window size
bounded from above and below by functions of the congestion
probability (the ratio of the number of congestion
signals to the number of packets sent, see section 3) on the
soft bottleneck branch. Since on each common link, the
RLA sender and the competing TCP senders see the same
loss probability with RED gateways [6], or the same congestion
frequency with drop-tail gateways with phase effects
eliminated (see section 3.1), a relation between congestion
probabilities of the two types of traffic can be derived, based
on which the bandwidth shares can be calculated.
4.1 Estimation of TCP Throughput
We consider TCP SACK here and use the approximation
technique introduced in [14]. Although there are many subtleties
in the implementation of fast retransmission and fast
recovery, etc., we list the most important parts of the algorithm
relevant to congestion control here.
The sender maintains cwnd and ssthresh, with the same
meaning as defined in section 3. The sender also estimates
the round-trip time and calculates the timeout timer based
8 There could be different ways of doing retransmission as long as
the retransmission traffic does not interfere too much with the normal
transmission. In our implementation, the sender waits until it hears
from all the receivers and it retransmits a lost packet by multicast
if the number of receivers requesting it is larger than a threshold
(rexmit thresh) and by unicast otherwise. The receiver can also
trigger an immediate retransmission of a lost packet by unicast if it
sets a field in the packet.
on the estimation. The TCP congestion window cwnd evolves
in the following way:
(1) Upon receiving a new ACK:
if cwnd < ssthresh, cwnd ← cwnd + 1 (slow-start phase);
else cwnd ← cwnd + 1/⌊cwnd⌋ (congestion avoidance phase).
(2) Upon a loss detection:
set ssthresh ← cwnd/2, and cwnd ← cwnd/2.
(3) Upon timeout:
set ssthresh ← cwnd/2, and cwnd ← 1,
where ⌊x⌋ denotes the integer part of x.
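For concreteness, the three rules can be written as the following minimal sketch (idealized behavior only; the function names are ours, and real TCP SACK implementations add fast retransmit and fast recovery details that are omitted here):

def on_new_ack(cwnd, ssthresh):
    """Rule (1): slow start below ssthresh, else congestion avoidance."""
    if cwnd < ssthresh:
        return cwnd + 1.0                  # slow start: one packet per ACK
    return cwnd + 1.0 / int(cwnd)          # congestion avoidance: about one packet per RTT

def on_loss(cwnd):
    """Rule (2): halve both ssthresh and cwnd."""
    return cwnd / 2.0, cwnd / 2.0          # (new ssthresh, new cwnd)

def on_timeout(cwnd):
    """Rule (3): halve ssthresh, restart from a window of one packet."""
    return cwnd / 2.0, 1.0                 # (new ssthresh, new cwnd)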
We consider cwnd as a random process and are interested
in its average value in the steady state since it is roughly proportional
to the average throughput of the TCP connection.
Assume perfect detection of packet losses, and that the slow-start
and the timeout events can be ignored in the steady
state analysis [10]. Suppose we run the algorithm for a long
time and the resulting congestion probability (the number
of window cuts over the number of packets sent) is p. The
resulting average window size can be approximated in the
following way [14]. Denote the cwnd right after receiving
the acknowledgment with sequence number t by W t . Then
in the steady state, the random process W_t evolves as follows: given W_t, with probability 1 − p, W_{t+1} = W_t + 1/W_t, and with probability p, W_{t+1} = W_t/2; the jumps occur upon acknowledgment arrivals. Now, considering the average drift of W_t if W_t = w, denoted by D(w), we have D(w) = (1 − p)/w − p·w/2. The drift is positive if w < w* = sqrt(2(1 − p)/p) and negative otherwise. Then the stationary distribution of W_t must have most of its probability in the region around w*. This gives an ad hoc approximation of the average window size, W ≈ w* = sqrt(2(1 − p)/p) ≈ sqrt(2/p) for small p. The unit is in terms of packets. Throughout this paper, we
size". It can be shown that W is a good approximation
to the time average of the random process W t and in fact
is proportional to it. We adopt this simple approximation
approach in our analysis since it is adequate for our purpose.
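As a quick numerical illustration of the proportional-average approximation, the following sketch (our own) simulates the random process W_t and prints its time average next to w* = sqrt(2(1 − p)/p); as stated above, the two are proportional and of the same order, though not equal:

import math, random

def simulate_pa_window(p, steps=200000, seed=1):
    """Simulate W_{t+1} = W_t + 1/W_t w.p. 1-p, W_t/2 w.p. p; return its time average."""
    random.seed(seed)
    w, total = 10.0, 0.0
    for _ in range(steps):
        w = w / 2.0 if random.random() < p else w + 1.0 / w
        total += w
    return total / steps

p = 0.01
print("time average:", round(simulate_pa_window(p), 2),
      " w* = sqrt(2(1-p)/p):", round(math.sqrt(2 * (1 - p) / p), 2))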
Also note that the above simple derivation gives a result similar to the popular formula for TCP throughput estimation, W ≈ sqrt(3/(2p)) (packets), as in [11], with
a slightly different constant. Comparison of the two formulas
shows that the average throughput is roughly proportional
to the ratio of the average window size to the average
round-trip time. Both formulas only work for the cases with
small loss probability. Therefore, in the rest of our paper,
we only consider the cases with p < 5% (used in [11]), called
moderate congestion. The performance of TCP (and TCP-like
algorithms) deteriorates in heavy congestion because of
frequent timeout events. Maintaining fairness is then not as
important an issue as long as no one is completely shut out.
4.2 Estimation of RLA Throughput
Applying the above drift analysis technique to our RLA algorithm
proposed in section 3, we can derive the following
proposition.
Proposition: Consider a restricted topology (with TCP background traffic and the RLA used by the multicast sender) and n receivers persistently reporting congestion. The congestion probabilities seen by the multicast sender from the n receivers are p_i, i = 1, …, n, and congestion is moderate so that p_max = max_i p_i < 5%. Denote the proportional average of the congestion window size by W. Then W satisfies the bounds of equation 2, which bound it from above and below by functions of p_max and n.
Due to space limitations, we cannot show the complete
proof which is available in reference [20]. The basic idea and
methodology are illustrated using the following simple case
of two receivers with independent loss path (see figure 2(a)).
[Figure 2: Two simple cases with two receivers only: (a) independent losses; (b) common losses.]
In figure 2(a), the sender sees independent congestion signals
from receivers 1 and 2, denoted by R1 and R2, respec-
tively. We assume that all traffic is persistent and thus, in the steady state, num_trouble_rcvr = 2 and the window-cut threshold is 1/2. Therefore, if the sender detects a congestion signal at time t0, it cuts its window by half with probability 1/2; if the outcome turns out to be ignoring the congestion signal, the
lost packet will be ACKed later (at time t1 ) and will cause
the congestion window to increase by 1
W , based on the fourth
rule of the RLA (see section 3.3). In our proof, we ignore the
possible difference between the congestion window sizes at
time t0 and t1 since 1
is small for a relatively large window
size. Now for each packet sent by the sender, the possible
outcomes are (w.p. stands for with probability):
1. Cause no congestion signal from either receiver, w.p. (1 − p1)(1 − p2); then W ← W + 1/W.
2. Cause one congestion signal from R1, w.p. p1(1 − p2); then W ← W/2 w.p. 1/2, or W ← W + 1/W w.p. 1/2.
3. Cause one congestion signal from R2, w.p. p2(1 − p1); then W ← W/2 w.p. 1/2, or W ← W + 1/W w.p. 1/2.
4. Cause two congestion signals, one from each receiver, w.p. p1·p2; then W ← W + 1/W w.p. 1/4, or W ← W/2 w.p. 1/2, or W ← W/4 w.p. 1/4.
The positive drift of W is [(1 − p1)(1 − p2) + (p1(1 − p2) + p2(1 − p1))/2 + p1·p2/4]·(1/W), and the negative drift of W is [(p1(1 − p2) + p2(1 − p1))/4 + 7·p1·p2/16]·W. The neutral point gives the approximation for the average window size to be W = sqrt( [16(1 − p1)(1 − p2) + 8(p1(1 − p2) + p2(1 − p1)) + 4·p1·p2] / [4(p1(1 − p2) + p2(1 − p1)) + 7·p1·p2] ).
It is easy to check by simple algebraic manipulation that, for any p_max > 0, the lower bound in equation 2 holds.
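The neutral point can be checked numerically against the model above; the sketch below (our own illustration, not from [20]) simulates the four outcomes for independent losses with probabilities p1 and p2 and prints the simulated time average of W (which is proportional to, not equal to, the PA window size) next to the drift neutral point:

import math, random

def simulate_two_receivers(p1, p2, steps=500000, seed=2):
    """Simulate W under the four outcomes listed above (independent losses)."""
    random.seed(seed)
    w, total = 10.0, 0.0
    for _ in range(steps):
        s1, s2 = random.random() < p1, random.random() < p2   # congestion signals
        c1 = s1 and random.random() < 0.5                      # each signal cuts w.p. 1/2
        c2 = s2 and random.random() < 0.5
        if c1 and c2:
            w /= 4.0
        elif c1 or c2:
            w /= 2.0
        else:
            w += 1.0 / w
        total += w
    return total / steps

def neutral_point(p1, p2):
    a = (1 - p1) * (1 - p2)
    b = p1 * (1 - p2) + p2 * (1 - p1)
    c = p1 * p2
    return math.sqrt((16 * a + 8 * b + 4 * c) / (4 * b + 7 * c))

print(round(simulate_two_receivers(0.01, 0.002), 1), round(neutral_point(0.01, 0.002), 1))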
To prove the upper bound in equation 2, without loss of generality assume p1 ≥ p2, so that p1 = p_max, and let x = p2/p1. It suffices to show that x ≥ f(p1) for a certain function f(p1), and one can verify that f(p1) is an increasing function of p1 for p1 < 1. Therefore, for p1 < 5%, x larger than 0.03 is sufficient for x ≥ f(p1) to hold. This condition is ensured in the RLA algorithm by controlling the way the variable "num_trouble_rcvr" is dynamically counted. The RLA algorithm counts a receiver as a "troubled receiver" only if the interval lengths between its congestion signals are smaller than j·min_congestion_interval, that is, its average congestion probability is larger than (1/j)·p_max. We recommend in our algorithm to take j = 20, i.e., 1/j = 0.05, which leaves more room than the above 0.03 bound. Protocol designers can choose a proper value for j based on the above analysis.
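Rule 6 can be realized along the following lines; this is a hedged sketch, not the implementation of [20], and the class name and the EWMA weight are our own choices:

class TroubleCounter:
    """Sketch of rule 6: count receivers whose congestion-signal intervals
    are within a factor j of the most congested receiver's interval."""
    def __init__(self, num_receivers, j=20, ewma_weight=0.25):
        self.j = j
        self.w = ewma_weight
        self.avg_interval = [float('inf')] * num_receivers
        self.last_signal = [None] * num_receivers

    def on_congestion_signal(self, rcvr, now):
        if self.last_signal[rcvr] is not None:
            interval = now - self.last_signal[rcvr]
            if self.avg_interval[rcvr] == float('inf'):
                self.avg_interval[rcvr] = interval
            else:
                self.avg_interval[rcvr] = ((1 - self.w) * self.avg_interval[rcvr]
                                           + self.w * interval)
        self.last_signal[rcvr] = now

    def num_trouble_rcvr(self):
        finite = [a for a in self.avg_interval if a != float('inf')]
        if not finite:
            return 1
        min_interval = min(finite)
        return max(1, sum(1 for a in finite if a <= self.j * min_interval))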
Note that we did not consider the forced-cut action in
the RLA algorithm, which is rarely invoked (as shown in
the simulation results). The effect of the forced-cut could
be a slightly smaller average window size which does not
affect our results in any significant way.
With more complex algebraic manipulation involved, the
results can be extended to a case with n receivers with independent
loss paths. Using the same approach, for a topology
with common losses only (see figure 2(b) for an illustration
of the case with two receivers), we can prove equation 2
holds. The general case stated in the Proposition can be
proved using the above results and the following Lemma:
Lemma: A higher degree of correlation in loss due to common
path results in a larger average congestion window size
if the RLA is used.
The proof is omitted due to space limitations. Intuitively,
for the same congestion probability, correlation in the congestion
signals results in more window increments and less
window cuts on the average. This is because congestion signals
come in groups in the correlated case. This has the
potential of causing a deep cut in cwnd at once, while the
independent congestion signals come one at a time but more
frequently, and cause potentially more window cuts.
We deliberately choose the bounds in the form of equation 2 because that form is directly related to the average window size of the competing TCP connections. Now we are ready
to proceed to show that the RLA is essentially fair to TCP.
Theorem I: Consider a restricted topology with RED gate-
ways. If there are n receivers persistently reporting congestion
and the largest congestion probability is less than 5%,
the RLA algorithm is essentially fair to TCP with a = 1/3 and b = 3n. 9
9 See section 2 for the definition of a and b.
Due to space limitations, here we only outline the major
steps of the proof. First, since RED gateways ensure that
on each link, all competing connections see the same loss
probability [6], we denote the largest loss probability, occurring
at the soft bottleneck branches, by p_l. We can derive the relations between p_l and the corresponding congestion probabilities, p_c^RLA for the multicast connection and p_c^TCP for the TCP connection. Then, using these relations and the Proposition, we can derive an inequality bounding the proportional-average window size of the RLA sender in terms of that of the competing TCP connections.
Second, we have to consider the round-trip times in order
to estimate the throughput. Here we have to notice that
in the multicast RLA, a packet is considered acknowledged
only if the sender has received ACKs from all the receivers.
Then the round-trip time for each packet in the RLA is
always the largest among the round-trip times on all the
links when the packet is sent. Denote the average round-trip
time for RLA by RTT RLA and that for TCP by RTT (recall
each of the branches in the restricted topology (figure 1) has
equal average round-trip times). Using the approximation
that the round-trip time is equal to a fixed propagation delay
plus a varying queueing delay, we can bound the ratio RTT_RLA/RTT from above and below.
Finally, combining the bounds for the average window sizes and for the ratio RTT_RLA/RTT (the average throughput being roughly proportional to the ratio of the average window size to the average round-trip time), we obtain bounds on λ_RLA in terms of λ_TCP.
That is, the long term average throughput of the multi-cast
sender is no less than a third of the TCP throughput
on the soft bottleneck branch, and no more than
3n times
that. Therefore, the RLA is essentially fair to TCP according
to the definition of essential fairness in section 2.
Theorem II: Consider a restricted topology with drop-tail
gateways and the phase effect eliminated. If there are n
receivers persistently reporting congestion and the largest
congestion probability is less than 5%, the RLA is essentially
fair to TCP (with bounds a and b that are looser than those in Theorem I).
This theorem can be proved similarly to theorem I, using
the fact that, with drop-tail gateways and the phase
effect eliminated, the competing RLA and TCP traffic see
the same congestion frequency. The proof is omitted due to
space limitations.
4.3 Remarks
In the above two theorems, we proved that the RLA is essentially
fair to TCP with the restricted topology with equal
round-trip times. Note that the bounds in the theorems are
widely separated for sizable n; this is because they work for
all situations including the cases with extremely unbalanced
congestion branches. The algorithm actually delivers desirable
performance in the following way: if all the troubled receivers
have the same degree of congestion, the RLA results
in a throughput no larger than four times that of the competing
TCP throughput for any n (this can be proved [20]);
on the other extreme, if there is one most congested receiver
and the other receivers experience only minor congestion
just enough to be counted as troubled receivers, the
actual throughput of the RLA is close to the upper bound
which is in the order of n for drop-tail gateways. That is,
the multicast connection on the soft bottleneck branch gets on the order of n times the smallest throughput among the competing
TCP connections. This might be desirable because this
single bottleneck is slowing down the other receivers.
If this is not desirable, the RLA can implement an option
to drop this slow receiver. For the situations in between
the above two extreme cases, the RLA gives reasonable performance; this is demonstrated in the simulation results in
section 5.
In summary, the RLA achieves a higher share of band-width
than the TCPs on the soft bottleneck branches when
only a few receivers in the multicast session are much more
congested than others. This is reasonable because the multicast
session serves more receivers and it should suffer less
on a single highly congested bottleneck.
4.4 Multicast Fairness of RLA
The RLA is fair in the sense that the senders of competing
multicast sessions between the same sender and receiver
group will have the same average cwnd in the steady state.
Consider a simple case with two competing sessions with n
receivers in each session on the same topology of the form in
figure 1. The cwnd's of the two senders are correlated random
processes. The problem can be modeled as a randomly
moving particle on a plane, with x and y axes being the cwnd
of sender 1 and 2, respectively (see figure 3). This model is a
generalization of the deterministic model for unicast congestion
control used in [2], where the authors proved that the
linear increase/multiplicative decrease scheme converges to
the fair operating point. Our algorithm is a generalization of
the unicast algorithm to multiple receivers and introduces
randomness. We will show that although the cwnd does
not converge to a single point, the desired operating point
(equal share of the bottleneck bandwidth, see figure 3) is a
recurrent point and most of the probability mass would be
focused on the general area of this point.
Figure 3: Fairness of RLA sessions to each other (axes: cwnd of sender 1 and sender 2; the fairness line, the desired operating point, and the pipe sizes are marked).
In our analysis below, we assume there is no feedback delay.
(This assumption is necessary to allow us to use a simple
and neat analysis; our simulation results indicated that the
fairness we claimed here still holds when propagation delay
is involved.) Since the two sessions share exactly the same
path, the two senders get the same congestion signals. The
senders are informed of congestion by receiver i if cwnd1 +
cwnd2 exceeds the pipe size of virtual link L_i, which is the
largest RTT value times the available bandwidth of the link
(see figure 3). Otherwise the sender is informed of no congestion.
Focus on the troubled receivers, recalling there are n of them
per session. We order the pipe sizes of these troubled links
as pipe1 <= pipe2 <= ... <= pipek, with n_i receivers having
pipe size pipe_i and n1 + n2 + ... + nk = n. In figure 3,
cwnd1 + cwnd2 <= pipe1 in the white region with no congestion;
pipe1 < cwnd1 + cwnd2 <= pipe2 in the lightly
shaded region and the senders receive n1 congestion signals
once they enter this region; and pipe2 < cwnd1 + cwnd2 <=
pipe3 in the dark shaded region and the senders receive
n1 + n2 congestion signals once they enter this region, and so
on. If there is no congestion, both cwnd's increase linearly
which results in an upward movement of the particle along
the line (see the movement of x0 to x1 in figure 3). In
the congested region with a certain number of congestion signals
fed to the senders, each sender independently generates
a random number to decide whether to increase or cut the
window. In our model, the particle randomly chooses one of
the moving directions which are combinations of increasing
or cutting of each window. For example, in figure 3, assuming
the particle is at (cwnd1, cwnd2) in a congested region,
after a round-trip time it can move to (cwnd1 + 1, cwnd2 + 1),
or (cwnd1/2, cwnd2/2), or (cwnd1 + 1, cwnd2/2), or (cwnd1/2, cwnd2 + 1).
Obviously the movement of the particle is Markovian
since the next movement only depends on the current location
of the particle. From this Markovian model, we can
draw several conclusions about the system. First, the desired
operating point (see figure 3) is a recurrent point. This
is because from any starting point, there exists at least one
convergent path to the desired point (e.g.,
along the dotted line in figure 3) with positive
probability. Note that, in our model, there is a positive
probability for the cwnd to grow to infinity, but this does
not happen in the real system because we incorporated a
forced-cut mechanism in the algorithm.
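To make the random-walk model concrete, the following Monte Carlo sketch (ours, not taken from the paper) simulates the two competing senders in the simple equal-pipe setting analysed further below. The cut probability 1 - (1 - 1/n)**n per time unit is our modelling assumption: each of the n congestion signals is taken to trigger a halving independently with probability 1/n.

    import random

    def step(w1, w2, pipe, n, incr=1.0):
        # One time unit (2 RTTs) of the particle (w1, w2) = (cwnd1, cwnd2).
        # Assumption (ours): no congestion while w1 + w2 <= pipe, in which
        # case both windows grow additively; otherwise each sender
        # independently halves its window with probability 1 - (1 - 1/n)**n
        # and grows it otherwise.
        if w1 + w2 <= pipe:
            return w1 + incr, w2 + incr
        p_cut = 1.0 - (1.0 - 1.0 / n) ** n
        w1 = w1 / 2.0 if random.random() < p_cut else w1 + incr
        w2 = w2 / 2.0 if random.random() < p_cut else w2 + incr
        return w1, w2

    def simulate(rounds=100000, pipe=40.0, n=27, seed=1):
        random.seed(seed)
        w1 = w2 = 1.0
        total1 = total2 = 0.0
        for _ in range(rounds):
            w1, w2 = step(w1, w2, pipe, n)
            total1 += w1
            total2 += w2
        return total1 / rounds, total2 / rounds

    avg1, avg2 = simulate()
    # The two long-run averages come out close to each other, reflecting the
    # symmetry argument below (the roles of the two senders are interchangeable).
    print(avg1, avg2)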
Secondly, the average cwnd's of the two senders are the
same. This is obvious because the two senders get the same
congestion signals and react randomly but identically and
independently, that is, the roles of the two senders are inter-
changeable. In other words, if we switch the x and y axes,
the moving particle follows the same stochastic process, and
the marginal distributions along the axes are the same which
gives the same mean value.
Finally, most of the probability mass would be focused
on the general area of the desired operating point in figure
3. To illustrate the idea, we consider a simple case where all
of the n links have the same pipe size pipe. Then the plane
is divided into two regions: a non-congested region with
W1 + W2 <= pipe, and the rest a congested region with n
congestion signals arriving at the sender upon each packet
loss. Since in the RLA the losses within two RTTs are
grouped into one congestion signal, we consider a discrete-time
version of the system with the time unit being two
round-trip times, i.e., Delta t = 2 RTT. Denote the cwnd of the
k-th multicast session by W_k, k = 1, 2. If there is no congestion,
each W_k increases additively; under congestion with n
congestion signals, each W_k is cut to W_k/2 with some probability
and increased otherwise. W1 and W2 control the
movement of the particle along the x and y axes, respectively.
The average drift along the x axis over one time unit
therefore takes one value if W1 + W2 <= pipe and another
if W1 + W2 > pipe. The average drift along the y axis is
symmetric and can be obtained by replacing W1 with W2
in the above expression. The drift diagram for this setting
is drawn in figure 4; the drift is scaled down by a
factor of 5 to make the picture clear.
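The drift field itself can be tabulated as in the sketch below (our reconstruction, not the paper's computation), assuming an additive increase of 2 packets per time unit and, in the congested region, a halving probability of 1 - (1 - 1/n)**n per time unit; the drift along the y axis is obtained by swapping W1 and W2.

    def drift_x(w1, w2, pipe, n, incr=2.0):
        # Average drift of W1 over one time unit (2 RTTs) under our assumptions:
        # additive increase of `incr` packets while W1 + W2 <= pipe, otherwise
        # a halving with probability 1 - (1 - 1/n)**n and an increase otherwise.
        if w1 + w2 <= pipe:
            return incr
        p_cut = 1.0 - (1.0 - 1.0 / n) ** n
        return p_cut * (-w1 / 2.0) + (1.0 - p_cut) * incr

    def drift_field(pipe=40.0, n=27):
        # Tabulate the drift vector on a grid of (W1, W2) points; the y component
        # is symmetric to the x component with the roles of W1 and W2 swapped.
        for w1 in range(5, 45, 5):
            for w2 in range(5, 45, 5):
                dx = drift_x(w1, w2, pipe, n)
                dy = drift_x(w2, w1, pipe, n)
                print(w1, w2, round(dx, 2), round(dy, 2))

    drift_field()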
Figure 4: Average drift diagram of two competing cwnd's (axes: average drift of cwnd1 and of cwnd2; the desired operating point is marked).
The drift diagram shows that the particle controlled by
the two congestion window sizes along the two axes has a
trend to move towards the desired operating point. Figure
5 is the density plot of the occurrence of the point
(cwnd1, cwnd2) during one simulation run; 11 the higher the
number of occurrences, the darker the area. It shows that
most of the probability mass is in an area centered around
the desired operating point which is (20, 20) in this case.
Figure 5: Density plot of the occurrence of (cwnd1, cwnd2).
5 Performance Evaluation
A version of the RLA is implemented in the Network Simulator
(NS) for simulation purposes to test the RLA performance
under various network topologies. Here we present some
of the simulation results in a four-level tertiary tree network
topology (see figure 6), where the links and nodes are labeled
with the first number index indicating their level and the
second indicating an order in each level.
We describe most of the simulation parameters used in
the simulations shown below. In figure 6, all senders (RLA
or TCP) are located at the root node S, and all receivers at the
leaf nodes R1 through R27. The nodes in between are gateways;
they could be either drop-tail or RED type. All nodes have
a buffer of size 20 packets. In the case of RED gateways,
11 The simulation setup consists of two multicast sessions with 27
receivers in each in a topology of the form shown in figure 1. There is
one TCP session from the sender node to each of the receiver nodes.
All receivers have the same capability. Each path has a delay-bandwidth
product of 60, shared by 2 multicast and 1 TCP sessions. Therefore,
each session is supposed to get an average cwnd of 20.
Figure 6: Four-level tertiary tree.
the minimum threshold is 5 and the maximum threshold is
15. (Other parameters are the default values used in the
standard NS2.0 RED gateway). The one-way propagation
delays of the first three level links are all 5 ms, and those
of the last level links are 100 ms. We tested the situations
with bottleneck links at different levels to study the effect
of independent or correlated losses. All the non-bottleneck
link speeds are set to 100 Mbps. Data packet size is set
to 1000 bytes. All simulations have 27 receivers (except for
the case with different round-trip times), and all receivers
are troubled receivers. In the simulation results presented
here, rexmit thresh is set to 0 (i.e., all retransmissions are
multicasted). All simulations are run for 3000 seconds and
statistics are collected after the first 100 seconds. The simulation
results are briefly summarized in the following.
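For convenience, the parameters just listed can be collected in one place as below; the dictionary keys are our own names and this is only a summary of the stated setup, not an NS script.

    # Summary of the simulation parameters described above (key names are ours).
    SIMULATION_SETUP = {
        "topology": "four-level tertiary tree (figure 6)",
        "gateway_types": ["drop-tail", "RED"],
        "buffer_size_packets": 20,
        "red_min_threshold": 5,
        "red_max_threshold": 15,
        "one_way_delay_ms": {"levels_1_to_3": 5, "level_4": 100},
        "non_bottleneck_link_speed_mbps": 100,
        "data_packet_size_bytes": 1000,
        "num_receivers": 27,            # all of them troubled receivers
        "rexmit_thresh": 0,             # all retransmissions are multicast
        "simulation_time_s": 3000,
        "warmup_discarded_s": 100,
    }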
5.1 Multicast Sharing with TCP
The results for drop-tail gateways are shown in figure 7 and
figure 9 for RED gateways. There are 5 cases with different
soft bottleneck locations. The second row (most congested
links) in the figures indicates the soft bottleneck location.
The corresponding link speeds are set so that the per-connection
bandwidth share on the soft bottleneck, minimized over the receivers i,
is a fixed number of packets per second (recall m_i is the number of
background TCP connections between the sender and receiver i). We list the
performance of the RLA, including the average throughput
in packets per second, average congestion window size, average
round-trip time (for those packets correctly received
without retransmissions), the number of congestion signals
the multicast sender detected from all receivers, the number
of window cuts and the number of forced window cuts, over
the entire simulation period (after the first 100 seconds).
We also list the worst and the best case TCP performance
(WTCP and BTCP rows in the figures) among the competing
flows.
As we can see from figure 7, the RLA achieves reasonable
fairness with TCP even with drop-tail gateways. Comparing
cases 1, 2 and 3, we can see that a higher correlation
among the packet losses results in a larger average window
size and a higher throughput. This agrees with our Lemma
in section 4.2. The throughputs of the RLA and TCP connections
in all cases satisfy the essential fairness requirement
with In fact, the bounds are quite loose
for these cases. The actual performance of the RLA algorithm
in most cases is much more "reasonable" than the
bounds indicated, in the sense that in most cases the RLA
can achieve a tighter bounded fairness. With the simulation
setup, the measured essential fairness has bounds
these cases, which is very reasonable performance
and acceptable for many applications.
Figure 7: Simulation results with drop-tail gateways (for each of the 5 cases: the most congested links, and the throughput in pkt/sec, average cwnd, RTT in sec, number of congestion signals, number of window cuts, and number of forced cuts of the RLA, together with the worst-case (WTCP) and best-case (BTCP) competing TCP throughput, cwnd, and RTT).
We can also see from figure 7 that the number of window
cuts taken by the RLA sender is roughly 1/n of the congestion
signals the multicast sender detected, as desired. In figure
8, we consider the congestion signals from each receiver
separately and list the worst, best and average number of
congestion signals the sender detected from each of the receivers
on the links with the same level of congestion. The
number is over the entire simulation period (2900 seconds).
We also list the results for the competing TCP connections.
It demonstrates that the TCP sender and the RLA sender
see roughly the same number of congestion signals on each
branch on average. Therefore, they see the same congestion
frequency, as we argued in section 3.1. Note that the
discrepancies between the RLA and TCP congestion frequencies
are larger in cases 4 and 5. This is because their
congestion window sizes are very different (refer to figure
7 for window sizes). In these cases, a larger window likely
incurs more losses. Although it breaks the assumption of
equal congestion frequencies, it creates a desired balance:
the larger the window, the more losses and then the window
is more likely to be reduced to half and vice versa. This
balance actually helps to achieve a tighter bounded fairness
as we observed in the simulations.
Figure 8: Statistics of the number of congestion signals (worst, best, and average number of congestion signals the sender detected per receiver, for the RLA and the competing TCP connections, grouped by branches with the same level of congestion).
Figure
9 shows the corresponding results for the RED
gateways. All simulation setups are the same except that
the gateway type is changed to RED, and we do not use
random overhead in these simulations because RED gateways
eliminate the phase effect. The results show that with
RED gateways, the fairness between multicast and TCP is
closer to absolute, especially in case 1. This is expected as
suggested by the bounds derived in section 4 and is also intuitive
because RED gateways are designed to enforce fairness.
Figure 9: Simulation results with RED gateways (same format as figure 7).
5.2 Multiple Multicast Sessions
To test the multicast fairness property of the RLA algo-
rithm, we have simulated the above scenarios with two overlapping
multicast sessions from the sender to the same re-
ceivers. In all cases, the two multicast sessions share band-width
almost equally and have roughly the same average
window size. In particular, in the topology of case 3 mentioned
above, the two multicast senders achieve throughputs
of 65.1 and 65.9 pkt/sec respectively, and average window
sizes of 19.9 and 20.1 packets respectively.
5.3 Different Round-Trip Times
This paper has focused on the restricted topology with equal
round-trip times where fairness is meaningfully defined. But,
in reality, most multicast sessions comprise receivers located
at different distances from the sender. These cases have to
be addressed properly in order for an algorithm to be ac-
cepted. We have a generalized version of the RLA algorithm
presented in section 3 to work for the cases with different
round-trip times. The basic idea is to set pthresh for each
troubled receiver as a function of its round-trip time, normalized
over the num_trouble_rcvr troubled receivers. In our experiment,
we are using a function of the round-trip time chosen because it has
been shown that, for a TCP-like window adjustment policy,
the average throughput is inversely proportional to a power of the
round-trip time when there are no queueing delays [5, 10].
Note that in the case of equal round-trip times, the above
pthresh is the same as in the original RLA. In the case of
different round-trip times, the receiver with a smaller round-trip
time has a much smaller pthresh, that is, a much larger
fraction of the congestion signals is ignored.
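A minimal sketch of this per-receiver weighting is given below; the functional form RTT**a with a tunable exponent a and the normalization to a unit sum are our assumptions for illustration (the exact function used in the experiments is not restated here), but the qualitative behaviour matches the text: receivers with smaller round-trip times obtain a much smaller pthresh.

    def pthresh(rtts, a=2.0):
        # Sketch of the generalized per-receiver threshold (our assumptions):
        # weight each troubled receiver by RTT**a and normalize the weights to
        # sum to 1, so that equal round-trip times give 1/num_trouble_rcvr to
        # every receiver, while a receiver with a smaller round-trip time gets
        # a much smaller pthresh (more of its congestion signals are ignored).
        weights = [r ** a for r in rtts]
        total = sum(weights)
        return [w / total for w in weights]

    # One nearby receiver (10 ms) and two distant ones (210 ms each).
    print(pthresh([0.010, 0.210, 0.210]))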
In the case of different round-trip times, we are not able
to provide any theoretical proofs of bounded fairness. But
our initial experimental results show the generalized algorithm
is promising in providing a reasonable share of band-width
among multicast and TCP traffic. Here we present a
set of simulation results with the same topology as in the
above simulations but adding the nodes G31 through G39
also as receivers which are of significantly different round-trip
times from the leaf nodes since the level four links have
a one-way propagation delay of 100 ms. Here we show simulation
results for two cases with the bottlenecks at level 2
links or level 3 links respectively. Both cases have a total
of 36 receivers, all troubled receivers. The results are summarized
in figure 10 and they show a reasonable share of
bandwidth between the multicast and TCP traffic.
Figure 10: Results with different round-trip times (for each case: the most congested links, and the throughput in pkt/sec, average cwnd, RTT in sec, number of congestion signals, window cuts, and forced cuts of the RLA, together with the worst and best competing TCP results).
6 Conclusions and Future work
In this paper, we introduced a quantitative definition for
essential fairness between multicast and unicast traffic. We
also proposed a random listening algorithm (RLA) to achieve
essential fairness for multicast and TCP traffic over the Internet
with drop-tail or RED gateways. RLA is simple and
achieves bounded fairness without requiring locating the soft
bottleneck links. Although our RLA is based on the TCP
congestion control mechanism, it is worth noting that the
idea of "random listening" can be used in conjunction with
other forms of congestion control mechanism, such as rate-based
control. The key idea is to randomly react to the congestion
signals from all receivers and to achieve a reasonable
reaction to congestion on the average over a long run. There
are many interesting possibilities which are worth exploring
in this direction.
Due to space limitations, many details of the algorithm
and simulation results are not shown in this paper. Our on-going
work is to carry out more and larger scale simulations
and to refine the algorithm based on the experience gained
from the simulations.
Acknowledgement
This work was inspired by a summer project the first author
worked on at Lucent Bell Labs. She would like to thank Dr.
Zheng Wang for the basic inspiration. Many thanks are also
due to Dr. Sanjoy Paul, Dr. Ramachandran Ramjee, Dr.
Jamal Golestani, all of Bell Labs, for useful discussions.
--R
Notes on FEC supported congestion control for one to many reliable multicast.
Analysis of the increase and decrease algorithms for congestion avoidance in computer networks.
A congestion control mechanism for reliable multicast.
On traffic phase effects in packet-switched gateways.
Random early detection gateways for congestion avoidance.
A congestion control architecture for bulk data transfer.
Congestion avoidance and control.
The performance of TCP/IP for networks with high bandwidth-delay products and random loss.
TCP selective acknowledgement options.
A loss tolerant rate controller for reliable multicast.
The stationary behavior of ideal TCP congestion avoidance.
The direct adjustment algorithm.
UCB/LBNL/VINT.
One to many reliable bulk-data transfer in the MBone.
Achieving bounded fairness for multicast and TCP traffic in the Internet.
Observations on the dynamics of a congestion control algorithm: the effects of two-way traffic.
| internet;phase effect;RED and drop-tail gateways;flow and congestion control;multicast
286191 | Efficient Distributed Detection of Conjunctions of Local Predicates. | AbstractGlobal predicate detection is a fundamental problem in distributed systems and finds applications in many domains such as testing and debugging distributed programs. This paper presents an efficient distributed algorithm to detect conjunctive form global predicates in distributed systems. The algorithm detects the first consistent global state that satisfies the predicate even if the predicate is unstable. Unlike previously proposed run-time predicate detection algorithms, our algorithm does not require exchange of control messages during the normal computation. All the necessary information to detect predicates is piggybacked on computation messages of application programs. The algorithm is distributed because the predicate detection efforts as well as the necessary information are equally distributed among the processes. We prove the correctness of the algorithm and compare its performance with respect to message, storage, and computational complexities with that of the previously proposed run-time predicate detection algorithms. | Introduction
Development of distributed applications requires the ability to analyze their behavior
at run time whether to debug or control the execution. In particular, it is sometimes
essential to know if a property is satisfied (or not) by a distributed computation.
Properties of the computation, which specify desired (or undesired) evolutions of the
program's execution state, are described by means of predicates over local variables
of component processes.
A basic predicate refers to the program's execution state at a given time. These
predicates are divided into two classes called local predicates and global predicates.
A local predicate is a general boolean expression defined over the local state of a
single process, whereas a global predicate is a boolean expression involving variables
managed by several processes. Due to the asynchronous nature of a distributed
computation, it is impossible for a process to determine the total order in which
the events occurred in the physical time. Consequently, it is often impossible to determine
the global states through which a distributed computation passed through,
complicating the task of ascertaining if a global predicate became true during a
computation.
Basic predicates are used as building blocks to form more complex class of predicates
such as linked predicates [14], simple sequences [5, 9, 1], interval-constrained
sequences [1], regular patterns [4] or atomic sequences [8, 9]. The above class of properties
are useful in characterizing the evolution of the program's execution state,
and protocols exist for detecting these properties at run time by way of language
recognition techniques [2].
When the property (i.e., a combination of the basic properties) contains no
global predicate, the detection can be done locally without introducing any delays,
without defining a centralized process and without exchanging any control messages.
Control information is just piggybacked to the existing message of the application.
However, if the property refers at least to one global predicate, then all possible
observations of the computation must be considered. In other words, the detection
of the property requires the construction and the traversal of the lattice of consistent
global states representing all observations of the computation. When the property
reduces to one global predicate, the construction of the lattice can be avoided in
some cases. If the property is expressed as a disjunction of local predicates, then
obviously no cooperation between processes is needed in order to detect the property
during a computation. A form of global predicate, namely, the conjunction of local
predicates, has been the focus of research [5, 6, 7, 12, 17] during the recent years. In
such predicates, the number of global states of interest in the lattice is considerably
reduced because all global states that includes a local state where the local predicate
is false need not be examined.
Previous Work
The problem of global predicate detection has attracted considerable attention lately
and a number of global predicate detection algorithms have been proposed in the
recent past. In the centralized algorithm of Cooper and Marzullo [3], every process
reports each of its local states to a process, which builds a lattice of the global
computation and checks if a state in the computation satisfies the global predicate.
The power of this algorithm lies in generality of the global predicates it can detect;
however, the algorithm has a very high overhead. If a computation has n processes
and if m is the maximum number of events in any process, then the lattice consists
of O(m^n) states in the worst case. Thus, the worst case time complexity of this
algorithm is O(m^n). The algorithm in [10] has linear space complexity; however, the
worst case time complexity is still linear in the number of states in the lattice.
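A back-of-the-envelope comparison (ours) of the two costs discussed in this section and the next:

    # Rough size comparison (ours): searching the whole lattice, O(m**n) states,
    # versus the conjunctive-predicate algorithms discussed below, O(m * n**2).
    n, m = 5, 100                    # 5 processes, at most 100 events each
    lattice_states = m ** n          # O(m**n): 10**10 global states to examine
    conjunctive_work = m * n ** 2    # O(m * n**2): 2500 units of work
    print(lattice_states, conjunctive_work)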
Since the detection of generalized global predicates by building and searching
the entire state space of a computation is utterly prohibitive, researchers have developed
faster, more efficient global predicate detection algorithms by restricting
themselves to special classes of predicates. For example, a form of global predicate
that is expressed as the conjunction of several local predicates has been the focus of
research [5, 6, 7, 12, 17] recently. Detection of such predicates can be done during
a replay of the computation [12, 17] or during the initial computation [5, 6, 7].This
paper focus on the second kind of solution which allows one to detect the predicate
even before the end of the computation. In the Garg-Waldecker centralized algorithm
to detect such predicates [6], a process gathers information about the local states
of the processes, builds only those global states that satisfy the global predicate,
and checks if a constructed global state is consistent. In the distributed algorithm of
Garg and Chase [7], a token is used that carries information about the latest global
consistent state (cut) such that the local predicates hold at all the respective local
states. The worst case time complexity of both these algorithms is O(mn^2), which
is linear in m and is much smaller than the worst case time complexity of the methods
that require searching the entire lattice. However, the price paid is that not
all properties can be expressed as the conjunction of local predicates.
Recently, Stoller and Schneider [16] proposed an algorithm that combines the
Garg-Waldecker approach [6] with any approach that constructs a lattice to detect
Possibly(\Phi). (A distributed computation satisfies Possibly(\Phi) iff predicate \Phi holds
in a state in the corresponding lattice.) This algorithm has the best features of both
the approaches - it can detect Possibly(\Phi) for any predicate \Phi and it detects a global
predicate expressed as the conjunction of local predicates in time linear in m (the
maximum number of events in any process).
Paper Objectives
This paper presents an efficient distributed algorithm to detect conjunctive form global
predicates in distributed systems. We prove the correctness of the algorithm and
compare its performance with that of the previous algorithms to detect conjunctive
form global predicates.
The rest of the paper is organized as follows: In the next section, we define system
model and introduce necessary definitions and notations. Section 3 presents the
first global predicate detection algorithm and gives a correctness proof. The second
algorithm is presented in Section 4. In Section 5, we compare the performance of
the proposed algorithms with the existing algorithms for detecting conjunctive form
global predicates. Finally, Section 6 contains the concluding remarks.
System Model, Definitions, and Notations
2.1 Distributed Computations
A distributed program consists of n sequential processes denoted by P_1, ..., P_n.
The concurrent execution of all the processes on a network of processors is called
a distributed computation. The processes do not share a global memory or a global
clock. Message passing is the only way for processes to communicate with one
another. The computation is asynchronous: each process evolves at its own speed
and messages are exchanged through communication channels, whose transmission
delays are finite but arbitrary. We assume that no messages are altered or spuriously
introduced. No assumption is made about the FIFO nature of the channels.
2.2 Events
2.2.1 Definition and Notations
Activity of each process is modeled by a sequence of events (i.e., executed action).
Three kinds of events are considered: internal, send, and receive events. Let e^x_i denote
the x-th event which occurs at process P_i. Figure 1 shows an example of a distributed
computation involving two processes P_1 and P_2. In this example, event e^2_1 is a send
event and event e^1_2 is the corresponding receive event. Event e^1_1 is an internal event.
Figure 1: A distributed computation.
For each process P_i, we define an additional internal event denoted as e^0_i that
occurred at process P_i at the beginning of the computation. So, during a given
computation, the execution of process P_i is characterized by a sequence of events
e^0_i, e^1_i, e^2_i, ... Furthermore, if the computation terminates, the last action
executed at process P_i (denoted as e^{m_i}_i) is followed by an imaginary internal
event denoted as e^{m_i+1}_i.
2.2.2 Causal Precedence Relation Between Events
The "happened-before" causal precedence relation of Lamport induces a partial
order on the events of a distributed computation. This transitive relation, denoted
by ≺, is defined as follows: e^x_i ≺ e^y_j iff
(1) i = j and x < y, or
(2) there exists a message m such that e^x_i is a send event (sending of m to P_j)
and e^y_j is a receive event (receiving of m from P_i), or
(3) there exists an event e^z_k such that e^x_i ≺ e^z_k and e^z_k ≺ e^y_j.
This relation is extended to a reflexive relation denoted ⪯.
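For illustration, the relation can be computed mechanically from this definition; the sketch below (ours, in Python) builds it for a small computation by transitive closure. The event naming and the example inputs are illustrative assumptions.

    from itertools import product

    def happened_before(events_per_process, messages):
        # Build the happened-before relation as a set of ordered event pairs.
        # Events are named (process, index); events_per_process[p] is the index
        # of the last event of process p (indices start at 0 with the initial
        # event); messages is a list of (send_event, receive_event) pairs.
        # Direct, inefficient transitive closure of the definition, for
        # illustration only.
        events = {(p, x) for p, last in events_per_process.items()
                  for x in range(last + 1)}
        hb = set(messages)                    # a send precedes its receive
        for (p, x) in events:                 # successive events of one process
            if (p, x + 1) in events:
                hb.add(((p, x), (p, x + 1)))
        changed = True                        # transitive closure
        while changed:
            changed = False
            for (a, b), (c, d) in product(list(hb), repeat=2):
                if b == c and (a, d) not in hb:
                    hb.add((a, d))
                    changed = True
        return hb

    # Two processes as in figure 1: e^2_1 is a send and e^1_2 the matching receive.
    hb = happened_before({1: 3, 2: 2}, [((1, 2), (2, 1))])
    print(((1, 1), (2, 2)) in hb)   # True, via e^1_1 -> e^2_1 -> e^1_2 -> e^2_2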
2.3 Local states
2.3.1 Definition and Notations
At a given time, the local state of a process P i is defined by the values of the
local variables managed by this process. Although occurrence of an event does not
necessarily cause a change of the local state, we identify the local state of a process
at a given time with regard to the last occurrence of an event at this process. We
use σ^x_i to denote the local state of P_i during the period between event e^x_i and
event e^{x+1}_i. The local state σ^0_i is called the initial state of process P_i.
Figure 2: Local states of processes.
2.3.2 Causal Precedence Relation Between Local States
The definition of the causal precedence relation between states (denoted by →) is
based on the happened-before relation between events: σ^x_i → σ^y_j iff
e^{x+1}_i ⪯ e^y_j, i.e., the event that terminates σ^x_i causally precedes, or is,
the last event of σ^y_j.
Two local states σ^x_i and σ^y_j are said to be concurrent if there is no causal
dependency between them (i.e., neither σ^x_i → σ^y_j nor σ^y_j → σ^x_i).
A set of local states is consistent if any pair of its elements are concurrent. In the
distributed computation shown in Figure 2, {σ^0_1, σ^0_2} and {σ^2_1, σ^4_2} are
examples of consistent sets of local states.
2.4 Intervals
2.4.1 Definition and Notations
Since causal relations among local states of different processes are caused by send and
receive events, we introduce the notion of intervals to identify concurrent sequences
of states of a computation. An interval is defined to be a segment of time on a
process that begins with a send or receive event (called a communication event) and
ends with the next send or receive event. Thus, a process execution can be viewed
as a consecutive sequence of intervals.
In order to formally define intervals, we first introduce a new notation to identify
communication events. We use ε^x_i to denote the x-th send or receive event at P_i.
Thus, for each ε^x_i, there exists exactly one e^y_i that denotes the same event.
Furthermore, the imaginary event e^0_i is renamed as ε^0_i. If the computation
terminates, the imaginary event e^{m_i+1}_i at process P_i is renamed as ε^{l_i+1}_i
(l_i is the number of communication events that occurred at process P_i).
The x-th interval of process P_i, denoted by I^x_i, is the segment of the computation
that begins at ε^x_i and ends at ε^{x+1}_i. Thus, the first interval at P_i is denoted
by I^0_i. If the computation terminates, the last interval of process P_i is identified
by I^{l_i}_i.
Figure 3: The corresponding set of intervals.
We say that interval I^x_i contains the local state σ^y_i (or σ^y_i is contained in
I^x_i) iff the following property holds: (ε^x_i ⪯ e^y_i) and (e^y_i ≺ ε^{x+1}_i).
This relation is denoted by σ^y_i ∈ I^x_i. If this relation does not hold, it is denoted
by σ^y_i ∉ I^x_i. By definition, any interval contains at least one local state.
2.4.2 Causal Precedence Relation Between Intervals
The relation that expresses causal dependencies among intervals is denoted as !.
This relation induces a partial order on the intervals of distributed computation and
is defined as follows:
A set of intervals is consistent if for any pair of intervals in the set, say ' x
2.5 Global States
A global state (or cut) is a collection of n local states containing exactly one local
state from each process P_i. A global state is denoted by {σ^{x_1}_1, ..., σ^{x_n}_n}.
If a global state {σ^{x_1}_1, ..., σ^{x_n}_n} is consistent, it is identified by
Σ(x_1, ..., x_n).
The set of all consistent global states of a distributed computation forms a lattice
whose minimal element is the initial global state Σ(0, 0, ..., 0). There is an edge
from Σ(x_1, ..., x_i, ..., x_n) to Σ(x_1, ..., x_i + 1, ..., x_n) if the distributed
computation can reach the latter from the former when process P_i executes its next
event e^{x_i+1}_i. Each path of the lattice starting at the minimal element corresponds to
a possible observation of the distributed computation. Each observation is identified
by a sequence of events where all events of the computation appear in an order
consistent with the "happened-before" relation of Lamport. The maximal element
Σ(m_1, ..., m_n) is called the final global state and exists only if all the processes
of the distributed computation have terminated.
Given a computation and a predicate on a global state \Phi, we can use the two
modal operators proposed by Cooper and Marzullo [3] to obtain two different pro-
perties, namely, Possibly(\Phi) and Definitely(\Phi). A distributed computation satisfies
if and only if the lattice has a consistent global state verifying the
predicate \Phi, whereas Definitely(\Phi) is satisfied by the computation if and only if
each observation (i.e., each path in the lattice) passes through a consistent global
state verifying \Phi. In this paper, we focus on the class of global predicates formed
as the conjunction of local predicates and we consider only the first satisfaction
rule: Possibly(\Phi). This rule is particularly attractive to test and debug distributed
executions.
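For contrast with the algorithms developed in section 3, the sketch below (ours) checks Possibly(\Phi) by brute force, enumerating all global states and keeping only the consistent ones; it is exponential in the number of processes, which is precisely the cost the rest of the paper avoids for conjunctions of local predicates. The input conventions are assumptions for illustration.

    from itertools import product

    def possibly(local_states, concurrent, phi):
        # local_states[i]: list of the local states of process i (any
        # representation); concurrent(s, t): True iff two states of different
        # processes are concurrent; phi(gs): the global predicate on a tuple gs.
        # Brute force over the whole lattice of global states.
        for gs in product(*local_states):
            pairs_ok = all(concurrent(gs[i], gs[j])
                           for i in range(len(gs))
                           for j in range(i + 1, len(gs)))
            if pairs_ok and phi(gs):
                return True
        return False

For a conjunction of local predicates, phi reduces to checking each component's local predicate separately, which is the structure the algorithms of section 3 exploit to avoid the enumeration.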
2.6 Conjunctions of Local Predicates
A local predicate defined over the local state of process P_i is denoted as L_i. The
notation σ^x_i |= L_i indicates that the local predicate L_i is satisfied when process
P_i is in the local state σ^x_i. Due to its definition, a local predicate L_i can be
evaluated by process P_i at any time without communicating with any other process.
We extend the meaning of the symbol |= to intervals as follows: I^x_i |= L_i iff there
exists a local state σ^y_i such that (σ^y_i ∈ I^x_i) and (σ^y_i |= L_i).
Let \Phi denote a conjunction of p local predicates. Without loss of generality, we
assume that the p processes involved in the conjunction \Phi are P_1, ..., P_p. In this
paper, we write either \Phi or L_1 ∧ L_2 ∧ ... ∧ L_p to denote the conjunction.
A set of p local states {σ^{x_1}_1, ..., σ^{x_p}_p} is called a solution if and only if:
(1) for every i (1 <= i <= p), σ^{x_i}_i |= L_i, and
(2) {σ^{x_1}_1, ..., σ^{x_p}_p} is a consistent set.
A global state Σ(x_1, ..., x_n) is called a complete solution if this set of local
states includes a solution.
By definition, Possibly(\Phi) is verified if there exists a complete solution. A consistent
set of local states containing less than n local states may be completed to form
a consistent global state (i.e., a consistent set of n elements), and thus, a solution
may be extensible to one or more complete solutions. The goal of a detection algorithm
is not to calculate the whole set of complete solutions but only to determine
a solution. This approach is not restrictive. In order to deal with complete solutions
rather than with solutions, a programmer can simply add to the conjunction,
local predicates L are true in any local state. Consequently,
we will no longer speak about complete solution.
Due to the link between local states and intervals, the following definition of a
solution is obviously consistent with the first one.
A set of p local states f oe y 1
p g is a solution iff there exists a set of
p g is a consistent set.
Let S denote the set of all solutions. If S is not empty, the first solution is the
unique element of S denoted by f oe f 1
p g such that every element
p g of S satisfies the following property:
As the property to detect is expressed as a conjunction of local predicates, this
particular solution, if it exists, is well defined in the computation. The set of intervals
that includes this solution is also well defined. We denote this set of intervals
and we say that this set of intervals is the first one which verifies
\Phi.
3 Detection Algorithms for Conjunction of Local Predicate
3.1
Overview
As mentioned in the previous section, Possibly(\Phi) is verified by detecting a set of
concurrent intervals, each of which verifies its local predicate. We have developed
the following two approaches to resolve this problem:
1. In the first approach, processes always keep track of sets of concurrent intervals.
For each such set, each process checks whether its interval in the set verifies
its local predicate.
2. In the second approach, a process always keeps track of a set of intervals,
each of which verifies its local predicate. For each such set, the process checks
whether all intervals in the set are concurrent.
Thus, algorithms designed for those complementary approaches are dual of each
other. This section described the algorithm corresponding to the first approach in
detail, including its correctness proof. The next section describes an algorithm corresponding
to the second approach.
3.2 The First Algorithm
3.2.1 Dependency Vectors
To identify a set of p concurrent intervals, the algorithm keeps track of causal dependencies
among intervals by using a vector clock mechanism similar to that described
in [13]. Each process P i (1 i n) maintains an integer vector D i [1.p], called the
dependency vector. Since causal relations between two intervals at different processes
are created by communication events (and their transitive relation), values in D i are
advanced only when a communication event takes place at P i . We use D x
i to denote
the value of vector D i when process P i is in interval ' x
i . This value is computed at
the time " x
i is executed at process P i .
Each process P i executes the following protocol:
1. When process P_i is in interval I^0_i, all the components of vector D_i are zero.
2. When P_i (1 <= i <= p) executes a send event, D_i is advanced by setting
D_i[i] := D_i[i] + 1. The message carries the updated D_i value.
3. When P_i executes a receive event, where the message contains D_m, D_i is
advanced by setting D_i[k] := max(D_i[k], D_m[k]) for 1 <= k <= p. Moreover,
the dependency vector is further advanced by setting D_i[i] := D_i[i] + 1 if P_i
belongs to the set of p processes directly implicated in the conjunction (i.e., i <= p).
(A sketch of these update rules is given after the properties below.)
When a process P_i (1 <= i <= p) is in interval I^x_i, the following properties are
observed [15]:
1. D^x_i[i] = x: it represents the number of intervals at P_i that precede interval I^x_i.
2. D^x_i[j] (j != i) represents the number of intervals at process P_j that causally
precede the interval I^x_i.
3. The set of intervals {I^{D^x_i[1]}_1, ..., I^{D^x_i[p]}_p} is consistent.
4. None of the intervals I^y_j such that y < D^x_i[j] (i.e., intervals at P_j that
causally precede I^{D^x_i[j]}_j) can be concurrent with I^x_i. Therefore, none of them
can form a set of intervals with I^x_i that verifies \Phi.
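As announced above, here is a sketch (ours) of the dependency-vector maintenance; process identities are 1-based, p is the number of processes involved in the conjunction, and the class and method names are illustrative only.

    class DependencyVector:
        # Dependency vector D_i kept by process P_i (ids are 1-based as in the
        # text); it is advanced only at communication events, and the own entry
        # D_i[i] is advanced only when i <= p, i.e. when P_i is one of the p
        # processes involved in the conjunction.
        def __init__(self, i, p):
            self.i = i
            self.p = p
            self.d = [0] * p                # D_i[1..p], stored 0-based

        def on_send(self):
            if self.i <= self.p:
                self.d[self.i - 1] += 1     # D_i[i] := D_i[i] + 1
            return list(self.d)             # value piggybacked on the message

        def on_receive(self, d_msg):
            # componentwise maximum with the received vector
            self.d = [max(a, b) for a, b in zip(self.d, d_msg)]
            if self.i <= self.p:
                self.d[self.i - 1] += 1     # a new interval starts at P_i

    # P_1 sends a message that P_2 receives; with p = 2 the receiver ends up
    # with D_2 = [1, 1], identifying a consistent pair of intervals (property 3).
    p1, p2 = DependencyVector(1, 2), DependencyVector(2, 2)
    p2.on_receive(p1.on_send())
    print(p2.d)    # [1, 1]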
Let D a and D b be two dependency vector clock values. We use the following
notations.
• D_a = D_b iff for all i, D_a[i] = D_b[i]
• D_a <= D_b iff for all i, D_a[i] <= D_b[i]
• D_a < D_b iff (D_a <= D_b) and (D_a != D_b)
The following result holds:
3.2.2 Logs
Each process maintains a log, denoted by Log i , that is a queue
of vector clock entries. Log i is used to store the value of the dependency vector
associated with the past local intervals at P i that may be a part of a solution (i.e.,
intervals that verify L i and have not been tested globally yet). Informations about
causal relations of a stored interval with intervals at other processes will be used in
the future when the stored interval will be examined.
When P i is in a local state oe y
contained in an interval ' x
, such that oe y
it enqueues vector clock value D x
i in Log i if this value has not already been stored.
Even if there exists more than one local state in the same interval ' x
i that verifies
the value D x
i needs to be logged only once since we are interested in intervals
instead of states.
3.2.3 Cuts
In addition to the vector clock, each process P i (1 i n) maintains an integer
vector C i [1.p], called a cut and a boolean vector B i [1.p]. Vector C i defines the first
consistent global state which could verify \Phi. In others words, all previous global
states dont satisfy \Phi. By definition, C denotes a set of p intervals that may be the
first solution. If some informations received by P i show that this set is certainly
not a solution, the cut is immediately updated to a new potential solution. At any
time, C i [j] denotes the number of interval of P j already discarded and indicate the
j of the first interval of P j not yet eliminated. If the conjunction is
satisfied during the computation, the cut C will evolve until it denotes the first
solution.
Let C x
denotes the value of C i [j] after the communication event " x
i has been
executed at P i . The value of C i remains unchanged in the interval. Each P i maintains
the values C i in such a way that none of the intervals that precede event " C i [j]
j at
can form a set of intervals that verifies \Phi. Therefore, each process P i (1 i p)
may discard any values D i in Log i such that D i [i] ! C i [i]. Each P i (1 i n) also
maintains the vector B i in such a way that B i [j] holds if the interval ' C i [j]
j at P j is
certain to verify its local predicate. Thus, if the system is not certain whether the
interval verifies its local predicate, B i [j] is set to false. To maintain this condition,
the cut C and the B vector must be exchanged among processes. When P i sends a
message, it includes vectors C_i and B_i in the message.
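The exchange of cuts and boolean vectors described above amounts to a componentwise merge; the following sketch (ours) captures the semantics described in this section and used by the Combine_Maxima function of section 3.4: the larger cut entry wins together with its flag, and when both cuts agree either process's knowledge of the local predicate suffices.

    def combine_maxima(c1, b1, c2, b2):
        # Componentwise merge of two (cut, boolean vector) pairs: keep the
        # larger cut entry together with its flag; when the entries are equal,
        # the flag is true as soon as either side knows the local predicate
        # held there.
        c, b = [], []
        for x1, f1, x2, f2 in zip(c1, b1, c2, b2):
            if x1 > x2:
                c.append(x1); b.append(f1)
            elif x2 > x1:
                c.append(x2); b.append(f2)
            else:
                c.append(x1); b.append(f1 or f2)
        return c, b

    # The second cut is ahead on entry 1 but knows nothing about it yet,
    # so the merged flag for that entry is False.
    print(combine_maxima([2, 1], [True, True], [3, 1], [False, True]))
    # ([3, 1], [False, True])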
3.3 Descriptions of the Algorithm
A formal description of the algorithm is given in Section 3.4. The algorithm consists
of the following three procedures that are executed at a process
ffl A procedure A that is executed each time local predicate L i associated to P i
ffl A procedure B that is executed when P i (1 i n) sends a message.
ffl A procedure C that is executed when P i (1 i n) receives a message.
In addition to vector clock D i , cut C i , boolean vector B i , and log Log i , each
process maintains a boolean variable not logged yet i . Variable
not logged yet i is true iff the vector clock value that is associated with the current
interval has not been logged in Log i . This variable helps avoid logging the same
vector clock value more than once in Log i .
A: When the local predicate L i becomes true :
Let oe y
i be the local state that satisfies the local predicate. If ' x
i is the interval that
includes state oe y
then the vector clock value D x
i , which is associated with interval
i , is logged in Log i if it has not been logged yet. To indicate that the vector clock
for this interval has already been logged in, process P i sets variable not logged yet i
to false.
Furthermore, if the current interval (denoted by I^x_i or I^{D^x_i[i]}_i) is also the
oldest interval of P_i not yet discarded (denoted by I^{C^x_i[i]}_i), B_i[i] is set to
true to indicate that I^{C^x_i[i]}_i verifies L_i.
B: When P_i sends a message:
Since it is the beginning of a new interval, a process P i (1 i p) advances the
vector clock by setting D i resets variable not logged yet i to true.
If the log is empty, none of the intervals that precedes the new interval can form
a set of intervals that verifies \Phi. In particular, this remark holds for the last interval
that ends just when the current execution of procedure B (i.e., the sending action)
occurs. Consequently, the last interval (and also all the intervals of P i that causally
precede this one) can be discarded by setting C i [i] to the current value of D i [i]. (At
this step of the computation, " D i [i]
i is the identity of the current send event).
Finally, P_i sends the message along with vectors D_i, C_i, and B_i.
C: When P_i receives a message from P_j that contains D_j, C_j, and B_j:
Based on the definition of vector clocks, it advances D i and resets variable
not logged yet i to true. From the definition of a cut, at any process P k (1 k p),
none of the intervals that precedes interval ' C i [k]
k or ' C j [k]
k can form a set of concurrent
intervals that verifies \Phi. Thus, C i is advanced to the componentwise maximum
of C i and C j . B i is updated so that it contains more up-to-date of the information
in B i and B j .
process P i deletes log values for intervals that precede ' C i [i]
since
these intervals do not belong to sets of concurrent intervals that verify \Phi.
After this operation, there are two possibilities:
ffl Case 1 Log i becomes empty i.e. Log i does not contain any interval that occurs
after ' C i [i]
i and before ' D i [i]
In this case, none of the intervals at P i represented
by ' y i
i such that y i ! D i [i] can form a set of concurrent intervals that verify
\Phi. The algorithm needs to consider only future intervals, denoted by ' z
i , such
that D i [i] z.
Since none of the intervals ' y k
k such that y k ! D i [k] at other processes P k
can form a set of concurrent intervals with such future intervals ' z
C i is advanced to D i . When process P i executes the receive action, it has no
informations about intervals ' D i [k]
p). Therefore, all components of
vector B i are set to false.
contains at least one entry that was logged after the occurrence
of event " C i [i]
the oldest such logged entry be D log
. From the properties
of vector D and the definition of a cut, at any process P k , 1kp, none of the
intervals preceding ' D log
k or ' C i [k]
k can form a set of concurrent intervals with
' D log
i that verifies \Phi. Thus, C i is advanced to the componentwise maximum
of D log
. Similar to Case 1, if the value C i [k] is modified (i.e., it takes
its value from D log
not certain whether P k 's local predicate held
in the interval ' D log
k . Thus, B i [k] is set to false. If the value C i [k] remains
unchanged, the value B i [k] will also remain unchanged.
Furthermore, since ' D log
verified its local predicate, B i [i] is set to true. At
this point, P i checks whether B i [k] is true for all k. If so, this indicates that
each interval in the concurrent set of intervals 1 f' C i [1]
verifies
its local predicate and thus, \Phi is verified.
3.4 A Formal Description of the Algorithm
(Footnote 1: we will prove in subsection 3.7 that a set of intervals numbered by the C_i values is always concurrent.)

Initialization procedure executed by any process P_i:
D_i := [0, ..., 0]; C_i := [0, ..., 0]; B_i := [false, ..., false];
if (i <= p) then
Create(Log_i); not_yet_logged_i := true;
endif

Procedure A, executed by process P_i (1 <= i <= p)
when the local predicate L_i becomes true:
if (not_yet_logged_i) then
Enqueue(Log_i, D_i); not_yet_logged_i := false;
if (D_i[i] = C_i[i]) then B_i[i] := true; endif
endif

Procedure B, executed by any process P_i
when it sends a message:
if (i <= p) then
D_i[i] := D_i[i] + 1; not_yet_logged_i := true;
if (Empty(Log_i)) then C_i[i] := D_i[i]; endif
endif
Append vectors D_i, C_i, and B_i to the message;
Send the message;

Procedure C, executed by any process P_i
when it receives a message from P_j:
Extract vectors D_j, C_j and B_j from the message;
(C_i, B_i) := Combine_Maxima((C_i, B_i), (C_j, B_j));
D_i[k] := max(D_i[k], D_j[k]) for 1 <= k <= p;
if (i <= p) then
D_i[i] := D_i[i] + 1; not_yet_logged_i := true;
while ((not (Empty(Log_i))) and (Head(Log_i)[i] < C_i[i])) do
/* Delete all those logged intervals that from the current */
/* knowledge do not lie on a solution. */
Dequeue(Log_i);
endwhile
if (Empty(Log_i)) then
/* Construct a solution that passes through the next local interval. */
C_i := D_i; B_i := [false, ..., false];
else
/* Construct a solution that passes through the logged interval. */
(C_i, B_i) := Combine_Maxima((C_i, B_i), (Head(Log_i), [false, ..., false]));
B_i[i] := true;
if (B_i[k] for all k, 1 <= k <= p) then declare that \Phi is detected; endif
endif
endif
Deliver the message;

Function Combine_Maxima ((C1, B1), (C2, B2))
B: vector [1..p] of boolean;
C: vector [1..p] of integers;
for k := 1 to p do
case
C1[k] > C2[k]: C[k] := C1[k]; B[k] := B1[k];
C1[k] < C2[k]: C[k] := C2[k]; B[k] := B2[k];
C1[k] = C2[k]: C[k] := C1[k]; B[k] := B1[k] or B2[k];
endcase
endfor
return (C, B);
3.5 A Simple Example
Since the algorithm is quite involved, we illustrate the operation of the algorithm
with the help of an example. In Figure 4, a local state contained in an interval is
represented by a grey area if it satisfies the associated local predicate. At different
steps of the computation, we indicate the values of the main variables used to detect
\Phi: the items with square brackets next to a process interval depict the contents of
vectors D and C, respectively. Those values remain unchanged during the entire
interval. The value of vector B after execution of a communication event is indicated
between round brackets.
Initial value of interval number at two processes is 0 and C vector is (0 0) at
both processes. When the local predicate holds in interval ' 0
enqueues D 1 vector into Log 1 . Process P 1 also set B 1 [1] to true because it is certain
that ' C 1 [1]
sends message m1, it increments D 1 [1] to 1 and
sends vectors B 1 , C 1 , and D 1 in the message.
When receives message m1, it increments D 2 [2] to 1 and then updates its
B, C, and D vectors. P 2 finds its Log empty and constructs a potential solution
using its D vector and stores it into its C vector. When the local predicate becomes
true in state oe 1
to Log 2 . As the variable not yet logged 2 is false when
process P 2 is in local state oe 2
2 , the vector clock D is not logged twice during the same
interval. When P 2 sends message m2, it increments D 2 [2] to 2 and sends vectors B 2 ,
in the message.
When P 1 receives message m2, it increments D 1 [1] to 2 and then updates its B,
C, and D vectors. After merging with the vectors received in the message, P 1 finds
that C_1[1] (= 1) > Head(Log_1)[1] (= 0) and discards this entry from Log_1. Since Log_1
is empty, P 1 constructs a potential solution using its D vector and stores it into its
C vector. When the local predicate becomes true in interval oe 2
to Log 1 . When P 1 sends message m3, it increments D 1 [1] to 3 and sends vectors B 1 ,
in the message.
In the meantime, local predicate holds in state oe 3
2 and consequently, P 2 logs
vector D 2 to Log 2 .
When receives message m3, it increments D 2 [2] to 3 and then updates its B,
C, and D vectors. After merging with the vectors received in the message, P 2 finds
that C_2[2] (= 2) > Head(Log_2)[2] (= 1) and discards this entry. Since the next entry
in Log 2 cannot be discarded, P 2 constructs a potential solution using Head(Log 2 )[2]
vector and stores it into its C and B vectors. The potential solution goes through
1 . The fact that this interval satisfies L 1 is known by process P 2 (B 2 [1] is
true). After P 2 sets B 2 [2] to true, it finds that all entries of vector B 2 are true and
declares the verification of the global predicate.
Figure 4: An Example to Illustrate Algorithm 1.
3.6 Extra messages
The algorithm is able to detect if a solution exists without adding extra-message
during the computation and without defining a centralized process. The algorithm
depends on the exchange of computation message between processes to detect the
predicate. As a consequence, not only the detection may be delayed, but also in
some cases the computation may terminate and the existing solution may go unde-
tected. For example, if the first solution is the set consisting of the p last intervals,
g, the algorithm will not detect it. To solve this problem, if a
solution has not been found when the computation terminates, messages containing
vector D, C, and B are exchanged between the p processes until the first solution
is found. To guarantee the existence of at least one solution, we assume that the
set of intervals f' l 1 +1
p g is always a solution. Extra messages are
exchanged only after the computation ends and the first solution has not been detected
yet. To reduce the overhead due to these extra messages between processes,
one can use a privilege (token) owned by only one process at a time. The token
circulates around a ring consisting of the p processes, disseminating the information
about the three vectors. Another solution consists of sending the token to a process
who may know the relevant information (i.e., a process P j such that B i [j] is false).
3.7 Correctness of the algorithm
be two vector timestamps such that the sets of intervals
represented by f' V 1 [1]
are both concur-
rent. Then, the set of intervals represented by f' V 3 [1]
p g is concurrent,
Proof: We show that a pair of intervals (' V 3 [i]
any combination of i and
concurrent. Renumbering the vectors V 1 and V 2 if
necessary, suppose V 3 There are two cases to consider:
1. This case is obvious because, from the assumption, ' V 1 [i]
are concurrent.
2. Suppose on the contrary that ' V 3 [i]
j are not concurrent.
There are two cases to consider:
In this case, "
i . This
contradicts the assumption that ' V 1 [j]
are concurrent.
By applying the same argument as the
case (a), this leads to a contradiction to the assumption that ' V 2 [i]
are concurrent.
are concurrent. 2
The following lemma guarantees that a cut C i always keeps track of a set of
concurrent intervals.
Lemma 2: At any process P i , at any given time, f' C i [1]
p g is a set of
concurrent intervals.
Proof: C i is updated only in one of the following three ways:
1. When a receive event is executed, by executing C:= D.
2. When a receive event is executed by taking maximum of C i and C j (the cut
contained in the message sent by process P j ), and then by taking maximum
of C i and D log
(The oldest value of the dependency vector still in the log).
3. When a send event occurs, entry C i [i] is set to D i [i].
Let update(C x
the update on cut C i at communication event " x
giving
the value C x
i . We define a partial order relation (represented by ";") on all updates
on all cuts C i (1 i p) as follows:
1. If " x
are two consecutive communication events that occur at P i (i.e.,
2. If there exists a message m such that " x
i is the sending of m to P j and " y
i is
the receiving of m from P i , then update(C x
Let SEQ be a topological sort of the partial order of update events. We prove the
lemma by induction on the number of updates in SEQ.
Induction Base: Since the set of intervals f' 0
p g is concurrent, initially the
lemma holds for any C i (i.e., after update(C 0
Induction Hypothesis: Assume that the lemma holds up to t applications of the
updates.
Induction Steps: Suppose st update occurs at process P i and let C x
the cut value after the st update operation denoted as update(C x
i is a receive event and C x
From the definition of vector clocks, for any vector clock value D, the set of
intervals f' D[1]
p g is concurrent. Thus, C x
represents a set of
concurrent intervals.
Case 2: " x
i is a receive event and C x
As " y
j is the corresponding send event, update(C y
Clearly, the
relation update(C
holds. Since from induction hypo-
thesis, both C
represent sets of concurrent intervals, max(C
represents a set of concurrent intervals (from Lemma 1). Since from the definition
of vector clocks, D log
(the oldest entry still in the log) represents a set of
concurrent intervals, C x
represents a set of concurrent intervals (from Lemma
1).
Case 3: " x
i is a send event, C x
i [i], and 8j such that j 6= i, C x
Suppose no receive event occurs at process P i before the send event " x
. All the
entries of vector D x
i and vector C x
i [i], are zero. Therefore,
i and the proof is the same as in Case 1.
Suppose now that " y
i is the last receive event who occurs before the sending
event " x
in the log has been discarded since this event occurred.
As the Log is empty, we can conclude that the log was also empty when this
receive event occurred and hence, C y
. As this event is the last receive
event that occurs before " x
i , we conclude that 8j such that j 6= i, D y
and C y
. Then, the proof is the same as in case
1.
The following lemma shows that if there is a solution, a cut C_i will not miss and
pass beyond the solution.
Lemma 3: Consider a particular cut (identified by an integer vector S) such that
p g is a set of concurrent intervals that verifies \Phi.
i denote any communication event. If, for all communication events " y
j such
that " y
Proof: Proof is by contradiction. Suppose there exists a communication event " x
such that :(C x
and for any communication event " y
j such that " y
i is the first event that advances C i beyond S.
There are two cases to consider:
1. C x
From hypothesis, C
holds. Therefore, entry C i [j] is modified during
execution of event " x
i . This event is necessarily a receive event (" z
k is the
corresponding send event).
Note that ' C x
i denote the same interval. S[j] ! D x
holds. So from the definition of dependency vectors, ' S[j]
i . However,
either (' C x
i . This
contradicts the hypothesis that f' S[1]
p g is a set of concurrent
intervals.
From the assumption, C
k S. Therefore, C x
log
holds. So from the definition of dependency vec-
tors, ' S[j]
. Because of the max operation, D log
so ' S[j]
. Then the proof is the same as in Case 1.
2. C x
i is either a send or a receive event.
From the algorithm, it is clear that this case occurs only if none of the
intervals that occurred between ' C
(including this) and ' D x
verifies
This contradicts the fact that ' S[i]
verifies L i since C
k [i]); D log
i is a receive event and
" z
k is the corresponding send event. From the assumption, C
and C z
k [i] S[i]. Assume that C
k [i]). Then C
S[i] and therefore C x
From the algorithm, it is clear that this case occurs only if none of the
intervals that occurred between ' C
(including this) and ' D log
verifies
This contradicts the fact that ' S[i]
verifies L i since C
(= D log
[i]).
The following lemma proves that the algorithm keeps making progress if it has
not encountered a solution.
Lemma 4: Suppose process P i has executed the algorithm at the x th communication
event " x
is in the interval ' x
that the set f' C x
does not verify \Phi. Then, there exists " y
j such that C x
.
Proof: There are two reasons for f' C x
p g not verifying \Phi:
1. ' C x
i does not verify
In this case, at " x
could not find an interval that verifies L i , and therefore,
set to x (i.e., the value of D x
[i]). At the next communication event
updates at least the i th entry of C i by setting C x+1
i [i] to D x+1
2. There exists at least one process P k (1 k p) such that ' C x
k does not
verify
In this case, P k will eventually advance C k [k] to a value greater than C x
(refer to Case 1). This new value computed when event " z
occurs will propagate
to other processes. Extra messages eventually exchanged at the end of the
computation guarantee that there will eventually be a communication event
j at a process P j such that " z
and C x
.
Finally, the following theorem shows that \Phi is verified in a computation iff the
algorithm detects a solution.
Theorem: [1] If there exists an interval ' x
i on a P i such that, during this interval,
holds for all k (1 k p), then f' C x
verifies \Phi.
Conversely, if f' C x
verifies \Phi for an event " x
on some processor
there exists a communication event " y
j such that for all k (1 k p),
holds.
Proof:
[1] Proof is by contradiction. Suppose ' C x
k does not verify L k for some k. We show
that as long as C i [k] is not changed, B i [k] is false. There are two cases to consider:
1.
i does not verify L i
is updated to D i [i] when communication events " x
occurs. If " x
i is a receive
i [i] is set to false at the same time and remains false during the interval
necessarily empty for the entire duration of interval ' C x
modified only when event " x+1
occurs.
i is a send event, the value of B i [i] is unchanged since the last receive event
or since the beginning of the computation if no receive event occurs at process
before the send event " x
i . In both cases, Log i remains empty during this
entire period and B i [i] remains false.
2. i 6= k:
updated C i [k] to C x
there existed a process P j that advanced
to C x
[k], and the value was propagated to P i . P j must have set B j [k]
to false and this information must have propagated to P i . This value was
propagated to P i without going through P k (else P k would have been advanced
to a value greater than C x
i [k]). It is easy to see that B x
i [k] is false: Since
k is the only process that can change B k [k] to true, P i will never see B
true together with C i
[2] Assume that f' C x
verifies \Phi. Message exchanges guarantee that
there will eventually be a communication event " y
j such that for all k, 1 k p,
. When process P k is in interval ' C x
[k] is set to true.
From Lemma 3, once a process P h sets C h [k] to C x
does not change this
value in the future. This implies that all the processes P h that are on the path of the
message exchange from " C x
k to " y
to C x
i [k] and B h [k] to true - none
of such processes P h sets B h [k] to false by advancing C h [k] beyond C x
information is eventually propagated to P j , and so B y
holds for all k, 1 ≤ k ≤ p.
4 The Second Algorithm
In the second algorithm, every process always keeps track of a set of intervals for
all the processes such that each of the intervals verifies its local predicate. For each
such set, the process checks whether all the intervals in the set are concurrent.
4.1 Overview of the Algorithm
4.1.1 Verified Intervals
In this algorithm, only the intervals that verify their associated local predicates are
of interest. We call such intervals verified intervals. A new
i is used to
identify the x th verified interval of process P i . Thus, for
i , there exists exactly
one ' y
i that denotes the same interval.
Figure 5: The corresponding set of verified intervals.
4.1.2 Dependency Vectors
As in the first algorithm, each process P i (1 ≤ i ≤ n) maintains a dependency vector D i to keep
track of the identity of the next verified interval that P i will
encounter. Even though P i does not know which interval it will be, it knows that
the next verified interval will be denoted
When P i has encountered the x th verified interval, denoted by
\Omega
detailed description of Log i is given later.) At
this moment, D i increments D i [i] by one to x to look for the
next verified
. Note that the existence of the verified
i , is not
guaranteed at this moment. The local predicate may not be satisfied anymore during
the computation.
In the first algorithm, vector D remains the same for the entire duration of an
interval. In the second algorithm, on the contrary, vector D may change once during
an interval if this interval is a verified interval.
In order to capture causal relation among verified intervals at different processes,
the following protocol, sketched in code just after the list, is executed on D i by a process P i (1 ≤ i ≤ n):
1. Initially, all the components of vector D i are zero.
2. When P i executes a send event, it sends D i along with the message.
3. When P i executes a receive event, where a message contains Dm , D i is advanced
by setting D i [k] := max(D i [k], Dm [k]) for 1 ≤ k ≤ p.
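A minimal Python sketch of this protocol is given below; the class and method names (DependencyVector, on_send, on_receive) are illustrative and not taken from the paper, and the vector is 0-indexed for convenience.

class DependencyVector:
    def __init__(self, p):
        # 1. Initially, all the components of D_i are zero.
        self.d = [0] * p

    def on_send(self):
        # 2. On a send event, a copy of D_i is attached to the message.
        return list(self.d)

    def on_receive(self, dm):
        # 3. On a receive event carrying Dm, advance D_i componentwise:
        #    D_i[k] := max(D_i[k], Dm[k]) for every k.
        self.d = [max(a, b) for a, b in zip(self.d, dm)]

Each process P i would hold one such vector and invoke on_send and on_receive from its communication events.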
Clearly, the following properties hold:
1. D i [i] represents the number of verified intervals at P i whose existence can
be confirmed by P i (i.e., P i has already passed through its (D i [i]) st verified
interval).
2. D i [j](j 6= i) represents the number of verified intervals that have occurred at
process P j and causally precede the current interval of P i .
3. The set of verified intervals
p g is not necessarily
consistent. Yet, by definition, if
exists, it satisfies the
associated local predicate.
4. None of the
j such that y ! D i [j] (i.e., verified intervals at P j that
causally
precede\Omega D i [j]
can be concurrent
. Therefore, none of them
can form a set of intervals
i that verifies \Phi.
4.1.3 Logs
Each process P i maintains a log, denoted by Log i , in the same manner as in the first
algorithm. When P i verifies its local predicate L i , it enqueues the current D i before
incrementing D i [i] by one.
When Log i is not empty, notation D log
i is used to denote the value of the vector
clock at the head of Log i . Necessarily, when the log is not empty, the existence of
\Omega D log
i has already been confirmed by P i .
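The logging rule just described can be sketched as follows; this is a hedged illustration whose function and parameter names are invented here rather than taken from the paper.

def on_local_predicate_true(D, log, i, not_yet_logged):
    # Sketch of Section 4.1.3: when L_i first holds in the current interval,
    # the current D_i is enqueued before D_i[i] is incremented by one.
    if not_yet_logged:
        log.append(list(D))      # the head of log is D_log, the oldest entry
        D[i] += 1
        not_yet_logged = False
    return not_yet_logged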
4.1.4 Cuts
Like the first algorithm, each process P i maintains an integer vector C i and a boolean
vector . The meaning of C i is similar to that of the first algorithm; that
is the next possible interval at P j that may be in a solution and none of the verified
intervals that
precedes\Omega C i [j]
j can be in a solution. Therefore, each process P i may
discard any values D i in Log i such that D i
The meaning of B i is also similar to that in the Algorithm 1. Each P i maintains
in such a way that B i [j] is true if process P i is certain that verified interval
j has been confirmed by P j . Furthermore, process P i is certain that none of the
verified
causally
precede\Omega C i [j]
. Thus, if P i is not certain
whether the verified interval
j has already been confirmed by P j , B i [j] is set to
false.
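To keep the different control structures of Sections 4.1.2-4.1.4 apart, the following Python sketch groups them into one per-process record; it is an illustration only, and the field names and the discard helper are assumptions made for readability rather than notation from the paper.

from collections import deque

class ControlState:
    # Per-process control data of the second algorithm (illustrative names).
    def __init__(self, p, i):
        self.p = p                  # number of processes over which Phi is defined
        self.i = i                  # index of this process (0-based in this sketch)
        self.D = [0] * p            # dependency vector: verified-interval counters (4.1.2)
        self.C = [0] * p            # cut: next possible interval of each process in a solution (4.1.4)
        self.B = [False] * p        # B[k] true once the interval designated by C[k] is confirmed (4.1.4)
        self.log = deque()          # logged dependency vectors, oldest entry (D_log) at the left (4.1.3)
        self.not_yet_logged = True  # true until D is logged in the current interval

    def discard_obsolete_log_entries(self):
        # Entries whose own component lies before C[i] designate verified
        # intervals that can no longer belong to a solution, so drop them.
        while self.log and self.log[0][self.i] < self.C[self.i]:
            self.log.popleft()

The discard rule reflects the remark above that logged entries which precede the interval designated by C i [i] are no longer potential components of a solution.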
4.2 Descriptions of the Algorithm
A formal description of the algorithm is given in Section 4.3. As the first algorithm,
the second algorithm consists of three procedures that are executed at a process P i .
Again, we assume that the set of intervals f' l 1 +1
p g is a solution.
Extra messages are exchanged after the computation ends only if the first solution
has not been discovered yet.
When the local predicate L i becomes true:
Let oe y
i be a local state that satisfies the local predicate. P i has entered the verified
i that includes state oe y
i . It logs D i in Log i if D i has not been logged yet
since the beginning of this interval. In order to indicate that the vector clock for
this interval has already been logged, it sets variable not logged yet i to false. The
counter of verified interval D i [i] is incremented by one to reflect that the current
interval is a verified interval.
Furthermore, if the current verified interval is also the oldest verified interval of
discarded (denoted
[i] is set to true to confirm the existence
When P i sends a message:
Since it marks the beginning of a new interval, P i resets variable not logged yet i to
true and then it sends the message along with C
When receives a message from P j that contains D
Since a new interval begins, it resets variable not logged yet i to true. As in the
first algorithm, none of the intervals at any process P k that
precede\Omega C i [k]
can form a set of concurrent intervals that verifies \Phi. Thus, C i is advanced to the
componentwise maximum of C i and C j .
At this moment, B i is also updated. "B i [k] is true" means that P i is certain that
the existence
k has been confirmed. Thus, if C i [k] = C j [k] and at least one
of B i [k] and B j [k] is true, B i [k] is set to true.
deletes all entries from Log i that
precede\Omega C i [i]
i since all those verified
intervals are no more potential components of a solution.
After this operation, there are two cases to consider:
• Case 1: Log i becomes empty: In this case, none of the verified intervals at P i
up to this moment forms a set of concurrent verified intervals.
The algorithm needs to consider only verified intervals that will occur in the
future. If such intervals
exist,\Omega D i [i]
i will be the first one. Since all of the verified
k such that y ! D i [k] at other processes P k causally
precede\Omega D i [i]
none of such intervals can be in a solution. Thus, cut C i is advanced to D i .
When process P i executes the receive action, it is not certain whether P k (1
p) has encountered the verified
k . Therefore, all components
of vector B i are set to false.
• Case 2: Log i contains at least one logged interval: Let the oldest of such logged
entry be D log
. From the properties of vector D and the definition of a cut, all
of the verified intervals at any process P k
preceding\Omega C i [k]
or\Omega D log
causally
precede\Omega D log
none of such intervals can be in a solution. Thus, C i is
advanced to the componentwise maximum of D log
Similar to Case 1, if the value C i [k] is modified (i.e., it takes its value from
D log [k]), P i is not certain whether P k will encounter the verified interval
' D log
k . Thus, B i [k] is set to false. If C i [k] remains unchanged, B i [k] also
remains unchanged to follow other processes' decision.
Furthermore, B i [i] is set to true since P i has confirmed the existence
of\Omega D log
At this moment, with the new information (C i and B i ), P i may be able
to detect a solution. Thus, P i checks whether B i [k] is true for all k. If so, this
indicates that all the verified intervals in set
have
been confirmed and concurrent with one another since no process has detected
causal relations between any pair of intervals in the set.
4.3 Formal Description of the Algorithm
Initialization procedure executed by any process P i
if (i ≤ p) then
Create(Log i ); not yet logged i := true;
endif
Procedure A executed by any process P i
when the local predicate L i becomes true
if (not yet logged i ) then
Enqueue(Log i , D i ); not yet logged i := false;
D i [i] := D i [i] + 1;
/* if this verified interval is the oldest verified interval of P i not yet discarded, then B i [i] := true */
endif
Procedure B executed by any process P i
when it sends a message
if (i ≤ p) then not yet logged i := true; endif
Append vectors D i , C i , and B i to the message;
Send the message;
Procedure C executed by any process P i
when it receives a message from P j
Extract vectors D j , C j , and B j from the message;
if (i ≤ p) then
not yet logged i := true;
while ((not (Empty(Log i ))) and (Head(Log i do
/* Delete all those logged intervals that, from the current knowledge, do not lie on a solution. */
Dequeue(Log i );
endwhile
if (Empty(Log i )) then
/* Construct a solution that passes through the next local verified interval. */
else
/* Construct a solution that passes through the logged verified interval. */
endif
endif
Deliver the message;
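The update steps that the formal description leaves as comments can be read off the overview in Section 4.2; the following Python sketch is one possible rendering of the receive procedure over the per-process record sketched earlier (again, the names and the exact merge of B when the cuts differ are assumptions, not the paper's code).

def receive_procedure(st, Dj, Cj, Bj):
    # st: per-process record of P_i (fields p, i, D, C, B, log, not_yet_logged);
    # Dj, Cj, Bj: the vectors extracted from the message sent by P_j.
    st.not_yet_logged = True                      # a new interval begins at P_i
    for k in range(st.p):
        st.D[k] = max(st.D[k], Dj[k])             # dependency vectors merge (Section 4.1.2)
        if Cj[k] > st.C[k]:
            st.C[k], st.B[k] = Cj[k], Bj[k]       # assumed: adopt the sender's knowledge
        elif Cj[k] == st.C[k] and Bj[k]:
            st.B[k] = True                        # cuts agree and one side confirmed the interval
    # Discard logged entries whose verified interval precedes the cut entry C_i[i].
    while st.log and st.log[0][st.i] < st.C[st.i]:
        st.log.popleft()
    if not st.log:
        # Case 1: only future verified intervals can still be in a solution.
        for k in range(st.p):
            st.C[k] = max(st.C[k], st.D[k])
            st.B[k] = False
    else:
        # Case 2: the oldest logged vector D_log drives the new cut.
        dlog = st.log[0]
        for k in range(st.p):
            if dlog[k] > st.C[k]:
                st.C[k] = dlog[k]
                st.B[k] = False                   # P_k may never reach this interval
        st.B[st.i] = True                         # P_i has confirmed its own logged interval
    return all(st.B)                              # True when a solution is detected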
4.4 Discussion
4.4.1 An Example
To help the readers understand the algorithm, in Figure 5, we illustrate the operation
of the second algorithm for a computation similar to the one used in Figure 4. In
Figure 5, the contents of vector D and C that are indicated next to a process interval
in square brackets, are the values of the vectors just after evaluation of the last local
state of the interval (i.e., just before execution of the communication event).
Figure: An Example to Illustrate Algorithm 2.
4.4.2 Difference Between Both Approaches
The second algorithm can be considered an optimization of the first one. Interval
counters D and C evolve more slowly in the second algorithm and updates of both
vectors occur less often. For example, vector C is not modified on a send action.
Each algorithm finds the first solution in a different way. In the first algorithm, each
interval of the solution is located via a number of communication events that occur
before the process encounters this interval. In the second algorithm, the delivered
information is the number of validated intervals that precede the solution.
The difference between both algorithms is much more on the semantics and the
properties of the control variables rather than on the way they are updated. For
example, update of vector C is made in a similar way in both algorithms. Yet, each
component is managed as a counter of intervals (in the first algorithm) or as a counter
of verified intervals (in the second algorithm). Both algorithms employ complementary
approaches to find the first solution. In the first algorithm, the corresponding
set of intervals is always concurrent (i.e., it satisfies the first criterion of the solution).
In the second algorithm, the elements of the set are always verified intervals (i.e.,
the set satisfies the second criterion of the solution).
A correctness proof of the second algorithm is similar to the proof of the first
algorithm. However, Lemma 1 and Lemma 2 become irrelevant in the second algo-
rithm. Instead, the following lemma becomes useful:
Lemma 5: At any time during execution of P i , if B i [j] (1 j p) is true then
Proof: Process P j has necessarily updated B j [j] to true to account for the fact that
j has been encountered and is the oldest interval not discarded yet. At
the same time, P j used the value at the head of the Log, D log
j , to update all the
other components C j [k] to a value greater or equal to D log
[k] in order to invalidate
all the verified intervals that are in the causal past
. Therefore, as the cut C
never decreases (due to the merge operation made at the beginning of each receive
event), P i cannot share with P j the same vision of the values of B[j] and C[j] and
at the same time keep an older value for some component C i [k]. 2
The rest of the proof is the same as that of the first algorithm with the definition
of interval appropriately modified.
5 A Comparison with Existing Work
Previous work in detecting conjunctive form global predicates has been mainly by
Garg and Waldecker [6] and Garg and Chase [7]. The Garg-Waldecker algorithm is centralized
[6], where each process reports all its local states satisfying its local predicate
to a checker process. The checker process gathers this information, builds only those
global states that satisfy the global predicate, and checks if a constructed global
state is consistent. This algorithm has message, storage, and computation complexities
of O(Mp^2), where M is the number of messages sent by any process and p
is the number of processes over which the global predicate is defined.
In [7], Garg and Chase present two distributed algorithms for detection of conjunctive
form predicates. In these algorithms, all processes participate in the global predicate
detection on an equal basis. The first distributed algorithm requires vector clocks
and employs a token that carries information about the latest global consistent cut
such that the local predicates hold at all the respective local states. The message,
storage, and computation complexities of this algorithm are the same as those of the Garg-
Waldecker [6] algorithm, namely, O(Mp^2). However, the worst case message, sto-
rage, and computation complexities for a process in this algorithm is O(Mp); thus,
the distribution of work is more equitable than in the centralized algorithm. The
second distributed algorithm does not use vector clocks and uses direct dependencies
instead. The message, storage, and computation complexities of this algorithm
are O(Mn) and the worst case message, storage, and computation complexities for
a process in this algorithm are O(M); thus, this algorithm is desirable when p^2 is
greater than n.
The proposed predicate detection algorithm does not cause transfer of any additional
messages (except in the end provided the predicate is not detected when the
computation terminates). The control information needed for predicate detection is
piggybacked on computation messages. On the contrary, the distributed algorithms
of Garg and Chase may require exchange of as many as Mp and Mn control mes-
sages, respectively. Although the worst case volume of control information exchanged
is identical, namely, O(Mp^2), in the first Garg and Chase algorithm and in the proposed
algorithm, the latter results in no or few additional message exchanges. A
study by Lazowska et al. [11] showed that message send and receive overhead can
be considerable (due to context switching and execution of multiple communication
protocol layers) and it is desirable to send few bigger messages from a performance point of
view.
6 Concluding Remarks
Global predicate detection is a fundamental problem in the design, coding, testing
and debugging, and implementation of distributed programs. In addition, it finds applications
in many other domains in distributed systems such as deadlock detection
and termination detection.
This paper presented two efficient distributed algorithms to detect conjunctive
form global predicates in distributed systems. The algorithms detect the first consistent
global state that satisfies the predicate and work even if the predicate is unstable.
The algorithms are based on complementary approaches and the second algorithm
can be considered an optimization of the first one, where the vectors D and C
increase at a lower rate. We proved the correctness of the algorithms. The algorithms
are distributed because the predicate detection efforts as well as the necessary information
are equally distributed among the processes. Unlike previous algorithms to
detect conjunctive form global predicates, the algorithms do not require transfer of
any additional messages during the normal computation; instead, they piggyback the
control information on computation messages. Additional messages are exchanged
only if the predicate remains undetected when the computation terminates.
--R
Consistent Detection of Global Predicates.
Detection of Unstable Predicates in Distributed Programs.
Detection of Weak Unstable Predicates in Distributed Programs.
Distributed Algorithms for Detecting Conjunctive Predicates.
Global Events and Global Breakpoints in Distributed Systems.
Detecting Atomic Sequences of Predicates in Distributed Computations.
Linear Space Algorithm for On-line Detection of Global States
File Access Performance of Diskless Workstations
Global conditions in debugging distributed programs.
Virtual Time and Global States of Distributed Systems.
Breakpoints and Halting in Distributed Programs.
A Way to Capture Causality in Distributed Systems.
Faster Possibility Detection by Combining Two Approaches.
"le de Nancy-Brabois, Campus scientifique, 615 rue du Jardin Botanique, BP 101, 54600 VILLERS LE S NANCY Unite de recherche INRIA Rennes, Irisa, Campus universitaire de Beaulieu, 35042 RENNES Cedex Unite de recherche INRIA Rho"
--TR
--CTR
Punit Chandra , Ajay D. Kshemkalyani, Distributed algorithm to detect strong conjunctive predicates, Information Processing Letters, v.87 n.5, p.243-249, 15 September
Loon-Been Chen , I-Chen Wu, An Efficient Distributed Online Algorithm to Detect Strong Conjunctive Predicates, IEEE Transactions on Software Engineering, v.28 n.11, p.1077-1084, November 2002
Ajay D. Kshemkalyani, A Fine-Grained Modality Classification for Global Predicates, IEEE Transactions on Parallel and Distributed Systems, v.14 n.8, p.807-816, August
Guy Dumais , Hon F. Li, Distributed Predicate Detection in Series-Parallel Systems, IEEE Transactions on Parallel and Distributed Systems, v.13 n.4, p.373-387, April 2002
Emmanuelle Anceaume , Jean-Michel Hlary , Michel Raynal, Tracking immediate predecessors in distributed computations, Proceedings of the fourteenth annual ACM symposium on Parallel algorithms and architectures, August 10-13, 2002, Winnipeg, Manitoba, Canada
Neeraj Mittal , Vijay K. Garg, Techniques and applications of computation slicing, Distributed Computing, v.17 n.3, p.251-277, March 2005
Scott D. Stoller, Detecting global predicates in distributed systems with clocks, Distributed Computing, v.13 n.2, p.85-98, April 2000
Punit Chandra , Ajay D. Kshemkalyani, Causality-Based Predicate Detection across Space and Time, IEEE Transactions on Computers, v.54 n.11, p.1438-1453, November 2005 | on-the-fly global predicate detection;distributed systems |
286949 | Safe metaclass programming. | In a system where classes are treated as first class objects, classes are defined as instances of other classes called metaclasses. An important benefit of using metaclasses is the ability to assign properties to classes (e.g. being abstract, being final, tracing particular messages, supporting multiple inheritance), independently from the base-level code. However, when both inheritance and instantiation are explicitly and simultaneously involved, communication between classes and their instances raises the metaclass compatibility issue. Some languages (such as SMALLTALK) address this issue but do not easily allow the assignment of specific properties to classes. In contrast, other languages (such as CLOS) allow the assignment of specific properties to classes but do not tackle the compatibility issue well.In this paper, we describe a new model of metalevel organization, called the compatibility model, which overcomes this difficulty. It allows safe metaclass programming since it makes it possible to assign specific properties to classes while ensuring metaclass compatibility. Therefore, we can take advantage of the expressive power of metaclasses to build reliable software. We extend this compatibility model in order to enable safe reuse and composition of class specific properties. This extension is implemented in NEOCLASSTALK, a fully reflective SMALLTALK. | Introduction
It has been shown that programming with metaclasses is of great benefit [KAJ...]. An
interesting use of metaclasses is the assignment of specific properties to classes. For example, a class can
be abstract, have a unique instance, trace messages received by its instances, define pre-post conditions on
its methods, forbid redefinition of some particular methods. These properties can be implemented using
metaclasses, allowing thereby the customization of the classes behavior [LC96].
From an architectural point of view, using metaclasses organizes applications into abstraction levels.
Each level describes and controls the level immediately below to which it is causally connected [Mae87].
Reified classes communicate with other objects including their own instances. Thus, classes can send
messages to their instances and instances can send messages to their classes. Such message sending is
named inter-level communication [MMC95].
However, careless inheritance at one level may break inter-level communication resulting in an issue
called the compatibility issue [BSLR96]. We have identified two symmetrical kinds of compatibility issues.
The first one is the upward compatibility issue, which was named metaclass compatibility by Nicolas Graube
[Gra89], and the second one is the downward compatibility issue. Both kinds of compatibility issues are
important impediments to metaclass programming that one should always be aware of.
Currently, none of the existing languages dealing with metaclasses allow the assignment of specific
properties to classes while ensuring compatibility. Clos [KdRB91] allows one to assign any property to
classes, but it does not ensure compatibility. On the other hand, both SOM [SOM93] and Smalltalk
[GR83] address the compatibility issue but they introduce a class property propagation problem. Indeed, a
property assigned to a class is automatically propagated to its subclasses. Therefore, in SOM and Small-
talk, a class cannot have a specific property. For example, when assigning the abstractness property to
a given Smalltalk class, subclasses become abstract too [BC89]. It follows that users face a dilemma:
using a language that allows the assignment of specific class properties without ensuring compatibility, or
using a language that ensures compatibility but suffers from the class property propagation problem.
In this paper, we present a model - the compatibility model - which allows safe metaclass program-
ming, i.e. it makes it possible to assign specific properties to classes without compromising compatibility.
In addition to ensuring compatibility, the compatibility model avoids class property propagation: a class
can be assigned specific properties without any side-effect on its subclasses.
We implemented the compatibility model in NeoClasstalk, a Smalltalk extension which introduces
many features including explicit metaclasses [Riv96]. Our experiments [Led98][Riv97] showed that the
compatibility model allows programmers to fully take advantage of the expressive power of metaclasses.
This effort has resulted (i) in a tool that permits a programmer unfamiliar with metaclasses to transparently
deal with class specific properties, and (ii) in an approach allowing reuse and composition of class properties.
This paper is organized as follows. Section 2 presents the compatibility issue. We give some examples
to show its significance. Section 3 shows how existing programming languages address the compatibility
issue, and how they deal with the property propagation problem. Section 4 describes our solution and
illustrates it with an example. In section 5, we deal with reuse and composition of class specific properties
within the compatibility model. Then, we sketch out the use of the compatibility model for both base-level
and meta-level programmers. The last section contains a concluding summary.
Inter-level communication and compatibility
We define inter-level communication as any message sending between classes and their instances (see Figure
1). Indeed, class objects can interact with other objects by sending and receiving messages. In particular,
an instance can send a message to its class and a class can send a message to some of its instances. We
use Smalltalk as an example to illustrate this issue 1 .
Figure 1: Inter-level communication
Two methods allow inter-level communication in Smalltalk: new and class. When one of them is
used, the involved objects belong to different levels of abstraction
• An object receiving the class message returns its class. Then, the class method makes it possible to
go one level up. The following two instance methods - excerpted from VisualWorks Smalltalk -
include message sending to the receiver's class.
The message name is sent to the class:
Object>>printOn: aStream
title := self class name.
The message daysInYear: is sent to the class:
Date>>daysInYear
"Answer the number of days in the year
represented by the receiver."
^self class daysInYear: self year
• A class receiving the new message returns a new instance. Therefore, the new method makes it
possible to go one level down. The following two class methods include message sending to the
newly created instances.
The message at:put: is sent to a new instance:
ArrayedCollection class>>with: anObject
| newCollection |
newCollection := self new: 1.
newCollection at: 1 put: anObject.
^newCollection
The message on: is sent to a new instance:
Browser class>>openOn: anOrganizer
self openOn: (self new on: anOrganizer) withTextState: nil
Thus, inter-level communication in Smalltalk is materialized by sending the messages new and class.
Other languages where classes are reified (such as Clos and SOM) also allow similar message sending.
Since these inter-level communication messages are embedded in methods, they are inherited whenever
methods are inherited. Ensuring compatibility means making sure that these methods will not induce any
failure in subclasses, i.e. all sent messages will always be understood. We have identified two kinds of
compatibility: upward compatibility 3 and downward compatibility.
1 We use the Smalltalk syntax and terminology throughout this paper.
2 The measures we made over a VisualWorks Smalltalk image show that inter-level communication is very frequent.
25% of classes include instance methods referencing the class and 24% of metaclasses define methods referencing an instance.
3 Nicolas Graube named this issue metaclass compatibility [Gra89].
Figure 2: Compatibility need to be ensured at a higher level
2.1 Upward compatibility
Suppose A implements a method i-foo that sends the c-bar message to the class of the receiver (see Figure
2). B is a subclass of A. When i-foo is sent to an instance of B, the B class receives the c-bar message. In
order to avoid any failure, B should understand the c-bar message (i.e. MetaB should implement or inherit
a method c-bar).
Definition of upward compatibility:
Let B be a subclass of the class A, MetaB the metaclass of B, and MetaA the metaclass of A.
Upward compatibility is ensured for MetaB and MetaA iff: every possible message that does not lead to
an error for any instance of A, will not lead to an error for any instance of B.
2.2 Downward compatibility
Suppose MetaA implements a method c-foo that sends the i-bar message to a newly created instance (see
Figure 3). MetaB is created as a subclass of MetaA. When c-foo is sent to B (an instance of MetaB), B
will create an instance which will receive the i-bar message. In order to avoid any failure, instances of B
should understand the i-bar message (i.e. B should implement or inherit the i-bar method).
Figure 3: Compatibility need to be ensured at a lower level
Definition of downward compatibility:
Let MetaB be a subclass of the metaclass MetaA.
Downward compatibility is ensured for two classes B instance of MetaB and A instance of MetaA iff:
every possible message that does not lead to an error for A, will not lead to an error for B.
3 Existing models
We will now show why none of the known models allow the assignment of specific properties to classes
while ensuring compatibility.
3.1
When (re)defining a class in Clos, the validate-superclass generic function is called, before the direct
superclasses are stored [KdRB91]. As a default, validate-superclass returns true if the metaclass of the new
class is the same as the metaclass of the superclass 4 , i.e. classes and their subclasses must have the same
metaclass. Therefore, incompatibilities are avoided but metaclass programming is very constrained.
Figure 4: By default in Clos, subclasses must share the same metaclass as their superclass
Figure 4 shows a hierarchy of two classes that illustrates the Clos default compatibility management
policy. Since class B inherits from A, B and A must have the same metaclass.
In order to allow the definition of classes with different behaviors, programmers usually redefine the
validate-superclass method to make it always return true. Thus, Clos programmers can have total freedom
to use a specific metaclass for each class. So, they can assign specific properties to classes, but the trade-off
is that they need to be always aware of the compatibility issue.
3.2 SOM
SOM is an IBM CORBA compliant product which is based on a metaclass architecture [DF94b]. The
SOM kernel follows the ObjVlisp model [Coi87]. SOM metaclasses are explicit and can have many
instances. Therefore, users have complete freedom to organize their metaclass hierarchies.
3.2.1 Compatibility issue in SOM
SOM encourages the definition and the use of explicit metaclasses by introducing a unique concept named
derived metaclasses which deals with the upward compatibility issue [DF94a]. At compile-time, SOM
automatically determines an appropriate metaclass that ensures upward compatibility. If needed, SOM
automatically creates a new metaclass named a derived metaclass to ensure upward compatibility.
Figure 5: SOM ensures upward compatibility using derived metaclasses
In fact, it also returns true if the superclass is the class named t, or if the metaclass of one argument is standard-class
and the metaclass of the other is funcallable-standard-class.
Suppose that we want to create a class B, instance of MetaB and subclass of A. SOM will detect an
upward compatibility problem, since MetaB does not inherit from the metaclass of A (MetaA). Therefore,
SOM automatically creates a derived metaclass (Derived), using multiple inheritance in order to support
all necessary class methods and variables 5 . Figure 5 shows the resulting construction. When an instance
of B receives i-foo, it goes one level higher and sends c-bar to the B class. B understands the c-bar message
since its metaclass (i.e. Derived) is a derived metaclass which inherits from both MetaB and MetaA.
Figure 6: example
SOM does not provide any policy or mechanism to handle downward compatibility. Suppose that
MetaB is created as a subclass of MetaA (see Figure 6). The c-foo method which is inherited by MetaB
sends the i-bar message to a new instance. When the B class receives the c-foo message, a run-time error
will occur because its instances do not understand the i-bar message.
3.2.2 Class property propagation in SOM
SOM does not allow the assignment of a property to a given class, without making its subclasses be
assigned the same property. We name this defect the class property propagation problem. In the following
example, we illustrate how derived metaclasses implicitly cause undesirable propagation of class properties.
Figure 7: Class property propagation in SOM
Suppose that the A class of Figure 7 is a released class, i.e. it should not be modified any more. This
property is useful in multi-programmer development environments for versionning purposes. In order to
avoid any change, A is an instance of the Released metaclass. Let B be a class that has a unique instance: B
is an instance of the SoleInstance metaclass. But as B is a subclass of A, SOM creates B as instance of
an automatically created derived metaclass which inherits from both SoleInstance and Released. Thus, as
soon as B is created, it is automatically "locked" and acts like a released class. So, we cannot define any
new method on it!
5 The semantics of derived metaclasses guarantees that the declared metaclass takes precedence in the resolution of multiple
inheritance ambiguities (i.e. MetaB before MetaA). Besides, it ensures the instance variables of the class are correctly initialized
by the use of a complex mechanism.
3.3 Smalltalk-80
In Smalltalk, metaclasses are partially hidden and automatically created by the system. Each metaclass
is non-sharable and strongly coupled with its sole instance. So, the metaclass hierarchy is parallel to the
class hierarchy and is implicitly generated when classes are created.
3.3.1 Compatibility issue in Smalltalk-80
Using parallel inheritance hierarchies, the Smalltalk model ensures both upward and downward com-
patibility. Indeed, any code dealing with new or class methods, is inherited and works properly.
Figure 8: Smalltalk ensures both upward and downward compatibilities
When one creates the B class, a subclass of A (see Figure 8), Smalltalk automatically generates the
metaclass of B ("B class" 6 ), as a subclass of "A class", the metaclass of A. Suppose A implements a method
i-foo that sends c-bar to the class of the receiver. If i-foo is sent to an instance of B, the B class receives the
c-bar message. Thanks to the parallel hierarchies, the B class understands the c-bar message, and upward
compatibility is ensured. In a similar manner, downward compatibility is ensured thanks to the parallel
hierarchy.
3.3.2 Class property propagation in Smalltalk-80
Since metaclasses are automatically and implicitly managed by the system, Smalltalk drastically reduces
the opportunity to change class behaviors, making metaclass programming "anecdotal". As with SOM,
Smalltalk does not allow the assignment of a property to a class without propagating it to its subclasses.
Figure 9: Class property propagation in Smalltalk
In Figure 9, the A class is abstract since its subclasses must implement some methods to complete the
instance behavior. B is a concrete class as it implements the whole set of these methods. Suppose that we
want to enforce the property of abstractness of A. In order to forbid instantiating A, we define the class
method A class>>new which raises an error. Unfortunately, "B class" inherits the method new from "A
class". As a result, attempting to create an instance of B raises an error 7
6 The name of a Smalltalk metaclass is the name of its unique instance postfixed by the word 'class'.
7 This example is deliberately simple, and one could avoid this problem by redefining new in "B class". But, this solution is a kind of inheritance anomaly [MY93] that increases maintenance costs.
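For readers more familiar with other reflective languages, the same propagation can be reproduced, for instance, in Python, where a subclass implicitly reuses its superclass's metaclass; the sketch below is only an analogy and is not part of the paper's Smalltalk setting.

class Abstract(type):
    # Metaclass that forbids instantiation of the classes it describes.
    def __call__(cls, *args, **kwargs):
        raise TypeError(cls.__name__ + ' is abstract')

class A(metaclass=Abstract):
    pass

class B(A):   # B implicitly gets the metaclass Abstract as well
    pass

# A()  -> TypeError, as intended
# B()  -> TypeError too: the class property has propagated to the subclass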
4 The compatibility model
Among the previous models, only the Smalltalk one with its parallel hierarchies ensures full compati-
bility. However, it does not allow the assignment of specific properties to classes. On the other hand, only
the Clos model allows the assignment of specific properties to classes. Unfortunately, it does not ensure
compatibility. We believe that these two goals can both be achieved by a new model which makes a clear
separation between compatibility and class specific properties.
Figure 10: Avoiding the propagation of abstractness
We illustrate this idea of separation of concerns by refactoring the example of Figure 9. We create
a new metaclass named "Abstract + A class" as a subclass of "A class" (see Figure 10). The A class is
redefined as an instance of this new metaclass. As "Abstract + A class" redefines the new method to raise
an error, A cannot have any instance. However, since "B class" is not a subclass of "Abstract + A class",
the B class remains concrete. The generalization of this scheme is our solution, named the compatibility
model.
In the remainder of this paper, names of metaclasses defining some class property are denoted with the
concatenation of the property name, the + symbol and the superclass name. For example, "Abstract + A
class" is a subclass of "A class" that defines the property of abstractness named Abstract.
4.1 Description of the compatibility model
The compatibility model extends the Smalltalk model by separating two concerns: compatibility and
specific class properties. A metaclass hierarchy parallel to the class hierarchy ensures both upward and
downward compatibility like in Smalltalk. An extra metaclass "layer" is introduced in order to locally
assign any property to classes. Classes are instances of metaclasses belonging to this layer. So, the
compatibility model is based on two "layers" of metaclasses, each one addressing a unique concern:
Compatibility concern: This issue is addressed by the metaclasses organized in a hierarchy parallel
to the class hierarchy. We name such metaclasses: compatibility metaclasses. They define all the
behavior that must be propagated to all (sub)classes. All class methods which send messages to
instances should be defined in these metaclasses. Besides, all messages sent to classes by their
instances should be defined in these metaclasses too.
Specific class properties concern: This issue is addressed by metaclasses that define the class specific
properties. We name such metaclasses: property metaclasses. A class with a specific property is
instance of a property metaclass which inherits from the corresponding compatibility metaclass. The
property metaclass is not joined to other property metaclasses, since it defines a property specific to
the class.
Figure 11: The compatibility model
Figure 11 shows 8 the compatibility model applied to a hierarchy consisting of two classes: A and
B. They are respectively instances of the "AProperty + AClass" and "BProperty + BClass" metaclasses.
"AProperty + AClass" defines properties specific to class A, while "BProperty + BClass" defines properties
specific to class B. As "AProperty + AClass" and "BProperty + BClass" are not joined by any link, class
property propagation does not occur. Thus, A and B can have distinct properties.
Since "AProperty + AClass" and "BProperty + BClass" are subclasses of the AClass and BClass meta-
classes, both upward and downward compatibility are guaranteed. Suppose that A defines two instance
methods i-foo and i-bar. The i-foo method sends the c-bar message to the class of the receiver. The i-bar
method is sent to a new instance by the c-foo method. Because the AClass and BClass metaclass hierarchy
is parallel to the A and B class hierarchy, inter-level communication failure is avoided.
4.2 Example: Refactoring the Smalltalk-80 Boolean hierarchy
The Boolean hierarchy of Smalltalk 9 is depicted in Figure 12. Boolean is an abstract class
which defines a protocol shared by True and False. True and False are concrete classes that
cannot have more than one instance. These properties (i.e. abstractness and having a sole instance) are
implicit in Smalltalk. Using the compatibility model, we refactor the Boolean hierarchy to emphasize
them.
Figure 12: The Boolean hierarchy of Smalltalk
We first create "Boolean class", which is a compatibility metaclass. The second step consists of creating
the property metaclass "Abstract Boolean class", which enforces the Boolean class to be abstract. Finally,
we build the Boolean class by instantiating the "Abstract metaclass.
To refactor the False class, we first create the "False class" metaclass, as a subclass of "Boolean class"
to ensure the compatibility. The second step consists of creating the property metaclass "SoleInstance
Compatibility metaclasses are surrounded with a dashed line and property metaclasses are drawn inside a grey shape.
9 We prefer this academic example to emphasize our ideas rather than a more complex example which should require a
more detailed presentation.
False class", which enforces the False class to have at most one instance. At last, we create the False class
by instantiating the "SoleInstance + False class" metaclass. The True class is refactored in the same way.
The result of rebuilding the whole hierarchy of Boolean is shown in Figure 13.
Figure 13: The Boolean hierarchy after refactoring
5 Reuse and composition within the compatibility model
We have experimented the compatibility model in NeoClasstalk 10 [Riv97], a fully reflective Smalltalk.
We quickly faced the need of class property reuse and composition. Indeed, unrelated classes belonging to
different hierarchies can have the same properties, and a given class can have many properties.
In the previous section, both the True class and the False class have the same property: having a unique
instance. Besides, we assigned only one property to each class of the Boolean hierarchy. But, a class may need
to be assigned many properties. For example, the False class must not only have a unique instance, but it
also should not be subclassed (such a class is final in Java terminology). So, we have to reuse and compose
these class properties with respect to our compatibility model.
In this section, we propose an extension of our compatibility model that deals with reuse and composition
of class properties. Any language where classes are treated as regular objects may integrate our
extended compatibility model. NeoClasstalk has been used as a first experimentation platform.
5.1 Reuse of class properties
In Smalltalk, since metaclasses behave in a different way than classes, they are defined as instances of
a particular class, a meta-metaclass, called Metaclass. Metaclass defines the behavior of all metaclasses in
Smalltalk. For example, the name of a metaclass is the name of its sole instance postfixed by the word
'class'.
Metaclass>>name
^thisClass name, ' class'
We take advantage of this concept of meta-metaclasses to reuse class properties. Since metaclasses
implementing different properties have different behaviors, we need one meta-metaclass for each class
property. Property metaclasses defining the same class property are instances of the same meta-metaclass.
When a property metaclass is created, the meta-metaclass initializes it with the definition of the corresponding
class property. Thus, the code (instance variables, methods, ...) corresponding to the definition
10 NeoClasstalk and all related papers can be downloaded from
http://www.emn.fr/cs/object/tools/neoclasstalk/neoclasstalk.html
of the class property, is automatically generated. Reuse is achieved by creating property metaclasses defining
the same class property as instances of the same meta-metaclass, i.e. they are initialized with the same
class property definition (an example of such an initialization is given in section 5.4.2).
The root of the meta-metaclass hierarchy named PropertyMetaclass describes the default structure and
behavior of property metaclasses. For example, the name of a property metaclass is built from the property
name and the superclass name:
PropertyMetaclass>>name
^self class name, '+', self superclass name
In the refactored Boolean hierarchy of section 4.2, both "SoleInstance + False class" and "SoleInstance +
True class" define the property of having a unique instance. Reuse is achieved by defining both "SoleInstance +
False class" and "SoleInstance + True class" as instances of SoleInstance, a subclass of PropertyMetaclass
(see Figure 14).
Figure 14: Reuse properties in the Boolean hierarchy
5.2 Composition of class properties
Since a given class can have many properties, the model must support the composition of class proper-
ties. We chose to use many property metaclasses organized in a single inheritance hierarchy, where each
metaclass implements one specific class property.
To illustrate this idea, we modify the instantiation link for the False class (see Figure 15). We define two
property metaclasses, one for each property. The first property metaclass is "SoleInstance + False class",
which inherits from the compatibility metaclass "False class". The second one is "Final +
False class", which is the class of False. It is defined as a subclass of "SoleInstance + False class". The
resulting scheme respects the compatibility model: it allows the assignment of two specific properties to
the False class and still ensures compatibility.
5.2.1 Conflict management
The solution of the property metaclasses composition issue is not trivial. Indeed, it is necessary to deal with
conflicts that arise when composing different property metaclasses. When using inheritance to compose
property metaclasses, two kinds of conflicts can arise: name conflicts and value conflicts [DHH...].
Figure 15: Assigning two properties to False
Name conflicts happen when orthogonal property metaclasses define instance variables or methods
which have the same name. Two property metaclasses are orthogonal when they define unrelated class
properties. Name conflicts for both instance variables and methods are avoided by adapting the definition of
a new property metaclass according to its superclasses. For example, although the two property metaclasses
"SoleInstance False class" and "SoleInstance + True class" define the same property for their respective
instances (classes False and True), they may use different instance variable names or method names.
Value conflicts happen when non-orthogonal property metaclasses define methods which have the same
name. Most of these conflicts are avoided by making the property metaclass hierarchy act as a cooperation
chain, i.e. a property metaclass explicitly refers to the overridden methods defined in its superclasses 11 .
Therefore, each property metaclass acts like a mixin [BC90].
5.2.2 Example of cooperation between property metaclasses
Suppose that we want to assign two specific properties to the False class of Figure 16: (i) tracing all messages
(Trace) and (ii) having breakpoints on particular methods (BreakPoint). These two properties deal with the
message handling which is based in NeoClasstalk on the technique of the "method wrappers" described
in [Duc98] and [BFJR98]. The executeMethod:receiver:arguments: method is the entry point to handle
messages in NeoClasstalk, i.e. customizing executeMethod:receiver:arguments: allows a specialization of
the message sending 12 . Thus, when the object false receives a message, the class False receives the message
executeMethod:receiver:arguments:.
According to the inheritance hierarchy, (1) the trace is first done, then (2), by the use of super, the
breakpoint is performed, and (3) a regular method application is finally executed (again called using super).
• (3) StandardClass>>executeMethod: method receiver: rec arguments: args
• (2) BreakPoint+False class>>executeMethod: method receiver: rec arguments: args
method selector == stopSelector
ifTrue: [self halt: 'Breakpoint for ', stopSelector].
^super executeMethod: method receiver: rec arguments: args
• (1) Trace+BreakPoint+False class>>executeMethod: method receiver: rec arguments: args
self transcript show: method selector; cr.
^super executeMethod: method receiver: rec arguments: args
11 In NeoClasstalk, as in Smalltalk, this is achieved using the pseudo-variable super.
12 The executeMethod:receiver:arguments: method is provided by StandardClass (the root of all metaclasses in NeoClasstalk), which just applies the method on the receiver with the arguments.
Figure 16: Composition of non-orthogonal properties
5.3 The extended compatibility model
Generalizing previous examples allows us to define the extended compatibility model (see Figure 17) which
enables reusing and composing class properties. Each property metaclass defines the instance variables
and methods involved in a unique property. Property metaclasses specific to a given class are organized in
a single hierarchy. The root of this hierarchy is a subclass of a compatibility metaclass 13 . Each property
metaclass is an instance of a meta-metaclass which describes a specific class property, allowing its reuse.
Figure 17: The Extended Compatibility Model
13 This single hierarchy may be compared to an explicit linearization of property metaclasses composed using multiple
inheritance [DHHM94].
Metaclass creation, composition and deletion are managed automatically with respect to the extended
compatibility model. Each time a new class is created, a new compatibility metaclass is automatically
created. This can be done in the same way that Smalltalk builds its parallel metaclass hierarchy. The
assignment of a property to this class results in the insertion of a new metaclass into its property metaclass
hierarchy. This insertion is made in two steps:
1. first, the new property metaclass becomes a subclass of the last metaclass of the property metaclass hierarchy;
2. then, the class becomes instance of this new property metaclass.
NeoClasstalk provides protocols for dynamically changing the class of an object (changeClass:) and
the superclass of a class (superclass:) [Riv97]. Thus, the implementation of these two steps is immediate
in NeoClasstalk, and is provided by the composeWithPropertiesOf: method.
composeWithPropertiesOf: aClass
self superclass: aClass class.
aClass changeClass: self.
5.4 Programming within the extended compatibility model
We distinguish two kinds of programmers: (i) "base level programmers" who implement applications using
the language and development tools, and (ii) "meta level programmers" for whom the language itself is
the application.
5.4.1 Base Level Programming
To make our model easy to use for a "base-level programmer", the NeoClasstalk programming environment
includes a tool that allows one to assign different properties to a given class using a Smalltalk-like
browser (see Figure 18). These properties can be added and removed at run-time. The metaclass level is
automatically built according to the selection of the "base-level programmer".
5.4.2 Meta Level Programming
In order to introduce new class properties, "meta-level programmers" must create a subclass of the Prop-
ertyMetaclass meta-metaclass. This new meta-metaclass stores the instance variables and the methods
that should be defined by its instances (property metaclasses). When this new meta-metaclass is instan-
tiated, the previous instance variables are added to the resulting property metaclass and the methods are
compiled 15 at initialization time
For example, the evaluation of the following expression creates a property metaclass - instance of the
meta-metaclass Trace - that assigns the trace property to the True class.
Trace new composeWithPropertiesOf: True
14 The removal of a property metaclass is done in a symmetrical way.
solution consists of doing the compilation only once, resulting in proto-methods [Riv97]. Thus, when the property
metaclass gets initialized, proto-methods are "copied" into the method dictionary of the property metaclass, allowing a fast
instantiation of meta-metaclasses.
This assumes that initialization is part of the creation process, which is true in almost every language. This is tradition-
nally achieved in Smalltalk by the redefinition of new into super new initialize [SKT96].
Figure 18: Properties assigned to a class using a browser
In order to achieve the trace, messages must be captured and then logged in a text collector. Therefore,
property metaclasses instances of Trace must define an instance variable (named transcript) corresponding
to a text collector and a method that handles messages. Message handling is achieved using the
executeMethod:receiver:arguments: method whose source code was already presented in 5.2.2. These definitions
are generated when the property metaclasses are initialized, i.e. using the initialize method of the
Trace meta-metaclass:
Trace>>initialize
    super initialize.
    self instanceVariableNames: 'transcript'.
    self generateExecuteMethodReceiverArguments.
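The executeMethod:receiver:arguments: source itself was presented earlier (Section 5.2.2) and is not repeated here. The following is only a rough, hypothetical sketch of what such a tracing handler could look like, written with the standard perform:withArguments: protocol; the actual NeoClasstalk hook and selectors may differ.
executeMethod: aMethod receiver: aReceiver arguments: someArguments
    "Sketch only: log the send on the transcript held by the property
     metaclass, then carry it out with ordinary message sending."
    transcript show: aReceiver class name asString , '>>' , aMethod selector asString; cr.
    ^aReceiver perform: aMethod selector withArguments: someArguments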
6 Conclusion
Considering classes as first class objects organizes applications in different abstraction levels, which inevitably
raises upward and downward compatibility issues. Existing solutions addressing the compatibility
issues (such as Smalltalk) do not allow the assignment of specific properties to a given class without
propagating them to its subclasses.
The compatibility model proposed in this paper addresses the compatibility issue and allows the assignment
of specific properties to classes without propagating them to subclasses. This is achieved thanks to
the separation of the two involved concerns: compatibility and class properties. Upward and downward
compatibilities are ensured using the compatibility metaclass hierarchy that is parallel to the class hierar-
chy. The property metaclasses, allowing the assignment of specific properties to classes, are subclasses of
these compatibility metaclasses. Therefore, we can take advantage of the expressive power of metaclasses
to define, reuse and compose class properties in an environment which supports safe metaclass programming.
Class properties improve readability, reusability and quality of code by increasing separation of concerns.
They allow a better organization of class libraries and frameworks for
designing reliable software. We are strongly convinced that our compatibility model enables separation of
concerns based on the metaclass paradigm. Therefore, it promotes building reliable software which is easy
to reuse and maintain.
Acknowledgments
The authors are grateful to Mathias Braux, Pierre Cointe, Stéphane Ducasse, Nick Edgar, Philippe Mulet,
Jacques Noyé, Nicolas Revault, and Mario Südholt for their valuable comments and suggestions. Special
thanks to the anonymous referees who provided detailed and thought-provoking comments.
--R
Programming with Explicit Metaclasses in Smalltalk.
Wrappers to the Rescue.
Concurrency and Distribution in Object Oriented Programming.
Noury Bouraqadi-Saâdani
are First Class: the ObjVlisp Model.
Derived Metaclasses in SOM.
Reflections on Metaclass Programming in SOM.
Le point sur l'héritage multiple.
Proposal for a Monotonic Multiple Inheritance Linearization.
Evaluating Message Passing Control Techniques in Smalltalk.
Metaclass Compatibility.
Separation of Concerns.
"Object-Oriented Programming: The CLOS Perspective"
The Art of the Metaobject Protocol.
Explicit Metaclasses As a Tool for Improving the Design of Class Libraries.
Reflection and Distributed Systems
Adaptive Object-Oriented Software: The Demeter Method with Propagation Patterns
Concepts and Experiments in Computational Reflection.
Towards a Methodology for Explicit Composition of MetaObjects.
Research Directions in Concurrent Object-Oriented Programming
A New Smalltalk Kernel Allowing Both Explicit and Implicit Metaclass Programming
Object Behavioral Evolution Within Class Based Reflective Languages.
Smalltalk with Style.
Advances in Object-Oriented Metalevel Architectures and Reflection
--TR
Smalltalk-80: the language and its implementation
Concepts and experiments in computational reflection
are first class: The ObjVlisp Model
Metaclass compatibility
Programming with explicit metaclasses in Smalltalk-80
Mixin-based inheritance
The art of metaobject protocol
Metaobject protocols
Analysis of inheritance anomaly in object-oriented concurrent programming languages
Proposal for a monotonic multiple inheritance linearization
Reflections on metaclass programming in SOM
Smalltalk with style
Towards a methodology for explicit composition of metaobjects
Concurrency and distribution in object-oriented programming
Advances in Object-Oriented Metalevel Architectures and Reflection
Adaptive Object-Oriented Software
Wrappers to the Rescue
Explicit Metaclasses as a Tool for Improving the Design of Class Libraries
--CTR
Stphane Ducasse , Oscar Nierstrasz , Nathanael Schrli , Roel Wuyts , Andrew P. Black, Traits: A mechanism for fine-grained reuse, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.2, p.331-388, March 2006 | compatibility;class property propagation;metaclasses;class specific properties |
286966 | Visualizing dynamic software system information through high-level models. | Dynamic information collected as a software system executes can help software engineers perform some tasks on a system more effectively. To interpret the sizable amount of data generated from a system's execution, engineers require tool support. We have developed an off-line, flexible approach for visualizing the operation of an object-oriented system at the architectural level. This approach complements and extends existing profiling and visualization approaches available to engineers attempting to utilize dynamic information. In this paper, we describe the technique and discuss preliminary qualitative studies into its usefulness and usability. These studies were undertaken in the context of performance tuning tasks. | INTRODUCTION
Effective performance of many software engineering tasks requires
knowledge of how the system works. Gaining the desired
knowledge by studying or statically analyzing the source
code can be difficult. Static analysis, for instance, can help
a software engineer determine if two classes can interact, but
it does not help the engineer determine how many objects of
a class might exist at run-time, nor how many method calls
might occur between particular objects. Determining answers
to these questions requires an investigation of dynamic information
collected as the software system executes. This
dynamic information helps bridge "the dichotomy between
the code structure as hierarchies of classes and the execution
structure as networks of objects" [1, p. 326].
Software engineers require tool support to effectively access
and interpret dynamic system information, because the
quantity, level of detail and complex structure of this information
would otherwise be overwhelming. In creating a tool
to help an engineer access this information, two goals must
be paramount: the tool must be usable and it must be useful
for the task it is designed to address. Usability is defined in
terms of practicality and simplicity of interface; usefulness is
defined in terms of easing the performance or completion of
a task of importance, especially in comparison to alternative
methods.
Many tools have been developed to provide engineers access
to dynamic information. Profilers, for instance, provide
numerical summaries of dynamic information, such as the
length of time spent executing a method. This information
can be helpful when trying to tackle some system performance
problems. Other tasks, however, such as verifying that objects
are interacting appropriately according to defined roles [6],
require additional structural information. The usefulness of
profilers for these types of task degrades because the relevant
dynamic information is not evident from a summary numeric
value produced on a per method or per class basis.
When structural dynamic information is needed, an engineer
may attempt to use an object-level visualizer (e.g., [1, 6,
7]). These visualizers provide such displays as the interactions
between objects (or classes) and the number of objects
created of each class. When the task requires views involving
many classes in a large system, the usability of these tools de-
grades, as they tend to display complex interactions between
multiple objects in a haze of extraneous, overlain information.
In part to overcome this complexity problem, Sefika
et al. introduced an architectural-oriented visualization approach
[14] that allows an engineer to investigate the operation
of the system at both coarse- and fine-grained levels.
Some of the design choices made in their approach limit its
applicability. Their approach is on-line, limiting its usefulness
for some kinds of tasks. Their approach requires hard-wired
instrument classes to be attached to the system, limiting
its flexibility and reducing its usability.
We have developed an off-line, flexible approach for visualizing
the operation of an object-oriented system at an architectural
level. Our approach abstracts two fundamental pieces
of dynamic information: the number of objects involved in the
execution, and the interactions between the objects. We visualize
these two pieces of information in terms of a high-level
view of the system that is selected by the engineer as useful
for the task being performed.
To represent the information collected across a system's ex-
ecution, we use a sequence of cels. Each cel displays abstracted
dynamic information representing both a particular
point in the system's execution, and the history of the execution
to that point. The integration of "current" and "histori-
cal" information is intended to ease the interpretation of the
display by the engineer. Using our prototype, a software engineer
can navigate both forwards and backwards through the
cels comprising views on the execution.
Our approach complements and extends existing approaches
to accessing dynamic system information. Our
approach:
- allows an unfamiliar system to be studied without alteration of source code,
- permits lightweight changes to the abstraction used for condensing the dynamic information,
- supplies a visualization independent of the speed of execution of the system being studied, and
- allows a user to investigate the abstracted information in a detailed manner by supporting both forwards and backwards navigation across the visualizations.
To investigate the usefulness and usability of the approach,
we have performed preliminary, qualitative studies of the use
of the technique to aid performance-tuning tasks on Smalltalk
programs. These studies show that the technique can help
software engineers make better use of dynamic system information
when performing tasks such as performance enhancement
We begin in Section 2 by describing our visualization tech-
nique; Section 3 discusses the creation of a visualization. In
Section 4, we discuss our initial evaluation efforts intended to
assess the usability and usefulness of the approach. In Section
5, we consider the design choices we made in our visualization
technique. Section 6 describes related work and
Section 7 concludes with a description of directions for future
work.
2 THE VISUALIZATION TECHNIQUE
Our visualization technique abstracts information that has
been previously collected during a system's execution and
uses concepts from the field of computer animation to display
that information to a user. We begin the description of our
technique by focusing on the visualization itself, and then describe
how a software engineer can construct such a visualization
Figures 1 through 4 show different views within a visualization
produced during one of our case studies that was investigating
a performance problem in a reverse engineering program
(Section 4.1). The two windows in Figures 1 and 2 each
provide one view-a cel showing events that occurred within
a particular interval of the system's execution, defined as a set
of n events where n is adjustable. The view in Figure 3 shows
a summary view of all events occurring in the trace, and Figure
4 gives a detailed, textual view of some of the information
within the summary view. Sections 2.1, 2.2, and 2.3 describe
these views in more detail. Full details of the running example
we use are provided in Section 4.1.
Our prototype permits a software engineer to easily switch
from a particular cel to the summary view. A user may also
move through the sequence of cels sequentially or via random
access; animation controls, such as play, stop, step forward
and step backward, allow a user to review the execution trace
and pause or return to points of interest. We discuss the navigation
capabilities of our visualization in Section 2.3.
2.1 Cels
A cel consists of a canvas upon which are drawn a set of wid-
gets. These widgets consist of:
- boxes, each representing a set of objects abstracted within the high-level model (Section 3.2) defined by the engineer,
- a directed hyperarc between and through some of the boxes,
- a set of directed arcs between pairs of boxes, each of which indicates that a method on an object in the destination box has been invoked from a method on an object in the source box,
- a bar-chart style of histogram associated with each box, indicating the ages of and garbage collection information about the objects associated with that box,
- annotations and bars within each box, and
- annotations associated with each directed arc.
Each box drawn identically within each cel represents a
particular abstract entity specified by the engineer, and thus,
does not change through the animation. The grey rectangles
in Figures 1 through 3 labelled Clustering, SimFunc, ModulesAndSuch, and Rest are boxes corresponding to abstract entities
of the same names. Two of these entities, Clustering and
SimFunc, each correspond to a class in the reverse engineering
tool source; the other two entities represent collections of
classes.
Figure 1: A window showing an example cel in the visualization technique.
Figure 2: A window showing the next cel after that in Figure 1 for the same system and execution trace.
Figure 3: A window showing the Summary View for the same system and execution trace as shown in Figures 1 and 2.
Figure 4: A pop-up window produced by clicking on the Allocation Pattern histogram for the Clustering entity of Figure 3.
The path of the hyperarc represents the call stack at the end
of the current interval being displayed. In Figure 1, the current
call stack only travels between the Clustering and Rest
boxes-the hyperarc is marked in red (shown as a dashed
black line herein); in Figure 2, the call stack has extended to
SimFunc as well.
The set of directed arcs represents the total set of calls between
boxes up to the current interval; they are displayed in
blue (shown as solid black herein). Because the total number
of pairs of boxes is manageable, this set does not obscure the
rest of the cel significantly. Multiple instances of interaction
between two boxes are shown as a number annotating the directed
arc. The same two arcs are shown in Figures 1 and 2
from Clustering to Rest, with 123 calls, and from Rest to Sim-
Func, with 122 calls.
Object creation, age, and destruction are a particular focus
within the visualization. Each box is annotated with numbers
and bars indicating the total number of objects that map to the
box that have been allocated and deallocated until the current
interval. The length of a bar for a given box is proportional to
the maximum number of objects represented by that box over
the course of the visualization. The Clustering box of Figure 1
shows that a total of 1127 objects associated with it had been
created to this point in the execution, and that 1037 of these
had been garbage collected.
The histogram associated with each box shows this information
as well, but in a more refined form. An object that
was created in the interval being displayed has survived for
a single interval; stepping ahead one cel, if it still exists, the
object has survived for two intervals, and so on. The kth bin
of the histogram shows the total number of objects mapped to
the box that are of age k; to limit the number of bins in the
histogram, any objects older than some threshold age T are
shown in the rightmost bin of the histogram. The histogram
attached to the Clustering box in Figure 1 indicates that all of
its 1127 objects were created relatively far in the past, more
than T intervals before the one being shown here.
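As a minimal illustration (the selector and variable names here are ours, not the tool's), computing the bin for an object amounts to clamping its age to the threshold T:
binFor: creationIndex atCel: currentIndex threshold: t
    "An object created in the displayed interval has age 1; anything
     older than t intervals lands in the rightmost bin."
    ^(currentIndex - creationIndex + 1) min: t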
Colour is used to differentiate those objects that still exist
from those that have been garbage collected; each bar of
the histogram is divided into a lower, green part (marked in a
vertical-line pattern herein) for the living objects and an up-
per, red part (marked in a diagonal-line pattern herein) for the
deleted objects. In Figure 1, the upper part of the bar in Clus-
tering's histogram shows that roughly 80% of the old objects
have been deallocated. Yellow (shown as light grey herein) is
used both within the box annotations and within histograms to
indicate a change that just occurred during the interval. More
specifically, it is used to show objects that have just been created
or deleted. In Figure 2, which shows the interval immediately
after that of Figure 1, an additional 324 objects had just
been allocated that were related to Clustering. This allocation
is shown both by the yellow (light grey) portion of the upper
bar, and the yellow (light grey) bar in the first bin of the histogram
No complex graph layout algorithms are currently used to
produce the views. The drawing package used in the prototype
supports interactive rearrangement of the widgets by its
user.
Each cel is intended to represent a combination of information
not present in its predecessor (in terms of the original ex-
ecution) and a summary-to-date of the information in its predecessors
and itself. The new information is difficult to interpret
in isolation; the context provided by the summary-to-date
eases this interpretation. See Section 5.3 for further discussion.
2.2 Summary View
In addition to the individual cels, a summary view is provided
to display the overall execution of the system being studied.
This view shows the same boxes, directed arcs, arc annota-
tions, and box annotations as the final cel of the animated
view. In addition, it displays two histograms per box; these
are different from the histograms in the animated view. One,
the allocation pattern, shows the entire execution trace divided
into a set of ten equal-length intervals; if the trace consists
of 10n events then each interval consists of n contiguous
events. The height of each bar represents the number of objects
allocated in that interval that map to that box. The other
histogram, the deallocation age, shows the age of every object
associated with the box when the object was garbage col-
lected; if an object had not been garbage collected when tracing
ended, it is displayed in the rightmost bar. For example
in
Figure
3, Clustering's deallocation-age histogram shows
that most of its objects were deallocated at a very young age
while the rest still existed when tracing stopped-this is the
case for all of the boxes in this example except ModulesAnd-
Such, whose associated objects were always deallocated at a
young age. Clustering's allocation pattern is fairly uniform,
showing only a slight increase in allocations halfway through
execution; on the other hand, SimFunc stopped allocating objects
after the halfway point.
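As another illustrative sketch with invented names, the allocation pattern simply assigns each allocation event, identified by its 1-based position in the trace, to one of the ten equal-length intervals:
allocationBinFor: eventIndex of: totalEvents
    "Answer which of the ten equal-length intervals of the trace the
     allocation event falls into."
    ^((eventIndex - 1) * 10 // totalEvents) + 1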
2.3 Navigating the Visualization
There are three forms of interaction with the visualization:
view selection, animation control, and detail querying.
View selection simply entails choosing between a summary
view, or the detailed, cel-based view. It would be reasonable
to allow multiple, simultaneous views, both summary
and cel-based, but this is not provided by the current imple-
mentation. However, the off-line nature of this technique
(Section 3.1) allows multiple instances of the tool to be run
simultaneously.
Animation control is provided by several buttons, a slider,
and the textual entry of particular values. The buttons are Step
Backwards and Forwards (by step-size number of cels), Play,
and Stop. The slider is used for random access to a cel in a
drag-and-drop fashion.
Textual entry is used to specify step- and interval sizes, and
animation speed; altering the step size allows the engineer
to move through the animation more quickly by not showing
some cels-this allows the animation to proceed more quickly
when the redisplay rate of the graphics software and hardware
is slower than the desired rate of animation.
By default, an interval ends upon an event that caused a
frame to be added or removed from the execution stack; these
events include making or returning from a method call, and
generally, each object allocation or deallocation. This granularity
is generally too fine to be usable-with tens of thousands
of method calls occurring and similar numbers of objects
being created and destroyed, not much changes between
two adjacent cels, and histograms tend to have empty bins except
for the rightmost. Therefore, we allow the engineer to re-set
the interval size; a size of ten, for example, indicates that
each cel should represent the changes to the system produced
by ten events.
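One way to picture the effect of the interval size, sketched here with invented class and method names rather than the prototype's code: the state drawn in cel k is a running summary of every traced event up to the end of interval k.
stateForCel: k intervalSize: n
    | state |
    state := VisualizationState new.  "illustrative class"
    (events first: (k * n min: events size))
        do: [:each | state apply: each].
    ^state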
Detail querying allows the engineer to connect observations
made via the abstract visualization to the actual classes,
object allocations and deallocations, and method calls that are
being abstracted. This is done by clicking on the appropriate
widget for which details are sought. Arcs, hyperarcs, and histograms
can be clicked on in this way; all cause a textual dialog
window to pop-up (Figure 4).
This pop-up window contains a list of the dynamic entities
that were associated with the widget of interest. For exam-
ple, the pop-up for an arc contains a list of all the calls between
the boxes connected by that arc; the pop-up for the al-
location/deallocation histogram of the animated view gives a
list of the objects that mapped to that box, when they were
created, how old they were when garbage collected, and what
method caused them to be created. Selection of an entry
within these pop-up windows could be used to automatically
position the view in a textual code browser in a future version
of the tool.
3 CONSTRUCTING THE VISUALIZATION
A software engineer employs a four-stage process when using
our visualization technique (Figure 5).
1. Data is collected from the execution of the system being
studied, and is stored to disk.
2. The software engineer designs a high-level model of the
system as a set of abstract entities that are selected to investigate
the structural aspects of interest. For example,
in
Figure
5, "utilities" and "database" are specified as entities
3. The engineer describes a mapping between the raw dynamic
information collected and the abstract entities.
Figure
5, for instance, shows that any dynamic information
whose identifier begins with "foo" (such as objects
whose class name starts with "foo") is to be mapped to
the "utilities" entity. This mapping is applied to the raw
information collected by the tool, producing an abstract
visualization of the dynamic data.
4. The software engineer interacts with the visualization,
and interprets it to investigate the dynamic behaviour of
the system.
The process is deliberately divided into multiple stages to
increase its usability. Rather than having to complete the entire
process every time any change is required for the task
of discovery, iteration can occur over any suffix of the pro-
cess. For example, the software engineer might begin with
an extremely coarse view of the program, knowing very little
about its performance; after interacting with the resulting
visualization and gaining a partial understanding of the studied
system's operation, the software engineer need only alter
the high-level model and corresponding mapping to generate
a new visualization-there is no need to re-collect the identical
dynamic information.
This process is based on the work of Murphy et al. [12]. We
compare our visualization technique to this previous work in
Section 6.
3.1 Stage 1: Gathering Dynamic Information
Dynamic system information is collected for every method
call, object creation, and object deletion performed by the system
being investigated. In other words, trace information is
collected. This information currently consists of an ordered
list of:
- the class of the object calling the method or having the object created, and
- the class of the object and method being called or returning from, or the class of object being created or deleted.
Since the tool currently uses complete trace information, the
complete call stack for any given moment can be reconstructed.
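A hedged sketch of this reconstruction (the event protocol shown is illustrative, not the tool's actual classes): replaying the call and return events in trace order yields the stack at any point.
stackAfter: someEvents
    | stack |
    stack := OrderedCollection new.
    someEvents do: [:each |
        each isCall ifTrue: [stack addLast: each].
        (each isReturn and: [stack isEmpty not]) ifTrue: [stack removeLast]].
    ^stack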
Figure 5: The process.
Because our implementation is in Smalltalk, this information
is collected by instrumenting the Smalltalk virtual machine
(VM) to log these events as they occur. There is nothing
inherent in the tool in its use of Smalltalk; it could be as easily
implemented in any language in which the execution was
instrumented to collect the required information.
Because a software engineer often needs to understand dynamic
problems that only occur after significant initialization
of the studied system, the collection of the trace information
needs to be performed only during portions of the execution.
This eliminates extraneous information not of interest, and
speeds up the process of collection. In our implementation,
VM methods are available to dynamically activate and deactivate
tracing. We used these methods to collect data for Figures
1 through 4 that included only the main iteration loop of
the algorithm, excluding execution pertaining to initialization
and the output of results.
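A minimal sketch of such bracketing, in which TraceControl, studiedProgram and the selectors are hypothetical stand-ins for the VM methods just described:
collectMainLoopTrace
    TraceControl activateTracing.
    [studiedProgram runMainIterationLoop]
        ensure: [TraceControl deactivateTracing]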
3.2 Stage 2: Choosing a High-level View
The software engineer typically begins an investigation with
some idea of the static structure of the system being studied.
Even when this is not the case, the naming conventions and
organization of the source code itself often allow some guess
as to the system's structure.
The engineer chooses a high-level structural view to use as
the basis for visualization by stating the names of the abstract
entities. These entities may correspond to actual system com-
ponents, be aggregates or subdivisions thereof, or have little
connection to reality. In Figures 1 through 4, for instance, the
investigator chose two entities representing specific classes in
the program (Clustering and SimFunc), and two entities representing
collections of classes (Rest and ModulesAndSuch ).
3.3 Stage 3: Specifying a Mapping
For the tool to indicate the dynamic interactions between the
abstract entities, it needs to have a map relating dynamic system
entities to the abstract ones. A map indicates that specific
system entities, such as objects of a given class or methods
matching a particular pattern, are to be represented by a specific
abstract entity (and thus, by a box in the visualization).
An engineer states this mapping using a declarative mapping
language. To be usable, a mapping language must allow an
engineer to easily express the relationships between entities.
The mapping language's constructs are based on the standard
Smalltalk notion of structural hierarchy: methods are
grouped into classes, classes into categories, categories into
subapplications, and subapplications into applications. A map
consists of an ordered set of entries, each of which has three
parts:
1. a name indicating the level of the Smalltalk structural hierarchy
being mapped,
2. a regular expression indicating the set of names to
map for the particular structural hierarchy level being
mapped, and
3. the name of the abstract entity to which these dynamic
system entities are to be mapped.
Methods are provided for mapping a class regular expression
plus method regular expression simultaneously, and
subapplication, class, category, and method simultaneously.
For example, say the engineer has defined an abstract entity
named foo, and every message foo passed to classes named
*bar* within the subapplications dog* should be mapped
there; this would be indicated by a map entry:
matchingSubApplication: 'dog*'
class: '*bar*' category: '*'
method: 'foo' mapTo: 'foo'.
The example in Figures 1 through 4 used the following map:
matchingClass: 'ArchClusteringAnalysis'
mapTo: 'Clustering'
matchingClass: 'ArchModuleGroup'
mapTo: 'ModulesAndSuch'
matchingClass: 'ArchProcedure'
mapTo: 'ModulesAndSuch'
matchingClass: 'ArchSymbol'
mapTo: 'ModulesAndSuch'
matchingClass: 'ArchSimFunc'
mapTo: 'SimFunc'
matchingSubApplication: 'Schwanke*'
mapTo: 'Rest'
Because we are interested in visualizing the interactions
between system components, the tool takes note both of the
method being called and the method being executed when
it was called. The same set of map entries is used to map
both; the visualization itself will differentiate between incoming
and outgoing calls.
Individual objects are also mapped in this way. Because it
is often important where an object was created, we track objects
not simply based on their class, but also in terms of the
call stack that was present when it was created. Such an object
will typically be mapped to a particular abstract entity through
the mechanism described above; the object is treated as belonging
to that abstract entity and is represented through its
visualization (i.e., through its representation as a box).
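Applied to the raw data, such a map behaves like an ordered lookup table. The following sketch is only illustrative (mapEntries, matches: and targetName are invented names, not the tool's classes) and anticipates the two properties discussed next: entries are tried in order, the first match wins, and an entity matching no entry is left out of the visualization.
boxFor: aName
    mapEntries do: [:entry |
        (entry matches: aName) ifTrue: [^entry targetName]].
    ^nil  "no match: the entity is not represented"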
The mapping possesses two important properties: it is partial
and it is ordered. The ordering means that each system
entity is mapped to a single abstract entity, the first one for
which the map entry is a valid match. The mapping is partial
because a software engineer does not need to express the
structure of the entire system before investigating it. If a system
entity fails to match every entry in the map, it is not represented
in the resulting visualization. This feature both decreases
the overhead for the tool and removes unwanted information
from the visualization. If the engineer wants every
dynamic entity to appear in the visualization, a final entry in
the map of the form:
matchingAnything: '*' mapTo: 'default'
will act as a default abstract entity for all dynamic entities that
"fall through" the other map entries.
4 EVALUATION
Three fundamental questions that must be answered about any
software visualization are:
- Is the technique useful to software engineers trying to perform a task on a system?
- Is the technique usable by software engineers?
- For what kinds of software engineering tasks is the visualization helpful?
Evaluating a technique against each of these questions requires
a number of careful, in-depth studies. These studies
are warranted only after an initial determination of the coarse-grained
utility of a technique. In this paper, we report on results
from our preliminary investigations into the utility of our
visualization technique.
In our preliminary investigations, we chose to fix the kind
of software engineering task studied to be performance tun-
ing. This task was chosen because it is heavily reliant on dynamic
system information and because it tends to be delegated
to "expert" developers. A visualization technique that
can aid a non-expert developer in tackling performance problems
would thus be beneficial in increasing the use of dynamic
system information by engineers, which was one of our initial
goals.
We also chose to focus on the usefulness of the technique,
rather than its usability. This decision was reasonable because
the main features of the technique affecting its usability have
been investigated in other related domains. The iterative selection
of the high-level entities and designation of the mapping
by the software engineer, for instance, are also characteristic of
the software reflexion model approach from which this visualization
technique is derived. Users of the software reflexion
model approach have not had difficulties performing these
steps [10, 11].
Our preliminary studies, then, focus on investigating the
usefulness of the visualization. We report on two case stud-
ies. The first case study (Section 4.1) discusses the use of
the visualization technique by one of the authors to determine
why a Smalltalk implementation of a reverse engineering algorithm
[13] was running slower than expected. In this sce-
nario, we focus on the differences in information provided by
the visualization technique compared to a profiler. In the second
case study (Section 4.2), we had both an expert and a non-expert
Smalltalk developer use the visualization to attempt to
discover the cause of a performance problem with the visualization
technique itself. We report on both qualitative and
quantitative data collected about the use of the visualization.
4.1 Case Study #1
A hierarchical agglomerative reverse engineering algorithm
attempts to automatically cluster entities, such as procedures
in a C program, comprising a software system into subsystems
(modules) based on a similarity function. One of the authors
wanted to determine why a Smalltalk implementation of a particular
algorithm [13] executed significantly more slowly than
a C++ implementation.
The algorithm starts by placing each procedure in a separate
module. It then iteratively computes the similarity function
between each possible pair of modules; in each iteration, the
most similar pair of modules is combined. The algorithm terminates
when a specified number of modules are left or when
no modules are similar enough to be combined.
The performance investigator had knowledge of the design
of each program, but had not implemented either program.
To examine the performance of the Smalltalk implementation,
the investigator first used the IBM VisualAge for Smalltalk
execution profiler. With this tool, a user can either sample
or trace the execution of an application, and then view collected
statistics, such as the amount of execution time spent in
particular methods or the number of garbage collection scav-
enges. After perusing several of these views, the investigator
determined about 16% of the execution time was spent
in methods of the ArchClusteringAnalysis class that
contains the main iteration loop, 5.5% was spent in methods
of the ArchCache class that acts as a cache for already computed
similarity values, and 4.6% was spent in computing new
similarity values. This result was not surprising. The information
confirmed the investigator's understanding of how the
program works, but did not provide any hints as to whether
the performance could be enhanced.
The investigator next applied the visualization tech-
nique, choosing a high-level model consisting of
four entities. One entity, Clustering, represented the
ArchClusteringAnalysis class. Another, SimFunc,
represented the class that had methods for computing the similarity
function. A third, ModulesAndSuch, represented the
functions and modules whose similarity was to be compared,
and a fourth, Rest, represented all other classes comprising
the program. The mapping associated the appropriate classes
(and sub-applications) with these boxes. The investigator
collected trace information for the main iteration loop of the
program and then began interacting with the visualization.
Playing through the abstracted information, the investigator
noted the large number of objects (over 4500) associated with
the SimFunc entity. The investigator viewed the summary
and queried it for the objects associated with SimFunc's box.
The object list contained many Set and MethodContext
objects (Figure 4). These results confirmed that the cost of
computing the similarity between two modules was high and
should be minimized. Returning to a "play" through the vi-
sualization, the investigator noted that the ratio of calls from
Clustering to Rest and from Rest to SimFunc was lower than
expected. Prior investigation had shown that the majority of
the calls between Clustering and Rest were due to calls on the
ArchCache object; calls from Rest to SimFunc represent
new computations of similarity. 1 This insight led the investigator
to study the ArchCache class. The investigator found
that the "key" value used to store and access similarity values
in the cache was not causing as many hits as it could. A slight
modification to the formation of keys resulted in an increase
of just over 25% in the speed of the program.
1 A better design for the program would have been to hide the cache behind the ArchSimFunc interface.
The visualization technique aided this performance-tuning
task by presenting information that caused the investigator to
ask, and answer, the "right" questions about the implementa-
tion. Insight into structural interactions in the system helped
the investigator narrow in on the algorithmic problem. The investigator
made use of both the interaction and object allocation
and deallocation information, the summary view, and
the ability to play, and re-play, through the traced execution.
4.2 Case Study #2
In the second case study, the tool was used to investigate its
own performance problems; specifically, due to a structural
design flaw, it was faster to step forward than to step backward
in the visualization tool. This flaw centered on the fact
that the implementor had chosen to generate cels on the fly and
often used simple linked lists to hold the required information
for the arc annotations; as a result, adding to these lists via the
method addInteractionsFrom:to:between:
was fast, but removing from the lists via removeInteractionsFrom:to:between:
required a linear-order search through each list. The implementor
of the tool had discovered this flaw and informed the
experimenters of its existence and its cause.
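The following sketch is purely illustrative of that asymmetry (the variable and the method bodies are ours, not the tool's source): appending to a simple list is constant time, whereas each removal must scan the whole list.
addInteractionsFrom: source to: destination between: aCount
    interactions addLast: (Array with: source with: destination with: aCount)

removeInteractionsFrom: source to: destination between: aCount
    "Linear-order search on every backward step."
    interactions removeAllSuchThat: [:each |
        ((each at: 1) = source and: [(each at: 2) = destination])
            and: [(each at: 3) = aCount]]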
To prepare for the studies, the experimenters gathered a
trace consisting of stepping forwards and backwards in the
visualization tool. 2 An initial high-level model and mapping
were also prepared for the participants as the short study periods
were intended to focus on the visualization itself, rather
than the process of creating a visualization. The high-level
model was very simple, and can be seen in the visualization
shown in Figure 6; the classes used by the tool all had names
that began with a two- or three-letter prefix, and thus were
mapped to abstract entities with these prefixes as names.
2 The visualization tool had to be run on a different, pre-existing execution trace. A toy example was used for this purpose, but choice of input was not a factor in the tool's symptoms. The second participant actually received a trace of only a step backwards.
In a separate session each, a previously collected trace was
given to two experimental participants: an expert at solving
performance problems in Smalltalk applications, and a
non-expert in solving performance problems in any language.
Each participant was given an introduction to the tool and a
short training session in which each had the opportunity to
use the tool on a toy problem. Then, the symptom of the flaw
in the tool was explained, and the parameters and interaction
that we had traced were described. Each was asked to determine
three or fewer points of interest within the source code
for the tool that they saw as being good candidates for more
detailed analysis; they were also asked to answer a set of questions
periodically in regards to their perceptions of the tool
and progress in their task. We audio-taped these question and
answer sessions. We also captured automatically a log of the
navigation pattern through the visualization using
instrumentation built into the prototype.
4.2.1 The Expert Participant
The expert participant began with a ten-minute inspection of
the summary view: the Gp box was seen to have the most
objects allocated, and most of these were immediately deal-
located. Querying the attendant Allocation Pattern histogram
showed that many of these objects were of the classes Point,
MethodContext, and BlockContext.
The animated view was then used, both in step forward and
backward mode and in play mode, to examine the range of cels
where many of these objects were being allocated; a repetitive
call pattern was observed between the Gp and Cdf boxes. The
arcs and hyperarcs between these boxes were queried for de-
tails, and the methods involved in this pattern were found by
the participant. A separate code browser was then used to investigate
the details causing this behaviour. After studying the
system for an hour, the participant decided that the likeliest
cause was in the methods:
- removeInteractionsFrom:to:between:, and
- addInteractionsFrom:to:between:.
The participant noted the similarity of code in these two meth-
ods. This observation made sense because the fundamental
problem was due to the data structure. The participant was
thus able to indicate a useful point to continue the investiga-
tion, as had been requested at the start of the study.
The expert participant liked two features of the tool in particular:
- the summary view, although the participant stated: "in this case [the effect] was slightly obvious [in the summary view]-it may not be so obvious in other cases"; and
- the animation of the hyperarc resulting from pressing "play", because of the way one can watch "how things go into loops or circles or watch the communication back and forth between different things, or specific things."
The expert participant felt the tool lacked two desirable features:
- integration between it and a traditional code browser, so one could, for example, select a method in a pop-up detail window and have the code browser display that method; and
- the lack of ability to view a detailed stack dump, comparable to that available from a Smalltalk debugger, particularly so that the parameter types being passed could be seen (this cannot be seen from the static code because Smalltalk is dynamically typed). The actual values being passed were deemed desired in some instances.
Code browser integration is a desired feature that has not
yet been implemented; the tool has been designed to accommodate
this change. The tool did allow the participant to narrow
the search to particular points of interest that could then
be investigated via a debugger or similar means. The desire
for greater, integrated information from the tool is understand-
able, but runs contrary to its design philosophy of complementing
existing techniques-it is not intended to supersede
the use of a debugger. This desire also highlights the tension
between off-line and on-line approaches to accessing dynamic
information.
4.2.2 The Non-Expert Participant
The non-expert participant made extensive use of both the object
histograms and the allocation/deallocation bars in the detailed
view to investigate the performance problem. Specif-
ically, the participant would find cels in which object deallocation
was not keeping pace with object allocation (i.e.,
Figure 6: Case study #2 visualization.
the green bar-shown herein via a vertical-line pattern-was
longer than the red bar-shown herein via a diagonal-line
pattern-within a box) and would then step forward to see
when objects were being allocated. Queries on the associated
histograms were then used to determine the classes of the allocated
objects. Less frequently, the participant would investigate
the calls involved with the allocations.
For the first forty minutes, the participant worked solely
with the visualization tool. After that, the participant began
to use the Smalltalk code browsers to study the associated
code. After approximately an hour with the tool,
the participant had identified two methods, including the
removeInteractionsFrom method, as a point in the
code at which to continue the investigation. This determination
was based, in part, on noticing a correlation between an
increase in message sends between the Gp and Cg boxes and
the number of objects allocated by Cg. Similar to the case of
the expert participant then, the non-expert found the correct
area of code to investigate, which was the task that had been
posed.
The non-expert found the deallocation age histograms and
the ability to determine the correlation between abstract information
to method and object names by clicking on histograms
and interactions in the visualization particularly help-
ful. However, the non-expert indicated a desire for different
displays of this information, finding the "screen with all the
methods [was] too cluttered." Similar to the expert, the non-expert
desired more integration with other Smalltalk tools,
such as the code browser. For instance, the participant wanted
to be able to select a call from a list of interactions and visit
that call site in the code.
During an interview part of the way through the study pe-
riod, the participant noted that it was difficult to attack the task
because of a lack of knowledge of what could cause performance
problems. The visualization tool provided some clue
as to how to proceed because of its emphasis on particular dynamic
information. The applicability of the dynamic information
chosen for other tasks requires further research.
5 DISCUSSION
Key features of our technique include off-line operation, a
navigable visualization of the collected data, cels based on
a running summary, and the use of a declarative mapping to
abstract fine-grained information about a system's execution.
We discuss each of these features and our use of trace information.
5.1 Off-line Operation
Using an on-line visualization technique can be a slow, unidirectional
procedure. Taking the technique off-line and separating
the visualization from the system execution can achieve
two benefits.
First, it allows the information to be preprocessed as a
whole prior to visualization, enabling the generation of summary
information about the entire execution. For the performance
tuning tasks described in the case studies, summary information
was used to provide clues about which parts of the
system to investigate as potential sources of the problem. After
accessing summary information, the users returned to investigate
detailed parts of the execution.
Second, it allows any partial trace of an execution to be reviewed
without having to re-run the entire execution. This review
capability permits the visualization to be navigable in a
way that is not possible for an on-line technique. Not only
may the trace be replayed from any arbitrary point, but also
it may be played backwards, or at a rate that is independent of
the speed of the original execution of the system being studied
5.2 Navigable Visualization
One advantage of an off-line visualization approach is the
navigation capability provided to the software engineer. The
user can unfold the execution in a forward, "play", mode, but
then can perform detailed investigations of particular parts of
the execution by moving the visualization both forward and
backward. In our current prototype, we do not associate any
information about the actual execution time with the off-line
navigation. Each step forward or backward in our visualization
takes time proportional to the display time of the next cel,
rather than representing the length of time required by an associated
method call, allocation or garbage collection. For some
tasks, including performance tuning, it would sometimes be
helpful to have steps between cels represent the system running
time.
5.3 Running Summary
We believe that separately displaying individual events, or
small groups of contiguous events, makes for an insufficient
visualization of a system execution because of a lack of connection
to the greater context of that execution. Some sort of
summary information is also needed.
We considered two means of providing such summary in-
formation: a single summary picture, such as that in Figure 3,
and a set of pictures showing the change to the state of the
system over individual intervals of its execution ("delta" in-
formation), which is not provided by our tool. But neither
alone would be sufficient to illustrate the dynamic nature of
the information we are attempting to visualize. The summary
picture clearly does not contain any temporal ordering
of events-it is difficult to look at one and mentally reconstruct
the sequence of events that produced it. Furthermore,
this summary alone cannot contain enough detail about the execution
to be useful without becoming so cluttered that it is
rendered unusable. Delta pictures address the concern of visualization
of the temporal nature of the information; however,
it is difficult to understand the relationship between a delta
picture and the execution in toto. To reach a compromise between
these alternatives, we chose to provide a running summary
of the execution within the individual cels. This implicitly
provides the temporal component of the summary information
while maintaining context for the delta information
within a cel.
Two other alternatives to maintain context are possible. In
the first, we could begin with a summary view such as that
provided by our tool. But rather than being a single, static
picture, it could also be divided into a sequence of cels each
of which would show the same summary information while
highlighting in a different colour, say, the information that was
changed or added over the represented interval, such as the directed
arcs that were traversed, or the subset of objects that
were deallocated. The second alternative is similar, but instead
of highlighting only the information that is different for
that interval, a running summary of all the information that
had changed from the start of execution of the system to the
current interval would be highlighted. Both can suffer from
the fact that a complete summary view can quickly become
too detailed, leading to information overload. However, both
these schemes could be used to complement the delta plus
running-summary combination currently used in our cels; we
have not yet investigated this possibility.
5.4 Mapping Objects
Each cel maps objects to abstraction units. Associating an object
with an abstraction unit using our declarative mapping approach
requires a means of "naming" objects. We chose to
name-more precisely, identify-an object based on where it
is created in the code: a software engineer identifies objects
mapping to a particular abstraction unit by describing a part
of the call stack that exists when one of the objects is created.
This approach has the advantage that an engineer can identify
collections of objects by perusing the source code and describing
the locations where relevant allocations occur. Another
possible choice would be to name objects based on their class.
However, this approach to naming would not allow objects of
the same class to be mapped to different abstraction units, limiting
the ability of the engineer to differentiate distinct uses of
classes.
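One plausible attribution rule, sketched with invented names and reusing the boxFor: lookup sketched in Section 3.3, is to walk the creating call stack from its most recent frame downward and attribute the new object to the first frame whose method maps to an abstract entity; the tool's actual rule may differ in detail.
boxForObjectCreatedWith: aCallStack
    aCallStack reverseDo: [:frame |
        (self boxFor: frame className) ifNotNil: [:boxName | ^boxName]].
    ^nil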
Currently, the mapping provided by the engineer is applied
uniformly to all dynamic information collected as the system
executes. A ramification of this decision is that once an object
is associated with an abstraction unit, it remains associated
with that unit for the duration of the visualization. Some-
times, though, it may be useful to modify the association of
objects to abstraction units over the course of the execution.
For instance, if an object is created in one subsystem, but is
then immediately passed as an argument to another subsys-
tem, it may be useful to capture the "migration" of the object.
Supporting this migration would require not only a means to
allow the engineer to describe when and how the migration
would occur, but also would require updates to the use of histograms
for object allocation and deallocation. Further understanding
of how this capability might help in the performance
of tasks is required before support is added.
5.5 Dynamic Information
Our current prototype visualizes trace information collected
about a system's execution. Trace information has the benefit
that it is complete: all object interactions, allocations,
and deallocations are included in the trace. Complete information
is easy for the engineer to reason about. However,
trace information has the often cited problem of being voluminous
[9, 2, 8]. Tracing even small pieces of a system's execution
can result in huge amounts of data. Although we have
been able to successfully use trace data to investigate some
performance problems, the use of trace information limits the
flexibility and usability of our current prototype. We plan to
investigate the use of sampled information as a basis for our
prototype to overcome some of these limitations.
6 RELATED WORK
De Pauw et al. have developed a number of visualizations to
describe the execution of an object-oriented system, including
inter-class call cluster diagrams, inter-class call matrices,
a histogram of instances, and an allocation matrix [1]. All
of these visualizations show fine-grained execution information
about individual classes and objects. The utility of these
visualizations degrades as the size, measured in the number
of classes, of a system grows. Several other similar object-
and class-level visualization approaches have been developed
(e.g., [6, 5]); these techniques share the same scalability problem.
Lange and Nakamura in the Program Explorer tool allow
the developer to integrate, off-line, static and dynamic information
about a program to aid comprehension activities [7, 8].
For instance, they show how this combination of information
can help a developer find and investigate instances of design
patterns in a system. The visualizations they produce are also
at a fine-grained level. Vlissides et al. use a different notion
of pattern, which they refer to as execution patterns, to help
developers investigate the large amount of fine-grained execution
information available about a system [3]. Specifically,
they allow a developer to query an on-line animation for patterns
appearing in a dynamic execution stream. In both the
Program Explorer and execution pattern approaches, the developer
must apply detailed knowledge about a system to formulate
appropriate queries.
Jerding et al. have applied the information mural approach
to create a scalable visualization of fine-grained program
events [4]. The result, an execution mural, places classes
vertically on the screen and uses single pixel vertical bars,
with various colouring approaches, to indicate calls between
classes. The interactions occurring in the system are then
shown across the screen. Using this approach, thousands of
interactions occurring between objects can be visualized on
one screen. The authors extend these ideas to a Pattern Mural
that provides an information mural display of automatically
detected common occurring sequences of calls (patterns)
in the execution. Although this approach may help a developer
find unexpected patterns, or verify existing patterns in the
code, it still visualizes only fine-grained information about the
system.
The approach taken by Sefika et al. differs in allowing a developer
to utilize coarse-grained system information to produce
visualizations [14]. Using their technique, a developer
may introduce various abstractions into the system instrumentation
process, including subsystem, framework and pattern-level
abstractions. The abstractions can then be used as a basis
for several visualizations including affinity and ternary dia-
grams. The coarser-grained visualizations produced with this
technique make it easier for developers to investigate inter-component
interactions in large systems than previous approaches.
Some of the design decisions Sefika et al. made in developing
their technique limit its flexibility. Choosing an on-line
approach permits a link between the speed shown in the visualization
and the execution speed. However, as we have
discussed, an on-line approach limits the modes of investigation
available to an engineer. Choosing an approach that
hard-wires the abstractions of interest into the instrumentation
process provides an effective data gathering mechanism; how-
ever, it decreases the usability of the technique by making it
more difficult for an engineer to apply it to a new system. We
have been able to easily apply our technique to different systems
because of the separation in our process between data
gathering and visualization.
Our visualization technique builds on the software reflexion
model technique developed by Murphy et al. [12, 10]. The reflexion
model technique helps an engineer access both static
and dynamic information about a system by enabling a comparison
between a posited high-level model and a model representing
information extracted from either the static source or
from a system's execution. Similar to our visualization tech-
nique, the software reflexion model depends on a declarative
mapping language. Our visualization technique extends the
reflexion model work in three fundamental ways: by applying
the abstraction approach across discrete intervals of the execution
with animation controls, by providing support to map
dynamic entities rather than only static entities, and by mapping
memory aspects of an execution in addition to interac-
tions. Our visualization technique also uses the running summary
model rather than the complete summary model used in
the reflexion model approach.
7 CONCLUSIONS AND FUTURE WORK
Condensing dynamic information collected during a system's
execution in terms of abstractions that represent coarse system
structure, such as frameworks and subsystems, can help
software engineers investigate the behaviour of a system. We
have developed a visualization technique that allows engineers
to flexibly define the coarse structure of interest, and to
flexibly navigate through the resulting abstracted views of the
system's execution. Our approach complements and extends
existing visualization techniques.
Our preliminary investigations into the usefulness and usability
of the visualization indicate it shows promise for enhancing
a software engineer's ability to utilize dynamic information
when performing tasks on a system. To date, we have
focused on the use of dynamic information to aid one particular
software engineering task: performance tuning. We intend
to continue our investigations into the utility of the entire
technique through more extensive case studies on a wider
range of tasks on larger systems. Although there is evidence
elsewhere [10, 11] that the iterative mapping approach is usable
for static information, our further studies will investigate
if this remains true for dynamic information.
ACKNOWLEDGMENTS
This work was funded by a British Columbia Advanced Systems
Institute Industrial Partnership Program grant, by OTI,
Inc., and by an NSERC research grant. We thank Edith Law
for participating in the case study, and we thank the anonymous
reviewers for their comments.
--R
Visualizing the behavior of object-oriented systems
Modeling object-oriented program execution
Execution patterns in object-oriented visualization
Visualizing interactions in program executions.
Interactive visualization of design patterns can help in framework understanding.
Efficient program tracing.
Lightweight Structural Summarization as an Aid to Software Evolution.
Reengineering with reflexion models: A case study.
Software reflexion models: Bridging the gap between source and high-level models
An intelligent tool for re-engineering software modularity
--TR
Visualizing the behavior of object-oriented systems
Interactive visualization of design patterns can help in framework understanding
Software reflexion models
Architecture-oriented visualization
Visualizing interactions in program executions
An intelligent tool for re-engineering software modularity
Efficient program tracing
Object-Oriented Program Tracing and Visualization
Reengineering with Reflexion Models
Modeling Object-Oriented Program Execution
Lightweight structural summarization as an aid to software evolution
--CTR
George Yee, Visualization for privacy compliance, Proceedings of the 3rd international workshop on Visualization for computer security, November 03-03, 2006, Alexandria, Virginia, USA
Steven P. Reiss, Visual representations of executing programs, Journal of Visual Languages and Computing, v.18 n.2, p.126-148, April, 2007
Andrés Moreno , Mike S. Joy, Jeliot 3 in a Demanding Educational Setting, Electronic Notes in Theoretical Computer Science (ENTCS), 178, p.51-59, July, 2007
Johan Moe , David A. Carr, Using execution trace data to improve distributed systems, Software: Practice & Experience, v.32 n.9, p.889-906, July 2002
Lei Wu , Houari Sahraoui , Petko Valtchev, Program comprehension with dynamic recovery of code collaboration patterns and roles, Proceedings of the 2004 conference of the Centre for Advanced Studies on Collaborative research, p.56-67, October 04-07, 2004, Markham, Ontario, Canada
Robert J. Walker , Gail C. Murphy, Implicit context: easing software evolution and reuse, ACM SIGSOFT Software Engineering Notes, v.25 n.6, p.69-78, Nov. 2000
Bradley Schmerl , David Garlan , Hong Yan, Dynamically discovering architectures with DiscoTect, ACM SIGSOFT Software Engineering Notes, v.30 n.5, September 2005
Davor Čubranić , Gail C. Murphy, Hipikat: recommending pertinent software development artifacts, Proceedings of the 25th International Conference on Software Engineering, May 03-10, 2003, Portland, Oregon
Robert J. Walker , Gail C. Murphy , Jeffrey Steinbok , Martin P. Robillard, Efficient mapping of software system traces to architectural views, Proceedings of the 2000 conference of the Centre for Advanced Studies on Collaborative research, p.12, November 13-16, 2000, Mississauga, Ontario, Canada
Iain Milne , Glenn Rowe, OGRE: Three-Dimensional Program Visualization for Novice Programmers, Education and Information Technologies, v.9 n.3, p.219-237, September 2004
Avi Bryant , Andrew Catton , Kris De Volder , Gail C. Murphy, Explicit programming, Proceedings of the 1st international conference on Aspect-oriented software development, April 22-26, 2002, Enschede, The Netherlands
Giuseppe Pappalardo , Emiliano Tramontana, Automatically discovering design patterns and assessing concern separations for applications, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Hong Yan , David Garlan , Bradley Schmerl , Jonathan Aldrich , Rick Kazman, DiscoTect: A System for Discovering Architectures from Running Systems, Proceedings of the 26th International Conference on Software Engineering, p.470-479, May 23-28, 2004
Abdelwahab Hamou-Lhadj , Timothy C. Lethbridge, A survey of trace exploration tools and techniques, Proceedings of the 2004 conference of the Centre for Advanced Studies on Collaborative research, p.42-55, October 04-07, 2004, Markham, Ontario, Canada
Paul Gestwicki , Bharat Jayaraman, Methodology and architecture of JIVE, Proceedings of the 2005 ACM symposium on Software visualization, May 14-15, 2005, St. Louis, Missouri
Raimondas Lencevicius , Urs Hölzle , Ambuj K. Singh, Dynamic Query-Based Debugging of Object-Oriented Programs, Automated Software Engineering, v.10 n.1, p.39-74, January
Eleni Stroulia , Tarja Systä, Dynamic analysis for reverse engineering and program understanding, ACM SIGAPP Applied Computing Review, v.10 n.1, p.8-17, Spring 2002
Martin P. Robillard , Gail C. Murphy, Representing concerns in source code, ACM Transactions on Software Engineering and Methodology (TOSEM), v.16 n.1, p.3-es, February 2007 | software visualization;performance;software structure;program comprehension;execution trace;programming environments |
286977 | Multiple dispatch as dispatch on Tuples. | Many popular object-oriented programming languages, such as C++, Smalltalk-80, Java, and Eiffel, do not support multiple dispatch. Yet without multiple dispatch, programmers find it difficult to express binary methods and design patterns such as the "visitor" pattern. We describe a new, simple, and orthogonal way to add multimethods to single-dispatch object-oriented languages, without affecting existing code. The new mechanism also clarifies many differences between single and multiple dispatch. | INTRODUCTION
Single dispatch, as found in C++ [Stroustrup 97], Java
[Arnold & Gosling 98, Gosling et al. 96], Smalltalk-80
[Goldberg & Robson 83], and Eiffel [Meyer 92, Meyer 97],
selects a method using the dynamic class of one object, the
message's receiver. Multiple dispatch, as found in CLOS
[Chapter 28, Steele 90] [Paepcke 93], Dylan [Shalit 97,
Feinberg et al. 97], and Cecil [Chambers 92, Chambers 95],
generalizes this idea, selecting a method based on the
dynamic class of any subset of the message's arguments.
Multiple dispatch is in many ways more expressive and
flexible than single dispatch in object-oriented (OO)
programming [Bobrow et al. 86, Chambers 92, Castagna 97,
Moon 86].
In this paper we propose a new, simple, and orthogonal way
of adding multiple dispatch to existing languages with single
dispatch. The idea is to add tuples as primitive expressions
and to allow messages to be sent to tuples. Selecting a
method based on the dynamic classes of the elements of the
tuple gives multiple dispatch. To illustrate the idea, we have
designed a simple class-based OO language called Tuple * .
While perhaps not as elegant as a language built directly with
multiple dispatch, we claim the following advantages for our
mechanism:
1. It can be added to existing single dispatch languages,
such as C++ and Java, without affecting either (a) the
semantics or (b) the typing of existing programs written
in these languages.
2. It retains the encapsulation mechanisms of single-
dispatch languages.
3. It is simple enough to be easily understood and
remembered, we believe, by programmers familiar with
standard single dispatch.
4. It is more uniform than previous approaches of
incorporating multiple dispatch within a single-
dispatching framework.
To argue for the first two claims, we present the semantics of
the language in two layers. The first layer is a small, class-based
single-dispatching language, SDCore, of no interest in
itself. The second layer, Tuple proper, includes SDCore and
adds multiple dispatch by allowing messages to be sent to
tuples.
Our support for the third claim is the simplicity of the
mechanism itself. If, even now, you think the idea of sending
messages to tuples is clearly equivalent to multiple dispatch,
then you agree. To support the fourth claim, we argue below
that our mechanism has advantages over others that solve the
same problem of incorporating multiple dispatch into
existing single-dispatching languages.
The rest of this paper is organized as follows. In Section 2 we
describe the single-dispatching core of Tuple; this is needed
only to show how our mechanism can be added to such a
language. In Section 3 we describe the additions to the
single- dispatching core that make up our mechanism and
support our first three claims. Section 4 supports the fourth
claim by comparison with related work. In Section 5 we offer
some conclusions. In Appendix A we give the concrete
syntax of Tuple, and in Appendix B we give the language's
formal typing rules.
(Appears in the OOPSLA '98 Conference Proceedings, Conference on
Object-Oriented Programming, Systems, Languages, and Applications,
Vancouver, British Columbia, Canada, October 18-22, 1998, pp. 374-387.)
* With apologies to Bruce et al., whose TOOPLE language [Bruce et al. 93]
is pronounced the same way.
2 THE SINGLE-DISPATCHING CORE OF TUPLE
In this section, we introduce the syntax and semantics of
SDCore by example.
2.1 Syntax and Semantics
The single-dispatching core of Tuple is a class-based
language that is similar in spirit to conventional OO
languages like C++ and Java. Integer and boolean values,
along with standard operations on these values, are built into
the language. Figure 1 illustrates SDCore with the well-known
Point/ColorPoint example. The Point class
contains two fields, representing an x and y coordinate
respectively. Each point instance contains its own values for
these fields, supplied when the instance is created. For
example, the expression new Point(3,4) returns a fresh
point instance with xval and yval set to 3 and 4
respectively. The Point class also contains methods for
retrieving the values of these two fields and for calculating
the distance from the point to some line. (We assume the
Line class is defined elsewhere.)
For ease of presentation, SDCore's encapsulation model is
extremely simple. An instance's fields are only accessible in
methods where the instance is the receiver argument. An
instance may contain inherited fields, which this rule allows
to be accessed directly; this is similar to the protected notion
in C++ and Java.
The ColorPoint class is a simple subclass of the Point
class, augmenting that definition with a colorval field
and a method to retrieve the color. (We assume that a class
Color is defined elsewhere.) The ColorPoint class
inherits all of the Point class's fields and methods. To
create an instance of a subclass, one gives initial values for
inherited fields first and then the fields declared in the
subclass. For example, one would write new
ColorPoint(3,5,red). As usual, subclasses can also
override inherited methods. To simplify the language,
SDCore has only single inheritance. (However, multiple
inheritance causes no problems with respect to the new ideas
we are advocating.)
We use a standard syntax for message sends. For example, if
p1 is an instance of Point or a subclass, then the
expression p1.distanceFrom(ln2) invokes the
instance's distanceFrom method with the argument
ln2. Within the method body, the keyword self refers to
the receiver of the message, in this case p1.
Dynamic dispatching is used to determine which method to
invoke. In particular, the receiver's class is first checked for
a method implementation of the right name and number of
arguments. If no such method exists, then that class's
immediate superclass is checked, and so on up the
inheritance hierarchy until a valid implementation is found
(or none is found and a message-not-understood error
occurs).
For simplicity, and to ease theoretical analysis, we have not
included assignment and mutation in SDCore. Again, these
could be easily added.
2.2 Type Checking for SDCore
An overview of the static type system for SDCore is included
here for completeness. The details (see Appendix B) are
intended to be completely standard. For ease of presentation
SDCore's type system is simpler than that found in more
realistic languages, as this is peripheral to our contributions.
There are four kinds of type attributes. The types int and
bool are the types of the built-in integer and boolean values,
respectively. The function type (T 1 ,.,T n )-T n+1 is the type
of functions that accept as input n argument values of types
respectively and return a result of type T n+1 . The
class type, CN, where CN is a class name, is the type of
instances of the class CN.
Types are related by a simple subtyping relation. The types
int and bool are only subtypes of themselves. The
ordinary contravariant subtyping rule is used for function
types [Cardelli 88]. A class type CN 1 is a subtype of another
class type CN 2 if the class CN 1 is a subclass of the class CN 2 .
To ensure the safety of this rule, the type system checks that,
for every method name m in class CN 2 , m's type in CN 2 is a
supertype of m's type in CN 1 . * Classes that do not meet this
check will be flagged as errors. Thus every subclass that
passes the type checker implements a subtype of its
superclass.
To statically check a message send expression of the form
E0.I(E1, ..., En), check that the static type of E0 is a
subtype of a class type CN whose associated class contains a
method I of type (T1, ..., Tn) -> Tn+1, where the types of the
expressions E1, ..., En are subtypes of the T1, ..., Tn
class Point {
fields (xval:int, yval:int)
method x():int { xval }
method y():int { yval }
method distanceFrom(l:Line):int
{ . }
}

class ColorPoint inherits Point {
fields (colorval:Color)
method color():Color { colorval }
}

Figure 1: The classes Point and ColorPoint.
* Unlike C++ and Java, SDCore does not allow static overloading of
method names. However, since we add multiple dispatch to SDCore by a
separate mechanism, and since dynamic dispatch can be seen as dynamic
overloading, there is little reason to do so.
respectively; in this case, E0.I(E1, ..., En) has type Tn+1. For
example, p1.distanceFrom(ln2) has type int,
assuming that p1 has type Point and ln2 has type Line.
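For illustration only, here is a minimal Python sketch of such a client-side check
over an explicit class table; the table layout and the helper names (is_subtype,
lookup, check_send) are illustrative assumptions, not part of SDCore.

# Hypothetical class table: class name -> (superclass or None,
#                                          {method: (parameter types, result type)})
CLASS_TABLE = {
    "Point":      (None,    {"x": ((), "int"), "y": ((), "int"),
                             "distanceFrom": (("Line",), "int")}),
    "ColorPoint": ("Point", {"color": ((), "Color")}),
}

def is_subtype(s, t):
    # By-name subtyping: s <= t iff s is t or a (transitive) subclass of t.
    while s is not None:
        if s == t:
            return True
        s = CLASS_TABLE.get(s, (None, {}))[0]
    return False

def lookup(cls, method):
    # Find a method signature in cls or one of its superclasses.
    while cls is not None:
        superclass, methods = CLASS_TABLE[cls]
        if method in methods:
            return methods[method]
        cls = superclass
    return None

def check_send(receiver_type, method, argument_types):
    # Static check of E0.I(E1, ..., En): the receiver's class must provide I,
    # and each argument type must be a subtype of the declared parameter type.
    signature = lookup(receiver_type, method)
    if signature is None:
        raise TypeError("message not understood")
    parameters, result = signature
    if len(parameters) != len(argument_types) or \
       not all(is_subtype(a, p) for a, p in zip(argument_types, parameters)):
        raise TypeError("argument type mismatch")
    return result

# Example: check_send("ColorPoint", "distanceFrom", ("Line",)) returns "int".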
2.3 Problems with Single Dispatching
Single dispatching does not easily support some
programming idioms. The best-known problem of this sort is
the "binary method problem" [Bruce et al. 95]. For example,
consider adding an equality test to the Point and
ColorPoint classes above as follows. (For simplicity, in
SDCore we have not included super, which would allow
ColorPoint's equal method to call Point's equal
method.)
class Point {
method equal(p:Point):bool
{ (self.x() = p.x())
and (self.y() = p.y()) }
}
class ColorPoint inherits Point {
method equal(p:ColorPoint):bool
-- a type error in SDCore
{ (self.x() = p.x())
and (self.y() = p.y())
and self.color().equal(p.color()) }
}
As is well-known, this makes it impossible for
ColorPoint to be considered a subtype of Point [Cook
et al. 90]. In other words, ColorPoint instances cannot be
safely used wherever a Point is expected, so
polymorphism on the point hierarchy is lost. (For this reason
the example is ill-typed in SDCore.)
The problem is semantic, and not a fault of the SDCore type
system. It stems from the asymmetry in the treatment of the
two Point instances being tested for equality. In particular,
one instance is the message receiver and is thereby
dynamically dispatched upon, while the other is an ordinary
argument to the message and plays no role in method
selection [Castagna 95]. Multiple dispatch avoids this
asymmetry by dynamically dispatching based on the run-time
class of both arguments.
A more general problem is the "visitor" design pattern
[Pages 331-344, Gamma et al. 95]. This pattern consists of a
hierarchy of classes, typically representing the elements of a
structure. In order to define new operations on these
elements without modifying the element classes, a separate
hierarchy of visitors is created, one per operation. The code
to invoke when a "visit" occurs is dependent both on which
visitor and which element is used. Multimethods easily
express the required semantics [Section 7, Baumgartner et
al. 96], while a singly-dispatched implementation must rely
on unwieldy simulations of multiple dispatching.
3 THE TUPLE EXTENSION OF SDCORE
Tuple extends SDCore with tuple expressions, tuple classes,
tuple types, and the ability to declare and send messages to
tuples, which gives multiple dispatch. Nothing in the
semantics or typing of SDCore is changed by this extension.
3.1 Syntax and Semantics
In Tuple the expression (E1, ..., En) creates a tuple of size n
with components v1, ..., vn, where each vi is the value of the
corresponding Ei.
Figure 2
shows how one would solve the Point/ColorPoint
problem in Tuple. Rather than defining equal methods
within the Point and ColorPoint classes, we create two
new tuple classes for the methods. In the first tuple class, a
tuple of two Point instances is the receiver. The names p1
and p2 can be used within all methods of the tuple class to
refer to the tuple components. However, tuple classes are
client code and as such have no privileged access to the fields
of such components. The second tuple class is similar,
defining equality for a tuple of two ColorPoint instances.
(We assume that there is a tuple class for the tuple (Color,
Color) with an equal method.) There can be more than
one tuple class for a given tuple of classes.
Since no changes are made to the Point or ColorPoint
classes when adding equal methods to tuple classes, the
subtype relationship between ColorPoint and Point is
unchanged. That is, by adding the equal method to a tuple
class instead of to the original classes of Figure 1,
ColorPoint remains a safe subtype of Point.
The syntax for sending a message to a tuple is analogous to
that for sending a message to an instance. For example,
(p1,p2).equal() sends the message "equal()" to
the tuple (p1,p2), which will invoke one of the two
* As in ML [Milner et al. 90], we do not allow tuples of length one. This
prevents ambiguity in the syntax and semantics. For example, an
expression such as (x).g(y) is interpreted as a message sent to an
instance, not to a tuple. Tuples have either zero, or two or more elements.
We allow the built-in types int and boolean to appear as components of a
tuple class as well. Conceptually, one can think of corresponding built-in
classes int and boolean, each of which has no non-trivial subclasses or
superclasses.
tuple class (p1:Point, p2:Point) {
method equal():bool
{ (p1.x() = p2.x())
and (p1.y() = p2.y()) }
}

tuple class (cp1:ColorPoint,
cp2:ColorPoint) {
method equal():bool
{ (cp1.x() = cp2.x())
and (cp1.y() = cp2.y())
and (cp1.color(), cp2.color()).equal() }
}

Figure 2: Two tuple classes holding methods for testing
equality of points.
equal methods. Just as method lookup in SDCore relies on
the dynamic class of the receiver, method lookup in Tuple
relies on the dynamic classes of the tuple components.
Therefore, the appropriate equal method is selected from
either the (Point, Point) or the (ColorPoint,
tuple class based on the dynamic classes of
p1 and p2. In particular, the method from the
(ColorPoint, ColorPoint) tuple class is only invoked
if both arguments are ColorPoint instances at run-time.
The use of dynamic classes distinguishes multiple dispatch
from static overloading (as found, for example, in Ada 83
[Ada 83]).
The semantics of sending messages to tuples, multiple
dispatch, is similar to that in Cecil [Chambers 95]. Consider
the expression (E1, ..., En).I(En+1, ..., Em), where each Ei evaluates to a
value vi, and where Cd,i is the minimal dynamic class of vi.
A method in a tuple class (C1, ..., Cn) is applicable to this
expression if the method is named I, if for each 1 <= i <= n the
dynamic class Cd,i is a subclass of Ci, and if the method takes
m-n additional arguments. (The classes of the additional
arguments are not involved in determining applicability, but
their number does matter.) Among the applicable methods
(from various tuple classes), a unique most-specific method
is chosen. A method M1 in a tuple class (C1,1, ..., C1,n) is more
specific than a method M2 in a tuple class (C2,1, ..., C2,n) if for
each 1 <= i <= n, C1,i is a subclass of C2,i. (The other arguments
in the methods are not involved in determining specificity.)
If no applicable methods exist, a message-not-understood
error occurs. If there are applicable methods but no most-
specific one, a message-ambiguous error occurs. Algorithms
for efficiently implementing multiple dispatch exist (see,
e.g., [Kiczales & Rodriguez 93]).
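This lookup is easy to mimic in a conventional language with a registry keyed by
tuples of classes. The Python sketch below is only an illustration of the dispatch
rule just described; the registry, decorator, and helper names are illustrative
assumptions, not part of Tuple.

# Registry of multimethods: message name -> list of (class tuple, function).
REGISTRY = {}

def defmethod(name, *classes):
    # Register a multimethod for the given tuple of dispatched-on classes.
    def register(fn):
        REGISTRY.setdefault(name, []).append((classes, fn))
        return fn
    return register

def at_least_as_specific(cs1, cs2):
    # cs1 is at least as specific as cs2 if each class is a subclass pointwise.
    return all(issubclass(c1, c2) for c1, c2 in zip(cs1, cs2))

def send(name, args, extra=()):
    # Dispatch on the dynamic classes of the tuple `args`; `extra` plays no
    # role in method lookup.
    classes = tuple(type(a) for a in args)
    applicable = [(cs, fn) for cs, fn in REGISTRY.get(name, [])
                  if len(cs) == len(classes)
                  and all(issubclass(c, d) for c, d in zip(classes, cs))]
    if not applicable:
        raise TypeError("message not understood")
    best = [m for m in applicable
            if all(at_least_as_specific(m[0], other[0]) for other in applicable)]
    if not best:
        raise TypeError("message ambiguous")
    return best[0][1](*args, *extra)

class Point:             pass
class ColorPoint(Point): pass

@defmethod("equal", Point, Point)
def _(p1, p2): return "compare coordinates"

@defmethod("equal", ColorPoint, ColorPoint)
def _(cp1, cp2): return "compare coordinates and colours"

# send("equal", (ColorPoint(), ColorPoint())) selects the second, more specific
# method; send("equal", (Point(), ColorPoint())) falls back to the first.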
This semantics immediately justifies part 1(a) of our claim
for the orthogonality of the multiple dispatch mechanism.
An SDCore expression cannot send a message to a tuple.
Furthermore, the semantics of message sends has two cases:
one for sending messages to instances and one for sending
messages to tuples. Hence the meaning of an expression in
SDCore is unaffected by the presence of multiple dispatch.
The semantics for tuple classes also justifies our second
claim. That is, since tuple classes have no special privileges
to access the fields of their component instances, the
encapsulation properties of classes are unaffected. However,
because of this property, Tuple, like other multimethod
languages, does not solve the "privileged access" aspect of
the binary methods problem [Bruce et al. 95]. It may be that
a mechanism such as C++ friendship grants would solve
most of this in practice. We avoided giving methods in tuple
classes default privileged access to the fields of the instances
in a tuple because that would violate information hiding. In
particular, any client could access the fields of instances of a
class C simply by creating a tuple class with C as a
component.
3.2 Multiple Dispatch is not just for Binary Methods
Multimethods are useful in many common situations other
than binary methods [Chambers 92, Baumgartner et al. 96]. In
particular, any time the piece of code to invoke depends on
more than one argument to the message, a multimethod
easily provides the desired semantics.
For example, suppose one wants to print points to output
devices. Consider a class Output with three subclasses:
Terminal, an ordinary Printer, and a
ColorPrinter. We assume that ColorPrinter is a
subclass of Printer. Printing a point to the terminal
requires different code than printing a point to either kind of
printer. In addition, color printing requires different code
than black-and-white printing. Figure 3 shows how this
situation is programmed in Tuple.
In this example, there is no binary method problem. In
particular, the addition of print methods to the Point and
ColorPoint classes will not upset the fact that
ColorPoint is a subtype of Point. The problem is that
we need to invoke the appropriate method based on both
whether the first argument is a Point or ColorPoint and
whether the second argument is a Terminal, Printer, or
ColorPrinter. In a singly-dispatched language, an
unnatural work-around such as Ingalls's "double
dispatching" technique [Ingalls 86, Bruce et al. 95] is
required to encode the desired behavior.
3.3 Tuples vs. Classes
The ability to express multiple dispatching via dispatching
on tuples is not easy to simulate in a single-dispatching
language, as is well-known [Bruce et al. 95]. The Ingalls
double-dispatching technique mentioned above is a faithful
simulation but often requires exponentially (in the size of the
tuple) more methods than a multimethod-based solution.
A second attempt to simulate multiple dispatch in single-
dispatching languages is based on product classes [Section 3.2, Bruce et al. 95].
tuple class (p:Point, out:Terminal) {
method print():()
{ -- prints Points to the terminal
. }
}
tuple class (p:Point, out:Printer) {
method print():()
{ -- prints Points to the printer
. }
}
tuple class (cp:ColorPoint,
out:ColorPrinter) {
method print():()
{ -- prints ColorPoints to the printer in color
. }
}

Figure 3: Multimethods in tuple classes for printing. The
unit tuple type, (), is like C's void type.
This simulation is not faithful, as it loses
dynamic dispatch. However, it is instructive to look at how
this simulation fails, since it reveals the essential capability
that Tuple adds to SDCore. Consider the following classes in
SDCore (adapted from the Bruce et al. paper).
class TwoPoints {
fields(p1:Point, p2:Point)
method equal():bool
{ (p1.x() = p2.x())
and (p1.y() = p2.y()) }
}
class TwoColorPoints {
fields(cp1:ColorPoint, cp2:ColorPoint)
method equal():bool
{ (cp1.x() = cp2.x())
and (cp1.y() = cp2.y())
and (new TwoColors(
cp1.color(),
cp2.color())).equal() }
}
With these classes, one could create instances that simulate
tuples via the new expression of SDCore. For example, an
instance that simulates a tuple containing two Point
instances is created by the expression new
TwoPoints(my_p1,my_p2). However, this loses
dynamic dispatching. The problem is that the new
expression requires the name of the associated class to be
given statically. In particular, when the following message
send expression is executed
(new TwoPoints(my_p1,my_p2)).equal()
the method in the class TwoPoints will always be invoked,
even if both my_p1 and my_p2 denote ColorPoint
instances at run-time.
By contrast, a tuple expression does not statically determine
what tuple classes are applicable. This is because messages
sent to tuples use the dynamic classes of the values in the
tuple instead of the static classes of the expressions used to
construct the tuple. For example, even if the static classes of
my_p1 and my_p2 are both Point, if my_p1 and my_p2
denote ColorPoint instances, then the message send
expression (my_p1,my_p2).equal() will invoke the
method in the tuple class for (ColorPoint,
ColorPoint) given in Figure 2. Hence sending messages
to tuples is not static overloading but dynamic overloading.
It is precisely multiple dispatch.
Of course, one can also simulate multiple dispatch by using
a variant of the typecase statement to determine the dynamic
types of the arguments and then dispatching appropriately.
(For example, in Java one can use the getClass method
provided by the class Object.) However, writing such code
by hand will be more error-prone than automatic dispatch by
the language. Such dispatch code will also need to be
duplicated in each method that requires multiple dispatch,
causing code maintenance problems. Every time a new class
enters the system, the dispatch code will need to be
appropriately rewritten in each of these places, while in
Tuple no existing code need be modified. Further problems
can arise if the intent of the dispatch code (to do dispatch) is
not clear to maintenance programmers. By contrast, when
writing tuple classes it is clear that multiple dispatch is
desired. The semantics of Tuple ensures that each dispatch is
handled consistently, and the static type system ensures that
this dispatching is complete and unambiguous.
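For comparison, a hand-written simulation of the same dispatch with explicit
run-time class tests might look like the following Python sketch (again purely
illustrative); the most specific cases must be tested first, and every such
function has to be edited whenever a new class or method combination enters the
system.

class Point:             pass
class ColorPoint(Point): pass

def equal(a, b):
    # Hand-coded "typecase" dispatch: the order of the tests encodes specificity.
    if isinstance(a, ColorPoint) and isinstance(b, ColorPoint):
        return equal_color_points(a, b)
    elif isinstance(a, Point) and isinstance(b, Point):
        return equal_points(a, b)
    else:
        raise TypeError("message not understood")

def equal_points(a, b):
    return "compare coordinates"                 # placeholder body

def equal_color_points(a, b):
    return "compare coordinates and colours"     # placeholder body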
3.4 Type Checking for Tuple
We add to the type attributes of SDCore product types of the
form (T1, ..., Tn); these are the types of tuples containing
elements of types T1, ..., Tn. A product type (T1', ..., Tn') is a
subtype of (T1, ..., Tn) when each Ti' is a subtype of Ti.
Because of the multiple dispatching involved, type checking
messages sent to tuples is a bit more complex than checking
messages sent to instances (see Appendix B for the formal
typing rules). We divide the additions to SDCore's type
system into client-side and implementation-side rules
[Chambers & Leavens 95]. The client-side rules check
messages sent to tuples, while the implementation-side rules
check tuple class declarations and their methods. The aim of
these rules is to ensure statically that at run-time no type
mismatches occur and that no message-not-understood or
message-ambiguous error will occur in the execution of
messages sent to tuples.
The client-side rule is the analog of the method application
rule for ordinary classes described above. In particular, given
an application (E1, ..., En).I(En+1, ..., Em), the rule checks that
there is some tuple class for the product type (T1, ..., Tn) such
that the static type of (E1, ..., En) is a subtype of (T1, ..., Tn).
Further, that tuple class must contain a method
implementation named I with m-n additional arguments such
that the static types of En+1, ..., Em are subtypes of the
method's additional argument types. Because the rule
explicitly checks for the existence of an appropriate method
implementation, this eliminates the possibility of message-
not-understood errors.
However, the generalization to multiple dispatching can
easily cause method-lookup ambiguities. For example,
consider again the Point/ColorPoint example from Section 2.
Suppose that, rather than defining equality for the tuple
classes (Point,Point) and
(ColorPoint,ColorPoint), we had defined equality
instead for the tuple classes (Point,ColorPoint) and
(ColorPoint,Point). According to the client-side rule
above, an equal message sent to two ColorPoint
expressions is legal, since there exists a tuple class of the
right type that contains an appropriate method
implementation. The problem is that there exist two such
tuple classes, and neither is more specific than the other.
Therefore, at run-time such a method invocation will cause a
method-ambiguous error to occur.
Our solution is based on prior work on type checking for
multimethods [Castagna et al. 92, Castagna 95]. For each
pair of tuple classes (T1, ..., Tn) and (T1', ..., Tn') that have a
method named I that accepts k additional arguments, we
check two conditions.
The first check ensures monotonicity [Castagna et al. 92,
Castagna 95, Goguen & Meseguer 87, Reynolds 80]. Let
(S1, ..., Sk) -> U and (S1', ..., Sk') -> U' be the types of the
methods named I in the tuple classes (T1, ..., Tn) and (T1', ..., Tn'),
respectively. Suppose that (T1, ..., Tn) is a subtype
of (T1', ..., Tn'). Then (S1, ..., Sk) -> U must be a subtype of
(S1', ..., Sk') -> U'. By the contravariant rule for function types,
this means that for each j, Sj' must be a subtype of Sj, and U
must be a subtype of U'.
The second check ensures that the two methods are not
ambiguous. We define two types S and T to be related if
either S subtypes T or vice versa. In this case, min(S,T)
denotes the one of S and T that subtypes the other. It must be
the case that (T1, ..., Tn) and (T1', ..., Tn') are not the same tuple.
Further, if for each j, Tj and Tj' are related, then there must be
a tuple class (min(T1,T1'), ..., min(Tn,Tn')) that has a method
named I with k additional arguments. The existence of this
method is necessary and sufficient to disambiguate between
the two methods being checked.
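The ambiguity condition can be phrased as a small check over an explicit subtype
relation. The Python sketch below is an illustrative reading of the rule (it
covers only this second check, not the monotonicity check); the helper names and
the representation of tuple classes as tuples of class names are assumptions.

def related(s, t, is_subtype):
    return is_subtype(s, t) or is_subtype(t, s)

def minimum(s, t, is_subtype):
    return s if is_subtype(s, t) else t

def unambiguous_pair(tc1, tc2, providers, is_subtype):
    # tc1, tc2: component tuples of two tuple classes that both implement a
    # method I with the same number of extra arguments.  `providers` is the set
    # of component tuples whose tuple class also implements I.
    if tc1 == tc2:
        return False                 # two implementations for the very same tuple
    if all(related(a, b, is_subtype) for a, b in zip(tc1, tc2)):
        meet = tuple(minimum(a, b, is_subtype) for a, b in zip(tc1, tc2))
        return meet in providers     # the pointwise minimum must also implement I
    return True                      # some components unrelated: with single
                                     # inheritance no argument tuple matches both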
The type rules for tuple classes and message sends to tuples
validate part (b) of our first claim. That is, Tuple's extensions
to the SDCore type system are orthogonal. The typing rules
in Tuple are a superset of the typing rules in SDCore. Hence,
if an SDCore program or expression has type T, it will also
have type T in Tuple.
In Tuple we chose a by-name typing discipline, whereby
there is a one-to-one correspondence between classes and
types. This unification of classes with types and subclasses
with subtypes allows for a very simple static type system. It
also reflects the common practice in popular object-oriented
languages. Indeed, this approach is a variant of that used by
C++ and Java. (Java's interfaces allow a form of separation
of types and classes.) Although the type system is simplistic,
the addition of multimethods to the language greatly
increases its expressiveness, allowing safe covariant
overriding while preserving the equivalence between
subclassing and subtyping.
There are several other ways in which we could design the
type system. For example, a purely structural subtyping
paradigm could be used, with classes being assigned to
record types based on the types of their methods. Another
possibility would be to maintain by-name typing but keep
this typing and the associated subtyping relation completely
orthogonal to the class and inheritance mechanisms. This is
the approach taken in Cecil [Chambers 95]. We ruled out
these designs for the sake of clarity and simplicity.
Another design choice is whether to dispatch on classes or on
types. In Tuple, this choice does not arise because of the
strong correlation between classes and types. In particular,
the Tuple dispatching semantics can be viewed equivalently
as dispatching on classes or on types. However, in the two
alternate designs presented above, the dispatching semantics
could be designed either way. Although both options are
feasible, it is conceptually simpler to dispatch on classes, as
this nicely generalizes the single-dispatching semantics and
keeps the dynamic semantics of the language completely
independent of the static type system.
The names of the tuple formals in a tuple class are, in a sense,
a generalization of self for a tuple. They also allow a very
simple form of the pattern matching found in functional
languages such as ML [Milner et al. 90]. Having the tuple
formals be bound to the elements of the tuple allows Tuple,
like ML, to include tuple values without needing to build into
the language primitive operations to extract the components
of a tuple. It is interesting to speculate about the advantages
that might be obtained by adding algebraic datatypes and
more extensive pattern-matching features to object-oriented
languages (see also [Bourdoncle & Merz 97, Ernst et al.
98]).
4 RELATED WORK
In this section we discuss two kinds of related work. The first
concerns generic-function languages; while these do not
solve the problem we address in this paper, using such a
language is a way to obtain multiple dispatch. The second,
more closely-related work, addresses the same problem that
we do: how to add support for multiple dispatch to languages
with single dispatch.
An inspirational idea for our work is the technique for
avoiding binary methods by using product classes described
by Bruce et al. [Section 3.2, Bruce et al. 95]. We discussed
this in detail in Section 3.3 above.
Another source of inspiration for this work was Castagna's
paper on covariance and contravariance [Castagna 95]. This
makes clear the key idea that covariance is used for all
arguments that are involved in method lookup and
contravariance for all arguments that are not involved in
lookup. In Tuple these two sets of arguments are cleanly
separated, since in a tuple class the arguments in the tuple are
used in method lookup, and any additional arguments are not
used in method lookup. The covariance and contravariance
conditions are reflected in the type rules for Tuple by the
monotonicity condition.
4.1 Generic-Function Languages
Our approach provides a subset of the expressiveness of
CLOS, Dylan, and Cecil multimethods, which are based on
generic functions. Methods in tuple classes do not allow
generic functions to have methods that dynamically dispatch
on different subsets of their arguments. That is, in Tuple the
arguments that may be dynamically dispatched upon must be
decided on in advance, since the distinction is made by client
code when sending messages to tuples. In CLOS, Dylan, and
Cecil, this information is not visible to clients. On the other
hand, a Tuple programmer can hide this information by
always including all arguments as part of the tuple in a tuple
class. (This suggests that a useful syntactic sugar for Tuple
might be to use f(E1, ..., En) as sugar for (E1, ..., En).f() when n
is at least 2.)
Second, generic function languages are more uniform, since
they only have one dispatching mechanism and can treat
single dispatching as a degenerate case of multiple
dispatching rather than differentiating between them.
Although we believe that these advantages make CLOS-style
multimethods a better design for a new language, the
approach illustrated by Tuple has some key advantages for
adapting existing singly-dispatched languages to
multimethods. First, our design can be used by languages
like C++ and Java simply by adding tuple expressions, tuple
types, tuple classes, and the ability to send messages to
tuples. As we have shown, existing code need not be
modified and will execute and type check as before. This is
in contrast to the generic function model, which causes a
major shift in the way programs are structured.
Second, our model maintains class-based encapsulation,
keeping the semantics of objects as self-interpreting records.
The generic function model gives this up and must base
encapsulation on scoping constructs, such as packages
[Chapter 11, Steele 90] or local declarations [Chambers &
Leavens 97].
4.2 Encapsulated and Parasitic Multimethods
Encapsulated multimethods [Section 4.2.2, Bruce et al. 95]
[Section 3.1.11, Castagna 97] are similar in spirit to our work
in their attempt to integrate multimethods into existing
singly-dispatched languages. The following example uses
this technique to program equality tests for points in an
extension to SDCore.
class Point {
method equal(p:Point):bool
{ (self.x() = p.x())
and (self.y() = p.y()) }
}
class ColorPoint inherits Point {
method equal(p:Point):bool
{ (self.x() = p.x())
and (self.y() = p.y()) }
method equal(p:ColorPoint):bool
{ (self.x() = p.x())
and (self.y() = p.y())
and self.color().equal(p.color()) }
}
With encapsulated multi-methods, each message send results
in two dispatches (in general). The first is the usual dispatch
on the class of the receiving instance (messages cannot be
sent to tuples). This dispatch is followed by a second,
multimethod dispatch, to select a multimethod from within
the class found by the first dispatch. In the example above,
the message p1.equal(p2) first finds the dynamic class
of the object denoted by p1. If p1 denotes a ColorPoint,
then a second multimethod dispatch is used to select between
the two multimethods for equal in the class
ColorPoint. In essence, the first dispatch selects a
generic function formed from the multimethods in the class
of the receiver, and the second dispatch is the usual generic
function dispatch on the remaining arguments in the
message.
One seeming advantage of encapsulated multimethods is that
they have access to the private data of the receiver object,
whereas in Tuple, a method in a tuple class has no privileged
access to any of the elements in the tuple. In languages like
C++ and Java, where private data of the instances of a class
are accessible by any method in the class, this privileged
access will be useful for binary methods. However, this
advantage is illusory for multimethods in general, as no
special access is granted to private data of classes other than
that of the receiver. This means that access must be provided
to all clients, in the form of accessor methods, or that some
other mechanism, such as C++ friendship grants, must
provide such access to the other arguments' private data.
Two problems with encapsulated multimethods arise
because the multimethod dispatch is preceded by a standard
dispatch to a class. The first problem is the common need to
duplicate methods or to use stub methods that simply
forward the message to a more appropriate method. For
example, since ColorPoint overrides the equal generic
function in Point, it must duplicate the equal method
declared within the Point class. As observed in the Bruce
et al. paper, this is akin to the Ingalls technique for multiple
polymorphism [Ingalls 86]. Parasitic multimethods
[Boyland & Castagna 97], a variant of encapsulated
multimethods, remove this disadvantage by allowing
parasitic methods to be inherited.
The second problem caused by the two dispatches is that
existing classes sometimes need to be modified when new
multimethods are added to the system. For example, in order to
program special behavior for the equality method accepting
one Point and one ColorPoint (in that order), it is
necessary to modify the Point class, adding the new
encapsulated multimethod. This kind of change to existing
code is not needed in Tuple, as the programmer simply
creates a new tuple class. Indeed, Tuple even allows more
than one tuple class with the same tuple of component
classes, allowing new multimethods that dispatch on the
same tuple as existing multimethods to enter the system
without requiring the modification of existing code.
Encapsulated and parasitic multimethods have an advantage
in terms of modularity over both generic-function languages
and Tuple. The modularity problem of generic-function
languages, noted by Cook [Cook 90], is that independently-developed
program modules, each of which is free of the
possibility of message-ambiguous errors, may cause
message-ambiguous errors at run-time. For example,
consider defining the method equal in three modules:
module A defines it in a tuple class (Point, Point),
module B in a tuple class (Point, ColorPoint), and
module C in a tuple class (ColorPoint, Point). By
themselves these do not cause any possibility of message-
ambiguous errors, and a program that uses either A and B or
A and C will type check. However, a program that includes
all three modules may have message-ambiguous errors,
since a message sent to a tuple of two ColorPoint
instances will not be able to find a unique most-specific
method. Therefore, a link-time check is necessary to ensure
type safety. Research is underway to resolve this problem for
generic function languages [Chambers & Leavens 95],
which would also resolve it for Tuple. However, to date no
completely satisfactory solution has emerged.
The design choices of encapsulated and parasitic
multimethods were largely motivated by the goal of avoiding
this loss of modularity. Encapsulated multimethods do not
suffer from this problem because they essentially define
generic functions within classes, and each class must
override such a generic function as a whole. (However, this
causes the duplication described above.) Parasitic
multimethods do not have this problem because they use
textual ordering within a class to resolve ambiguities in the
inheritance ordering. However, this ordering is hard to
understand and reason about. In particular, if there is no
single, most-specific parasite for a function call, control of
the call gets passed among the applicable parasites in a
manner dependent on both the specificity and the textual
ordering of the parasites, and determining at which parasite
this ping-ponging of control terminates is difficult. Boyland
and Castagna also say that, compared with their textual
ordering, ordering methods by specificity as we do in Tuple
"is very intuitive and clear" [Page 73, Boyland & Castagna
97]. Finally, they note that textual ordering causes a small
run-time penalty in dispatching, since the dispatch takes
linear instead of logarithmic time, on the average.
5 CONCLUSIONS
The key contribution of this work is that it describes a
simple, orthogonal way to add multiple dispatch to existing
single-dispatch languages. We showed that the introduction
of tuples, tuple types, tuple classes for the declaration of
multimethods, and the ability to send messages to tuples is
orthogonal to the base language. This is true in both
execution and typing. All that tuple classes do is allow the
programmer to group several multimethods together and
send a tuple a message using multimethod dispatching rules.
Since existing code in single-dispatching languages cannot
send messages to tuples, its execution is unaffected by this
new capability. Hence our mechanism provides an extra
layer, superimposed on top of a singly-dispatched core.
Design decisions in this core do not affect the role or
functionality of tuples and tuple classes.
Tuple also compares well against related attempts to add
multiple dispatch to singly-dispatched languages. We have
shown that Tuple's uniform dispatching semantics avoids
several problems with these approaches, notably the need to
"plan ahead" for multimethods or be forced to modify
existing code as new classes enter the system. On the other
hand, this uniformity also causes Tuple to suffer from the
modularity problem of generic-function languages, which
currently precludes the important software engineering
benefits of separate type checking.
The Tuple language itself is simply a vehicle for illustrating
the idea of multiple dispatching via dispatching on tuples.
Although it would complicate the theoretical analysis of the
mechanism, C++ or Java could be used as the singly-
dispatched core language.
ACKNOWLEDGMENTS
Thanks to John Boyland for discussion and clarification
about parasitic multimethods. Thanks to the anonymous
referees for helpful comments. Thanks to Olga Antropova,
John Boyland, Giuseppe Castagna, Sevtap Karakoy, Clyde
Ruby, and R. C. Sekar for comments on an earlier draft.
Thanks to Craig Chambers for many discussions about
multimethods. Thanks to Olga Antropova for the syntactic
sugar idea mentioned in Section 4.1. Thanks to Vassily
Litvinov for pointing us to [Baumgartner et al. 96] and to
Craig Kaplan for an idea for an example. Leavens's work
was supported in part by NSF Grants CCR 9593168 and
CCR-9803843.
--R
American National Standards Institute.
Subtyping recursive types.
The Java Programming Language.
On the Interaction of Object-Oriented Design patterns and Programming Languages
Merging Lisp and Object-Oriented Programming
Type Checking Higher-Order Polymorphic Multi- Methods
Parasitic Methods: An Implementation of Multi-Methods for Java
Safe and decidable type checking in an object-oriented language
The Hopkins Object Group
A Semantics of Multiple Inheritance.
A Calculus for Overloaded Functions with Subtyping.
Covariance and contravariance: conflict without a cause.
The Cecil Language: Specification and Rationale: Version 2.0.
Typechecking and Modules for Multi-Methods
BeCecil, A Core Object-Oriented Language with Block Structure and Multimethods: Semantics and Typing
Inheritance is not Subtyping.
Predicate Dispatching: A Unified Theory of Dispatch.
The Dylan Programming Book.
Design Patterns: Elements of Reusable Object-Oriented Software
Guy L. Steele. The Java Language Specification.
A Simple Technique for Handling Multiple Polymorphism.
Rodriguez Jr.
The Language.
The Definition of Standard ML.
Using Category Theory to Design Implicit Conversions and Generic Operators.
The Structure of Typed Programming Languages.
The Dylan Reference Manual: The Definitive Guide to the New Object-Oriented Dynamic Language
Steele Jr.
--TR
Smalltalk-80: the language and its implementation
Object-oriented programming with flavors
CommonLoops: merging Lisp and object-oriented programming
A simple technique for handling multiple polymorphism
A semantics of multiple inheritance
The definition of Standard ML
Common LISP: the language (2nd ed.)
Inheritance is not subtyping
The C++ programming language (2nd ed.)
Eiffel: the language
A calculus for overloaded functions with subtyping
Subtyping recursive types
Safe and decidable type checking in an object-oriented language
Object-oriented programming
Efficient method dispatch in PCL
The structure of typed programming languages
Design patterns
Covariance and contravariance
Typechecking and modules for multimethods
On binary methods
The Dylan reference manual
Object-oriented programming
Object-oriented software construction (2nd ed.)
Parasitic methods
Type checking higher-order polymorphic multi-methods
The Java programming language (2nd ed.)
The Java Language Specification
Object-Oriented Multi-Methods in Cecil
Dispatching
Using category theory to design implicit conversions and generic operators
Object-Oriented Programming Versus Abstract Data Types
--CTR
Rajeev Kumar , Vikram Agrawal, Multiple dispatch in reflective runtime environment, Computer Languages, Systems and Structures, v.33 n.2, p.60-78, July, 2007
Timmy Douglas, Making generic functions useable in Smalltalk, Proceedings of the 45th annual southeast regional conference, March 23-24, 2007, Winston-Salem, North Carolina
Ran Rinat, Type-safe convariant specialization with generalized matching, Information and Computation, v.177 n.1, p.90-120, 25 August 2002
Curtis Clifton , Gary T. Leavens , Craig Chambers , Todd Millstein, MultiJava: modular open classes and symmetric multiple dispatch for Java, ACM SIGPLAN Notices, v.35 n.10, p.130-145, Oct. 2000
Todd Millstein , Mark Reay , Craig Chambers, Relaxed MultiJava: balancing extensibility and modular typechecking, ACM SIGPLAN Notices, v.38 n.11, November
Antonio Cunei , Jan Vitek, PolyD: a flexible dispatching framework, ACM SIGPLAN Notices, v.40 n.10, October 2005
Paolo Ferragina , S. Muthukrishnan , Mark de Berg, Multi-method dispatching: a geometric approach with applications to string matching problems, Proceedings of the thirty-first annual ACM symposium on Theory of computing, p.483-491, May 01-04, 1999, Atlanta, Georgia, United States
Curtis Clifton , Todd Millstein , Gary T. Leavens , Craig Chambers, MultiJava: Design rationale, compiler implementation, and applications, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.3, p.517-575, May 2006
Yuri Leontiev , M. Tamer zsu , Duane Szafron, On type systems for object-oriented database programming languages, ACM Computing Surveys (CSUR), v.34 n.4, p.409-449, December 2002 | semantics;tuple;multimethods;generic functions;typing;binary methods;single dispatch;multiple dispatch;language design |
287011 | Similarity and Symmetry Measures for Convex Shapes Using Minkowski Addition. | AbstractThis paper is devoted to similarity and symmetry measures for convex shapes whose definition is based on Minkowski addition and the Brunn-Minkowski inequality. This means, in particular, that these measures are region-based, in contrast to most of the literature, where one considers contour-based measures. All measures considered in this paper are invariant under translations; furthermore, they can be chosen to be invariant under rotations, multiplications, reflections, or the class of affine transformations. It is shown that the mixed volume of a convex polygon and a rotation of another convex polygon over an angle is a piecewise concave function of . This and other results of a similar nature form the basis for the development of efficient algorithms for the computation of the given measures. Various results obtained in this paper are illustrated by experimental data. Although the paper deals exclusively with the two-dimensional case, many of the theoretical results carry over almost directly to higher-dimensional spaces. | Introduction
The problem of shape similarity has been extensively investigated in both machine vision
and biological vision. Although for human perception, different features such as shape,
color, reflectance, functional information play an important role while comparing objects,
in machine vision usually only geometric properties of shapes are used to introduce shape
similarity. In the literature, one finds two concepts for expressing the similarity of shapes:
distance functions measuring dissimilarity and similarity measures expressing how similar
two shapes are. In this paper we shall work with similarity measures.
In practice, similarity approaches have to be invariant under certain classes of transfor-
mations, e.g. similitudes (i.e., translations, rotations, change of scale). Affine transformations
are also of great practical value as these can approximate shape distortions arising
when an object is observed by a camera under arbitrary orientations with respect to the
image plane [1]. A well-known method to develop a similarity approach which is invariant
under a given class of transformations is to perform a shape normalization first [2], [3], [4].
In Subsection IV-C of this paper we discuss one particular method based on the ellipse of
inertia.
In the literature, one finds several different methods for comparing shapes. Among the
best known ones are matching techniques [4]. We mention here also contour matching [5],
[6], structural matching [7] (which is based on specific structural features), and point
set matching [8]. In several approaches, one uses the Hausdorff distance for point sets
to describe similarity [9]. An interesting construction of a similitude invariant distance
function for polygonal shapes is given in [5]; here one computes the L 2 -distance of the
so-called turning functions representing the boundary of the polygons. Several authors
use the concept of a scale space to develop a multiresolution similarity approach [10],
[11]. Finally, Fourier descriptors derived from contour representations have been used by
various authors to describe shape similarity and symmetry [12], [13], [14], [15]. See [16]
for a comprehensive discussion.
Similarity measures can be used to compute "how symmetric a given shape is", e.g. with
respect to reflection in a given line. For many objects, presence or absence of symmetry
is a major feature, and therefore the problem of object symmetry identification is of great
interest in image analysis and recognition, computer vision and computational geometry.
Unfortunately, in many practical cases, exact symmetry does not occur or, if it does, is
disturbed by noise. In such circumstances it is useful to define measures of symmetry
which give quantitative information about the amount of symmetry of a shape. There
exists a vast literature dealing with all kinds of symmetry of shapes and (grey-scale)
images: central symmetry [17], [18], [19], reflection symmetry [20], [21], [22], rotation
symmetry [23], [24], skew symmetry [25], [26]; see also [27], [28].
The Brunn-Minkowski theory [29] allows us to introduce a general framework for comparing
convex shapes of arbitrary dimension. In this paper we introduce and investigate a
class of similarity and symmetry measures for convex sets which are based on Minkowski
addition, the Brunn-Minkowski inequality, and the theory of mixed volumes. Although
we deal with the 2D case, most of results can be extended to higher dimensions. The
similarity measures examined in this paper are translation invariant by definition. In ad-
dition, they can be defined in such a way that they are also invariant with respect to other
transformation groups such as rotations and reflections. We propose efficient algorithms
for the computation of similarity measures for convex polygons which are invariant under
similitude transformations. These algorithms are based on the observation that the given
measures are piecewise concave functionals; thus to find their maximal values it is sufficient
to compute them only for a finite number of points. Moreover, this number is bounded by
n_P n_Q, where n_P and n_Q are the number of vertices of the polygons under comparison.
We also propose efficient algorithms for the computation of affine invariant similarity
measures. In this case the calculation is preceded by a normalization of the polygonal
shapes into so-called canonical shapes. Now the computation of the affine invariant similarity
measure is reduced to the computation of the rotation invariant similarity measure
for their respective normalizations.
In this paper we investigate symmetry measures for convex shapes which are invariant
under line reflections and rotations. We introduce symmetry measures using two different
approaches. The first approach uses similarity measures, the second is a direct approach.
We propose efficient algorithms for the computation of rotation and reflection invariant
symmetry measures for convex polygons. The normalization technique makes it possible
to compute skew symmetry measures as well.
We conclude with an overview of this paper. We start with some notations and recall
some basic concepts in Section II. In Section III we give a short treatment of the theory of
mixed volumes, the Brunn-Minkowski inequality, and some derived inequalities. A formal
definition of similarity measures can be found in Subsection IV-A, where we also present
some examples based on Minkowski addition. In Subsection IV-B we investigate similarity
measures for convex polygons which are invariant under rotations and multiplications, and
we present an algorithm to compute such measures efficiently. An affine invariant similarity
measure is presented in Subsection IV-C. To define it, we introduce an image normalization
(canonical form) based on the ellipse of inertia known from classical mechanics. Symmetry
measures are introduced in Section V; there we also give several examples, some of them
based on similarity measures. In Section VI we illustrate our theoretical findings with
some experimental results, and we end with some conclusions in Section VII.
In this paper, most results are given without proof. Readers interested in such proofs,
as well as some additional results, may refer to our report [30] from which this paper has
been extracted.
II. Preliminaries
In this section we present some basic notation and other prerequisites needed in the
sequel of the paper. By K(IR 2 ), or briefly K, we denote the family of all nonempty compact
subsets of IR 2 . Provided with the Hausdorff distance [29] this is a metric space. The
compact, convex subsets of IR 2 are denoted by C(IR 2 ), or briefly C, and the convex polygons by P(IR 2 ), or
just P. In this paper, we are not interested in the location of a shape A ⊆ IR 2 ; in
other words, two shapes A and B are said to be equivalent if they differ only by translation.
We denote this as A ≡ B. The Minkowski sum of two sets A and B is given by A ⊕ B = { a + b : a ∈ A, b ∈ B }.
It is well-known [29] that every element A of C is uniquely determined by its support
function h(A, ·), given by:
h(A, u) = sup{ ⟨a, u⟩ : a ∈ A },   u ∈ S¹.
Here ⟨a, u⟩ is the inner product of vectors a and u, 'sup' denotes the supremum, and S¹
denotes the unit circle. It is also known that [29]:
h(A ⊕ B, u) = h(A, u) + h(B, u),   u ∈ S¹,   (1)
for A, B ∈ C. The support set F(A, u) of A at u ∈ S¹ consists of all points a ∈ A for which ⟨a, u⟩ = h(A, u).
A polygon P ' IR 2 can be represented uniquely by specifying the position of one of its
vertices, the lengths and directions of all its edges, and the order of the edges. Below, p i
will denote the length of edge i and u_i the outward unit vector orthogonal to this edge; see Fig. 1. By
∠u_i we denote the angle between the positive x-axis and u_i. Since we are not interested
in the location of P, it is sufficient to give the sequence (u_1, p_1), (u_2, p_2), …, (u_{n_P}, p_{n_P}),
where n_P is the number of vertices of P. We will call this sequence the perimetric
representation of P. In Fig. 1 we give an illustration.
We denote this sequence by M(P). If the polygon is convex, then the order of the pairs (u_i, p_i)
does not have to be specified in advance, since in this case the normal vectors are ordered
counter-clockwise. In this case we can think of M(P ) as a set. But we can also use the
Fig. 1. Perimetric representation of a polygon.
so-called perimetric measure M(P, ·) as an alternative representation [19]:
M(P, u) = p_i if u = u_i for some i, and M(P, u) = 0 otherwise.
We point out that the perimetric measure is a special case of the concept of area measure
[29]. It is evident that for every convex polygon P, we have the identity
Σ_u M(P, u) u = 0,
where the sum is taken over all u for which M(P, u) ≠ 0. This relation expresses that the
contour of P is closed. Moreover, any discrete function that satisfies this relation is the
perimetric measure of a convex polygon.
It is well-known that the Minkowski addition of two convex polygons can be computed by
merging both perimetric representations; see e.g. [31], [32]. Mathematically, this amounts
to the following relation:
M(P ⊕ Q, u) = M(P, u) + M(Q, u),   for u ∈ S¹.   (2)
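A minimal sketch of this merging step, assuming the perimetric representation is stored as a list of (normal angle, edge length) pairs; this encoding and the function name are ours, not the paper's:

import math

def minkowski_sum_perimetric(rep_P, rep_Q):
    """Perimetric representation of P (+) Q obtained by merging those of P and Q:
    M(P (+) Q, u) = M(P, u) + M(Q, u).  Edges whose outward normals coincide
    are combined into a single, longer edge."""
    merged = {}
    for ang, length in list(rep_P) + list(rep_Q):
        key = round(ang % (2.0 * math.pi), 12)      # group equal normal directions
        merged[key] = merged.get(key, 0.0) + length
    # for convex polygons the normals are ordered counter-clockwise,
    # so sorting by angle restores the perimetric representation
    return sorted(merged.items())

For example, merging the representation of the unit square with itself doubles every edge length, which is consistent with P ⊕ P = 2P.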
In the second part of this section we consider affine transformations on IR 2 . The reader
may refer to [33] for a comprehensive discussion. The group of all affine transformations
on IR 2 is denoted by G₀. If g ∈ G₀ and A ∈ K, then g(A) = { g(a) : a ∈ A } ∈ K. We write g ≡ g' if g(A) ≡ g'(A)
for every A ∈ K. This is equivalent to saying that g - g' is a
translation. We denote by G the subgroup of G₀ containing all linear transformations, i.e.,
transformations g with g(0) = 0.
Lemma 1: For any two sets A, B ⊆ IR d and for every g ∈ G, we have g(A ⊕ B) = g(A) ⊕ g(B).
We introduce the following notations for subsets of G:
• I: isometries (distance-preserving transformations);
• R: rotations about the origin;
• M: multiplications (scalings) with respect to the origin by a positive factor;
• L: (line) reflections (in lines passing through the origin);
• S: similitudes (rotations, reflections, multiplications).
Observe that I; R; M and S are subgroups of G. For every transformation g 2 G we
can compute its determinant 'det g' which is, in fact, the determinant of the matrix corresponding
with g. Note that this value is independent of the choice of the coordinate
system. If g is an isometry, then |det g| = 1; the converse is not true, however. If H
is a subgroup of G, then H+ denotes the subgroup of H containing all transformations
with positive determinant. For example, S+ consists of the multiplications and
rotations. If H is a subgroup of G, then the set { mh : h ∈ H, m ∈ M } is also a subgroup,
which will be denoted by MH.
Denote by r_θ the rotation of IR 2 around the origin over an angle θ in a counter-clockwise
direction, and by ℓ_α the reflection of IR 2 in the line passing through the origin
which makes an angle α with the positive x-axis. The following relations hold:
r_θ r_φ = r_{θ+φ},   ℓ_α ℓ_β = r_{2(α-β)},   r_θ ℓ_α = ℓ_{α+θ/2},   ℓ_α r_θ = ℓ_{α-θ/2}.
In what follows, the topology on K is the one induced by the Hausdorff metric, also
called myopic topology [34]. At several instances in this paper we shall need the following
concept.
Definition 1: Let H ' G and J ' K. We say that H is J -compact if, for every A 2 J
and every sequence fh n g in H, the sequence fh n (A)g has a limit point of the form h(A),
It is easy to verify that R is K-compact. However, the subcollection { r_θ^n : n ≥ 1 }, where
r_θ ∈ R is a rotation with θ/π irrational, is not K-compact. The following result is
easy to prove.
Lemma 2: Assume that H is J -compact and let f : J ! IR be a continuous function.
If A ∈ J and f_0 := sup_{h∈H} f(h(A)) is finite, then there exists an element h_0 ∈ H such
that f(h_0(A)) = f_0.
III. Mixed volumes and the Brunn-Minkowski inequality
In this section we present a brief account of the theory of volumes and mixed volumes
of compact sets (also called 'mixed areas' in the 2-dimensional case). For a comprehensive
treatment the reader may consult the book of Schneider [29]. The volume (or area) of a
compact set A will be denoted by V (A). It is well-known that for every affine transformation
g the following relation holds:
V(g(A)) = |det g| · V(A).   (3)
The mixed volume V(A, B) of two compact, convex sets A, B ⊆ IR 2 is implicitly defined
by the following formula for the volume of A ⊕ B:
V(A ⊕ B) = V(A) + 2 V(A, B) + V(B).   (5)
See Fig. 2 for an illustration.
Fig. 2. The right figure is P ⊕ Q. The sum of the volumes of the light grey regions equals 2V(P, Q); the
sum of the volumes of the dark grey regions equals V(P).
The mixed volume has the following properties (A, B, C being arbitrary compact, convex sets):
V(A, A) = V(A);   (7)
V(A, B) = V(B, A);   (8)
V(g(A), g(B)) = |det g| · V(A, B) for every affine transformation g;   (9)
V(A ⊕ B, C) = V(A, C) + V(B, C);   (10)
V(A, B) is continuous in A and B with respect to the Hausdorff metric.   (11)
Note for example that (9) is a straightforward consequence of (3) and (4)-(5).
In this paper the following well-known inequality plays a central role. See Hadwiger [35]
or Schneider [29] for a comprehensive discussion.
Theorem 1 (Brunn-Minkowski inequality) For two arbitrary compact sets A, B ⊆ IR 2,
the following inequality holds:
V(A ⊕ B)^{1/2} ≥ V(A)^{1/2} + V(B)^{1/2},   (12)
with equality if and only if A and B are convex and homothetic modulo translation, i.e., B ≡ λA for some λ > 0.
The Brunn-Minkowski inequality (12) in combination with (5) yields the following inequality
for mixed volumes:
V(A, B)² ≥ V(A) V(B),   (13)
and as before equality holds iff A and B are convex and B ≡ λA for some λ > 0. This
latter inequality is called Minkowski's inequality.
Using the fact that for two arbitrary real numbers x, y one has x² + y² ≥ 2xy, with
equality iff x = y, one derives from (12) that:
V(A ⊕ B) ≥ 4 (V(A) V(B))^{1/2},   (14)
with equality iff A ≡ B and both sets are convex.
The mixed volume of two convex polygons can easily be computed using support
functions and perimetric representations. Assume that the perimetric representation of Q
is given by the sequence (v_1, q_1), (v_2, q_2), …, (v_{n_Q}, q_{n_Q}). Furthermore, if h(P, ·) is the support
function of P, then
V(P, Q) = (1/2) Σ_{j=1}^{n_Q} q_j h(P, v_j).   (15)
See Fig. 2 for an illustration of this formula.
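As an illustration, the following sketch evaluates (15) numerically. It assumes the same (angle, length) encoding of the perimetric representation as above, reconstructs the vertices of P from its edges (cf. (18) below) in order to evaluate the support function, and is our own illustration rather than part of the original paper:

import numpy as np

def vertices_from_perimetric(rep):
    """Vertices of a polygon from its perimetric representation, cf. (18);
    the first vertex is placed at the origin (the mixed area is translation invariant)."""
    pts = [np.zeros(2)]
    for ang, length in rep[:-1]:
        # edge direction: outward normal rotated counter-clockwise by 90 degrees
        pts.append(pts[-1] + length * np.array([-np.sin(ang), np.cos(ang)]))
    return np.array(pts)

def mixed_area(rep_P, rep_Q):
    """Mixed area V(P, Q) = 1/2 * sum_j q_j * h(P, v_j), cf. (15);
    h(P, v) is evaluated as the maximum of <w, v> over the vertices w of P."""
    verts_P = vertices_from_perimetric(rep_P)
    total = 0.0
    for ang, q in rep_Q:
        v = np.array([np.cos(ang), np.sin(ang)])
        total += q * float(np.max(verts_P @ v))
    return 0.5 * total

With rep_P = rep_Q this reduces to V(P, P) = V(P), in agreement with property (7).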
Note that with this formula the additivity of V (P; Q) as stated in (10) follows immediately
from the additivity of the support function; see (1). Furthermore, (15) in combination
with (6) shows that V (P; Q) is increasing in both arguments. In fact, this observation
holds for arbitrary compact, convex sets, i.e.,
We conclude this section with a formula for the computation of the volume of a 2-
dimensional polygon (not necessarily convex) using its perimetric representation. Several
formulas for calculating volumes of polyhedra are known [36]. Let the vertices (ordered
counter-clockwise) of a polygon P be given by w_k = (x_k, y_k), k = 1, …, n (with w_{n+1} = w_1); then
V(P) = (1/2) | Σ_{k=1}^{n} (x_k y_{k+1} - x_{k+1} y_k) |.   (17)
Refer to [36] for some further information.
If P is a polygon with perimetric representation (u_1, p_1), (u_2, p_2), …, (u_n, p_n), then the
vertices are given by (putting x_1 = y_1 = 0)
x_k = Σ_{j<k} p_j cos(∠u_j + π/2),   y_k = Σ_{j<k} p_j sin(∠u_j + π/2),   k = 2, …, n.   (18)
Here P need not be convex. Substitution into (17) gives
V(P) = (1/2) | Σ_{1 ≤ j < k ≤ n} p_j p_k sin(∠u_k - ∠u_j) |.   (19)
This is the formula which we will use in the sequel of this paper.
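A direct transcription of (19), under the same assumptions about the data layout as in the previous sketches:

import math

def area_from_perimetric(rep):
    """Polygon area via (19): V(P) = 1/2 | sum_{j<k} p_j p_k sin(ang_k - ang_j) |,
    where ang_i is the angle of the i-th edge normal and p_i the edge length."""
    total = 0.0
    for k in range(len(rep)):
        for j in range(k):
            total += rep[j][1] * rep[k][1] * math.sin(rep[k][0] - rep[j][0])
    return 0.5 * abs(total)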
IV. Similarity measures
This section, which is concerned with similarity measures, consists of three subsections.
tions. In Subsection IV-A we give a formal definition and present some basic properties.
In the next two subsections we treat, respectively, similarity measures that are invariant
under rotations and multiplications (Subsection IV-B) and similarity measures that are
invariant under arbitrary affine transformations (Subsection IV-C).
A. Definition and basic properties
One of the goals of this paper is to find a tool which enables us to compare different
shapes, but in such a way that this comparison is invariant under a given group H of
transformations and can be computed efficiently. For example, if we take for H all rota-
tions, then our comparison should return the same outcome for A and B as for A and
r(B), where r is some rotation.
Towards this goal one could try to find a distance function (or metric) d(A; B) which
equals zero if and only if B j h(A) for some h 2 H. Many authors, however, rather
work with so-called similarity measures than with distance functions. In this paper we
will follow this convention.
Definition 2: Let H be a subgroup of G and J ' K. A function oe : J \Theta J ! [0; 1] is
called an H-invariant similarity measure on J if
1.
2.
3.
4.
5. oe is continuous in both arguments with respect to the Hausdorff metric.
When H contains only the identity mapping, then oe will be called a similarity measure.
Although not stated explicitly in the definition above, it is also required that J is invariant
under H, that is,
If σ is a similarity measure on J and H is a J-compact subgroup of G, then
σ₀(A, B) := sup_{h∈H} σ(A, h(B))
defines an H-invariant similarity measure on J. Unfortunately, σ₀ is difficult to compute
in many practical situations. Below, however, we present two cases in
which this can be done efficiently if one restricts attention to convex polygons.
Let H be a given subgroup of G, and define
σ₁(A, B) = sup_{h∈H} 4 (V(A) V(h(B)))^{1/2} / V(A ⊕ h(B)),   (20)
and
σ₂(A, B) = sup_{h∈H} (V(A) V(h(B)))^{1/2} / V(A, h(B)).   (21)
Proposition 1: If H is a C-compact subgroup of G, then
(a) σ₁ is an H-invariant similarity measure on C;
(b) σ₂ is an MH-invariant similarity measure on C.
In [30] we present a simple example which shows that compactness is essential.
We conclude this section with the following simple but useful result. Recall that ' 0 is
the line reflection with respect to the x-axis.
Proposition 2: Let oe be a similarity measure on J , and define
(a) If oe is R-invariant, then oe 0 is an I-invariant similarity measure.
(b) If oe is G+ -invariant, then oe 0 is a G-invariant similarity measure.
To give the reader an idea of the flavour of the proof, we show, for the result in (b),
property 3 of the definition of an H-invariant similarity measure (Definition 2).
G. There are two possibilities: g 2 G+ or g 2 G n G+ . We consider the second
case. We can write
Now
which had to be demonstrated.
B. Rotations and multiplications
In this section we consider similarity measures on P which are S+ -invariant, i.e., invariant
under rotations and multiplications. We use the similarity measures defined in (20)-(21).
In these expressions, the terms V(P ⊕ r_θ(Q)) and V(P, r_θ(Q)) play an important role. Let the perimetric representations of the convex
polygons P and Q be given by (u_1, p_1), …, (u_{n_P}, p_{n_P}) and (v_1, q_1), …, (v_{n_Q}, q_{n_Q}),
respectively. To compute V(P, r_θ(Q)) we use (15), which requires the values h(P, r_θ(v_j)).
The support set F(P, r_θ(v_j)) consists of a vertex of P unless θ is a solution of r_θ(v_j) = u_i
for some i. Angles θ for which this holds (i.e., r_θ(v_j) = u_i) are called critical angles.
The set of all critical angles for P and Q is given by
{ (∠u_i - ∠v_j) mod 2π : i = 1, …, n_P, j = 1, …, n_Q },
where ∠u denotes the angle of vector u with the positive x-axis. We denote the critical
angles by 0 ≤ θ_1 < θ_2 < ⋯ < θ_N < 2π. It is evident that N ≤ n_P n_Q. Now, fix a vertex
normal v_j of Q and an angle θ between two consecutive critical angles. We have seen that the support set F(P, r_θ(v_j)) consists of a
vertex C of P; see Fig. 3.
Fig. 3. The support set F(P, r_θ(v_j)) consists of the vertex C.
Let ψ(θ) be the angle between the line through C with normal vector r_θ(v_j) and the
line through O and C. It follows that h(P, r_θ(v_j)) = |OC| cos ψ(θ), where ψ(θ) depends linearly on θ.
Taking the second derivative with respect to θ we find
d² h(P, r_θ(v_j)) / dθ² = -|OC| cos ψ(θ) ≤ 0;
by (15) we find a similar result for θ ↦ V(P, r_θ(Q)), and hence also for θ ↦ V(P ⊕ r_θ(Q)).
Thus we arrive at the following result.
Proposition 3: The volume V(P ⊕ r_θ(Q)) and the mixed volume V(P, r_θ(Q)) are functions
of θ which are piecewise concave on (θ_k, θ_{k+1}).
This result is illustrated in Fig. 4.
Fig. 4. Left: convex polygons P and Q. Right: the function θ ↦ V(P ⊕ r_θ(Q)) is piecewise concave.
The marked points indicate the locations of the critical angles.
Consider the S+-invariant similarity measure obtained from (20) by choosing H = MR, i.e.,
σ₁(P, Q) = sup_{λ>0, θ∈[0,2π)} 4 (V(P) V(λ r_θ(Q)))^{1/2} / V(P ⊕ λ r_θ(Q))
         = sup_{λ>0, θ∈[0,2π)} 4 λ (V(P) V(Q))^{1/2} / ( V(P) + 2λ V(P, r_θ(Q)) + λ² V(Q) ).
Thus, in order to compute σ₁(P, Q) we have to minimize two expressions, one in λ and
one in θ. The first expression, λ⁻¹ V(P) + λ V(Q), achieves its minimum at λ = (V(P)/V(Q))^{1/2}; the second, V(P, r_θ(Q)), at
one of the critical angles associated with P and Q. This yields
σ₁(P, Q) = max_k 2 (V(P) V(Q))^{1/2} / ( (V(P) V(Q))^{1/2} + V(P, r_{θ_k}(Q)) ).
The similarity measure σ₂ given by (21) (with H = R) results in
σ₂(P, Q) = sup_{θ∈[0,2π)} (V(P) V(Q))^{1/2} / V(P, r_θ(Q)).
From Proposition 1 we know that σ₂ is S+-invariant, too. As above, the maximum is
attained at one of the critical angles associated with P and Q, and we get
σ₂(P, Q) = max_k (V(P) V(Q))^{1/2} / V(P, r_{θ_k}(Q)).   (22)
In Section III we have given some formulas for the computation of (mixed) volumes of
convex polygons. The expression in (19) uses the perimetric representation, and we use it
to get the following result.
Proposition 4: Given the perimetric representation of the convex polygons P and Q,
the time complexity of computing σ₁ and σ₂ is O(n_P n_Q (n_P + n_Q)), where n_P and n_Q are the
number of vertices of P and Q, respectively.
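To make the computation concrete, here is a hedged sketch of σ₂ along the lines of (22): it enumerates the critical angles, evaluates the mixed area V(P, r_θ(Q)) via (15) at each of them, and keeps the maximum of √(V(P)V(Q)) / V(P, r_θ(Q)). The data layout and helper names are ours, and the formula follows our reconstruction of (21)-(22) above rather than a verbatim transcription of the paper:

import numpy as np

def _vertices(rep):
    # vertices from a perimetric representation [(normal_angle, edge_length), ...], cf. (18)
    pts = [np.zeros(2)]
    for ang, p in rep[:-1]:
        pts.append(pts[-1] + p * np.array([-np.sin(ang), np.cos(ang)]))
    return np.array(pts)

def _area(verts):
    # shoelace formula, equivalent to (17)
    x, y = verts[:, 0], verts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def sigma2(rep_P, rep_Q):
    """Rotation and scale invariant similarity of two convex polygons:
    sigma_2(P, Q) = max over critical angles theta of sqrt(V(P) V(Q)) / V(P, r_theta(Q))."""
    verts_P, verts_Q = _vertices(rep_P), _vertices(rep_Q)
    VP, VQ = _area(verts_P), _area(verts_Q)
    best = 0.0
    for ang_u, _ in rep_P:                      # critical angles theta = ang(u_i) - ang(v_j)
        for ang_v, _ in rep_Q:
            theta = ang_u - ang_v
            # mixed area V(P, r_theta(Q)) = 1/2 sum_j q_j h(P, r_theta(v_j)), cf. (15)
            mixed = 0.0
            for ang, q in rep_Q:
                v = np.array([np.cos(ang + theta), np.sin(ang + theta)])
                mixed += 0.5 * q * np.max(verts_P @ v)
            best = max(best, np.sqrt(VP * VQ) / mixed)
    return best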
If we choose
Using that '
min
where ~
To find the minimum, we need to consider the critical angles of
well as those of
Q.
C. Affine invariant similarity measure
If H = G, then the similarity measures defined in (20) and (21), respectively, are
affine invariant (that is, invariant under arbitrary affine transformations). Unfortunately,
we do not have efficient algorithms to compute them. However, using the approach of
Hong and Tan in [2], we are able to define similarity measures which can be computed
efficiently, and which are invariant under a large group of affine transformations, namely
G+, the collection of all linear transformations with positive determinant.
In combination with Proposition 2, this leads to similarity measures which are G-invariant.
The basic idea is to transform a set A to its so-called canonical form A ffl in such a way
that two sets A and B are equivalent modulo a transformation in G+ if and only if A ffl and
are equivalent modulo rotation. The definition of the canonical form, as discussed by
Hong and Tan [2], is based on the concept of the ellipse of inertia known from classical
mechanics [37]. Note, however, that Hong and Tan [2] use a slightly different approach;
they introduce the moment curve which is closely related to the ellipse of inertia.
Throughout this section we restrict ourselves to the family of compact sets with positive
area, in the sequel denoted by K+ . Consider an axis through the centroid of A, and denote,
for a point (x; y) 2 A, by r(x; y) its distance to this axis. The moment of inertia with
respect to the axis is given by:
I(θ) = ∫∫_A r(x, y)² dx dy.
Here θ denotes the angle between the axis and the x-axis in some fixed coordinate system.
An easy calculation shows that
I(θ) = m_xx sin²θ - 2 m_xy sin θ cos θ + m_yy cos²θ
     = (1/2)(m_xx + m_yy) - (1/2)(m_xx - m_yy) cos 2θ - m_xy sin 2θ
(see also [38, p. 48-53]). Here m_xx = ∫∫_A x² dx dy, m_yy = ∫∫_A y² dx dy, and m_xy = ∫∫_A xy dx dy,
the moments being taken with respect to the centroid of A.
The point q(θ) = I(θ)^{-1/2} (cos θ, sin θ) on the axis traces an ellipse when θ varies between 0
and 2π, the so-called ellipse of inertia.
Fig. 5. The ellipse of inertia of a shape.
This ellipse, depicted in Fig. 5, has its long axis at angle θ₀, which is the unique solution
in [0, π) of the equations
(m_xx - m_yy) sin 2θ₀ = 2 m_xy cos 2θ₀,   I(θ₀) ≤ I(θ₀ + π/2).
Let 2a and 2b be the lengths of the long and short axes, respectively. One easily finds that a = I(θ₀)^{-1/2}
and b = I(θ₀ + π/2)^{-1/2}, which yields that
a = ( (1/2)(m_xx + m_yy) - ( (1/4)(m_xx - m_yy)² + m_xy² )^{1/2} )^{-1/2},   (23)
b = ( (1/2)(m_xx + m_yy) + ( (1/4)(m_xx - m_yy)² + m_xy² )^{1/2} )^{-1/2}.   (24)
A simple calculation shows that
a² b² = 1 / (m_xx m_yy - m_xy²).
The following definition is due to Hong and Tan [2].
Definition 3: A shape is said to be in canonical form if its centroid is positioned at the
origin and its ellipse of inertia is a unit circle.
Proposition 5: Every compact set can be transformed into its canonical form by means
of a transformation in G+ , namely by a stretching along the long axis of the ellipse of
inertia by a factor b/(ab)^{1/4} and along the short axis by a factor a/(ab)^{1/4}.
The proof of this result is based on the observation that, under the transformation (x, y) ↦
(λ₁x, λ₂y), the second moments scale as follows: m_xx ↦ λ₁³λ₂ m_xx, m_yy ↦ λ₁λ₂³ m_yy, m_xy ↦ λ₁²λ₂² m_xy.
Our next result shows how the canonical form of a shape is affected by affine transformations
Proposition
(a) For every A 2 K+ we have
(b) If
In [30] we use the notion of covariance matrix to prove (b); see also [3].
With this result it is easy to construct G+ -invariant similarity measures from R-invariant
ones.
Proposition 7: Let σ : K+ × K+ → [0, 1] be an R-invariant similarity measure, and define σ•(A, B) := σ(A•, B•);
then σ• is a G+-invariant similarity measure.
As the map A ↦ A• preserves convexity, we get the same result for shapes in C ∩ K+
as well as for shapes in P.
To apply these results for convex polygons, there are at least two possibilities. We
can compute the canonical shape of the polygon itself or of the set given by its vertices
considered as point masses. In the latter case, which is the one considered below, the
previous findings remain valid, albeit that integrals have to be replaced by summations.
Furthermore, the stretching factors in Proposition 5 become b (along the long axis) and
a (along the short axis), respectively. Suppose we are given the perimetric representation
of a convex polygon P. The computation of M(P•) consists of the following steps:
1. Fixing the origin at the first vertex of P, we can find the coordinates of the
other vertices; see (18).
2. The centroid of P is computed as the average of the vertices (regarded as unit point masses).
3. The second moments m_xx, m_yy, m_xy of the vertices are computed with respect to the centroid.
4. Compute a and b from (23)-(24).
5.-6. Apply the stretching of Proposition 5 (by a factor b along the long axis of the ellipse of
inertia and a factor a along the short axis) to the edge vectors of P; the directions ∠u'_i and
lengths p'_i of the transformed edges yield the perimetric representation M(P•) of the canonical shape P•.
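The following sketch illustrates the normalization for the vertex-mass variant described above. Rather than reproducing steps 5-6 literally, it whitens the second-moment matrix of the centered vertices by a symmetric linear map, which also turns the ellipse of inertia into a circle; this agrees with the stretching of Proposition 5 up to a rotation and a uniform scale factor, both of which are irrelevant for the rotation-invariant measures used afterwards. All names here are ours:

import numpy as np

def canonical_vertices(vertices):
    """Map polygon vertices to a canonical position: centroid at the origin,
    ellipse of inertia of the vertex set a circle (steps 1-6 in spirit)."""
    V = np.asarray(vertices, dtype=float)
    V = V - V.mean(axis=0)                   # centroid at the origin (step 2)
    S = V.T @ V / len(V)                     # second moments m_xx, m_xy, m_yy (step 3)
    w, U = np.linalg.eigh(S)                 # principal axes of the ellipse of inertia
    T = U @ np.diag(1.0 / np.sqrt(w)) @ U.T  # stretch along the principal axes (steps 5-6)
    return V @ T.T                           # det T > 0, so T belongs to G+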
Example 1: Consider the rotation invariant similarity measure σ₂ given by (21) with H = R. Define
σ₂• on P as in Proposition 7; then σ₂• is a G+-invariant similarity measure. Using
Proposition 2(b) we obtain a G-invariant similarity measure.
V. Symmetry measures
Exact symmetry only exists in the mathematician's mind. It is never achieved in the real
world, neither in nature nor in man-made objects [39]. Thus, in order to assess the symmetry
of objects (convex 2-dimensional polygons in our case), we need a tool to measure the
amount of symmetry. Towards that goal Grünbaum [18] introduced the concept of a
symmetry measure; refer to [23], [21] for some other references. Below we give a formal
definition of this concept. But first we recall some basic terminology. We will restrict
attention to the 2-dimensional case, but most of what we say carries over immediately to
higher dimensions.
The symmetry group of a set A ' IR 2 consists of all g 2 G such that g(A) j A.
The use of the word 'group' is justified by the observation that these transformations
constitute a subgroup of G. An element g in this subgroup is called a symmetry of A
and A is said to be g-symmetric. An element g ∈ G for which g^m = id (id denoting the
identity transformation) for some finite m ≥ 1 is called a cyclic transformation of order m.
Sometimes we write m = m_g to denote the dependence on g. It is evident that |det g| = 1
if g is cyclic.
In this paper we are mostly interested in symmetries of a given shape which are cyclic.
However, as shown in Example 2(b), there may also exist symmetries which are not cyclic.
Example 2: (a) Not every cyclic transformation is an isometry. For example, (x; y) 7!
(2y; x=2) is cyclic of order 2, but it is not an isometry.
(b) Not every symmetry is cyclic. Let B be the unit disk in IR 2 and let A := g(B) for
some g ∈ G. It is obvious that g r_θ g⁻¹ is a symmetry of A for every θ ∈ [0, 2π], since r_θ is
a symmetry of B. In most cases, however, this symmetry is not cyclic. Let, for example,
g be the transformation (x, y) ↦ (2x, y). Then A is the ellipse x²/4 + y² ≤ 1. For every θ
with θ/π irrational, the transformation g r_θ g⁻¹ is a non-cyclic, non-isometric symmetry of
the ellipse.
If H is a subgroup of G, then the set of cyclic transformations in H is denoted by C(H).
It is easy to see that
In general, C(H) is not a subgroup.
Let e ∈ G be a cyclic transformation of order m. We define the mapping ⋆_e : K → K by
⋆_e(A) = ( A ⊕ e(A) ⊕ e²(A) ⊕ ⋯ ⊕ e^{m-1}(A) ) / m.
Here the denominator m represents a scaling with factor 1/m. It is easy to see that ⋆_e(A)
is e-symmetric, and we call this set the e-symmetrization of A. Observe that ⋆_e is not an
affine transformation. As a matter of fact, ⋆_e is defined for shapes rather than for points.
Every line reflection ℓ_α is a cyclic transformation of order 2. The corresponding symmetrization
of a set A, that is (A ⊕ ℓ_α(A))/2, is called the Blaschke symmetrization of A [29].
Proposition 8: Let A ∈ C. If e is a cyclic transformation, then V(⋆_e(A)) ≥ V(A).
Furthermore, the following statements are equivalent: (i) V(⋆_e(A)) = V(A); (ii) ⋆_e(A) ≡ A; (iii)
e is a symmetry of A.
We write k / m if the greatest common divisor of k and m equals 1. If e is
a cyclic transformation of order m and k / m, then e^k is a cyclic transformation of order
m, and ⋆_{e^k} = ⋆_e. It is easy to see that every cyclic rotation of order m is of the form r_{2πk/m} with k / m.
Symmetry measures were introduced by Grünbaum [18] for point symmetries. Here we
will generalize this definition to arbitrary families of cyclic transformations.
Definition 4: Let E be a given collection of cyclic transformations and J ⊆ K. A
function τ : J × E → [0, 1] is called an E-symmetry measure on J if, for every e ∈ E, the
function τ(·, e) is continuous on J with respect to the Hausdorff topology, and if
τ(A, e) = 1 if and only if A is e-symmetric.
Suppose that, in addition, the following property holds:
if e has order m and k / m, then τ(A, e^k) = τ(A, e);
then τ is called a consistent E-symmetry measure.
Let H ⊆ G be such that h e h⁻¹ ∈ E for every h ∈ H and e ∈ E; we say that τ is H-invariant if
τ(h(A), h e h⁻¹) = τ(A, e) for all A ∈ J, h ∈ H, e ∈ E.
Note that in this definition we have restricted ourselves to cyclic transformations.
Example 3: It is easy to show that
τ(A, r_θ) = V(A) / V(A, r_θ(A))
defines a symmetry measure for all cyclic rotations r_θ (i.e., θ/π rational). This symmetry
measure is invariant under similitudes. It is not consistent, however.
The consistency condition (4) has the following intuitive interpretation. Suppose that a
shape A is (nearly) symmetric with respect to rotation over 2π/m; then it is also (nearly)
symmetric with respect to rotation over an angle 2kπ/m, where 1 ≤ k ≤ m. Moreover, if
k / m, then the converse also holds.
There are at least two different ways to make an E-symmetry measure consistent. Our
next result, the proof of which is straightforward, shows how this can be done.
Proposition 9: If - is an E-symmetry measure, then
k/me
Y
k/me
both define a consistent E-symmetry measure. If - is H-invariant, then - min and - are
H-invariant as well.
It is easy to see that - consistent. The next result shows how one can
obtain symmetry measures from similarity measures.
Proposition 10: Let H be a subgroup of G and E ' C(H) such that
If σ is an H-invariant similarity measure, then τ given by
τ(A, e) := σ(A, ⋆_e(A))   (27)
is a consistent H-invariant E-symmetry measure.
Remarks. (a) If we do not assume the conditions in (26), the equality e (A) j h(A)
yields that This implies that h(A) is
e-symmetric.
(b) It is tempting to replace (27) by: τ(A, e) := σ(A, e(A)). However, such a definition
does not allow us to consider invariance under groups H which contain e. For, σ(A, e(A)) = 1
if σ is H-invariant and e ∈ H.
The following example is based on the similarity measure oe 2 given by (21).
Example 4: Let E consist of the rotations e_m = r_{2π/m}, where m is a positive integer.
Furthermore, let H = S+ and σ = σ₂. It is clear that condition (26) in Proposition 10 holds, hence
τ(A, e_m) := σ₂(A, ⋆_{e_m}(A)) defines an S+-invariant E-symmetry measure.
There are other construction methods for symmetry measures besides those based on
similarity measures. Below we present several examples of symmetry measures based on
Minkowski addition.
In Proposition 8 we have seen that V(⋆_e(A)) ≥ V(A) if e is a cyclic transformation. Let
E be a collection of cyclic transformations; we define
τ₁(A, e) := V(A) / V(⋆_e(A)).   (28)
Proposition 11 below shows that τ₁ defines a consistent E-symmetry measure. There is an
alternative way to define a symmetry measure using mixed volumes. It is based upon the
observation (see (13)) that V(A, e(A)) ≥ V(A)
if e is a cyclic transformation. We define
τ₂(A, e) := V(A) / V(A, e(A)).   (29)
Note that in Example 3 we have discussed the case where E comprises all cyclic rotations.
At first sight, it seems possible to define yet another symmetry measure by replacing
e(A) by ⋆_e(A) in (29). However, a simple calculation using properties (9)-(10)
shows that V(A, ⋆_e(A)) = V(⋆_e(A), ⋆_e(A)); using (7), one gets that V(A, ⋆_e(A)) = V(⋆_e(A)).
Therefore, such a definition would coincide with τ₁ in (28).
Proposition 11: Let E be a given collection of cyclic transformations; then τ₁ and τ₂
given by (28) and (29), respectively, are E-symmetry measures. The measure τ₁ is consistent.
Suppose, furthermore, that H ⊆ G is such that h e h⁻¹ ∈ E for every h ∈ H and e ∈ E; then τ₁
and τ₂ are H-invariant.
If e is a finite-order rotation or a reflection, and if P is a convex polygon whose perimetric
representation is given, then it is easy to compute the perimetric representation of ⋆_e(P)
by merging the perimetric representations of e^i(P); see Section II. This also leads to an
efficient computation of the symmetry measure τ₁.
Example 5 (Rotations) Let E consist of all cyclic rotations. Then τ₁ given by (28) is a
consistent S-invariant E-symmetry measure, S being the group of similitudes. Because of
the consistency of τ₁, it suffices to consider E = { r_{2π/m} : m = 1, 2, … }. Given a polygon
P and a rotation r over the angle 2π/m, for some m ≥ 1, the r-symmetrization ⋆_r(P) is
a polygon which is symmetric under rotations of order m. If M(P, u) is the perimetric
measure of P, then we can use (2) to find the perimetric measure of ⋆_r(P):
M(⋆_r(P), u) = (1/m) Σ_{i=0}^{m-1} M(P, r^{-i}(u)).
It is obvious that
∠r^{-i}(u) = (∠u - 2πi/m) mod 2π.
Using formula (19), we can compute τ₁ directly. Table II in Section VI contains the
outcomes for a given collection of convex polygons.
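A sketch of this computation, reusing the (angle, length) encoding of the perimetric measure introduced earlier; the measure itself is τ₁ = V(P)/V(⋆_r(P)) per (28), evaluated here with any area routine for perimetric representations (e.g. the one based on (19) sketched above). Function names are ours:

import math
from collections import defaultdict

def rotation_symmetrization(rep_P, m):
    """Perimetric measure of the r-symmetrization of P for r = rotation over 2*pi/m:
    M(star_r(P), u) = (1/m) * sum_i M(P, r^{-i}(u)); i.e. merge the m rotated copies
    of the perimetric measure and divide every length by m (the 1/m scaling)."""
    two_pi = 2.0 * math.pi
    merged = defaultdict(float)
    for i in range(m):
        for ang, length in rep_P:
            key = round((ang + two_pi * i / m) % two_pi, 12)
            merged[key] += length / m
    return sorted(merged.items())

def tau1_rotation(rep_P, m, area_fn):
    """tau_1(P, r_{2*pi/m}) = V(P) / V(star_r(P)), cf. (28); 'area_fn' is any routine
    computing the area from a perimetric representation, e.g. the one based on (19)."""
    return area_fn(rep_P) / area_fn(rotation_symmetrization(rep_P, m))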
Example 6 (Line reflections) In this example we restrict ourselves to convex polygons.
If E consists of all line reflections, then - 1 given by (28) defines an S-invariant E-symmetry
measure. For a line reflection ℓ_α we find ⋆_{ℓ_α}(P) = (P ⊕ ℓ_α(P))/2.
Like in the previous example, we can compute the perimetric measure M(ℓ_α(P), u) if the
perimetric measure of P is given:
M(ℓ_α(P), u) = M(P, ℓ_α(u)),   where   ∠ℓ_α(u) = (2α - ∠u) mod 2π.
In Table III in Section VI we compute the symmetry measure τ₁ for several convex polygons
for five different angles α.
The symmetry measure τ₂ given by (29) amounts to τ₂(A, ℓ_α) = V(A) / V(A, ℓ_α(A)).
Thus we get that
τ₁(A, ℓ_α) = 4 V(A) / V(A ⊕ ℓ_α(A)) = 2 τ₂(A, ℓ_α) / (1 + τ₂(A, ℓ_α)).   (30)
In most of the literature, one does not compute the symmetry measure for specific line
reflections ℓ_α, but rather the maximum over all lines. In our setting this leads to the
following definition. A function ι : K → [0, 1] is called an index of reflection symmetry if ι
is continuous, and ι(A) = 1 if and only if A is reflection symmetric with respect to some
line. If τ is a measure of reflection symmetry, such as τ₁ in (30), then
ι(A) := max_{α ∈ [0, π)} τ(A, ℓ_α)
is an index of reflection symmetry.
The computation of this index can be done efficiently because of the following observations.
Since ℓ_α = r_{2α} ℓ_0, we have V(P ⊕ ℓ_α(P)) = V(P ⊕ r_{2α}(ℓ_0(P))), and we conclude from Proposition 3
that α ↦ V(P ⊕ ℓ_α(P)) is piecewise concave on (α_k, α_{k+1}), where the angles 2α_k are the critical
angles of P and ℓ_0(P) lying between 0 and 2π. Thus every α_k is of the form (1/2)(∠u_i + ∠u_j),
where P has perimetric representation { (u_i, p_i) : i = 1, …, n_P }.
This yields that the minimum of α ↦ V(P ⊕ ℓ_α(P)) is achieved at one of the angles α_k.
Using the same argument as in Proposition 4, one finds that the index can be computed
in O(n_P³) time.
In Table III we also give the index as well as the angle of the reflection
axis for which the index (maximum) is attained.
Example 7 (Skew-symmetry) A shape A is said to be skew-symmetric if there exists
an affine transformation g 2 G+ such that g(A) is reflection symmetric with respect to
some line. In this example we show that one can use the notion of canonical shapes
(Subsection IV-C) to find 'how skew-symmetric' a given shape is.
Suppose that A is skew-symmetric; then g(A) is reflection symmetric for some g 2 G+ .
The symmetry line of g(A) coincides with one of the axes of inertia, and therefore it is also
a symmetry axis of (g(A)) ffl . As this latter shape is a rotation of A ffl (see Proposition 6(b)),
we conclude that A ffl is reflection symmetric, too. Conversely, if A ffl is reflection symmetric,
then A is skew-symmetric (for, A ffl is the result of two stretchings along the principal axes
of the ellipse of inertia of A). Thus we find that A is skew-symmetric if and only if A ffl is
reflection symmetric. This yields immediately that we obtain an index of skew symmetry
from any index of reflection symmetry applied to the canonical shapes; see Example 6.
VI. Experimental results
In this section the results obtained previously will be applied to some concrete examples.
We consider four, more or less regular, shapes, namely: a triangle, a square, a tetragon
with one reflection axis, and a regular octagon. These shapes, along with their canonical
forms, are depicted in Fig. 6. In this figure, we depict four other convex polygons (and
their canonical forms), namely: P , a reflection of P denoted by P refl , a distortion of P
denoted by Q (the lower three points have been shifted in the x-direction), and an affine
transformation of Q denoted by Q aff .
In Table I we compute the similarity measure σ₂ given by (22), which is S+-invariant.
In the first row we compute oe 2 (Q; R), where R is one of the other polygons depicted in
Fig. 6. In the third row we compute the values σ₂(Q_aff, R). The second row contains the
values σ₂•(Q, R), where σ₂• is the G+-invariant similarity measure obtained from Proposition 7.
Observe that we do not compute σ₂•(Q_aff, R), since these values are identical to
σ₂•(Q, R). Note for example that using σ₂, which is invariant under rotations and multi-
plications, Q is more similar to the square than to the tetragon (values 0.724 and 0.692,
respectively), whereas σ₂•, which is G+-invariant, gives opposite results (values 0.907 and
0.920, respectively).
In Table I we also give the angle at which the maximum in expression (22) is achieved.
Often, this 'optimal angle' depends, to a large extent, on the similarity measure that is
being employed.
Table II and Table III are concerned with symmetry measures for rotations and reflections,
respectively. In Table II we illustrate the measure τ₁ discussed in Exam-
Fig. 6. Polygons used in the experiments described in this section. Note that Q•_aff is a rotation of Q•.
ple 5 for e_m = r_{2π/m}, corresponding with rotations over 360°/m. Observe that both τ₁ and τ₂ defined in (28)
and (29), respectively, satisfy
τ(g(A), g e g⁻¹) = τ(A, e)
for every shape A and every affine transformation g.
Table II shows the rotation symmetries of the square (90° and 180°) and the octagon.
The triangle is not rotation symmetric, but the
measure is maximal at an angle of 120° (value 0.696). Note also that P is almost
180°-rotation invariant (value 0.995).
Table III shows the reflection symmetry measure of Example 6 for five different reflection
axes. Furthermore, the two bottom rows capture the maximum over all axes (i.e., the index
of reflection symmetry; see Example 6) and the angle at which this maximum is attained.
Table III shows that the triangle and tetragon have one reflection axis (90°), and the
square and octagon have three of the tested reflection axes as symmetry axes. Furthermore, P is
almost reflection symmetric with respect to the axes at 0° and 90° (value 0.995). The
angle at which the index of reflection symmetry is attained is almost the same for P and Q
(for Q it is 95.3°).
VII. Conclusions
The objectives of this paper are twofold: on the one hand we wanted to give a formal
definition of similarity and symmetry measures that are invariant under a given group of
transformations, and to derive some general properties of such measures; see for example
Propositions 2, 7, 9, and 10. But, on the other hand, we have introduced some new examples
of such measures based on Minkowski addition and the Brunn-Minkowski inequality.
We believe that our analysis shows that such measures can be useful in certain applica-
tions. By no means, however, do we claim that our approach can be usefully applied in
every shape analysis problem. It is clear that our restriction to convex sets is a very severe
one: we will come back to this issue below.
As our approach is based on the area of shapes, it will be difficult to compare it with
boundary-oriented approaches, such as boundary matching. Van Otterloo [16, p.143]
points out that the quality of a similarity measure is a subjective matter: "it is usually
not possible to make general statements about the quality of a similarity measure on the
basis of results in a particular application: a measure that performs well in character
recognition does not necessarily perform well in industrial inspection."
Our similarity measures do not possess the triangle inequality. Moreover, the approach
applied in the paper is limited to convex shapes. In particular this second limitation is
a major drawback. To use our approach with non-convex shapes there are at least two
options. Firstly, one might still use the perimetric measure of a nonconvex shape, even
though it characterizes only a convex shape. Alternatively, one can choose to work with
the convex hull of non-convex shapes. In both cases, one has to give up some properties
of a similarity measure as given in Definition 2, in particular 4.
Although our exposition is mainly restricted to the 2D case, the approach has a straightforward
extension to 3D (and higher dimensional) shapes. For example, in the case of 3D
shapes, instead of using the perimetric representation, we must use the so-called slope
diagram representation [40]. Note that from the computational point of view the 3D case
becomes much more difficult, however. We will study such problems in our future work.
Acknowledgments
A. Tuzikov is grateful to the Centrum voor Wiskunde en Informatica (CWI, Amsterdam)
for its hospitality. He also would like to thank G. Margolin and S. Sheynin for discussions
on some results in this paper.
--R
"Recognize the similarity between shapes under affine transformation,"
"Image normalization for pattern recognition,"
"An efficiently computable metric for comparing polygonal shapes,"
"Identification of partially obscured objects in two and three dimensions by matching noisy characteristic curves,"
Computer Vision
"Similarity and affine invariant distances between 2D point sets,"
"Comparing images using the Hausdorff distance,"
"A multiresolution algorithm for rotation-invariant matching of planar shapes,"
"Scale-based description and recognition of planar curves and two-dimensional shapes,"
"Elliptic Fourier features of a closed contour,"
"Classification of partial 2-D shapes using Fourier descriptors,"
"Shape discrimination using Fourier descriptors,"
"Fourier descriptors for plane closed curves,"
A Contour-Oriented Approach to Digital Shape Analysis
"A determinition of the center of an object by autoconvolution,"
"Measures of symmetry for convex sets,"
"Convexity and symmetry: Part 2,"
"On symmetry detection,"
"Measures of axial symmetry for ovals,"
"On the detection of the axes of symmetry of symmetric and almost symmetric planar images,"
"Measures of N-fold symmetry for convex sets,"
"Detection of generalized principal axes in rotationally symmetric shapes,"
"Symmetry detection through local skewed symmetries,"
"Finding axes of skewed symmetry,"
"Detection of partial symmetry using correlation with rotated- reflected images,"
"Symmetry as a continuous feature,"
The Brunn-Minkowski Theory
"Similarity and symmetry measures for convex sets based on Minkowski addition,"
"A unified computational framework for Minkowski operations,"
Metric Affine Geometry
"Computing volumes of polyhedra,"
Symmetry in Science and Art
"Mathematical morphological operations of boundary-represented geometric objects,"
--TR
--CTR
Jos B. T. M. Roerdink , Henk Bekker, Similarity measure computation of convex polyhedra revisited, Digital and image geometry: advanced lectures, Springer-Verlag New York, Inc., New York, NY, 2001
Alexander V. Tuzikov , Stanislav A. Sheynin, Symmetry Measure Computation for Convex Polyhedra, Journal of Mathematical Imaging and Vision, v.16 n.1, p.41-56, January 2002
Hamid Zouaki, Representation and geometric computation using the extended Gaussian image, Pattern Recognition Letters, v.24 n.9-10, p.1489-1501, 01 June
Antonio Chella , Marcello Frixione , Salvatore Gaglio, Conceptual Spaces for Computer Vision Representations, Artificial Intelligence Review, v.16 n.2, p.137-152, October 2001
Andrew B. Kahng, Classical floorplanning harmful?, Proceedings of the 2000 international symposium on Physical design, p.207-213, May 2000, San Diego, California, United States
Bertrand Zavidovique , Vito Di Ges, The S-kernel: A measure of symmetry of objects, Pattern Recognition, v.40 n.3, p.839-852, March, 2007 | symmetry measure;minkowski addition;convex set;similarity measure;brunn-minkowski inequality |
287016 | Location- and Density-Based Hierarchical Clustering Using Similarity Analysis. | AbstractThis paper presents a new approach to hierarchical clustering of point patterns. Two algorithms for hierarchical location- and density-based clustering are developed. Each method groups points such that maximum intracluster similarity and intercluster dissimilarity are achieved for point locations or point separations. Performance of the clustering methods is compared with four other methods. The approach is applied to a two-step texture analysis, where points represent centroid and average color of the regions in image segmentation. | Introduction
Clustering explores the inherent tendency of a point pattern to form sets of points
(clusters) in multidimensional space. Most of the previous clustering methods assume
tacitly that points having similar locations or constant density create a single cluster
(location- or density-based clustering). Two ideal cases of these clusters are shown in
Figure
1. Location or density becomes a characteristic property of a cluster. Other
properties of clusters are proposed based on human perception [1, 2, 3] (Figure 2
left) or specific tasks (texture discrimination from perspective distortion [4]), e.g.,
points having constant directional change of density in Figure 2 right. The properties
of clusters have to be specified before the clustering is performed and are usually a
priori unknown.
This work presents a new approach to hierarchical clustering of point patterns.
Two hierarchical clustering algorithms are developed based on the new approach.
The first algorithm detects clusters with similar locations of points (location-based
clustering). This method achieves identical results as centroid clustering [5, 6, 7]
with slight improvement in uniqueness of solutions. The second algorithm detects
clusters with similar point separations (density-based clustering). This method can
create clusters with points being spatially interleaved and having dissimilar densities
called transparent clusters. Figure 3 shows two transparent clusters. The detection
Figure 1: Ideal three (left) and two (right) clusters for location-based (left) and density-based (right) clusterings.
Figure 2: Illustration of other possible properties of points creating a cluster.
Left - two clusters with smoothly varying nonhomogeneous densities taken from [1].
Right - two clusters with constant directional change of density.
Figure 3: Two transparent clusters, C1 and C2.
of transparent clusters is a unique feature of the method among all existing clustering
methods.
The two methods are developed using similarity analysis. The similarity analysis
relates intra-cluster dissimilarity with inter-cluster dissimilarity. The dissimilarity of
point locations or point separations is considered for clustering and is denoted in general
as a dissimilarity of elements e i . Each method can be described as follows. First,
every element e i gives rise to one cluster C e i
having elements dissimilar to e i by no
more than a fixed number '. Second, a sample mean -
of all elements in C e i
is cal-
culated. Third, clusters would be formed by grouping pairs of elements if the sample
means computed at the two elements are similar. Fourth, the degree of dissimilarity '
is used to form several (multiscale) partitions of a given point pattern. A hierarchical
organization of clusters within multiscale partitions is built by agglomerating clusters
for increasing degree of dissimilarity. Lastly, the clusters that do not change for a
large interval of ' are selected into the final partition. Experimental evaluation is
conducted for synthetic point patterns, standard point patterns (8Ox - handwritten
character recognition, IRIS - flower recognition) and point patterns obtained from
image texture analysis. Performance of the clustering methods is compared with four
other methods (partitional - FORGY, CLUSTER, hierarchical - single link, complete
link [5]). Detection of clusters perceived by humans (Gestalt clusters [1]) is shown.
Location- and density-based clusterings are suitable for texture analysis. A texture
is modeled as a set of uniformly distributed identical primitives [8] (see Figure 4).
A primitive is described by a set of photometric, geometrical or topological features
(e.g., color or shape). Spatial coordinates of a primitive are described by another
set of features. Thus the point pattern obtained from texture analysis consists of
two sets of features (e.g., centroid location and average color of primitives) and has
to be decomposed first. Location-based clustering is used to form clusters corresponding
to identical primitives in one subspace (color of primitives). Density-based
clustering creates clusters corresponding to uniformly distributed primitives in the
other subspace (centroid locations of primitives). The resulting texture is identified
by combining clustering results in the two subspaces. This decomposition approach
is also demonstrated on point patterns obtained in other application domains. In
general, it is unknown how to determine the choice of subspaces. Thus an exhaustive
search for the best division in terms of classification error is used in the experimental
part for handwritten character recognition and taxonomy applications.
The salient features of this work are the following. First, a decomposition of the
Figure 4: Example of textures.
Top - original image. Bottom - the resulting five textures (delineated by black line)
obtained by location- and density-based clusterings.
clustering problem into two lower-dimensional problems is addressed. Second, a new
clustering approach is proposed for detecting clusters having any constant property
of points (location or density). Third, a density-based clustering method using the
proposed approach separates spatially interleaved clusters having various densities,
thus is unique among all existing clustering methods. The methods can be related to
the graph-theoretical algorithms.
This paper is organized as follows. Section 2 provides a short overview of previous
clustering methods. Theoretical development of the proposed clustering method is
presented in Section 3. Analytical, numerical and experimental performance evaluations
of the clustering method follow in Section 4. Section 5 presents concluding
remarks.
Previous Work
Clustering is understood as a low-level unsupervised classification of point patterns
[5, 9]. A classification method assigns every point into only one cluster without a
priori knowledge. All methods are divided into partitional and hierarchical methods.
Partitional methods create a single partition of points while hierarchical methods give
rise to several partitions of points that are nested.
Partitional clustering methods can be subdivided roughly into (1) error-square
clusterings [5, 6], (2) clustering by graph theory [10, 2, 5, 3] and (3) density estimation
clusterings [7, 5, 11, 6]. Error-square clusterings minimize the square error
for a fixed number of clusters. These methods require to input the number of sought
clusters as well as the seeds for initial cluster centroids. Comparative analysis in
this work is performed using the implementations of error-square clusterings called
FORGY and CLUSTER [5]. FORGY uses only one K-means pass, where K is a given
number of clusters in the final partition. CLUSTER uses the K-means pass followed
by a forcing pass. During the forcing pass all mergers of the clusters obtained from the
K-means pass are performed until the minimum square error is achieved. Clustering
by graph theory uses geometric structures such as, minimum spanning tree (MST),
relative neighborhood graph, Gabriel graph and Delaunay triangulation. The methods
using these geometric structures construct the graph first, followed by removal
of inconsistent edges of the graph. Inconsistent edges and how to remove edges are
specified for each method. Due to computational difficulties, only methods using
MST are used for higher than three dimensional point patterns. Density estimation
clusterings have used the two approaches: (a) count the number of points within a
fixed volume (mode analysis, Parzen window), (b) calculate the volume for a fixed
number of points (k-nearest neighbors). These methods vary in their estimators
(Parzen, Rosenblatt, Loftsgaarden and Quesenberry [11]).
Two most commonly used hierarchical clusterings are single-link and complete-link
methods [12, 5]. Both methods are based on graph theory. Every point represents
a node in a graph and two nodes are connected with a link. A length of a link is
computed as the Euclidean distance between two points. Single- and complete-link
clusterings begin with individual points in separate clusters. Next, all links having
smaller length than a fixed threshold create a threshold graph. The single link method
redefines current clustering if the number of maximally connected subgraphs of the
threshold graph is less than the number of current clusters. The complete-link method
does the same for maximally complete subgraphs. A connected subgraph is defined
as any graph that connects all nodes (corresponding to points). A complete subgraph
is any connected subgraph that has at least one link for all pairs of nodes (points).
The implementations of single-link and complete-link clusterings based on Hubert's
and Johnson's algorithms [5] are used for the comparative analysis in this work.
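For readers who want a baseline to compare against, both reference hierarchies are available off the shelf; a minimal sketch (the data X and the cut threshold are placeholders, not values from this paper):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(100, 2)                       # placeholder point pattern
Z_single = linkage(X, method='single')           # single-link hierarchy
Z_complete = linkage(X, method='complete')       # complete-link hierarchy
# flat partition obtained by cutting the single-link dendrogram at distance 0.1
labels = fcluster(Z_single, t=0.1, criterion='distance')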
The use of clustering methods can be found in many applications related to remote
sensing [13, 14, 15], image texture detection [16], taxonomy [12, 17], geography [18,
19] and so on. The objective of this work is to contribute to (1) the theoretical
development of non-existing clustering methods and (2) the use of clustering for
texture detection.
3 Location- and Density-Based Clusterings
First, a mathematical framework is established in Section 3.1. The clustering method
is proposed in Section 3.2. The algorithms for hierarchical location- and density-based
clusterings are outlined in Section 3.3 and related methods to the proposed ones are
compared in Section 3.4.
3.1 Mathematical formulation
An n-dimensional (nD) point pattern is defined as a set of points I = {p_i}, i = 1, …, P, with
coordinates p_i = (x_{i1}, …, x_{in}). A general goal of unsupervised clustering is to partition a
set of points I into non-overlapping subsets of points {C_j}, j = 1, …, N, such that
C_j = { p_i : i ∈ W_j }, ∪_j C_j = I and C_j ∩ C_k = ∅ for j ≠ k,
where W_j is an index set from all integer numbers in the interval [1, P].
The subsets of points are called clusters and are characterized in this work by
the similarity (dissimilarity) of point locations or point separations. A notion of an
element e is introduced to refer either to a point location e = p_i
or to a point separation e = d(l_{p1,p2}) (the Euclidean distance between the two
points p1 and p2, also called the length of the link l_{p1,p2}).
In general, every cluster of elements can be characterized by its maximum
intra-cluster dissimilarity ' (equivalently, its minimum intra-cluster similarity) and its minimum
inter-cluster dissimilarity ff, with the dissimilarity value of any two elements defined as their Euclidean distance.
Figures 5 and 6 show a cluster of points CF j and a cluster of point separations (links) CL j
characterized in this way.
One would like to obtain clusters with a minimum intra-cluster
and maximum inter-cluster dissimilarity (maximum intra-cluster and minimum inter-cluster
similarity) in order to decrease the probability of misclassification. Thus our
goal is to partition a point pattern I into nonoverlapping clusters fC j g N
having a
minimum intra-cluster and maximum inter-cluster dissimilarity of elements.
If clusters of elements are not clusters of points as in the case point separations
then a mapping from the clusters obtained to clusters of points is performed. The
mapping from clusters of point separations (links) to clusters of points takes two
steps: (1) Construct a minimum spanning tree from the average values of individual
clusters of links. (2) Form clusters of points sequentially from clusters of links in the
order given by the minimum spanning tree (from smaller to larger average values of
clusters).
Figure 5: Characteristics of clusters of points. Clusters of two-dimensional points are illustrated. All points from one cluster are within a sphere having the center at p_midp and a radius determined by the intra-cluster dissimilarity.
Figure 6: Characteristics of clusters of links.
Top: Three clusters of links partitioning two-dimensional points into three clusters of points.
Bottom: Characteristics of the three clusters of links. The horizontal axis represents values of Euclidean distances between pairs of points d(l_k). Links in each cluster of links differ in length by no more than ".
3.2 The clustering method
Given a set of elements I and the goal, the unknown parameters of the classification
problem are the values ' and ff for each final cluster, as well as the number of final
clusters N. Two steps are taken to partition the input elements into clusters. First,
a value of intra-cluster dissimilarity ' is fixed and clusters characterized by ' are
formed by grouping pairs of elements. The result of the first step is a set of clusters
denoted as fCE '
m=1 since they are only characterized by '. Second, a value of
' is estimated. A cluster CE '
j with the estimated value ' is selected into the final
partition fCE j g N
. The choice of CE '
j is driven by a maximization of inter-cluster
dissimilarity ff and a minimization of intra-cluster dissimilarity '. The final partition
is aimed to be identical with a ground truth partition fC j g N
j=1 , which is
assumed to exist for the purpose of evaluating the classification accuracy (number
of misclassified elements). The development of the proposed classification method is
described next by addressing the following issues.
(1) Given a fixed value of intra-cluster dissimilarity ', how to estimate an unknown
cluster at a single element?
(2) How to group pairs of elements based on estimates calculated at each element?
(3) How to estimate a value of intra-cluster dissimilarity ' of an unknown cluster of
(1) Estimate of an unknown cluster derived from a single element
In order to create an unknown cluster C j , every pair of elements in C j should be
grouped together. The grouping is based on a certain estimate of the cluster C j
computed at each element. The best estimate of an unknown cluster C j is obtained
at a single element e i if the element e i gives rise to a cluster C e i
identical with the
unknown one C j . It would be possible to create the cluster C e i
if the unknown
cluster C j of elements is characterized by a value of inter-cluster dissimilarity ff larger
Figure 7: A cluster of points with ff > '.
Figure 8: A cluster of links with ff > '.
than a value of intra-cluster dissimilarity ' (see Figures 7 and 8). Under the assumption
ff > ', the cluster C_{e_i} is obtained from any element e_i of C_j by grouping
together all other elements e_k satisfying the inequality || e_k - e_i || ≤ '. Thus for any
two elements e_1 and e_2 from the cluster C_{e_i}, their pairwise dissimilarity is always
at most 2'; if || e_1 - e_i || ≤ ' and || e_2 - e_i || ≤ ', then || e_1 - e_2 || ≤ 2'. The last fact about the 2'
intra-cluster dissimilarity leads to the notation C^{2'}_{e_i}.
(2) Grouping elements into clusters using similarity analysis
The final clusters {CE'_m} characterized by ' are obtained in the following way:
(a) Create the clusters C^{2'}_{e_i} = { e_k : || e_k - e_i || ≤ ' } for every element e_i.
(b) Compare all pairs of clusters C^{2'}_{e_i} and C^{2'}_{e_k}.
(c) Assign elements into the final clusters of elements {CE'_m} based on the comparisons
in (b).
Steps (b) and (c) are performed using similarity analysis. The similarity analysis
relates intra-cluster dissimilarity ' and inter-cluster dissimilarity ff of an unknown
cluster C ';ff
. The relationship between ff and ' breaks down into two cases; ff ? '
and ff - '.
For ff ? ', an unknown cluster C ';ff
j has a value of inter-cluster dissimilarity
larger than a value of intra-cluster dissimilarity. In this case, clusters C 2'
are either
identical to or totally different from an unknown cluster C ';ff
. Thus two elements
and e 2 would belong to the same final cluster CE '
e2 . For ff - ', an
unknown cluster C ';ff
j has a value of inter-cluster dissimilarity smaller or equal to a
value of intra-cluster dissimilarity. In this case, clusters C 2'
are not identical to
an unknown cluster C ';ff
. A cluster C 2'
is a superset of C ';ff
because the cluster C 2'
also contains some exterior elements of C ';ff
j due to ff - '.
For ff - ', it is not known how to group elements into clusters and the analysis
of this case proceeds. Our analysis assumes that the case ff - ' occurs due to a
random noise. This assumption about random noise leads to a statistical analysis of
similarity of clusters C 2'
. Two issues are investigated next: (i) a statistical parameter
of a cluster C ';ff
j that would be invariant in the presence of noise and (ii) a maximum
deviation of two statistically invariant parameters computed from clusters C ';ff
. First, let us assume that deterministic values of elements are corrupted by
a zero mean random noise with a symmetric central distribution. Then a sample
mean (average) of elements would be a statistically invariant parameter because the
mean of noise is zero. Although the sample mean of noise corrupted elements varies
depending on each realization of noise, it is a fixed number for a given set of noise
corrupted elements. Thus the sample mean -
- j of noise corrupted elements in C ';ff
j is a
statistically invariant parameter under the aforementioned assumptions about noise.
Second, a sample mean is computed from each cluster C 2'
and is denoted as
. The deviation of -
from -
- j is under investigation. If C 2'
is a subset of C '
PROBABILITY DISTRIBUTION
e
e
Figure
9: Confidence interval for 1D case of e i .
then the sample mean - e i would not deviate by more than ' from - j . This statement
is always true: for two arbitrary subsets C 2' e k and C 2' e l of C ' j , their sample means
- e k and - e l would not be more than ' apart. If C 2' e i is a superset of C ' j , then
the same bound on the deviation of - e i from - j is assumed as before. The validity of
this assumption depends on the ratio of elements from the true cluster C ' j to elements
from other clusters exterior to C ' j . Thus for the
second issue, the sample mean - e i is not expected to deviate from - j by more than '
(see Figure 9), and any two elements e 1 and e 2 would be grouped together if their
corresponding sample means - e1 and - e2 are not more than ' apart.
The same grouping rule (sample means not more than ' apart) used for ff - ' can be
applied to the case ff ? '.
There would be no classification error in the final partition fCE ' m g for the case
ff ? ' if this rule was used. For ff - ', the classification error is evaluated in a
statistical framework as the probability that two sample means from the same cluster are
more than ' apart. The complement of this probability corresponds to a confidence interval
of the mean estimator with a confidence coefficient - and upper and lower confidence limits
\Sigma'.
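The grouping rule above can be summarized in a short sketch. It implements steps (a)-(c) under the assumption that dissimilarity is Euclidean distance; the union-find style merging is an implementation choice of this sketch, not prescribed by the paper.

```python
import numpy as np

def similarity_clustering(elements, eps):
    """Build C^{2eps}_{e_i} for every element, compute its sample mean, and
    put two elements into the same final cluster when their sample means
    differ by at most eps."""
    elements = np.asarray(elements, dtype=float)
    n = len(elements)
    means = np.empty_like(elements)
    for i in range(n):
        members = np.linalg.norm(elements - elements[i], axis=1) <= eps
        means[i] = elements[members].mean(axis=0)

    parent = list(range(n))                      # union-find over elements
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(means[i] - means[j]) <= eps:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]           # one cluster label per element
```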
(3) Estimation of intra-cluster dissimilarity '
The value of intra-cluster dissimilarity ' is a priori unknown for an unknown cluster
. An estimation of the value ' is based on the assumption that an unknown
cluster C j with a maximum inter-cluster dissimilarity ff does not change the elements
in C j for a large interval of values '. Thus clusters CE '
j that do not change their
elements for a large interval of ' are selected into the final partition fCE j g N
. The
set fCE j g N
j=1 is an estimate of the ground truth partition fC j g N
.
The procedure for an automatic selection of ' uses an analysis of hierarchical
classification results and consists of four steps: (i) produce multiple sets of clusters
by varying the value ' called multiscale classification, (ii) organize multiscale sets
of clusters into a hierarchy of clusters, (iii) detect clusters that do not change their
elements for a large interval of ' and (iv) select the value ' based on the analysis in
Step (iii). Hierarchical organization of the output is defined as a nested sequence of
sets of clusters along the scale axis. The nested sequence is understood as follows: a
cluster obtained at scale ' cannot split at any larger scale and cannot merge at any
smaller scale with other clusters. The hierarchy of multiscale classification results is guaranteed by
modifying elements within the final cluster CE '
m created at each scale ' to the sample
mean of elements of the cluster. This implementation of hierarchical organization
can be supported by the following fact. Two elements e 1 and e 2 which have identical
values belong to the same cluster CE '
m for all scales ' - 0.
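A minimal sketch of the scale-selection idea follows. It assumes a helper cluster_fn(elements, eps) that returns one label per element, measures stability of whole partitions rather than of individual clusters for brevity, and omits the replacement of elements by cluster means described above.

```python
def select_stable_partition(elements, eps_values, cluster_fn):
    """Cluster at every scale in eps_values, record the labelings, and keep
    the partition that does not change over the longest run of consecutive
    scales.  Returns (labels, eps at which the run starts, run length)."""
    history = [tuple(cluster_fn(elements, eps)) for eps in eps_values]
    best_len, best_start, run_start = 0, 0, 0
    for k in range(1, len(history) + 1):
        if k == len(history) or history[k] != history[run_start]:
            if k - run_start > best_len:
                best_len, best_start = k - run_start, run_start
            run_start = k
    return history[best_start], eps_values[best_start], best_len
```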
3.3 Clustering algorithms
Proposed density-based clustering, where elements are links, requires (1) to map
clusters of links into clusters of points and (2) to process a large number of links.
These two issues are tackled before the final algorithms for location- and density-based
clusterings are provided.
Specifics of density-based clustering
In order to obtain clusters of points, a mapping from clusters of links to clusters of
points is designed. The mapping consists of three steps. (1) Compute an average
link length of each cluster of links. (2) Construct a minimum spanning tree from the
average values of individual clusters of links. (3) Form clusters of points sequentially
from clusters of links in the order given by the minimum spanning tree (from smaller
to larger average values).
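The following sketch illustrates this mapping under a simplification: the MST of step (2) over the cluster averages is replaced by a plain sort of the averages, which yields the same smaller-to-larger processing order in the simple case. Here link_clusters is assumed to be a list of lists of link indices and links maps a link index to its two point indices.

```python
import numpy as np

def point_clusters_from_link_clusters(link_clusters, link_lengths, links):
    """Average link length per link cluster, then form clusters of points
    from smaller to larger average value, never re-using an assigned point."""
    avgs = np.array([np.mean([link_lengths[l] for l in c]) for c in link_clusters])
    assigned, point_clusters = set(), []
    for ci in np.argsort(avgs):                       # smaller averages first
        pts = {p for l in link_clusters[ci] for p in links[l]} - assigned
        if pts:
            point_clusters.append(pts)
            assigned |= pts
    return point_clusters
```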
Knowing the mapping, the number of processed links is decreased by merging
links in the order of the link distances d(l k ) (from the shortest links to the longest
links). Clusters CL "
are created and the corresponding clusters of points CS "
are
derived immediately. No other links, which contain already merged points
will be processed afterwards. When the union of all clusters of points includes all
given points ([CS "
more links are processed.
Clustering algorithm for location-based clustering
(2) Create a cluster CF 2ffi p i at each point p i , grouping all points whose distance
from p i is at most ffi.
(3) Calculate the sample means of the clusters CF 2ffi p i .
(4) Group together any two points p 1 and p 2 into a cluster CF ffi m if their sample
means are not more than ffi apart.
(5) Assign the sample mean of a cluster CF ffi m to all points in CF ffi m (for all m).
(6) Increase ffi and repeat from Step 2 until all points are clustered into one cluster.
(7) Select those clusters CF ffi m into the final partition fCF j g N j=1 that do not
change over a large interval of ffi values (a sketch of these steps follows below).
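A compact Python sketch of the listed steps, assuming points are coordinate tuples and dissimilarity is Euclidean distance; the greedy order in which points are grouped by mean closeness is an implementation choice of the sketch.

```python
import numpy as np

def location_based_clustering(points, deltas):
    """At each scale delta: group every point with its neighbours within
    delta, merge points whose neighbourhood means are within delta, and
    replace each point by its cluster mean before the next (larger) delta."""
    pts = np.asarray(points, dtype=float).copy()
    partitions = []
    for delta in deltas:                              # increasing values of delta
        means = np.vstack([pts[np.linalg.norm(pts - p, axis=1) <= delta].mean(axis=0)
                           for p in pts])
        labels = -np.ones(len(pts), dtype=int)
        for i in range(len(pts)):                     # greedy grouping by mean closeness
            if labels[i] < 0:
                close = np.linalg.norm(means - means[i], axis=1) <= delta
                labels[close & (labels < 0)] = i
        for lab in np.unique(labels):                 # hierarchical modification
            pts[labels == lab] = pts[labels == lab].mean(axis=0)
        partitions.append(labels.copy())
    return partitions            # stable partitions are selected afterwards
```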
Clustering algorithm for density-based clustering
(1) Calculate point separations d(l k ) (lengths of links) for all pairs of points p i .
(2) Order the d(l k ) from the shortest to the longest.
(3) Create a cluster of links CL 2" l k for each individual link l k , grouping all links
whose lengths differ from d(l k ) by no more than ".
(4) Calculate the sample means - l k of the link lengths in each cluster CL 2" l k .
(5) Group together pairs of links l 1 and l 2 sharing one point p i into a cluster of links
CL " m if their sample means are not more than " apart.
(6) Assign those unassigned points to clusters CS " m which belong to links creating the
clusters CL " m .
(7) Remove all links from the ordered set which contain already assigned points.
(8) Perform the calculations from Step 3 for an increased upper limit while there exist
unassigned points.
(9) Assign the link average of a cluster CL " m to all links from the cluster CL " m
(for all m).
(10) Increase " and repeat from Step 2 until all points are partitioned into one cluster.
(11) Select those clusters CL " m into the final partition fCS j g N j=1 that do not
change over a large interval of " values (a sketch follows below).
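A simplified single-scale sketch of the density-based steps; it treats all point pairs as links, grows one link cluster at a time starting from the shortest unused link, and omits the multiscale loop over " and the averaging of Step (9).

```python
import numpy as np
from itertools import combinations

def density_based_clustering(points, eps):
    """Group links of similar length that share points, consuming the
    shortest links first, and map each link cluster to the points it covers."""
    pts = np.asarray(points, dtype=float)
    links = list(combinations(range(len(pts)), 2))
    lengths = np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in links])
    order = np.argsort(lengths)
    assigned, point_clusters = set(), []
    for idx in order:
        a, b = links[idx]
        if a in assigned or b in assigned:
            continue                                  # skip links with used points
        cluster_pts = {a, b}
        for j in order:                               # links of similar length sharing a point
            c, d = links[j]
            if abs(lengths[j] - lengths[idx]) <= eps and ({c, d} & cluster_pts) \
               and not ({c, d} & assigned):
                cluster_pts |= {c, d}
        point_clusters.append(cluster_pts)
        assigned |= cluster_pts
    for p in range(len(pts)):                         # leftover points form singletons
        if p not in assigned:
            point_clusters.append({p})
    return point_clusters
```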
3.4 Related clustering methods
Location-based clustering is related to centroid clustering [5] and density-based clustering
is related to Zahn's method [1]. Centroid clustering achieves results identical
to the proposed location-based clustering although the algorithms are different (see
[5]). The only difference in performance is in the case of equidistant points, when the
proposed method gives a unique solution, while the centroid clustering method does
not, due to sequential merging and updating of point coordinates.
Zahn's method consists of the following steps: (1) Construct the minimum spanning
tree (MST) for a given point pattern. (2) Identify inconsistent links in the
MST. (3) Remove inconsistent links to form connected components (clusters). A
link is called inconsistent if the link distance is significantly larger than the average of
nearby link distances on both sides of the link. The proposed density-based clustering
differs from Zahn's clustering in the following ways: (1) We use the average of the
largest set of link distances (the descriptors of CL 2" l k ) rather than nearby link
distances for defining inconsistent links, and this leads to more accurate estimates of inconsistent
links. (2) We replace the threshold for removing inconsistent links ("significantly
larger" in the definition of inconsistent links) with a simple statistical rule. (3) We
work with all links from a complete graph 1 rather than a few links selected by MST
(this is crucial for detecting transparent clusters).
Performance Evaluation
The problem of image texture analysis is introduced in Section 4.1. This problem
statement explains our motivation for pattern decomposition followed by using both
location- and density-based clusterings. Theoretical and experimental evaluations
of the methods follow next. The evaluation focuses on (1) clustering accuracy in
Section 4.2, (2) detection of Gestalt clusters in Section 4.3, and (3) performance on
real applications in Section 4.4.
4.1 Image texture analysis
An image texture is modeled as a set of uniformly distributed identical primitives
shown in Figure 4. Each primitive in Figure 4 is characterized by its color and size.
All primitives having similar colors and shapes are uniformly distributed therefore
the centroid coordinates of all texture interior primitives have similar inter-neighbor
distances. The goal of image texture analysis is to (1) obtain primitives, (2) partition
the primitives into sets of primitives called texture and (3) describe each texture
using interior primitives and their distribution. In this work, all primitives are found
Links between all pairs of points create a complete graph according to the notation in graph
theory.
based exclusively on their color. A homogeneity based segmentation [20] is applied
to an image. The segmentation partitions an image into homogeneous regions called
primitives (similar colors are within a region). A point pattern is obtained from all
primitives (regions) by measuring an average color and centroid coordinates of each
primitive.
Given the pattern, a decomposition of features is performed first. One set of features
corresponds to the centroid measurements and the other to the color measurements
of primitives. Two lower dimensional patterns are created from these features.
The location-based clustering is applied to the pattern consisting of the color feature
and the density-based clustering is applied to the pattern consisting of the centroid
coordinate features. Clustering results are combined and shown in Figure 4 (bottom).
The cluster similarity in each subspace provides a texture description characterized
by similarity of primitives and uniformity of distribution.
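Putting the pieces together, the pipeline could look like the sketch below. It reuses the clustering sketches given earlier and assumes each segmented region is summarized by an average colour and a centroid; the scale values delta and " are assumed to be chosen beforehand, e.g. by the stability analysis of Section 3.

```python
import numpy as np

def analyze_texture(regions, delta, eps):
    """regions: list of (avg_color, centroid_x, centroid_y).  The colour
    feature is clustered with the location-based sketch, the centroid
    coordinates with the density-based sketch, and the labels are combined."""
    colors    = np.array([[r[0]] for r in regions])       # 1-D colour pattern
    centroids = np.array([r[1:] for r in regions])        # 2-D location pattern
    color_labels   = location_based_clustering(colors, [delta])[0]
    spatial_groups = density_based_clustering(centroids, eps)
    spatial_labels = np.full(len(regions), -1, dtype=int)
    for lab, grp in enumerate(spatial_groups):
        for p in grp:
            spatial_labels[p] = lab
    # a texture = regions that agree on both the colour and the spatial label
    return list(zip(color_labels, spatial_labels))
```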
The density-based clustering was applied to the pattern shown in Figure 10 (top).
The points from the pattern are numbered from zero to the maximum number of
points. The output in Figure 10 (bottom) shows three clusters labeled by the number
of a point from each cluster that has the minimum value of its number. Partial
spatial occlusions of blobs in the original image gave rise to a corrupted set of features
corresponding to the centroid coordinates of primitives. It follows that the
lower dimensional points are not absolutely uniformly distributed in the corresponding
subspace. The value of similarity " was selected manually. The method demonstrates
its exceptional property of separating spatially interleaved clusters which is a unique
property of the clustering methods described here.
Figure 10: Spatially interleaved clusters. Top - original point pattern. Bottom -
density-based clustering result.
4.2 Accuracy and computational requirements
Experimental analysis of clustering accuracy is evaluated by measuring the number of
misclassified points with respect to the ground truth. Clustering accuracy is tested for
(1) synthetic point patterns generated using location and density models of clusters
and (2) standard test point patterns (80x, IRIS), which have been used by several
other researches to illustrate properties of clusters (80x is used in [5] and IRIS in [12, 5,
1]). Computational requirements are stated. Experimental results are compared with
four other clustering methods, two hierarchical methods - single link and complete
link, and two partitional - FORGY and CLUSTER [5].
Synthetic and standard point patterns
A point pattern is generated and the points are numbered. Detected clusters are
shown pictorially as sets of points labeled by same number. The common number
for a cluster corresponds to the number of a point that has the minimum value of
its number. Two models were used to generate synthetic point patterns. First, three
locations in a two-dimensional space gave rise to a synthetic pattern with three clusters.
These three locations were perturbed by zero-mean Gaussian noise with various values of
the standard deviation oe. The number of points derived by perturbations of each
location varied as well. Figure 11 shows two realizations of synthetic patterns (left
column). Results obtained from the location-based clustering are shown in Figure 11,
right column.
Second, a 2D synthetic point pattern (64 points) was generated with four clusters
(30, 10, 12, 12 points) of different densities. The point pattern is shown in Figure 12
(left). Points from the pattern were corrupted by uniform noise (\Sigma 0.5) and by
Gaussian noise (Figure 12, middle and right). Results obtained from
density-based clustering method for the point patterns are shown in Figure 13.
Figure 11: Clusters detected by location-based clustering. Left column - three clusters,
with ... points (top) and 60 points (bottom), Gaussian noise; right column - location-based
clustering results corresponding to the left column point patterns.
Figure 12: Synthetic pattern with four clusters of different densities. Internal link
distances between points from the four clusters are equal to 1, 2, 3 and ... (leftmost
panel). Locations of points are corrupted by noise with uniform (middle) and Gaussian
(right) distributions.
Figure 13: Clusters detected by density-based clustering. Clustering results for the
patterns shown in Figure 12. Cluster labels for points without noise (top row), with
uniform noise (middle row) and with Gaussian noise (bottom row). The number above each
plot refers to the value of ".
We selected two standard point patterns obtained from (1) handwritten character
recognition problem (recognition of 80X with 8 features) and (2) flower recognition
problem (Fisher-Anderson iris data denoted as IRIS; recognition of iris setosa, iris
versicolor and iris virginica with 4 features). The features are shown in Figure 14.
The data set 80x contains 45 points with 3 categories each of size 15 points. The
data set IRIS contains 150 points with 3 categories each of size 50. Results expressed
in terms of misclassified points are in Table 3. Decomposition of features followed by
location- and density-based clusterings is explored for each point pattern (80X and
IRIS). It is unknown how to determine the choice of features for the decomposition.
The goal is to create lower dimensional point patterns showing inherent tendency
to form sets of points with similar locations or approximately constant density. An
exhaustive search for the best division in terms of classification error was used.
Comparisons with other clustering methods
Two partitional clustering methods (FORGY, CLUSTER) and two hierarchical clus-
Figure 14: Features for standard point patterns. Features are shown for the 80x (top)
and the IRIS (bottom) standard data (IRIS features: petal width and length, sepal width
and length).
tering methods (single and complete link) were compared with the proposed methods.
The four compared methods are fully described in [5]. The comparison is based on the
number of misclassified points with respect to the ground truth. The two hierarchical
methods were selected because they cluster points using links (clustering by graph
theory) that is similar to the proposed density-based clustering. The other two meth-
ods, FORGY and CLUSTER, cluster points using their coordinates that is similar to
the proposed location-based clustering.
The misclassified points for hierarchical methods were counted from the best possible
non-overlapping point pattern partition (the closest to the ground truth) with
dominant labels within correct clusters. The misclassified points for partitional methods
were counted from the closest partition to the ground truth for the two input
values (variables), (1) a random seed location for the initial clustering and (2) a number
of expected clusters in the result. A summary of clustering results in terms of
misclassified points is provided in Tables 1, 2 and 3 for synthetic and standard data
shown in Figures 11, 12 and 14. The order of methods based on their performance is
shown in the rightmost column of each table. The performance criterion is the sum
of misclassified points for several point patterns with known ground truth clusters
(shown in the second column from right in each table).
The best method for a class of point patterns shown in Figure 11 is the proposed
location-based clustering (see Table 1). A class of point patterns shown in Figure 12
was clustered the most accurately by the proposed density-based clustering (see Table
2). A combination of location- and density-based clusterings applied to 80X and
IRIS data led to the best clustering results (see Table 3). The eight-dimensional point
pattern 80X was decomposed experimentally into two lower-dimensional spaces; one
4-dimensional subspace (features 1,2,7,8) and one 4-dimensional subspace
(features 3,4,5,6) in order to achieve the result stated in Table 3. By applying the
location-based clustering to n 4-dimensional points followed by the density-based
clustering applied to n 4-dimensional points we could separate 0 from 8X and then
8 from X. The four-dimensional point pattern IRIS was decomposed experimentally
as well, but the clustering results were not better than the results from location-based
clustering applied alone. All six methods used for the comparison were applied to
a class of point patterns with spatially interleaved clusters, e.g., Figures 3 and 10.
Proposed density-based clustering outperforms all other methods because it is the
only method that is able to separate spatially interleaved clusters.
Time and memory requirements
The time requirement for running each method is linearly proportional to the number
of processed elements (N point points, N link links) and to the number of used elements
for a sample mean calculation at each element (N CF 2ffi
and N CL 2"
l k
). The number of
processed links N link was reduced by sequential mapping of clusters of links to clusters
of points therefore the time requirement was lowered. Time measurements were
taken for various patterns. For example, the user time needed for clustering a point
pattern (similar to the one in Figure 11, bottom) is on average 0.06 s
Table 1: Number of misclassified points resulting from clustering data in Figure 11.
method / data       misclassified points      P pts   perform. order
locat. clus.        ...                        ...     ...
dens. clus.         1   11    8   11            31     6.
single link         1    2    7    9            19     5.
complete link       1   ...                     ...    2.
Table 2: Number of misclassified points resulting from clustering data in Figure 12.
method / data       no noise   uniform   Gaussian    P    perform. order
locat. clus.          ...        ...       ...       ...   ...
dens. clus.            0          3         1         4    1.
single link            9         14        13        36    6.
complete link          0         11         4        15    2.
Table 3: Number of misclassified points resulting from clustering 80X and IRIS data.
method / data        80x (45 pts)   IRIS (150 pts)    P    perform. order
locat. dens. clus.        7              14           21    1.
locat. clus.             24              14           38    4.
dens. clus.              ...             ...          ...   ...
single link              24              25           49    7.
complete link            12              34           46    6.
...                      ...             ...          ...   2.
per location-based clustering and 1.33 s per density-based clustering on a Sparc
machine. The size of memory required is linearly proportional to the number of
processed elements (N point points, N link links).
4.3 Detection of Gestalt clusters
Gestalt clusters are two-dimensional clusters of points that are perceived by humans
as point groupings [1, 10]. The goal of this Section is to test the properties of the
proposed methods for detecting and describing Gestalt clusters. Properties of the
location- and density-based methods are demonstrated using the data of sample cluster
problems from [1] and [2]. The sample problems [1] are (1) composite cluster
problem, (2) particle track description, (3) touching clusters, (4) touching Gaussian
clusters and (5) density gradient detection.
Each of the problems tackles one or more configurations of clusters in a given point
pattern. The configurations of clusters refer to the properties of individual clusters
in a point pattern by which the clusters are detected. The properties are, for exam-
ple, location and density of clusters, distribution of points within a cluster (Gaussian
clusters), spatial shape of clusters (lines in particle tracks problems), mutual spatial
position of clusters (touching clusters, surrounding clusters), density gradient of
clusters. Point patterns containing clusters with the abovementioned properties are
in
Figures
15, 16, 17 and 18. Results corresponding to Gestalt clusters are shown
as well. All results were obtained using the proposed methods. There are two acceptable
results in Figure 16 for the case of touching clusters with identical densities.
The choice of the method and the similarity parameter of a shown result are made
manually.
The proposed approach to clustering of elements (points or links) gave rise to
location- and density-based clusterings. These clusterings can detect and describe
Gestalt clusters equally well as graph-theoretical methods using minimum spanning
tree [1]. Cases of point patterns similar to Figure require special treatment using
the graph-theoretical methods (detecting and removing denser cluster followed by
clustering the rest of the points). This drawback is not present in the proposed
methods.
4.4 Experimental results on real data.
Experimental results on real data are reported for image texture analysis and syn-
thesis. The analysis is conducted by (1) segmenting image into regions, (2) creating
a point pattern from regions, (3) clustering point pattern and (4) presenting application
dependent results. In the following application, the goal is to represent image
texture in a very concise way suitable for storage or coding of texture. In order to
achieve this goal, a density of texture primitives (homogeneous regions) is assumed
to be constant over the whole texture. Thus a concise representation of textures will
consist of description of primitives and spatial distributions.
Figure
19 (left) shows an image with a regular texture (tablecloth) having approximately
constant density of dark, bright and gray rectangles. A decomposition of the
tablecloth into sets of dark, bright and gray rectangles was performed by (1) creating
a point pattern with three features (centroid locations and average intensity value of
segmented regions), (2) using the density-based clustering (see the result of clustering
in
Figure
20) and (3) separating those regions into one image that gave rise to the
points grouped into one cluster during the clustering. The decomposition is shown in
Figure 21 (top). A possible synthesis of the image is shown in Figure 21 (bottom).
The synthesis starts with painting the background (intensity of the largest region)
followed by laying a region representative from each cluster at all centroid locations
from the cluster. In this way, a textured image is represented more efficiently than
Figure 15: Point pattern showing a composite cluster problem and a problem of the
particle track description. Top - original data, bottom - result of density-based
clustering.
Figure 16: A problem of touching clusters. Top - original data, bottom - results of
density- (left) and location- (right) based clusterings.
Figure 17: A problem of touching Gaussian clusters. Left - original data, right - result
of location-based clustering.
Figure 18: A problem of density gradient detection. Left - original data, right - result
of density-based clustering.
any single pixel or region based description.
5 Conclusion
We have presented a new clustering approach using similarity analysis. Two hierarchical
clustering algorithms, location- and density-based clusterings, were developed
based on this approach. The two methods process locations or point separations denoted
as elements e i . The methods start with grouping elements into clusters C e i
for
every element e i . All elements in C e i
are dissimilar to e i by no more than a fixed
value '. The dissimilarity of two elements is defined as their Euclidean distance. A
sample mean -
of all elements in C e i
is calculated. Clusters are formed by grouping
elements having similar -
. The resulting set of clusters is identified among all clusters
obtained by varying '. Those clusters that do not change over a large interval of
are selected into the final partition in order to minimize intra-cluster dissimilarity
and maximize inter-cluster dissimilarity.
Figure 19: Image texture analysis: Image "Tablecloth". Left to right: original image,
segmented image, contours of segmented image, centroids of segmented regions overlapped
with the original image.
Figure 20: Result of density-based clustering.
A point pattern obtained from Figure 19 is clustered. Numerical labels denote the
clusters corresponding to dark blobs of the tablecloth (label 2), white blobs of the
tablecloth (label 3), a piece of banana shown in the left corner (label 78) and background
with left top triangle and shading of the banana (label 0).
Figure 21: Texture analysis and synthesis. The image shown in Figure 19 is decomposed
and reconstructed. Top row - image decomposition (analysis) based on the clusters shown
in Figure 20; bottom row - image reconstruction (synthesis).
Location-based clustering achieves results identical to centroid clustering. Density-based
clustering can create clusters with points being spatially interleaved and having
dissimilar densities. The separation of spatially interleaved clusters is a unique feature
of the density-based clustering among all existing methods. The accuracy and computational
requirements of the proposed methods were evaluated thoroughly. Synthetic
point patterns and standard point patterns (8Ox - handwritten character recognition,
were used for quantitative experimental evaluation of accu-
racy. Performance of the clustering methods was compared with four other methods.
Correct detections of various Gestalt clusters were shown.
Location- and density-based clusterings were used for image texture analysis. A
texture was defined as a set of uniformly distributed identical primitives. Primitives
were found by segmenting an image into color homogeneous regions. A point pattern
was obtained from textured images by measuring centroid locations and average colors
of primitives. Features of this point pattern were divided into two sets, because
each set of features required a different clustering model. The centroid locations of
primitives were hypothesized to have a uniform distribution; therefore the density-based
clustering was applied to form clusters in this lower-dimensional subspace corresponding
to features of the centroid locations. Properties of primitives, such as color, were
modeled to be identical within a texture, therefore the location-based clustering was
applied to form clusters in the second lower-dimensional subspace corresponding to
the color feature. Resulting texture was identified by combining clustering results
in the two subspaces. In a nutshell, this clustering problem required (1) a point
pattern decomposition into two lower-dimensional point patterns, (2) location- and
density-based clusterings to form clusters from the two point patterns and (3) texture
identification using both clustering results. The decomposition approach motivated
by image texture analysis was explored for point patterns that originated from hand-written
character recognition and taxonomy problems.
The contributions of this work can be summarized as (1) addressing a decomposition
of the clustering problem into two lower-dimensional problems, (2) proposing
a new clustering approach for detecting clusters having a constant property of interior
points, such as location or density, and (3) developing a density-based clustering
method that separates spatially interleaved clusters having various densities.
Acknowledgments
The authors gratefully acknowledge all people who provided the data for exper-
iments. Point patterns from [1] and [2, 21] - Mihran Tuceryan, Texas Instruments;
Standard point patterns 80X and IRIS - Chitra Dorai with permission of Anil Jain;
The authors thank Chitra Dorai and Professor Anil Jain from the Pattern Recognition
and Image Processing Laboratory, Michigan State University, for providing the four
clustering methods. This research was supported in part by Advanced Research
Projects Agency under grant N00014-93-1-1167 and National Science Foundation under
grant IRI 93-19038.
--R
"Graph-theoretical methods for detecting and describing gestalt clusters,"
Extraction of Perceptual Structure in Dot Patterns.
"Dot pattern processing using voronoi neighborhoods,"
"Shape from texture: Integrating texture-element extraction and surface estimation,"
Algorithms for Clustering Data.
analysis.
John Wiley and sons inc.
"Uniformity and homogeneity based hierarchical cluster- ing,"
Pattern Classification and Scene Analysis.
John Wiley and sons inc.
Freeman and Company.
"A binary division algorithm for clustering remotely sensed multispectral images,"
"A new clustering algorithm applicable to multi-scale and polarimetric sar images,"
"Texture segmentation using voronoi polygons,"
The Advanced Theory of Statistics
"A comparison of three exploratory methods for cluster detection in spatial point patterns,"
Models of Spatial Processes.
"Segmentation of multidimensional images,"
"Extraction of early perceptual structure in dot pat- terns: Integrating region, boundary and component gestalt,"
--TR
--CTR
Jos J. Amador, Sequential clustering by statistical methodology, Pattern Recognition Letters, v.26 n.14, p.2152-2163, 15 October 2005
Qing Song, A Robust Information Clustering Algorithm, Neural Computation, v.17 n.12, p.2672-2698, December 2005
Chaolin Zhang , Xuegong Zhang , Michael Q. Zhang , Yanda Li, Neighbor number, valley seeking and clustering, Pattern Recognition Letters, v.28 n.2, p.173-180, January, 2007
Kuo-Liang Chung , Jhin-Sian Lin, Faster and more robust point symmetry-based K-means algorithm, Pattern Recognition, v.40 n.2, p.410-422, February, 2007
Hichem Frigui , Cheul Hwang , Frank Chung-Hoon Rhee, Clustering and aggregation of relational data with applications to image database categorization, Pattern Recognition, v.40 n.11, p.3053-3068, November, 2007
Ana L. N. Fred , Jos M. N. Leito, A New Cluster Isolation Criterion Based on Dissimilarity Increments, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.8, p.944-958, August
Ana L. N. Fred , Anil K. Jain, Combining Multiple Clusterings Using Evidence Accumulation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.6, p.835-850, June 2005 | point patterns;density-based clustering;location-based clustering;hierarchy of clusters;spatially interleaved clusters;clustering |
287330 | Scalable S-To-P Broadcasting on Message-Passing MPPs. | AbstractIn s-to-p broadcasting, s processors in a p-processor machine contain a message to be broadcast to all the processors, 1 sp. We present a number of different broadcasting algorithms that handle all ranges of s. We show how the performance of each algorithm is influenced by the distribution of the s source processors and by the relationships between the distribution and the characteristics of the interconnection network. For the Intel Paragon we show that for each algorithm and machine dimension there exist ideal distributions and distributions on which the performance degrades. For the Cray T3D we also demonstrate dependencies between distributions and machine sizes. To reduce the dependence of the performance on the distribution of sources, we propose a repositioning approach. In this approach, the initial distribution is turned into an ideal distribution of the target broadcasting algorithm. We report experimental results for the Intel Paragon and Cray T3D and discuss scalability and performance. | Introduction
The broadcasting of messages is a basic communication
operation on coarse-grained, message passing
massively parallel processors (MPPs). In the
standard broadcast operation, one processor broadcasts
a message to every other processor. Various
implementations of this operation for architectures
with different machine characteristics have been proposed
[5, 9, 12, 13, 14]. Another well-studied broadcasting
operation is the all-to-all broadcast in which
every processor broadcasts a message to every other
processor [3, 7, 8, 15]. Let p be the number of proces-
sors. Assume that s of the p processors, which we call
source processors, contain a message to be broadcast
to every other processor, 1 - s - p. In this paper
we present broadcasting algorithms that handle all
ranges of s. We report experimental results for s-to-
Research supported in part by ARPA under contract
DABT63-92-C-0022ONR. The views and conclusions contained
in this paper are those of the authors and should not
be interpreted as representing official policies, expressed or
implied, of the U.S. government.
broadcasting algorithms on the Intel Paragon and
discuss their scalability and performance.
In general, quantities influencing scalability, and
thus the choice of which algorithm gives the best per-
formance, include the number of processors, the message
sizes, and the number of source processors [10].
Our algorithms are scalable with respect to p, s, and
the message sizes; i.e., they maintain their speedup
as these parameters change. For s-to-p broadcast-
ing, other factors influence scalability as well. For
any fixed s, a particular algorithm exhibits a different
behavior depending where the s source processors
are located. Each algorithm has ideal distribution
patterns and distribution patterns giving poor
performance. Poor distribution patterns for one algorithm
can be ideal for another. Thus, the location
of the source processors and the relationship of these
locations to the size and dimensions of the architecture
effect the scalability of an algorithm. In order to
study these relationships to the fullest extent, we assume
that every processor knows the position of the
source processors and the size of the messages. This
implies synchronization occurs before the broadcasting
In this paper we describe a number of different
broadcasting algorithms and investigate for each algorithm
its good and bad distribution patterns. We
characterize features of s-to-p broadcast algorithms
that perform well on a wide variety of source dis-
tributions. Some of our algorithms are tailored
towards meshes, others are based on architecture-independent
approaches. We show that algorithms
that
ffl exhibit a fast increase in the number of processors
actively involved in the broadcasting process
and
ffl increase the message length at these processors
as slowly as possible
give the best performance. We show that achieving
these two goals can be difficult for regular machine
sizes (i.e., machines whose dimensions are a power
of 2). This, in turn, implies that good or bad input
distributions cannot be characterized by the pattern
alone. The dimension of the machine plays a crucial
role as well. The performance obtained on ideal
distributions can vary greatly from that obtained on
poor distributions. We propose the approach of repositioning
sources to guarantee a good performance.
The basic idea is to perform a permutation to transform
the given distribution into an ideal distribution
for a particular algorithm which is then invoked to
perform the actual broadcast.
The paper is organized as follows. In Section 2
we describe the algorithms that do not reposition
their sources. In Section 3 we discuss different repositioning
approaches. Section 4 describes the different
source distributions we consider. In Section 5
we discuss performance and scalability of the proposed
algorithms on the Intel Paragon. Section 6
concludes.
Algorithms without Reposi-
tioning
In this section we describe s-to-p broadcasting algorithms
which do not reposition the sources. Our
first class of broadcast algorithms generalizes an efficient
1-to-p broadcasting approach. S-to-p broadcasting
could be done by having each one of the s
source processor initiate a 1-to-p broadcast. How-
ever, having the s broadcasting processes take place
without interaction is inefficient. Our approach is to
let each processor initiate a broadcast, but whenever
messages from different sources meet at a proces-
sor, messages are combined. Further broadcasting
steps proceed thus with larger messages. We use a
Binomial heap broadcasting tree [6, 9] to guide the
broadcasts.
In Algorithm Br Lin, we view the processors
of the mesh as forming a linear array (by using
a snake-like row-major indexing). The existence
of a linear array is not required and the approach
is architecture-independent. If processor P i and its
partner in the other half of the array both contain a message to be
broadcast, they exchange their messages and form
a larger message consisting of the original and the
received message. If only one of the processors contains
a message, it sends it to the other one. Then,
Algorithm Br Lin proceeds recursively on the first
p=2 and the last p=2 processors.
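A simulated sketch of this recursion follows; pairing processor i with processor i + p/2 is one plausible reading of the description, and messages are modelled as Python lists.

```python
def br_lin(messages):
    """Each processor i and its partner in the other half exchange and
    combine whatever they hold; the two halves then recurse.
    `messages` maps processor index -> list of source messages held."""
    p = len(messages)
    if p <= 1:
        return messages
    half = p // 2
    for i in range(half):                        # pairwise exchange across halves
        combined = messages[i] + messages[i + half]
        messages[i] = messages[i + half] = list(combined)
    return br_lin(messages[:half]) + br_lin(messages[half:])

# usage: three sources among eight processors
msgs = [[f"m{i}"] if i in (0, 3, 6) else [] for i in range(8)]
print(br_lin(msgs))   # every processor ends up holding all three messages
```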
Algorithm Br Lin behaves differently for different
machine sizes. Whether the number of processors actively
involved in the broadcasting process increases,
depends on where the source processors are located.
For example, when the input distribution consists of
columns, the first log p=2 iterations introduce no new
sources. For meshes with an odd number of rows,
new sources are introduced in the case of column
distribution.
In order to study the use of only column links or
row links during a single iteration for arbitrary mesh
sizes, we introduce Algorithm Br xy. In Algorithm
Br xy, we first select either rows or columns. Assume
the rows were selected. We then view each row as
a linear array and invoke Algorithm Br Lin within
each row. After this, we invoke Algorithm Br Lin
within each column.
We consider two versions of Algorithm Br xy
which differ on how dimensions are selected. In Algorithm
Br xy source, the number of sources in the
rows and columns determine the order of the dimen-
sions. Recall that every processor knows the positions
of the sources. Every processor determines the maximum
number of sources in a row and the maximum
number of sources in a column.
If the former is the smaller of the two, the rows are selected and Algorithm
Br Lin is invoked on the rows. Otherwise,
the columns are selected first. A reason for choosing
the dimensions in this order is the following. When
the rows contain fewer elements, the broadcasting
done within the rows is likely to generate messages
of smaller size to be broadcast within the columns.
Assume sources are located in a few, say ff, columns,
where ff is small compared to the number r
of rows of the mesh. First broadcasting in the
rows results in every processor containing ff messages
at the time the column broadcast starts.
For the sake of comparison, we also consider a version
of Algorithm Br xy which compares the dimensions
and broadcasts first along longer dimension.
Assume the mesh consists of r rows and c columns.
Algorithm Br xy dim selects the rows if r - c and
the columns if r ! c.
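The dimension choice can be expressed as a small predicate. The comparison used for the Br xy source variant reflects the reasoning above (start with the dimension whose fullest line holds fewer sources); the exact tie-breaking rule is an assumption of the sketch.

```python
import numpy as np

def choose_first_dimension(is_source, variant="source"):
    """is_source: boolean r x c array marking source processors.
    "source" variant (Br_xy_source): start with the dimension whose
    fullest line holds fewer sources, so that messages stay short.
    "dim" variant (Br_xy_dim): simply compare the machine dimensions."""
    r, c = is_source.shape
    if variant == "source":
        max_in_a_row = is_source.sum(axis=1).max()
        max_in_a_col = is_source.sum(axis=0).max()
        return "rows" if max_in_a_row <= max_in_a_col else "columns"
    return "rows" if r >= c else "columns"
```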
In the algorithms described so far processors issue
sends and receives to facilitate communication.
We do not make use of existing communication
operations generally available in communication libraries
[1, 2, 7]. S-to-p broadcasting can easily be
stated in terms of known communication operations.
We considered two such approaches. The first one,
Algorithm Xor, invokes an all-to-all personalized
exchange communication [7]. The second such approach
results in Algorithm 2-Step. This algorithm
performs the broadcast by invoking two regular communication
operations, one s-to-one followed by an
one-to-all operation. In the s-to-one communication,
a designated processor receives the s messages from the source
processors, combines the s messages, and initiates
a one-to-all broadcast.
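Expressed with standard collectives, Algorithm 2-Step could look as follows. This is an mpi4py sketch; the choice of rank 0 as the collecting node is arbitrary and assumed here.

```python
from mpi4py import MPI   # assumption: an MPI environment is available

def two_step_broadcast(my_message):
    """s-to-one gather into rank 0 followed by a one-to-all broadcast.
    Non-source processors pass None as their message."""
    comm = MPI.COMM_WORLD
    gathered = comm.gather(my_message, root=0)             # s-to-one step
    combined = None
    if comm.Get_rank() == 0:
        combined = [m for m in gathered if m is not None]  # drop non-sources
    return comm.bcast(combined, root=0)                    # one-to-all step
```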
3 Algorithms with Reposi-
tioning
On coarse-grained machines like the Paragon, sending
relatively short messages is cheap compared to
the cost of an entire s-to-p operation. At the same
time, experimental results show that the performance
of our s-to-p algorithms can differ by a factor
up to 2 for the same number of sources, depending
on where the sources are positioned. Each algorithm
has its own ideal source distribution. In this section
we consider the approach of repositioning the
sources and then invoking an s-to-p algorithm on its
ideal input distribution.
Algorithm Repos is invoked with one of the algorithms
described in the previous section. For the sake
of an example, assume it is Algorithm Br Lin. The
first step generates Br Lin's ideal input distribution
for s sources on the given machine size and machine
dimension. This is achieved by each source processor
sending its message to a processor determined by the
ideal distribution. We refer to the next section for
a discussion on ideal distributions. Whether it pays
to perform the redistribution depends on the quality
of the initial distribution of sources. We point out
that our current implementation of Algorithm Repos
does not check whether the initial distribution is
actually close enough to an ideal distribution. We
simply perform the repositioning.
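A sketch of the repositioning step is given below. The exact ideal row spacing used in the paper depends on the number of rows and is not reproduced here, so evenly spaced rows serve as a stand-in; function and variable names are assumptions.

```python
def repositioning_targets(sources, r, c):
    """Compute an ideal row distribution for len(sources) sources on an
    r x c mesh and return, for every source processor, the processor it
    should send its message to before the broadcast is invoked."""
    s = len(sources)
    rows_needed = -(-s // c)                        # ceil(s / c)
    spacing = max(r // rows_needed, 1)
    ideal = [(row * spacing) * c + col              # row-major processor index
             for row in range(rows_needed) for col in range(c)][:s]
    return dict(zip(sorted(sources), ideal))        # source -> target processor
```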
Our second class of repositioning algorithms not
only repositions the sources, but also makes use
of the observation that the time for broadcasting
s=2 sources on a p=2-processor machine is less than
half of the time for broadcasting s sources on a p-processor
machine. Assume we partition the p processors
into a group G 1 consisting of p 1 processors
and into a group G 2 consisting of p 2 processors. The
partition of the processors into two groups is independent
of the position of the sources, and may depend
on the choice of the broadcasting algorithm invoked
on each processor group. After the broadcasting
within G 1 and G 2 is completed, every processor
in G 1 (resp. G 2 ) exchanges its data with a processor
in G 2 (resp. G 1 ). This communication step corresponds
to a permutation between the processors in
G 1 and G 2 .
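A simulated sketch of this partitioning idea, reusing a within-half broadcast routine such as the br_lin sketch above; the pairing between the two halves is an implementation choice of the sketch.

```python
def partitioned_broadcast(messages, broadcast_fn):
    """Split the p processors into two fixed halves, broadcast independently
    inside each half, then let processor i of one half swap its combined
    data with processor i of the other half."""
    p = len(messages)
    g1 = broadcast_fn(messages[:p // 2])            # e.g. br_lin
    g2 = broadcast_fn(messages[p // 2:])
    first_half  = [a + b for a, b in zip(g1, g2)]   # pairwise exchange
    second_half = [b + a for a, b in zip(g1, g2)]
    return first_half + second_half
```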
We refer to Algorithm Part Lin as the algorithm
based on this principle and using Algorithm Br Lin
within the sub-machines. We refer to Algorithm
Part xy source as the algorithm based on this principle
and using Algorithm Br xy-source within the
sub-machines.
4 Source Distributions
In this section we discuss different patterns of source
distribution used in our experiments. Some of these
distributions exploit the strengths while others highlight
the weaknesses of the proposed algorithms.
Some are chosen because we expect them to be difficult
distributions for all algorithms. For the sake
of brevity, distributions may only be explained at an
intuitive level. Assume the machine is a mesh of size r \Theta c
with r - c and that processors are indexed
in row-major order. Let i = ds=ce.
ffl Row and Column Distributions: In row distribution R(s), i rows contain the s
source processors. These i
rows are spaced evenly. Every row, with the exception
of the last one, contains c source processors. For a
10 \Theta 10 mesh, R(30) has the source processors positioned
as shown in Figure 1. Column distribution
C(s) is defined analogously.
ffl Equal Distribution: In equal distribution E(s),
processor (1; 1) is a source processor and every dp=se-
th or bp=sc-th processor is a source processor. For
particular values of s, r, and c, E(s) can turn into
a row, column, or diagonal distribution, or exhibit a
rather irregular position of sources.
ffl Right and Left Diagonal Distributions: The right
diagonal distribution, Dr(s), has the s source processors
positioned on i diagonals. We include the diagonal
from (1; 1) to (r; r). The remaining diagonals
are spaced evenly (using modulo arithmetic), with
the last diagonal not necessarily filled with sources.
The left diagonal distribution Dl(s) has source processors
on the diagonal from (1; c) to (r; 1) and spaces
the remaining accordingly.
ffl Band Distribution: The band distribution B(s) is
similar to the right diagonal distribution. The right
diagonal distribution contains i diagonals, each having
width 1. The band distribution B(s) contains
r
e evenly distributed bands, each having width
d s
br
e.
ffl Cross Distribution: The cross distribution C(s)
corresponds to the union of a row and a column dis-
tribution, with the number of source processors in
the row distribution being roughly equal to the number
of processors in the column distribution.
ffl Square Block Distribution: In the square block
distribution, SB(s), the source processors are contained
in a square mesh of size d
se \Theta d
se.
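The sketches below generate two of the distributions as boolean masks. The right diagonal version places the additional diagonals adjacently rather than evenly spaced, purely for brevity.

```python
import numpy as np

def row_distribution(s, r, c):
    """R(s): mark s sources row by row, using evenly spaced rows."""
    mask = np.zeros((r, c), dtype=bool)
    rows_needed = -(-s // c)                      # ceil(s / c)
    step = max(r // rows_needed, 1)
    placed = 0
    for row in range(0, r, step):
        for col in range(c):
            if placed == s:
                return mask
            mask[row, col] = True
            placed += 1
    return mask

def right_diagonal_distribution(s, r, c):
    """Dr(s): fill whole diagonals starting with the main one,
    wrapping columns modulo c."""
    mask = np.zeros((r, c), dtype=bool)
    placed, d = 0, 0
    while placed < s:
        for row in range(r):
            if placed == s:
                break
            mask[row, (row + d) % c] = True
            placed += 1
        d += 1
    return mask
```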
Figure 1 shows three of the above distributions for s =
30. The remainder of this section describes how
the algorithms handle different distributions. The
Figure 1: Placement of source processors in row, cross, and right diagonal distributions
on a 10 \Theta 10 machine (panels: Row, Cross, Diagonal).
performance of the algorithms on these distributions
is discussed in the next section.
Consider first Algorithm Br xy source. One expects
row and column distributions to be ideal source
distributions. Algorithm Br xy source will choose
the first dimension so that the number of source
processors is increased as fast as possible, while the
message length increases as slowly as possible. How-
ever, not all row and column distributions are equally
good. For example, in R(20) on a mesh of size 10\Theta10,
the first and the sixth row contain the source processors
and thus the first iteration does not increase the
number of source processors. Having the 20 sources
positioned in the first and the seventh row eliminates
this. This is an important observation for the
algorithms generating ideal distributions. It shows
that the machine dimension effects the ideal distribution
of sources. The diagonal distribution places
the same number of sources in each row and col-
umn. One would expect Algorithm Br xy source to
perform quite well on diagonal distributions. The
performance of Algorithm Br xy source on the equal
distribution will vary. Cross, square block, and band
distributions should be considerably more expensive
since the source positions may not allow a fast increase
in the number of sources.
The behavior of Algorithm Br Lin on these source
distributions is quite different. First, neither row
or column distribution are ideal distributions for
Br Lin. For an even number of rows, an iteration
achieves no increase in the number of sources on
the column distribution. Consider the row distri-
bution. When the number of rows is a power of 2,
Algorithm Br Lin is actually identical to Algorithm
Br xy source. When the number of rows is odd, communication
in an iteration occurs between processors
not in the same column and congestion will increase.
The equal distribution can turn into a row or a column
distribution and will thus not be ideal either.
The behavior of the algorithm on the left and the
right diagonal distribution can differ (no such difference
exists for Algorithm Br xy source). On a machine
of size 10 \Theta 10, Dr(10) experiences no increase
in the number of sources in the first iteration (since
processor P 50 lies on the diagonal). For other machine
sizes, the right diagonal distribution may not
experience such disadvantage. The left diagonal distribution
is least sensitive towards the size of the
machine and it achieves the desired properties of an
efficient broadcasting algorithm. The remaining distributions
appear to be difficult distributions.
Finally, Algorithm Br xy dim suffers the obvious
drawbacks when the selection of the dimension is
done according to the size of the dimensions and not
according to the number of sources. The ideal distribution
for Algorithm Br xy dim will either be a row
or a column distribution, depending on the dimensions
5 Experimental Results
In this section we report performance results for the
broadcasting algorithms on the Intel Paragon.
We consider machine sizes from 4 to 256 processors
and message sizes from 32 bytes to 16K bytes.
We study the performance over a whole spectrum of
source numbers ranging from 1 to p and a representative
selection of source distributions. In this paper
we report only the performance for the case when
all source processors broadcast messages of the same
length. In our experiments, using different length
messages did not influence the performance of the
algorithms. In particular, for a given algorithm a
good distribution remains a good distribution when
the length of messages varies. Throughout this sec-
tion, we use L to denote the size of the messages at
source processors.
Most implementation issues follow in a straightforward
way from the descriptions given in the previous
sections. We point out that we do not synchronize
globally after each iteration or after one dimension
has been handled. In all our algorithms, as soon as
a processor has all relevant data, it continues.
5.1 Performance of Algorithms without
Repositioning
In the following we first study the scalability of the
five algorithms described in Section 2 for standard
scalability parameters such as machine size, number
of source processors, and message length. We then
consider other relevant parameters, including the distribution
of the source processors, the dimension of
the machine, and the interaction of the dimension
of the machine and the source processor distribution
with respect to a particular algorithm. We show that
these parameters have a significant impact on the
performance.
Figure 2: Performance of algorithms when the number of sources varies from 1 to 100,
assuming equal distribution on a 10 \Theta 10 machine (time in msec vs. number of sources;
curves include Xor, Br Lin, and Br xy source).
The communication operations invoked in Algorithms
Xor and 2-Step use the implementations described
in [7]. In particular, the all-to-all exchange
algorithm views the exchanges as consisting of p permutations
and it uses the exclusive-or on processor
indices to generate the permutations. The most efficient
Paragon implementation of an one-to-all communication
views the mesh as a linear array and applies
the communication pattern used in Algorithm
Br Lin; i.e., processor P i
exchanges a message with its partner in the other half,
and then the one-to-all communication is performed
within each machine half. We did not expect
Algorithms Xor and 2-Step to give good per-
formance. Xor simply exchanges too many messages
and Algorithm 2-Step creates unnecessary communication
bottlenecks. However, we did want to see
their performance against the other proposed algorithms
to show the disadvantage of using existing
communication routines in a brute-force way.
Figure
2 shows the performance of the five algo-
rithms. From this figure it is apparent that Algorithms
2-Step and Xor are not efficient. In particu-
lar, for more than 4 sources, Algorithm 2-Step suffers
congestion at the node which receives all the
messages. Algorithm Xor is inherently inefficient
because of the large number of sends issued by the
source processors. For Algorithm 2-Step, the rate of
increase in the execution time is steeper than the increase
in number of sources. This is due to the fact
that as the number of source processors increases,
the bottleneck processor in Algorithm 2-Step receives
more messages in the first step and sends out more
data in the second step. However, in the case of Algorithm
Xor, with the increase in number of source
processors, the increase in the number of sources is
more distributed among all processors.
Figure 3: Performance of algorithms when L varies from 32 bytes to 16K, keeping s = 30,
on a machine with right diagonal distribution (time in msec vs. message length in bytes).
Figure 4: Performance of algorithms when the machine size varies, assuming approximately
... sources in a right diagonal distribution (time in msec vs. machine size).
The bandwidth of the network is high enough to
handle this type of increased communication volume
better. The performance of the other three algo-
rithms, Br Lin, Br xy source, and Br xy dim scales
linearly with the increase in number of sources. Depending
on the number of sources and how the equal
distribution places sources in the machine, the performance
of these algorithms differs slightly.
Figure
3 shows the performance for a right diagonal
distribution with when the message size
changes. As already stated, the diagonal distributions
place the same number of sources in the rows
and columns. Once again, regardless of how small
a message size, Algorithms 2-Step and Xor perform
poorly. The almost flat curve up to a message size of
1K for Algorithm Xor further supports our observation
related to Figure 2. The other three algorithms
experience little increase in the time until
bytes. Then we see a linear increase.
Figure
4 shows the behavior of the five algorithms
when the machine size varies from 4 to 256 processor.
Algorithm Xor is as good as any other algorithm for
small machine sizes (4 to 16 processors). This feature
is also observed when the number of sources is close
to p for small machine sizes.
The first three figures give the impression that algorithms
Br Lin, Br xy source, and Br xy dim give
the same performance. However, this is not true.
In the following we show that different distributions
and different machine sizes affect these algorithms in
different ways.
Figure 5: Performance of three algorithms on a 10x10 machine, assuming different source
distributions (time in msec; distributions on the horizontal axis include row, diagonal,
block, and cross).
Figure 5 shows the performance for a fixed number of sources
using different distribution patterns. The figure
confirms the discussion given in Section 4 with respect
to ideal and difficult distributions. Algorithm
Br xy source gives roughly the same performance on
the first 4 distributions, but for the square block and
cross distribution we see a considerable increase in
time. We point out that the same performance on
the first 4 distributions for Br xy source is not true in
general. However, the row and the column distribution
show up as ideal distributions. Square block and
cross distributions require more time for all three al-
gorithms. As expected, Algorithm Br Lin performs
best on them. This is due to the fact that in Algorithm
Br Lin sources can spread to different rows
and columns in the first few iterations, thus utilizing
the links more efficiently. On the other hand, for the
square block distribution, Algorithms Br xy source,
Br xy dim have only few columns and rows available
to generate new sources. The big increase in Algorithm
Br xy dim for the row distribution indicates
the importance of choosing the right dimension first.
Figure 6: Performance of three algorithms on a 10x10 machine with a right diagonal
distribution. The total message size is kept at 80K and the number of sources varies
(time in msec vs. number of sources).
Figure
6 shows the performance of the three algorithms
when the total message size (i.e., the sum of
the message sizes in the source processors) is fixed.
An interesting aspect of the performance curves is
that if the data is spread among a larger number
of sources, the broadcast operation is accomplished
faster. For example the 80K size, data spread among
5 sources takes approximately 11.4 msec using Algorithm
Br xy source. However, the same amount of
data spread among 40 sources to begin with takes
only 7.3 msec. This plot highlights our claim made
earlier that for a given amount of data a larger number
of sources involved in broadcasting yields faster
execution times.
Figure 7: Performance of Algorithm Br Lin when p is fixed and the dimensions vary,
assuming equal distribution. Three source sizes are shown and L = 4K in all the cases.
Figure
7 shows the performance of three algorithms
when the dimensions of the machine
vary. It demonstrates that performance is related
to the sizes of the dimensions. For the same number
of sources, message size, and number of pro-
cessors, a distribution gives different performance
(hence is considered good or bad) depending on the
dimensions of the machine. For a small number of sources,
the machine dimensions may
not affect the performance. For a large number
of sources, machine dimensions impact the performance
considerably more. It may seem like an anomaly
to have faster performance for some of the dimensions.
The reason lies in the distribution and the number
of rows. For certain numbers of rows the equal distribution
tends to place the source processors within columns.
This does not allow a fast increase in the number of
sources. On the other hand, for the remaining dimensions the source
processors are, with the exception of size 4 \Theta 30, positioned
along diagonals.
5.2 Performance of Algorithms with
Repositioning
Algorithms Br xy source and Br Lin exhibit good
performance for a variety of source distributions and
machine dimensions. However, each algorithm has
source distributions which exhibit the algorithm's
weaknesses. In addition, the relationship between
source distribution and machine dimension can result
in a performance loss. The algorithm cannot
change the machine dimension, but the problems
arising from the source distribution can be avoided
by performing a repositioning. In Section 3 we described
a repositioning and a partitioning approach.
We next discuss the performance of the repositioning
approach using Algorithm Br xy source. We use
the row distribution as the ideal source distribution
for Br xy source. Similar results hold for the repositioning
algorithm using Br Lin with the left diagonal
distribution as the ideal source distribution.
Let Algorithm Repos xy source be the repositioning
algorithm invoking Br xy source. In this algorithm
we first perform a permutation to redistribute
source processors according to the row distribution.
We generate a row distribution that positions the
rows so that the number of new sources increases as
fast as possible (the exact position of the rows depends
on the number of rows of the mesh). The cost
of the permutation depends on s, on where the s source processors are located, and on the message length.
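As a rough illustration (this is our own sketch, not the exact placement rule used in the implementation, which also chooses which rows to use based on the mesh), the repositioning step can be viewed as computing a target cell in a row distribution for each of the s sources and then permuting the s messages to those cells:

def row_distribution_targets(s, cols):
    # Assign the s source messages to the first ceil(s/cols) rows, filled left to right.
    # Returns the (row, column) cell of the i-th repositioned source.
    return [(i // cols, i % cols) for i in range(s)]

print(row_distribution_targets(5, 16))  # first 5 cells of row 0 on a mesh with 16 columns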
In Figure 8 we show the percentage difference between
Algorithms Repos xy source and Br xy source
on four input distributions when the number of
sources increases from 16 to 192. The machine size is fixed and the message size is 6K bytes. Repositioning pays for all distributions except the band distribution, for which repositioning costs up to 6.5% more. Translating this into
actual time, when s is less than 150, Repos xy source
Figure 8: The difference between Repos xy source and Br xy source on different input distributions (equal, cross, band, square block) for a machine with the number of sources varying (x-axis: number of sources; y-axis: percentage difference).
costs between 1 and 2 msec more. For the largest source numbers, repositioning costs 7.5 msec more, indicating that repositioning is expensive for large source numbers.
The repositioning approach results in a significant
gain for the cross and square block distributions. In
terms of actual time, the gain for repositioning on
the cross distributions lies between 13 and 31 msec.
A gain of 13 msec is observed at the smallest source number; for all other source numbers the gain lies between 20
and 31 msec.
The conclusion of our experimental study is that
repositioning pays (i.e., the cost of the initial permutation
is less than the gain resulting from working
with an ideal source distribution) unless (see the sketch after this list):
• the number of sources is large (s > p/2 appears to be the breakpoint), or
• the number of processors is small (for p <= 16, there appears to be little difference between the algorithms and different source distributions), or
• the message length is small (i.e., less than 1K).
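The following small sketch (ours, not part of the measured implementations) simply encodes these three empirical conditions as a predicate for deciding whether to reposition:

def should_reposition(num_sources, num_processors, message_bytes):
    # Heuristic distilled from the experiments above; thresholds are the observed breakpoints.
    if num_sources > num_processors / 2:   # too many sources: the permutation costs more than it saves
        return False
    if num_processors <= 16:               # small machines: little difference between algorithms
        return False
    if message_bytes < 1024:               # short messages: startup costs dominate
        return False
    return True

print(should_reposition(75, 256, 6 * 1024))   # True for 75 sources on a 16x16 mesh with 6K messages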
Clearly, if the input distribution is close to an ideal
distribution, it does not pay to reposition. However,
none of our algorithms tries to analyze the input dis-
tribution. The effect of the message length on the
repositioning is illustrated in Figure 9. The figure
shows the percentage difference for the same four
distributions on a 16 x 16 machine and 75 sources
when the message length increases. For a message
size of less than 1K, repositioning pays only for the
cross distribution. As the message size increases, the
benefit of repositioning increases for all distributions.
Not surprisingly, for large message length, the gain
tapers off and decreases for some distributions.
In Section 3 we also proposed to combine the repositioning
with a partitioning approach. We first generate
an ideal source distribution and then create two
Figure 9: The difference between Repos xy source and Br xy source on different input distributions (equal, cross, band, square block) for a machine with the message length varying (x-axis: message length; y-axis: percentage difference).
broadcasting problems, each on one half of the ma-
chine. Let Algorithm Part xy source be such a partitioning
algorithm using Br xy source within each
machine half. We have compared Part xy source
against the performance of Repos xy source and
Br xy source. Our results showed that for the Intel
Paragon the partitioning approach hardly ever gives
a better performance than repositioning alone. The
reason lies in the cost of the final permutation. The
exchange of long messages done in the final step dominates
the performance and eliminates the gain obtained
from broadcasting on smaller machines. The
performance of partitioning algorithms could be different
for machines of other characteristics, but we
conjecture that for other currently available MPPs
the outcome will be the same.
6 Conclusions
We described different s-to-p broadcasting algorithms
and analyzed their scalability and performance
on the Intel Paragon. We showed that the
performance of each algorithm is influenced by the
distribution of the source processors and by the relationship between the distribution and the dimensions of
the machine. Each algorithm has ideal distributions
and distributions on which the performance
degrades. To reduce the dependence of the performance
on the input distribution we proposed a repositioning
approach. In this approach the given distribution
is turned into an ideal distribution of the target
broadcasting algorithm. We have also compiled
and run our programs under the MPI environment, using
MPI point-to-point message passing primitives.
We have observed a performance loss of 2 to 5% in
every implementation.
--R
"CCL: A Portable and Tunable Collective Communication Library for Scalable Parallel Computers,"
"Interprocessor Collective Communication Library,"
"Multiphase Complete Exchange on a Circuit Switched Hypercube,"
"Benchmarking the CM-5 Multicomputer,"
"On the Design and Implementation of Broadcast and Global Combine Operations Using the Postal Model,"
Introduction to Algorithms
"Communication Operations on Coarse-Grained Mesh Architectures,"
"An Architecture for Optimal All-to-All Personalized Communication,"
"Opti- mal Broadcast and Summation in the LogP Model,"
Introduction to Parallel Computing
"Multicast in Hypercube Multiprocessors,"
"Performance Evaluation of Multicast Wormhole Routing in 2D- Mesh Multicomputers,"
"Optimum Broadcasting and Personalized Communication in Hypercubes,"
"Many-to- Many Communication With Bounded Traffic,"
"All-to-all Communication on Meshes with Wormhole Routing,"
--TR
--CTR
Yuh-Shyan Chen , Chao-Yu Chiang , Che-Yi Chen, Multi-node broadcasting in all-ported 3-D wormhole-routed torus using an aggregation-then-distribution strategy, Journal of Systems Architecture: the EUROMICRO Journal, v.50 n.9, p.575-589, September 2004 | scalability;message-passing MPPs;broadcasting;communication operations |
287341 | Deterministic Voting in Distributed Systems Using Error-Correcting Codes. | AbstractDistributed voting is an important problem in reliable computing. In an N Modular Redundant (NMR) system, the N computational modules execute identical tasks and they need to periodically vote on their current states. In this paper, we propose a deterministic majority voting algorithm for NMR systems. Our voting algorithm uses error-correcting codes to drastically reduce the average case communication complexity. In particular, we show that the efficiency of our voting algorithm can be improved by choosing the parameters of the error-correcting code to match the probability of the computational faults. For example, consider an NMR system with 31 modules, each with a state of m bits, where each module has an independent computational error probability of 103. In this NMR system, our algorithm can reduce the average case communication complexity to approximately 1.0825m compared with the communication complexity of 31m of the naive algorithm in which every module broadcasts its local result to all other modules. We have also implemented the voting algorithm over a network of workstations. The experimental performance results match well the theoretical predictions. | Introduction
Distributed voting is an important problem in the creation of fault-tolerant computing systems,
e.g., it can be used to keep distributed data consistent, to provide mutual exclusion in distributed
systems. In an N Modular Redundant (NMR) system, when the N computational modules
execute identical tasks, they need to be synchronized periodically by voting on the current
computation state (or result; the two terms are used interchangeably hereafter), and then all
modules set their current computation state to the majority one. If there is no majority result,
then other computations are needed, e.g., all modules recompute from the previous result. This
technique is also an essential tool for task-duplication-based checkpointing [12]. In distributed
storage systems, voting can also be used to keep replicated data consistent.
Many aspects of voting algorithms have been studied, e.g., data approximation, reconfigurable
voting and dynamic modification of vote weights, metadata-based dynamic voting[3][5][9]. In
this paper, we focus on the communication complexity of the voting problem. Several voting
algorithms have been proposed to reduce the communication complexity [4][7]. These algorithms
are nondeterministic because they perform voting on signatures of local computation results.
Recently, Noubir et al. [8] proposed a majority voting scheme based on error-control codes:
each module first encodes its local result into a codeword of a designed error-detecting code
and sends part of the codeword. By using the error-detecting code, discrepancies of the local
results can be detected with some probability, and then by a retransmission of full local results,
a majority voting decision can be made. Though the scheme drastically reduces the average case
communication complexity, it can still fail to detect some discrepancies of the local results and
might reach a false voting result, i.e., the algorithm is still a probabilistic one. In addition, this
scheme is only using the error detecting capabilities of codes. As this paper will show, in general,
using only error-detecting codes ( EDC ) does not help to reduce communication complexity of
a deterministic voting algorithm. Though they have been used in many applications such as
reliable distributed data replication [1], error-correcting codes ( ECC ) have not been applied to
the voting problem.
For many applications[12], deterministic voting schemes are needed to provide more accurate
voting results. In this paper, we propose a novel deterministic voting scheme that uses error-
correcting/detecting codes. The voting scheme generalizes known simple deterministic voting
algorithms. Our main contributions related to the voting scheme include: (i) using the correcting
in addition to the detecting capability of codes ( only the detection was used in known schemes )
to drastically reduce the chances of retransmission of the whole local result of each node, and thus the
communication complexity of the voting, (ii) a proof that our scheme provably reaches the same
voting result as the naive voting algorithm in which every module broadcasts its local result to
all other modules, and (iii) the tuning of the scheme for optimal average case communication
complexity by choosing the parameters of the error-correcting/detecting code, thus making the
voting scheme adaptive to various application environments with different error rates.
The paper is organized as follows: in Section 2, we describe the majority voting problem in
NMR systems. Our voting algorithm together with its correctness proof are described in Section
3. Section 4 analyzes both the worst case and the average case communication complexity of
the algorithm. Section 5 presents experimental results of performances of the proposed voting
algorithm, as well as two other simple voting algorithms for comparison. Section 6 concludes the
paper.
2 The Problem Definition
In this section, we define the model of the NMR system and its communication complexity. Then
we address the voting problem in terms of the communication complexity.
2.1 NMR System Model
An NMR system consists of N computational modules which are connected via a communication
medium. For a given computational task, each module executes a same set of instructions
with independent computational error probability p. The communication medium could be a
bus, a shared memory, a point-to-point network or a broadcast network. Here we consider the
communication medium as a reliable broadcast network, i.e., each module can send its computation
result to all other modules with only one error-free communication operation. The system
evolution is considered to be synchronous, i.e., the voting process is round-based.
2.2 Communication Complexity
The communication complexity of a task in an NMR system is defined as the total number of
bits that are sent through the communication medium in the whole execution procedure of the
task. In a broadcast network, let m ij be the number of the bits that the ith module sends
at the jth round of the execution of a task; then the communication complexity of the task is Σ_{i=1}^{N} Σ_{j=1}^{K} m_ij, where N is the number of the modules in the system, and K is the number of
rounds needed to complete the task.
2.3 The Voting Problem
Now consider the voting function in an NMR system. In an NMR system, in order to get a
final result for a given task, after each module completes its own computation separately, it
needs to be synchronized with other modules by voting on the result. Denote X i as the local
computational result of the ith module, the majority function is defined as follows:
Majority(X_1, ..., X_N) = X if at least ⌈N/2⌉ of the X_i's are equal to X, and φ otherwise,
where in general, N is an odd natural number, and φ is any predefined value different from all
possible computing results.
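As a small illustration (our own sketch, not part of the paper), the majority function can be computed as follows, with None playing the role of φ:

from collections import Counter

def majority(results, phi=None):
    # Return the value held by more than half of the N modules, or phi if no such value exists.
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else phi

print(majority(["0011", "0011", "0010", "0011", "0011"]))   # '0011'
print(majority(["00", "01", "10", "01", "11"]))             # None, i.e., phi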
Example 1 If all N local results X_i are equal, then Majority(X_1, ..., X_N) is that common value; if a single X_i changes (say to 0010) while the other X_i's remain equal, the majority result is unchanged.
The result of voting in an NMR system is that each module gets Majority(X_1, ..., X_N) as the final result, where X_i is the local computation result of the ith module.
One obvious algorithm for the voting problem is that after each module computes the task,
it broadcasts its own result to all the other modules. When a module receives all other modules'
results, it simply performs the majority voting locally to get the result. The algorithm can be
described as follows:
Algorithm 1 Send-All Voting
For Module P_i (1 ≤ i ≤ N):
Broadcast(X_i);
Wait Until Receive all X_j, j ≠ i;
Return Majority(X_1, ..., X_N);
This algorithm is simple: each module only needs one communication (i.e., broadcast) oper-
ation, but apparently its communication complexity is too high. If the result for the task has
m bits, then the communication complexity of the algorithm is Nm bits. In most cases, the
probability of a module to have a computational error is rather low; hence, most of the time all
modules shall have the same result, thus each module only needs to broadcast part of its result.
If all the results are identical, then each module shall agree with that result. If not, then modules
can use Algorithm 1. Namely, we can get another simple improved voting algorithm as follows:
Algorithm 2 Simple Send-Part Voting
For Module P_i (1 ≤ i ≤ N):
Partition the local result X_i into N symbols: X_i = X_i(1) * X_i(2) * ... * X_i(N);
Broadcast(X_i(i));
Wait Until Receive all X_j(j), j ≠ i;
X := X_1(1) * X_2(2) * ... * X_N(N); F_i := (X_i == X); Broadcast(F_i);
Wait Until Receive all F_j, j ≠ i;
If Majority(F_1, ..., F_N) == 1
Return X;
Else
Broadcast(X_i); Wait Until Receive all X_j, j ≠ i;
Return Majority(X_1, ..., X_N);
In the above algorithm, * is a concatenation operation of strings, e.g., 00 * 11 = 0011, and == is an equality evaluation: (A == B) is TRUE if A = B and FALSE otherwise.
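For concreteness, the following simulation sketch (ours; the broadcast network is abstracted away as shared lists, and padding is ignored) traces the two regular rounds of Algorithm 2 and its fallback:

from collections import Counter

def simple_send_part_voting(local_results):
    # local_results[i] is the bit string X_i held by module P_i; all strings have the same length.
    n = len(local_results)
    sym = len(local_results[0]) // n
    # Round 1: module i broadcasts its own i-th symbol X_i(i); everyone reassembles the same X.
    candidate = "".join(x[i * sym:(i + 1) * sym] for i, x in enumerate(local_results))
    # Round 2: each module broadcasts F_i = (X_i == X) and votes on the flags.
    flags = [x == candidate for x in local_results]
    if sum(flags) > n // 2:
        return candidate
    # Else: Send-All Voting (Algorithm 1); this sketch assumes a majority result exists.
    value, count = Counter(local_results).most_common(1)[0]
    return value if count > n // 2 else None

print(simple_send_part_voting(["00000"] * 5))                                   # '00000' after 2 rounds
print(simple_send_part_voting(["10000", "00000", "00000", "00000", "00000"]))   # '00000' via the fallback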
Some padding may be needed if the local result is not an exact multiple of N. The following
example demonstrates a rough comparison of the two algorithms.
Example 2 Let N = 5 and m = 5. With Algorithm 1, one round of communication is needed, and in total 25 bits are transmitted. With Algorithm 2, if all five local results are identical (say X_i = 00000 for every i), each module broadcasts the single bit 0, the reassembled X = 00000, every F_i = 1, and X is the majority voting result; in this case, 2 rounds of communication are done, and 10 bits ( 5 bits for X and 5 bits for F ) are transmitted. If instead one module's result differs in the bit it broadcasts, the reassembled X disagrees with the majority of the X_i's, so the Else part of the algorithm is executed, and the majority voting result is finally obtained by voting on all the X_i's; in this case, 3 rounds of communication are needed, and in total Nm + N = 30 bits ( 25 bits for the X_i's and 5 bits for F ) are transmitted. 2
From the above example, it can be observed:
1. Algorithm 1 always requires only 1 round of communication, and Algorithm 2 requires 2
or 3 rounds of communication;
2. The Else part of Algorithm 2 is actually Algorithm 1;
3. The communication complexity of Algorithm 1 is always Nm, but the communication
complexity of Algorithm 2 may be m+N or Nm+N , depending on the X i 's;
4. In Algorithm 2, by broadcasting and voting on the voting flags, i.e., F i 's, the chance for
getting a false voting result is eliminated.
If the Else part of Algorithm 2, i.e., Algorithm 1, is not executed too often, then the communication
complexity can be reduced from Nm to m+N, and in most cases m ≫ N, thus m+N ≈ m. So the key idea used to reduce the communication complexity is to reduce the
chance to execute Algorithm 1. In most computing environments, each module has low computational
error probability p, thus most probably all modules either (1) get the same result or (2)
only few of them get different results from others. In case (1), Algorithm 2 has low communication
complexity, but in case (2), Algorithm 1 is actually used and the communication complexity
is high (i.e., Nm+N) , but if we can detect and correct these discrepancies of the minor modules'
results, then the Else part of the Algorithm 2 does not need to be executed, the communication
complexity can still be low. This detecting and correcting capability can be achieved by using
error-correcting codes.
3 The Solution Based on Error-Correcting Codes
Error-correcting codes ( ECC ) can be used in the voting problem to reduce the communication
complexity. The basic idea is that instead of broadcasting its own computation result X i di-
rectly, P i , the ith module, first encodes its result X i into a codeword Y i of some code, and then
broadcasts one symbol of the codeword to all other modules. After receiving all other symbols of
the codeword, it reassembles them into a vector again. If all modules have the same result, i.e.,
all are equal, then the received vector is the codeword of the result, thus it can be decoded
to the result again. If the majority result exists, i.e., Majority(X_1, ..., X_N) = X ≠ φ, and there are t (t ≤ ⌊N/2⌋) modules whose results are different from the majority result X, then the symbols from
all these modules can be regarded as error symbols with respect to the majority result. As long
as the code is designed to correct up to t errors, these error symbols can be corrected to get the
codeword corresponding to the majority result, thus Algorithm 1 does not need to be executed.
When the code length is less than Nm, the communication complexity is reduced compared to
Algorithm 1. On the other hand, if only error-detecting codes are used, once error results are
detected, Algorithm 1 still needs to be executed, and thus increases the whole communication
complexity of the voting. Thus error-correcting codes are preferable to error-detecting codes
for voting. By properly choosing the error-correcting codes, the communication complexity can
always be lower than that of Algorithm 1.
But it is possible that the majority result does not exist, i.e., Majority(X_1, ..., X_N) = φ; in this case, the vector that each module gets can still be decoded to a result. As observed from the above
example, introduction of the voting flags can avoid this false result.
3.1 A Voting Algorithm with ECC
With a properly designed error-correcting code which can detect up to d and correct up to t error symbols, the voting algorithm using this code is as follows:
Algorithm 3 ECC Voting
For Module P_i (1 ≤ i ≤ N):
Encode X_i into a codeword of N symbols: Y_i = Y_i(1) * Y_i(2) * ... * Y_i(N);
Broadcast(Y_i(i));
Wait Until Receive all Y_j(j), j ≠ i;
Y := Y_1(1) * Y_2(2) * ... * Y_N(N);
If Y is undecodable
Execute Algorithm 1 (Send-All Voting);
Else
Decode Y into X; F_i := (X_i == X); Broadcast(F_i);
Wait Until Receive all F_j, j ≠ i;
If Majority(F_1, ..., F_N) == 1
Return X;
Else
Execute Algorithm 1 (Send-All Voting);
Notice that to execute Algorithm 1, each module P_i does not need to send its whole result X_i. It only needs to send an additional N − (d + t) − 1 symbols of its codeword Y_i. Since the code is designed to detect up to d and correct up to t symbols, it can correct up to d + t erasures, so the unsent d + t symbols of Y_i can be regarded as erasures and recovered; hence the original X_i can be decoded from Y_i.
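The overall control flow of Algorithm 3 can be sketched as below (our own sketch); the encoder and decoder are passed in as functions, and for the usage example we plug in the degenerate repetition code (k = m), which, as discussed later, reduces Algorithm 3 to Algorithm 1; a practical instance would instead use an MDS code such as EVENODD or Reed-Solomon with much smaller symbols.

from collections import Counter

def ecc_voting(local_results, encode, decode, send_all):
    # Structural sketch of Algorithm 3; encode(x, n) returns an n-symbol codeword of x,
    # decode(symbols) returns the decoded result or None if the vote vector is undecodable.
    n = len(local_results)
    vote_vector = [encode(x, n)[i] for i, x in enumerate(local_results)]  # round 1
    decoded = decode(vote_vector)
    if decoded is None:
        return send_all(local_results)             # branch 3: undecodable vote vector
    flags = [x == decoded for x in local_results]  # round 2: vote on the flags F_i
    if sum(flags) > n // 2:
        return decoded                             # branch 1: majority confirms X
    return send_all(local_results)                 # branch 2: decoded X rejected by the flags

# Toy instantiation with the repetition code (every symbol is the full result).
rep_encode = lambda x, n: [x] * n
def vote(values):
    value, count = Counter(values).most_common(1)[0]
    return value if count > len(values) // 2 else None
print(ecc_voting(["000000"] * 4 + ["000001"], rep_encode, vote, vote))   # '000000'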
To see the algorithm more clearly, the flow chart of the algorithm is given in Fig. 1, and the
following example shows how the algorithm works.
Figure 1: Flow chart of Algorithm 3 (each module encodes X_i into N symbols Y_i(1) * ... * Y_i(N), broadcasts Y_i(i), reassembles and decodes the vote vector Y, votes on the flags F_1, ..., F_N, and falls back to Send-All Voting when Y is undecodable or the flags have no majority).
Example 3 There are 5 modules in an NMR system, and the task result has 6 bits, i.e., N = 5 and m = 6. Here the EVENODD code [2] is used, which divides the 6-bit information into 3 symbols
and encode information symbols into a 5-symbol codeword. This code can correct 1 error symbol,
Now if the first four modules obtain X_i = 000000 and the fifth module obtains an erroneous result, then with the EVENODD code, after each module broadcasts 1 symbol (i.e., 2 bits) of its codeword, the reassembled vector is Y = 0000000001. Since Y has only 1 error symbol, it can be decoded into X = 000000. From the flow chart of the algorithm, we can see that the majority of the flags F_i is 1, so X = 000000 is the majority voting result.
In this case, there are 2 rounds of communication, and the communication complexity is 15 bits. As a comparison, Algorithm 1 needs 1 round of communication, and its communication complexity is 30 bits; on the other hand, Algorithm 2 needs 3 rounds of communication, and the communication complexity in this case is 35 bits. In this example, the EVENODD code is used, but actually the specific code does not affect the communication complexity as long as it has the same properties as the EVENODD code, namely, an MDS code with the same symbol size and error-correcting capability.
From the flow chart of the algorithm, the introduction of voting on F i 's ensures not to reach a
false voting result, and going to the Send-All Voting in worst case guarantees not to fail to reach
the majority result if it exists. Thus the algorithm can give a correct majority voting result. A
rigorous correctness proof of the algorithm is as follows.
3.2 Correctness of the Algorithm
Theorem 1 Algorithm 3 gives Majority(X_1, ..., X_N) for any set of local computational results X_1, ..., X_N.
Proof: From the flow chart of the algorithm, it is easy to see that the algorithm terminates in
the following two cases:
1. Executing the Send-All Voting algorithm: the correct majority voting result is certainly
reached;
2. Returning an X: in this case, since Majority(F_1, ..., F_N) = 1, the majority of the X_i's are equal to X, so X is the majority result. 2
To see how the algorithm works with various cases of the local results X_i's, we give two stronger observations about the algorithm, which also help to analyze the communication
complexity of the algorithm.
Observation 1 If Majority(X_1, ..., X_N) = φ, then Algorithm 3 outputs φ, i.e., Algorithm 3 never gives a false voting result.
Proof: It is easy to see from the flow chart that after the first round of communication, each
module gets a same vote vector Y . According to the decodability of Y, there are two cases:
1. If Y is undecodable, then the Send-All Voting algorithm is executed, and the output will
be φ;
2. If Y is decodable, the decoded result X now can be used as a reference result. But since
there does not exist a majority voting result, the majority of the X_i's are not equal to X, i.e., Majority(F_1, ..., F_N) ≠ 1, so the Send-All Voting algorithm is executed, and the output again will be φ. 2
Observation 2 If the majority voting result exists, i.e., Majority(X_1, ..., X_N) = X ≠ φ, then Algorithm 3's output is exactly the X, i.e., Algorithm 3 will not miss the majority voting result.
Proof: Suppose there are e modules whose local results are different from the majority result
X, then e ≤ ⌊N/2⌋.
1. If e - t, then there are e error symbols in the vote vector Y with respect to the corresponding
codeword of the majority result X, so Y can be correctly decoded into X, and the majority of the X_i's are equal to X, i.e., the majority of the F_i's is 1; hence the correct majority result X is outputted.
2. If e > t, then Y is either undecodable or incorrectly decoded into another X', where X' ≠
X. In either case, the Send-All voting algorithm is executed and the correct majority result X is
reached. 2
3.3 Proper Code Design
In order to reduce the communication complexity, we need an error-correcting code which can be
used in practice for Algorithm 3. Consider a block code with length M. Because of the symmetry
among the N modules, M needs to be a multiple of N, i.e., each codeword consists of N symbols,
and each symbol has k bits, thus M = Nk. If the minimum distance of the code is d_min, then we need d_min ≥ (d + t)k + 1, since the code should be able to detect up to d error symbols and correct up to t error symbols [6]. Recall that the final voting result has m bits; the code to design is thus an (Nk, m, (d + t)k + 1) binary block code.
To get the smallest value for k, by the Singleton Bound in coding theory [6], we get
(d + t)k + 1 ≤ Nk − m + 1, i.e., k ≥ m/(N − d − t).   (2)
Equality holds for all MDS codes [6]. So given a designed (d, t), the smallest value for k is ⌈m/(N − d − t)⌉. If m/(N − d − t) is an integer, all MDS codes can achieve this lower bound of k. One class of commonly used MDS codes for arbitrary
distances is the Reed-Solomon code [6]. If m/(N − d − t) is not an integer, then any (Nk, m, (d + t)k + 1) block code can be used, where k = ⌈m/(N − d − t)⌉; one of the examples is the BCH code, which can
also have arbitrary distances [6]. The exact parameters of (k, d, t) can be obtained by shortening (setting some information symbols to zeros) or puncturing (deleting some parity symbols) the code.
Notice that Nk ≈ Nm/(N − d − t), which is smaller than Nm as long as d + t < N − 1. In most applications, N ≪ m, thus the N
bits of F i 's can be neglected, and k is approximately the number of the bits that each module
needs to send to get final voting result, so the communication complexity of Algorithm 3 is always
lower than that of Algorithm 1.
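Following the Singleton-bound argument above, the required symbol size and the resulting first-round traffic can be computed as in this small sketch (ours):

from math import ceil

def symbol_bits(m, n, d, t):
    # Smallest symbol size k (in bits) so that an (Nk, m) code can detect d and correct t symbol errors.
    assert 0 <= t <= d and n - d - t >= 1
    return ceil(m / (n - d - t))

m, n = 29 * 1000, 31                      # an m divisible by N - d - t, so the bound is met exactly
k = symbol_bits(m, n, d=1, t=1)
print(k, n * k, n * m)                    # 1000 bits per module; 31000 bits in round 1 vs. 899000 for Send-All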
In this paper, only the communication complexity of voting is considered, since in many
systems, computations for encoding and decoding on individual nodes are much faster than
reliable communications among these nodes, which need rather complicated data management
in different communication stacks, retransmission of packets between distributed nodes when
packet loss happens. However, in real applications, design of proper codes should also make the
encoding and decoding of the codes as computationally efficient as possible. When the distances
of codes are relatively small, which is the case for most applications when the error probability p
is relatively low, more computation-efficient MDS codes exist, such as codes in [2], [10] and [11],
all of which require only bitwise exclusive OR operations.
4 Communication Complexity Analysis
4.1 Main Results
From the flow chart of Algorithm 3, we can see if the algorithm terminates in branch 1, i.e., the
algorithm gets a majority result, then the communication complexity is N(k+1); if it terminates
in branch 2, then the communication complexity is N(m+1); finally if the algorithm terminates in
branch 3, the communication complexity is Nm, thus the worst case communication complexity
Cw is N(m + 1).
Denote C a as the average case communication complexity of Algorithm 3, and define the
average reduction factor α as the ratio of C_a over the communication complexity of the Send-All Voting algorithm (i.e., Nm), namely α = C_a / (Nm); then the following theorem gives the relation between α and the parameters of an NMR system and the corresponding code:
Theorem 2 For an NMR system with N modules each of which executes an identical task of m-bit
result and has computational error with probability p independent of other modules' activities,
if Algorithm 3 uses an ECC which can detect up to d and correct up to t error symbols (0 ≤ t ≤ d), then the following holds for the average reduction factor of Algorithm 3:
α = (k/m)·P1 + (P2 + P3) + (P1 + P2)/m ,   (3)
where k = ⌈m/(N − d − t)⌉, P1 = Σ_{i=0}^{t} C(N, i) p^i (1 − p)^{N−i} (C(N, i) denoting the binomial coefficient), and P2 and P3 are the probabilities that Algorithm 3 terminates in branch 2 and branch 3 of Fig. 1, respectively.
Proof: To get the average case communication complexity C a of Algorithm 3, we need to analyze
the probability P_i of the algorithm terminating in branch i, i = 1, 2, 3. First assume that if a module has an erroneous result X_i, then it contributes an error symbol to the voting vector Y. From the proof of Observation 2, if the algorithm terminates in branch 1, then at most t modules have computational errors; thus, the probability of this event is exactly P_1. The event
that the algorithm reaches the branch 2 corresponds to the decoder error event of a code with
minimum distance of d+t+1, thus [6]
P2 = Σ_i A_i Σ_{k=0}^{t} P_ik ,
where {A_i} is the weight distribution of the code being used, and P_ik is the probability that a received vector Y is exactly Hamming distance k from a weight-i (binary) codeword of the code. If the weight distribution of the code is unknown, P2 can be approximately bounded by
P2 ≤ Σ_{i=0}^{d+t} C(N, i) p^i (1 − p)^{N−i} − Σ_{i=0}^{t} C(N, i) p^i (1 − p)^{N−i} ,
since the second term on the right side of the inequality above is just the probability of the event that correctable errors happen (i.e., P1). Finally, P3 is the probability of the decoder failure event, P3 = 1 − P1 − P2.
Now notice that a module that has an erroneous result can also contribute a correct symbol to the voting vector; the average case communication complexity is
C_a = N(k + 1)·P1 + N(m + 1)·P2 + Nm·P3 ,
and the average reduction factor is α = C_a / (Nm). Notice that k = ⌈m/(N − d − t)⌉, and we get the result of α as in Eq. (3). 2
Remarks on the theorem: From Eq. (3), we can see the relation between the average reduction factor α and each branch of Algorithm 3. The first term relates to the first branch, whose reduction factor is k/m, or 1/(N − d − t) when m is large enough relative to N so that the round-off error of the partition can be neglected, and P1 is the probability of that branch. One would expect this term to be the dominant one for α, since with a properly designed code tuned to the system, the algorithm is supposed to terminate at Branch 1 in most cases. The second term is simply the probability that the algorithm terminates at either Branch 2 or Branch 3, where the reduction factor is 1 (i.e., there is no communication reduction since all the local results are transmitted), not counting the 1 bit for the F_i's in Branch 2. The last term is due to the 1 bit for voting on the F_i's. When the local result size is large enough, i.e., m ≫ 1, this 1 bit can be neglected in our model. Thus, in most applications, the result in the theorem can be simplified as
α ≈ (k/m)·P1 + (P2 + P3) ,   (11)
since the assumption that m ≫ 1 is quite reasonable.
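Using Eq. (11) together with the crude bound P2 + P3 ≤ 1 − P1 (our simplification), the reduction factor can be evaluated numerically; the sketch below (ours) uses N = 31, p = 10^-3 and the assumed choice d = t = 1, and recovers an average communication complexity close to the roughly 1.08m figure quoted in the abstract.

from math import comb, ceil

def alpha_estimate(n, p, d, t, m):
    # Eq. (11) with P2 + P3 bounded by 1 - P1; P1 is the probability of at most t erroneous modules.
    k = ceil(m / (n - d - t))
    p1 = sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(t + 1))
    return (k / m) * p1 + (1 - p1)

a = alpha_estimate(n=31, p=1e-3, d=1, t=1, m=29 * 10**6)
print(a, 31 * a)          # about 0.035 and about 1.08, i.e., C_a is roughly 1.08m versus 31m for Send-All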
From the above theorem and its proof, it can be seen that for a given NMR system (i.e., N and p), P1 is only a function of t, so if t is chosen, from Eq. (3) or Eq. (11) it is easy to see that α monotonically decreases as d decreases. Recall that 0 ≤ t ≤ d; thus, for a chosen t, setting d = t can make α minimum when m ≫ 1. Even though it is not straightforward to get a closed form of the t which minimizes α, it is almost trivial to get such an optimal t by numerical calculation.
Fig. 2 shows relations between α and (t, p, N). Fig. 2a and Fig. 2b show how α (computed using Eq. (11)) changes with t for some setups of (N, p) when d = t. It is easy to see that for small p and reasonable N, a small t (e.g., t ≤ 2 for N ≤ 51) can achieve the minimal α. These results show that for a quite good NMR system (e.g., p ≤ 0.01), only by adding a small amount of redundancy to the local results and applying error-correcting codes on them, the communication complexity of the majority voting can be drastically reduced. Since the majority result is of m bits, and each module shall get an identical result after the voting, the communication complexity of the voting problem is at least m bits; thus α ≥ 1/N, and 1/N is the lower bound of α. Fig. 2c shows the closeness of the theoretical lower bound of α and the minimum α that Algorithm 3 can achieve for some setups of NMR systems.
4.2 More Observations
From the above results, we can see that the communication complexity of the Algorithm 3 is
determined by the code design parameters (d; t). In an NMR system with N modules, we only
need to consider the case where at most ⌊N/2⌋ modules have local results different from the majority result; thus, the only constraints on (d, t) are 0 ≤ t ≤ d ≤ ⌊N/2⌋.
Figure 2: Relations between α and (t, p, N): (a) α vs. t for different p with N fixed; (b) α vs. t for different N with fixed p = .1; (c) the minimum α achieved by Algorithm 3 compared with its theoretical lower bound.
For some specific values of (d, t),
the algorithm reduces to the following cases:
1. When d = t = ⌊N/2⌋ (so that k = m): the repetition code is used, and the algorithm becomes Algorithm 1.
Since a repetition code is always the worst code in terms of redundancy, it should always be
avoided for reducing the communication complexity of voting. On the other hand, when d=t=0:
the algorithm becomes Algorithm 2, and from Fig. 2, we can see that for a small enough p and
reasonable N, d = t = 0 actually is a best solution of the majority
voting problem in terms of the communication complexity. Besides, Algorithm 2 has low computational
complexity since it does not need any complex encoding and decoding operations. Thus
the ECC voting algorithm is a generalized voting algorithm, and its communication complexity
is determined by the code chosen.
2. When t = 0 (and d > 0): the code only has detecting capability, but if m ≫ N, then from the analysis above, increasing d actually makes α increase. Thus it is not good to add redundancy to the local results only for detecting capability when m ≫ N, i.e., using only EDC (error-detecting codes) does not help to reduce the communication complexity of voting. The scheme proposed in [8] is in this class.
3. When d = ⌊N/2⌋: as analyzed above, in general, it is not good to have d > t in terms of α, since an increase of d will increase α when t is fixed. But in this case, Algorithm 3 has a special property: branch 2 of the algorithm can directly declare that there is no majority result without executing the Send-All Voting algorithm, simply because the code now can detect up to ⌊N/2⌋ errors. If there were a majority result, then Y (refer to Fig. 1) could have at most ⌊N/2⌋ error symbols from the erroneous modules, and since Y is decodable, the majority of the local results would agree with the decoded result X, i.e., Majority(F_1, ..., F_N) = 1, which contradicts the fact that there is no majority result. By setting d to ⌊N/2⌋, Algorithm 3 always has 2 rounds of communication and the worst case communication complexity is thus Nm instead of N(m + 1) for the general case; this achieves the lower bound of the worst case communication complexity of the distributed majority voting problem [8].
5 Experimental Results
In this section, we show some experimental results of the three voting algorithms discussed above.
The experiments are performed over a cluster of Intel Pentium/Linux-2.0.7 nodes connected via a
100 Mbps Ethernet. Reliable communication is implemented by a simple improved UDP scheme:
whenever there is a packet loss, the voting operation is considered as a failure and redone from
beginning. By choosing suitable packet size, there is virtually no packet loss using UDP.
To examine real performances of the above three voting algorithms, N nodes vote on a result
of length m using all the three voting algorithms. For the ECC Voting algorithm, an EVENODD
Code is used, which corrects 1 error symbol, i.e., t = d = 1 for the ECC Voting algorithm. Random
errors are added to local computing results with a preassigned error probability p, independent
of results at other nodes in the NMR system. Performances are evaluated by two parameters
for each algorithm: the total time to complete the voting operation T and the communication
time for the voting operation C. Among all the local T 's and C's, the maximum T and C are
chosen to be the T and C of the whole NMR system, since if the voting operation is considered
as a collective operation, the system's performance is determined by the worst local performance
in the system. For each set of the NMR system parameters ( N nodes and error probability
voting operation is done 200 times and random computation errors in each run are
independent of those in other runs, and the arithmetic average of C's and T 's are regarded as
the performance parameters for the tested NMR system.
Experimental results are shown in figures 3 through 5. Fig. 3 compares the experimental
average reduction factors of the voting algorithms with the theoretical results as analyzed in
the previous section, when they are applied in an NMR system of 5 nodes. Fig. 4 shows the
performances ( T and C ) of the voting algorithms. Detailed communication patterns of the
voting algorithms are shown in Fig. 5 to provide some deeper insight into the voting algorithms.
Fig. 3a and Fig. 3b show the experimental average reduction factors of the voting communication
for the Simple Send-Part Voting algorithm and the ECC Voting algorithm.
Fig. 3a and Fig. 3b also show the theoretical average reduction factors of the algorithm 2 and 3
as computed from the Eq. (11). Notice that the average communication time reduction factors ff
of both algorithm 2 and algorithm 3 are below 1, and as the computing result size increases, the
reduction factor approaches the theoretical bound, with the exception of the smallest computing
result size of 1 Kbyte.
Fig. 4 shows the performances of each voting algorithm applied in an NMR system of 5
nodes. Fig. 4ab show the total voting time T and Fig. 4cd show the communication time
C for voting. The only different parameter of the NMR systems related to the figures a and b (and symmetrically c and d) is the error probability p, which is larger in figures a and c than in figures b and d. It is easy to see from the figures that for the voting algorithm 1
Voting ), T and C are the same, since besides communication, there is no additional
local computation. Fig. 4ab show that the algorithm 2 ( Simple Send-Part Voting ) and 3 ( ECC
Voting ) perform better than the algorithm 1 ( Send-All Voting ) in terms of the total voting
time T . On the other hand, Fig. 4cd show, in terms of C, i.e., the communication complexity,
the ECC Voting algorithm is better than the Simple Send-Part Voting algorithm when the error
Figure 3: Average reduction factors vs. computing result size m (Kbyte) for two error probabilities ((a) and (b)). C(i) is the experimental average reduction factor of communication time for voting using algorithm i, and α(i) is the theoretical bound of the average communication reduction factor using algorithm i.
probability is relatively large ( Fig. 4c ) and worse than the Simple Send-Part Voting algorithm
when the error probability is relatively small ( Fig. 4d ), which is consistent with the analysis
results in the previous section.
In the analysis in the previous section, the size of local computing result m does not show
up as a variable in the average reduction factor function ff, since the communication complexity
is only considered as proportional to the size of the messages that need to be broadcasted. But
practically, communication time is not proportional to the message size, since startup time of
communication also needs to be included. More specially, in the Ethernet environment, since
the maximum packet size of each physical send ( broadcast ) operation is also limited by the
physical ethernet, communication completion time becomes a more complicated function of the
message size. Thus from the experimental results, it can be seen that for a computing result
of small size, e.g., 1 Kbyte, the Send-All Voting algorithm actually performs best in terms of
both C and T , since the startup time dominates the performance of communication. Also, the
communication time for broadcasting the 1-bit voting flags cannot be neglected as analyzed in
the previous section, particularly for a small size computing result. This can also be seen from
the detailed voting communication time pattern in Fig. 5ab: round 2 of the communication is
for the 1-bit voting flag, even though it finishes in much more shorter time than round 1, but is
still not negligibly small. This explains the fact that for small size computing results, the average
communication time reduction factors of algorithm 2 and algorithm 3 are quite apart from their
theoretical bound.
Further examination of the detailed communication time pattern of voting provides a deeper
insight into algorithm 3. From Fig. 5cd, it is easy to see that in the first round of communication,
algorithm 2 needs less time than algorithm 3 since the size of the message to be broadcasted is
smaller for algorithm 2. Besides, the first round of communication time does not vary as the error
probability p varies for the both algorithms. The real difference between the two algorithms lies
in the third round of communication. From Fig. 5c, this time is small for the both algorithms
since the error probability p is small ( 0.01 ). But as the error probability p increases to 0.1,
as shown in Fig. 5d, for algorithm 2, this time also increases to be bigger than the first round
time, since it has no error-correcting capability and once full message needs to be broadcasted,
its size is much bigger than in the first round. On the other hand, for algorithm 3, though it
also increases, the communication time for the third round is still much smaller than in the
first round, this comes from the error-correcting codes that algorithm 3 uses, since the code can
correct errors at one computing node, which is the most frequent error pattern that happens.
Thus even though the error probability is high, in most cases, the most expensive third round of
communication can still be avoided, and algorithm 3 performs better ( in terms of communication
complexity or time ) than algorithm 2 in high error probability systems, just as the predicted
analysis in the previous section.
6 Conclusions
We have proposed a deterministic distributed voting algorithm using error-correcting codes to
reduce the communication complexity of the voting problem in NMR systems. We also have given
a detailed theoretical analysis of the algorithm. By choosing the design parameters of the error-correcting
code, i.e., (d; t), the algorithm can achieve a low communication complexity which is
quite close to its theoretical lower bound. We have also implemented the voting algorithm over
a network of workstations, and the experimental performance results match well the theoretical
analysis. The algorithm proposed here needs 2 or 3 rounds of communication. It is left as an
open problem whether there is an algorithm for the distributed majority voting problem with its
average case communication complexity less than Nm using only 1 round of communication.
--R
"An Optimal Strategy for Computing File Copies"
"EVENODD: An Efficient Scheme for Tolerating Double Disk Failures in RAID Architectures"
"Voting Using Predispositions"
"Fault-Masking with Reduced Redundant Communication"
"Voting Without Version Numbers,"
The Theory of Error Correcting Codes
"Parallel Data Compression for Fault Tolerance"
"Using Codes to Reduce the communication Complexity of Voting in NMR Systems"
"Voting Algorithms"
"X-Code: MDS Array Codes with Optimal Encoding"
"Checkpointing in Parallel and Distributed Systems"
--TR | MDS code;error-correcting codes;NMR system;majority voting;communication complexity |
287352 | A comparison of reliable multicast protocols. | We analyze the maximum throughput that known classes of reliable multicast transport protocols can attain. A new taxonomy of reliable multicast transport protocols is introduced based on the premise that the mechanisms used to release data at the source after correct delivery should be decoupled from the mechanisms used to pace the transmission of data and to effect error recovery. Receiver-initiated protocols, which are based entirely on negative acknowledgments (NAKS) sent from the receivers to the sender, have been proposed to avoid the implosion of acknowledgements (ACKS) to the source. However, these protocols are shown to require infinite buffers in order to prevent deadlocks. Two other solutions to the ACK-implosion problem are tree-based protocols and ring-based protocols. The first organize the receivers in a tree and send ACKS along the tree; the latter send ACKS to the sender along a ring of receivers. These two classes of protocols are shown to operate correctly with finite buffers. It is shown that tree-based protocols constitute the most scalable class of all reliable multicast protocols proposed to date. | Introduction
The increasing popularity of real-time applications supporting
either group collaboration or the reliable dissemination
of multimedia information over the Internet is making the
provision of reliable and unreliable end-to-end multicast services
an integral part of its architecture. Minimally, an end-
to-end multicast service ensures that all packets from each
source are delivered to each receiver in the session within a
finite amount of time and free of errors and that packets are
safely deleted within a finite time. Additionally, the service
may ensure that each packet is delivered only once and in the
order sent by the source. Although reliable broadcast protocols
have existed for quite some time [3], viable approaches
on the provision of end-to-end reliable multicasting over the
Internet are just emerging. The end-to-end reliable multicast
problem facing the future Internet is compounded by its current
size and continuing growth, which makes the handling
of acknowledgements a major challenge commonly referred
to as the acknowledgement (ack) implosion problem.
The two most popular approaches to end-to-end reliable
multicasting proposed to date are called sender-initiated
and receiver-initiated. In the sender-initiated approach, the
sender maintains the state of all the receivers to whom it
has to send information and from whom it has to receive
acknowledgments (acks). Each sender's transmission or re-transmission
is multicast to all receivers; for each packet
that each receiver obtains correctly, it sends a unicast ack
to the sender. In contrast, in the receiver-initiated approach,
each receiver informs the sender of the information that is
in error or missing; the sender multicasts all packets, giving
priority to retransmissions, and a receiver sends a negative
acknowledgement (nak) when it detects an error or a lost
packet.
The first comparative analysis of ideal sender-initiated
and receiver-initiated reliable multicast protocols was presented
by Pingali et al. [17, 18]. This analysis showed
that receiver-initiated protocols are far more scalable than
sender-initiated protocols, because the maximum through-put
of sender-initiated protocols is dependent on the number
of receivers, while the maximum throughput of receiver-initiated
protocols becomes independent of the number of
receivers as the probability of packet loss becomes negligi-
ble. However, as this paper demonstrates, the ideal receiver-initiated
protocols cannot prevent deadlocks when they operate
with finite memory, i.e., when the applications using the
protocol services cannot retransmit any data themselves, and
existing implementations of receiver-initiated protocols have
inherent scaling limitations that stem from the use of messages
multicast to all group members and used to set timers
needed for nak avoidance, the need to multicast naks to
all hosts in a session, and to a lesser extent, the need to store
all messages sent in a session.
This paper addresses the question of whether a reliable
multicast transport protocol (reliable multicast protocol, for
short) can be designed that enjoys all the scaling properties
of the ideal receiver-initiated protocols, while still being
able to operate correctly with finite memory. To address this
question, the previous analysis by Pingali et al. [17, 18, 22]
is extended to consider the maximum throughput of generic
ring-based protocols, which organize receivers into a ring,
and two classes of tree-based protocols, which organize receivers
into ack trees. These classes are the other three
known approaches that can be used to solve the ack implosion
problem. Our analysis shows that tree- and ring-based
protocols can work correctly with finite memory, and that
tree-based protocols are the best choice in terms of processing
and memory requirements.
The results presented in this paper are theoretical in
nature and apply to generic protocols, rather than to specific
implementations; however, we believe that they provide
valuable architectural insight for the design of future reliable
multicast protocols. Section 2 presents a new taxonomy of
reliable multicast protocols that organizes known approaches
into four protocol classes and discusses how many key papers
in the literature fit within this taxonomy. This taxonomy
is based on the premise that the analysis of the mechanisms
used to release data from memory after their correct reception
by all receivers can be decoupled from the study of the
mechanisms used to pace the transmission of data within
the session and the detection of transmission errors. Using
this taxonomy, we argue that all reliable unicast and multicast
protocols proposed to date that use naks and work
correctly with finite memory (i.e., without requiring the application
level to store all data sent in a session) use acks to
release memory and naks to improve throughput. Section 3
addresses the correctness of the various classes of reliable
multicast protocols introduced in our taxonomy. Section 4
extends the analysis by Pingali et al. [17, 18, 22] by analyzing
the maximum throughput of three protocol classes:
tree-based, tree-based with local nak avoidance and periodic
polling (tree-NAPP), and ring-based protocols. Section
5 provides numerical results on the performance of the
protocol classes under different scenarios, and discusses the
implications of our results in light of recent work on reliable
multicasting. Section 6 provides concluding remarks.
new taxonomy of reliable multicast protocols
We now describe the four generic approaches known to date
for reliable multicasting. Well-known protocols (for unicast
and multicast purposes) are mapped into each class. Our
taxonomy differs from prior work [8, 17, 18, 22] addressing
receiver-initiated strategies for reliable multicasting in that
we decouple the definition of the mechanisms needed for
pacing of data transmission from the mechanisms needed for
the allocation of memory at the source. Using this approach,
the protocol can be thought as using two windows: a congestion
window (cw ) that advances based on feedback from
receivers regarding the pacing of transmissions and detection
of errors, and a memory allocation window (mw ) that
advances based on feedback from receivers as to whether
the sender can erase data from memory.
Fig. 1. A basic diagram of a sender-initiated protocol (ack and nak paths between source and receiver)
In practice, proto-
cols may use a single window for pacing and memory (e.g.,
TCP [10]) or separate windows (e.g., NETBLT [4]).
Each reliable protocol assumes the existence of multi-cast
routing trees provided by underlying multicast routing
protocols. In the Internet, these trees will be built using such
protocols as DVMRP [6], Core-Based Trees (CBT) [1], Ordered
Core-Based Trees (OCBT) [20], Protocol-Independent
Multicast (PIM) [7], or the Multicast Internet Protocol (MIP)
[14].
2.1 Sender-initiated protocols
In the past [17, 18], sender-initiated protocols have been
characterized as placing the responsibility of reliable delivery
at the sender. However, this characterization is overly
restrictive and does not reflect the way in which several reliable
multicast protocols that rely on positive acknowledgements
from the receivers to the source have been designed.
In our taxonomy, a sender-initiated reliable multicast protocol
is one that requires the source to receive acks from all
the receivers, before it is allowed to release memory for the
data associated with the acks. Receivers are not restricted
from directly contacting the source. It is clear that the source
is required to know the constituency of the receiver set, and
that the scheme suffers from the ack implosion problem.
However, this characterization leaves unspecified the mechanism
used for pacing of transmissions and for the detection
of transmission errors. Either the source or the receivers can
be in charge of the retransmission timeouts!
The traditional approach to pacing and transmission error
detection (e.g., TCP in the context of reliable unicasting) is
for the source to be in charge of the retransmission timeout.
However, as suggested by the results reported by Floyd et
al. [8], a better approach for pacing a multicast session is
for each receiver to set its own timeout. A receiver sends
acks to the source at a rate that it can accept, and sends a
nak to the source after not receiving a correct packet from
the source for an amount of time that exceeds its retransmission
timeout. An ack can refer to a specific packet or a
window of packets, depending on the specific retransmission
strategy. A simple illustration of a sender-initiated protocol
is presented in Fig. 1.
Notice that, regardless of whether a sender-driven or
receiver-driven retransmission strategy is used, the source is
still in charge of deallocating memory after receiving all the
acks for a given packet or set of packets. The source keeps
packets in memory until every receiver node has positively
acknowledged receipt of the data. For a sender-initiated pro-
tocol, if a sender-driven retransmission strategy is used, the
sender "polls" the receivers for acks by retransmitting after
a timeout. If a receiver-driven retransmission strategy is
used, the receivers "poll" the source (with an ack) after
they time out. 1
It is important to note that, just because a reliable multi-cast
protocol uses naks, it does not mean that it is receiver-
initiated, i.e., that naks can be the basis for the source to
ascertain when it can release data from memory. The combination
of acks and naks has been used extensively in the
past for reliable unicast and multicast protocols. For exam-
ple, NETBLT is a unicast protocol that uses a nak scheme
for retransmission, but only on small partitions of the data
(i.e., its cw ). In between the partitions, called "buffers", are
acks for all the data in the buffer (i.e., the mw ). Only upon
receipt of this ack does the source release data from mem-
therefore, NETBLT is really sender-initiated. In fact,
naks are unnecessary in NETBLT for its correctness, i.e.,
a buffer can be considered one large packet that eventually
must be acked, and are important only as a mechanism to
improve throughput by allowing the source to know sooner
when it should retransmit some data.
A protocol similar to NETBLT is the "Negative Acknowledgments
with Periodic Polling" (NAPP) protocol [19].
This protocol is a broadcast protocol for local area networks
(LANs). Like NETBLT, NAPP groups together large partitions
of the data that are periodically acked, while lost
packets within the partition are naked. NAPP advances the
cw by naks and periodically advances the mw by acks.
Because the use of naks can cause a nak implosion at the
source, NAPP uses a nak avoidance scheme. As in NET-
BLT, naks increase NAPP's throughput, but are not necessary
for its correct operation, albeit slow. The use of periodic
polling limits NAPP to LANs, because the source can still
suffer from an ack implosion problem even if acks occur
less often.
Other sender-initiated protocols, like the Xpress Transfer
Protocol (XTP) [21], were created for use on an internet, but
still suffer from the ack implosion problem.
The main limitation of sender-initiated protocols is not
that acks are used, but the need for the source to process all
of the acks and to know the receiver set. The two known
methods that address this limitation are: (a) using naks instead
of acks, and (b) delegating retransmission responsibility
to members of the receiver set by organizing the receivers
into a ring or a tree. We discuss both approaches
subsequently.
2.2 Receiver-initiated protocols
Previous work [17, 18] characterizes receiver-initiated protocols
as placing the responsibility for ensuring reliable packet
delivery at each receiver. The critical aspect of these protocols
for our taxonomy is that no acks are used. The receivers
send naks back to the source when a retransmission
Of course, the source still needs a timer to ascertain when its connection
with a receiver has failed.010011010000000000000000000000000000000000000000111111111111111111111111111111111111111111111111110000000000000000001111111111111111111111111110000000000000000001111111111111111111111111110000000000000000000000000000000000000000000000000011111111111111111111111111111111111111111111111111
Nak
Source Receiver
Fig. 2. A basic diagram of a receiver-initiated protocol
is needed, detected by either an error, a skip in the sequence
numbers used, or a timeout. Receivers are not restricted from
directly contacting the source. Because the source receives
feedback from receivers only when packets are lost and not
when they are delivered, the source is unable to ascertain
when it can safely release data from memory. There is no
explicit mechanism in a receiver-initiated protocol for the
source to release data from memory (i.e., advance the mw ),
even though its pacing and retransmission mechanisms are
scalable and efficient (i.e., advancing the cw ). Figure 2 is a
simple illustration of a receiver-initiated protocol.
Because receivers communicate naks back to the source,
receiver-initiated protocols have the possibility of experiencing
a nak implosion problem at the source if many receivers
detect transmission errors. To remedy this problem, previous
work on receiver-initiated protocols [8, 17, 18] adopts the
nak avoidance scheme first proposed for NAPP, which is a
sender-initiated protocol. Receiver-initiated with nak avoidance
(RINA) protocols have been shown [17, 18, 22] to have
better performance than the basic receiver-initiated protocol.
The resulting generic RINA protocol is as follows [17, 18].
The sender multicasts all packets and state information, giving
priority to retransmissions. Whenever a receiver detects
a packet loss, it waits for a random time period and then multicasts
a nak to the sender and all other receivers. When a
receiver obtains a nak for a packet that it has not received
and for which it has started a timer to send a nak, the receiver
sets a timer and behaves as if it had sent a nak.
The expiration of a timer without the reception of the corresponding
packet is the signal used to detect a lost packet.
With this scheme, it is hoped that only one nak is sent back
to the source for a lost transmission for an entire receiver
set. Nodes farther away from the source might not even get
a chance to request a retransmission. The generic protocol
does not describe how timers are set accurately.
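As a rough illustration of the receiver-side logic just described, the sketch below implements the random backoff and nak suppression of a generic RINA receiver; the timer values and the `schedule` and `send_nak_fn` helpers are assumptions, not part of any cited protocol.

```python
# Minimal sketch of nak avoidance at a RINA receiver.  `schedule(delay, fn)`
# is an assumed helper that returns a cancellable timer handle; `send_nak_fn`
# is an assumed callable that multicasts a nak to the sender and all receivers.
import random

class RinaReceiver:
    def __init__(self, node_id, send_nak_fn, backoff_max=1.0):
        self.node_id = node_id
        self.send_nak_fn = send_nak_fn
        self.backoff_max = backoff_max
        self.nak_timers = {}            # seq -> pending timer handle

    def on_loss_detected(self, seq, schedule):
        # Wait a random period before multicasting a nak, so that (ideally)
        # only one receiver naks a given lost packet for the whole group.
        if seq not in self.nak_timers:
            delay = random.uniform(0, self.backoff_max)
            self.nak_timers[seq] = schedule(delay, lambda: self.send_nak(seq))

    def on_nak_heard(self, seq, schedule, retransmit_wait=2.0):
        # Another receiver already nak'ed this packet: cancel our pending nak
        # and behave as if we had sent it, waiting for the retransmission.
        timer = self.nak_timers.pop(seq, None)
        if timer is not None:
            timer.cancel()
        def renak():
            # Retransmission never arrived: start the backoff/nak cycle again.
            self.nak_timers.pop(seq, None)
            self.on_loss_detected(seq, schedule)
        self.nak_timers[seq] = schedule(retransmit_wait, renak)

    def send_nak(self, seq):
        self.nak_timers.pop(seq, None)
        self.send_nak_fn(self.node_id, seq)
```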
The generic RINA protocol we have just described constitutes
the basis for the operation of the Scalable Reliable
Multicasting (SRM) algorithm [8]. SRM has been embedded
into an internet collaborative whiteboard application called
wb. SRM sets timers based on low-rate, periodic "session
messages" multicast by every member of the group. The
messages specify a time stamp used by the receivers to estimate
the delay from the source, and the highest sequence
number generated by the node as a source. (Footnote 2: Multiple sources are
supported in SRM; we focus on the single-source case for simplicity.) The average
bandwidth consumed by session messages is kept small (e.g.,
by keeping the frequency of session messages low). SRM's
implementation requires that every node store all packets,
or that the application layer store all relevant data.
We note that naks from receivers are used to advance the
cw , which is controlled by the receivers, and the sequence
number in each multicast session message is used to "poll"
the receiver set, i.e., to ensure that each receiver is aware
of missing packets. Although session messages implement
a "polling" function [19], they cannot be used to advance
the mw , as in a sender-initiated protocol, because a sender
specifies its highest sequence number as a source, not the
highest sequence number heard from the source. 3
In practice, the persistence of session messages forces the
source to process the same number of messages that would
be needed for the source to know the receiver set over time
(one periodic message from every receiver). Accordingly,
as defined, the basic dissemination of session messages in
SRM does not scale, because it defeats one of the goals of
the receiver-initiated paradigm, i.e., to keep the receiver set
anonymous from the source for scaling purposes.
There are other issues that limit the use of RINA protocols
for reliable multicasting. First, as we show in the next
section, a RINA protocol requires that data needed for retransmission
be rebuilt from the application. This approach
is reasonable only for applications in which only the most recent
state of the data is desired, which is the case for
a distributed whiteboard. However, the approach does not
apply to multimedia applications that have no current state,
but only a stream of transition states.
Second, naks and retransmissions must be multicast to
the entire multicast group to allow suppression of naks. The
nak avoidance scheme was designed for a limited scope,
such as a LAN, or a small number of Internet nodes (as it
is used in tree-NAPP protocols, described in the next sec-
tion). This is because the basic nak avoidance algorithm
requires that timers be set based on updates multicast by every
node. As the number of nodes increases, each node must
do an increasing amount of work. Furthermore, nodes that are
on congested links, LANs or regions may constantly bother
the rest of the multicast group by multicasting naks. Approaches
to limit the scope of naks and retransmission are
still evolving [8]. However, current proposals still rely on
session messages that reach all group members.
Another example of a receiver-initiated protocol is the
"log-based receiver-reliable multicast" (LBRM) [9], which
uses a hierarchy of log servers that store information indefinitely;
receivers recover by contacting a log server.
Using log servers is feasible only for applications that can
afford the servers and leaves many issues unresolved. If a
single server is used, performance can degrade due to the
load at the server; if multiple servers are used, mechanisms
must still be implemented to ensure that such servers have
consistent information.
The ideal receiver-initiated protocol has three main advantages
over sender-initiated protocols, namely: (a) the
source does not know the receiver set, (b) the source does
not have to process acks from each receiver, and (c) the
receivers pace the source. (Footnote 3: Our prior description of SRM [11, 12]
incorrectly assumed that session messages contained the highest sequence
number heard from the source. We thank Steve McCanne for pointing this out.)
The limitation of this protocol is
that it has no mechanism for the source to know when it
can safely release data from memory. Furthermore, as we
have argued, the practical implementations of the receiver-initiated
approach fail to provide advantages (a) and (b). The
following two protocol classes organize the receiver set in
ways that permit the strengths of receiver-initiated protocols
to be applied on a local scale, while providing explicit
mechanisms for the source to release memory safely (i.e.,
efficient management of the mw ).
2.3 Tree-based protocols
Tree-based protocols are characterized by dividing the receiver
set into groups, distributing retransmission responsibility
over an acknowledgement tree (ack tree) structure
built from the set of groups, with the source as the root of
the tree. A simple illustration of a tree-based protocol is presented
in Fig. 3. The ack tree structure prevents receivers
from directly contacting the source, in order to maintain
scalability with a large receiver set.
The ack tree consists of the receivers and the source
organized into local groups , with each such group having a
group leader in charge of retransmissions within the local
group. The source is the group leader in charge of retransmissions
to its own local group. Each group leader other than
the source communicates with another local group (to either
a child or the group leader) closer to the source to request
retransmissions of packets that are not received correctly.
Group leaders may be children of another local group, or
minimally, may just be in contact with another local group.
Each local group may have more than one group leader to
handle multiple sources. Group leaders could also be chosen
dynamically, e.g., through token passing within the local
group.
Hosts that are only children are at the bottom of the
ack tree, and are termed leaves. Obviously, an ack tree
consisting of the source as the only leader and leaf nodes
corresponds to the sender-initiated scheme.
Acknowledgments from children in a group, including
the source's own group, are sent only to the group leader.
The children of a group send their acknowledgements to
the group leader as soon as they receive correct packets,
advancing the cw ; we refer to such acknowledgements as
local acks or local naks, i.e., retransmissions are triggered
by local acks and local naks unicast to group leaders by
their children. Similar to sender-initiated schemes, the use
of local naks is unnecessary for correct operation of the
protocol.
Tree-based protocols can also delegate to leaders of sub-trees
the decision of when to delete packets from memory
(i.e., advance the mw ), which is conditional upon receipt of
aggregate acks from the children of the group. Aggregate
acks start from the leaves of the ack tree, and propagate
toward the source, one local group at a time. A group leader
cannot send an aggregate ack until all its children have
sent an aggregate ack. Using aggregate acks is necessary
to ensure that the protocol operates correctly even if group
leaders fail, or if the ack tree is partitioned for long periods
of time [12].
Fig. 3. A basic diagram of a tree-based protocol
If aggregate acks are not used, i.e., if a group
leader only waits for all its children to send local acks before
advancing the mw , then correct operation after group
leaders fail can only be guaranteed by not allowing nodes
to delete packets; this is the approach used in all tree-based
protocols [13, 16, 24] other than Lorax [12]. The Lorax protocol
[12] is the first tree-based protocol to build a single
shared ack tree for use by multiple sources in a single ses-
sion, and to use aggregate acks to ensure correct operation
after hosts in the ack tree fail.
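The following sketch illustrates, under simplifying assumptions, how a group leader might combine local acks (used to pace retransmissions, i.e., the cw) with aggregate acks (used to release memory, i.e., the mw); it is not the Lorax or RMTP implementation, and all names are hypothetical.

```python
# Minimal sketch of a group leader in a tree-based protocol.  Helper callables
# (`unicast`, `send_to_parent`) are assumed to be supplied by the caller.
class GroupLeader:
    def __init__(self, children):
        self.children = set(children)
        self.local_acked = {}      # seq -> children that sent a local ack
        self.aggregate_acked = {}  # seq -> children that sent an aggregate ack
        self.buffer = {}           # seq -> packet data kept for local retransmission

    def on_packet(self, seq, data):
        self.buffer[seq] = data
        self.local_acked.setdefault(seq, set())
        self.aggregate_acked.setdefault(seq, set())

    def on_local_ack(self, seq, child):
        # Local acks pace retransmissions within the local group (advance the cw).
        self.local_acked[seq].add(child)

    def on_local_nak(self, seq, child, unicast):
        # Retransmissions are handled locally, independently of the source.
        if seq in self.buffer:
            unicast(child, seq, self.buffer[seq])

    def on_aggregate_ack(self, seq, child, send_to_parent):
        # An aggregate ack from a child means the child's whole subtree has the packet.
        self.aggregate_acked[seq].add(child)
        if self.aggregate_acked[seq] >= self.children:   # superset test: all children done
            # Every child subtree is done: release memory (advance the mw)
            # and propagate the aggregate ack one level up toward the source.
            self.buffer.pop(seq, None)
            send_to_parent(seq)
```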
The use of local acks and local naks for requesting
retransmissions is important for throughput. If the source
scheduled retransmissions based on aggregate acks, it would
have to be paced based on the slowest path in the ack tree.
Instead, retransmissions are scheduled independently in each
local group.
Tree-based protocols eliminate the ack-implosion problem,
free the source from having to know the receiver set,
and operate solely on messages exchanged in local groups
(between a group leader and its children in the ack tree).
Furthermore, if aggregate acks are used, a tree-based protocol
can work correctly with finite memory even in the
presence of receiver failures and network partitions.
To simplify our analysis and description of this protocol,
we assume that the group leaders control the retransmission
timeouts; however, such timeouts can be controlled by the
children of the source and group leaders. Accordingly, when
the source sends a packet, it sets a timer, and each group
leader sets a timer as it becomes aware of a new packet. If
there is a timeout before all local acks have been received,
the packet is assumed to be lost and is retransmitted by the
source or group leader to its children.
The first application of tree-based protocols to reliable
multicasting over an internet was reported by Paul et al. [15],
who compared three basic schemes for reliable point-to-
multipoint multicasting using hierarchical structures. Their
results have been fully developed as the reliable multicast
transport protocol (RMTP) [13, 16]. While our generic protocol
sends a local ack for every packet sent by the source,
RMTP sends local acks only periodically, so as to conserve
bandwidth and to reduce processing at each group leader, increasing
attainable throughput.
We define a tree-NAPP protocol as a tree-based protocol
that uses nak avoidance and periodic polling [19] in the
local groups. Naks alone are not sufficient to guarantee
reliability with finite memory, so receivers send a periodic
positive local ack to their parents to advance the mw. Note
that messages sent for the setting of timers needed for nak
avoidance are limited to the local group, which is scalable.
Fig. 4. A basic diagram of a ring-based protocol
The tree-based multicast transport protocol (TMTP) [24] is
an example of a tree-NAPP protocol.
2.4 Ring-based protocols
Ring-based protocols for reliable multicast were originally
developed to provide support for applications that require an
atomic and total ordering of transmissions at all receivers.
One of the first proposals for reliable multicasting is the
token ring protocol (TRP) [3]; its aim was to combine the
throughput advantages of naks with the reliability of acks.
The Reliable Multicast Protocol (RMP) [23] discussed an
updated WAN version of TRP. Although multiple rings are
used in a naming hierarchy, the same class of protocol is
used for the actual rings. Therefore, RMP has the same
throughput bounds as TRP.
We base our description of generic ring-based protocols
on the LAN protocol TRP and the WAN protocol RMP. A
simple illustration of a ring-based protocol is presented in
Fig. 4. The basic premise is to have only one token site responsible
for acking packets back to the source. The source
times out and retransmits packets if it does not receive an
ack from the token site within a timeout period. The ack
also serves to timestamp packets, so that all receiver nodes
have a global ordering of the packets for delivery to the
application layer. The protocol does not allow receivers to
deliver packets until the token site has multicast its ack.
Receivers send naks to the token site for selective repeat
of lost packets that were originally multicast from the
source. The ack sent back to the source also serves as a token
passing mechanism. If no transmissions from the source
are available to piggyback the token, then a separate unicast
message is sent. Since we are interested in the maximum
throughput, we will not consider the latter case in this pa-
per. The token is not passed to the next member of the
ring of receivers until the new site has correctly received all
packets that the former site has received. Once the token is
passed, a site may clear packets from memory; accordingly,
the final deletion of packets from the collective memory of
the receiver set is decided by the token site, and is conditional
on passing the token. The source deletes packets only
when an ack/token is received. Note that both TRP and
RMP specify that retransmissions are sent unicast from the
token site. Because our analysis focuses on maximum attainable
throughput of protocol classes, we will assume that
the token is passed exactly once per message.
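A minimal sketch of the token-site role described above follows; it is illustrative only, does not reproduce TRP or RMP, and the message fields and helper names are assumptions.

```python
# Minimal sketch of the token site in a ring-based protocol: it acks and
# timestamps packets back to the source, serves unicast retransmissions to
# receivers that nak, and passes the token only when the next site holds
# every packet this site holds.  `multicast`, `unicast`, and `send_token`
# are assumed caller-supplied callables.
class TokenSite:
    def __init__(self, site_id):
        self.site_id = site_id
        self.store = {}           # seq -> packet, kept until the token is passed
        self.clock = 0

    def on_data(self, seq, data, multicast):
        self.store[seq] = data
        self.clock += 1
        # The multicast ack carries the global timestamp and the token offer;
        # receivers may not deliver `seq` upward until they see this ack.
        multicast({"type": "ack/token", "seq": seq, "stamp": self.clock})

    def on_nak(self, seq, receiver, unicast):
        # TRP and RMP specify unicast retransmissions from the token site.
        if seq in self.store:
            unicast(receiver, seq, self.store[seq])

    def pass_token(self, next_site_ready_through, send_token):
        # The token moves only once the next site has all packets up to the
        # highest sequence number held here; packets may then be cleared.
        if self.store and next_site_ready_through >= max(self.store):
            send_token()
            self.store.clear()
```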
3 Protocol correctness
A protocol is considered correct if it is shown to be both
safe and live [2]. Given the minimum definition of reliable
service we have assumed, for any reliable multicast protocol
to be live, no deadlock should occur at any receiver or at the
source. For the protocol to be safe, all data sent by the source
must be delivered to a higher layer within a finite time. To
address the correctness of protocol classes, we assume that
nodes never fail during the duration of a reliable multicast
session and that a multicast session is established correctly
and is permanent. Therefore, our analysis of correctness focuses
only on the ability of the protocol classes to sustain
packet losses or errors. We assume that there exists some
non-zero probability that a packet is received error-free, and
that all senders and receivers have finite memory.
The proof of correctness for ring-based protocols is
given by Chang and Maxemchuk [3]. The proof that sender-initiated
unicast protocols are safe and live is available from
many sources (e.g., Bertsekas and Gallager [2]). The proof
does not change significantly for the sender-initiated class
of reliable multicast protocols and is omitted for brevity.
The liveness property at each receiver is not violated, because
each node can store a counter of the sequence number
of the next packet to be delivered to a higher layer. The
safety property proof is also essentially the same, because
the source waits for acks from all members in the receiver
set before sliding the cw and mw forward. Theorems 1 and 2
below demonstrate that the generic tree-based reliable multi-cast
protocol (TRMP for short) is correct, and that receiver-initiated
reliable multicast protocols are not live.
Theorem 1: TRMP is safe and live.
Proof. Let R be the set of all the nodes that belong to the reliable
multicast session, including a source s. The receivers
in the set are organized into a B-ary tree of height h. The
proof proceeds by induction on h.
For the case in which h = 1, TRMP reduces to a non-hierarchical
sender-initiated scheme of B receivers,
with each of the B receivers practicing a given retransmission
strategy with the source. Therefore, the proof follows
from the correctness proof of unicast retransmission protocols
presented by Bertsekas and Gallager [2].
For h > 1, assume the theorem holds for any t such that
1 <= t < h; we must prove the theorem holds for t = h.
Liveness. We must prove that each member of a tree of
height t is live. Consider a subset of the tree that starts at
the source and includes all nodes of the tree up to a height
of (t - 1); the leaves of this subtree are also group leaders
in the larger tree, i.e., group leaders of the nodes at the
bottom of the larger tree. By the inductive hypothesis, the
liveness property is true in this subtree. We must only show
that TRMP is live for a second subset of nodes consisting
of leaves of the larger tree and their group leader parents.
Each group in this second subset follows the same protocol,
and it suffices to prove that an arbitrary group is live.
The arbitrary group in the second subset of the tree constitutes
a case of sender-initiated reliable multicast, with the
only difference that the original transmission is sent from the
source (external to the group), not the group leader. Since
leaves can only contact the group leader, we must prove this
relationship is live. The inductive hypothesis guarantees that
the group leader and its parent are live.
Assume the source transmits a packet i at time c 1 , and
that it is received correctly and delivered at all leaves of
the arbitrary group at time c 2 . Let c 3 be the time at which
the group leader deletes the packet and advances the mw .
The protocol is live and will not enter into deadlock if
c_3 is finite. The rest of the proof follows
from the proof by Bertsekas and Gallager [2] for unicast
protocols, where the group leader takes the place of
the source. Therefore, TRMP is live.
Safety. The safety of TRMP follows directly, because
our proof of liveness shows that any arbitrary packet i is
delivered at each receiver within a finite time. QED
Theorem 2. A receiver-initiated reliable protocol is not live.
Proof. The proof is by example focusing on the sender and
an arbitrary member of the receiver set R (where |R| >= 1).
- Sender node, X, has enough memory to store up to M packets.
- Each packet takes 1 unit of time to reach a receiver node Y. Naks take a finite amount of time to reach the sender.
- Let p_i denote the i-th packet, i beginning from zero. p_0 is sent at start time 0, but it is lost in the network.
- X sends the next (M - 1) packets to Y successfully.
- Y sends a nak stating that p_0 was not received. The nak is either lost or reaches the sender after time M, when the sender decides to send out packet p_M.
- Since X can only store up to M packets, and it has not received any naks for p_0 by time M, it must clear p_0, assuming that it has been received correctly.
- X receives the nak for p_0 after time M and is deadlocked, unable to retransmit p_0. QED
The above indicates that the ideal receiver-initiated protocol
requires an infinite memory to work correctly. In prac-
tice, this requirement implies that the source must keep in
memory every packet that it sends during the lifetime of a
session.
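The deadlock scenario of Theorem 2 can be traced with a few lines of code; the sketch below is only a toy rendering of the proof's example, with the memory capacity M and the nak delay as hypothetical parameters.

```python
# Toy rendering of the Theorem 2 scenario: a receiver-initiated sender X with
# room for only M packets clears p_0 before the (delayed or lost) nak for p_0
# can arrive, and is then unable to retransmit it.
def receiver_initiated_deadlock(M=4, nak_delay=6):
    memory = []                        # sender-side buffer with capacity M

    for i in range(M):                 # p_0 .. p_{M-1} are sent, one per time unit
        memory.append(i)
    # p_0 was lost in the network; the receiver's nak arrives only after time M.
    if nak_delay > M:
        # By time M the sender must make room for p_M, so it clears p_0,
        # assuming (wrongly) that p_0 was received correctly.
        memory.pop(0)
        memory.append(M)
        nak_for = 0                    # the nak for p_0 finally arrives ...
        return "deadlock" if nak_for not in memory else "recovered"
    return "recovered"

print(receiver_initiated_deadlock())   # -> "deadlock"
```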
Theorem 1 assumes that no node failures or network
partitions occur. However, node failures do happen in practice,
which changes the operational requirements of practical tree-based
protocols. For tree-based protocols, it can be shown
that deleting packets from memory after a node receives local
acks from its children is not live. Aggregate acks are
necessary to ensure correct operation of tree-based protocols
in the presence of failures. Lorax [12] is the only tree-based
protocol that uses aggregate acks and can operate with finite
memory in the presence of node failures or network
partitions.
4 Maximum throughput analysis
4.1 Assumptions
To analyze the maximum throughput that each of the generic
reliable multicast protocols introduced in Sect. 2 can achieve,
we use the same model as Pingali et al. [17, 18], which
focuses on the processing requirements of generic reliable
multicast protocols, rather than the communication band-width
requirements. Accordingly, the maximum throughput
of a generic protocol is a function of the per-packet processing
rate at the sender and receivers, and the analysis focuses
on obtaining the processing times per packet at a given node.
We assume a single sender, X , multicasting to R identical
receivers. The probability of packet loss is p for any
node.
Figure
5 summarizes all the notation used in this sec-
tion. For clarity, we assume a single ack tree rooted at a
single source in the analysis of tree-based protocols. A selective
repeat retransmission strategy is assumed in all the protocol
classes since it is well known to be the retransmission
strategy with the highest throughput [2], and its requirement
of keeping buffers at the receivers is a non-issue given the
small cost of memory. Assumptions specific to each protocol
are listed in Sect. 2, and are in the interest of modeling
maximum throughput.
We make two additional assumptions: (1) no acknowledgements
are ever lost, and (2) all loss events at any node
in the multicast of a packet are mutually independent.
Such multicast routing protocols as CBT, OCBT, PIM,
MIP, and DVMRP [1, 5, 7, 14, 20] organize routers into
trees, which means that packet losses at different receivers
are correlated. Our first assumption benefits all classes,
but especially favors protocols that multicast acknowledge-
ments. In fact, this assumption is essential for RINA proto-
cols, in order to analyze their maximum attainable through-
put, because nak avoidance is most effective if all receivers
are guaranteed to receive the first nak multicast to the receiver
set. As the number of nodes involved in nak avoidance
increases, the task of successful delivery of a nak to all
receivers becomes less probable. Both RINA and tree-NAPP
protocols are favored by the assumption, but RINA protocols
much more so, because the probability of delivering naks
successfully to all receivers is exaggerated.
Our second assumption is equivalent to a scenario in
which there is no correlation among packet losses at receivers
and the location of those receivers in the underlying
multicast routing tree of the source. Protocols that can
take advantage of the relative position of receivers in the
multicast routing tree for the transmission of acks, naks,
or retransmissions would possibly attain higher throughput
than predicted by this model. However, no class is given
any relative advantage with this assumption.
Table
1 summarizes the bounds on maximum throughput
for all the known classes of reliable multicast protocols. Our
results clearly show that tree-NAPP protocols constitute the
most scalable alternative.
4.2 Sender- and receiver-initiated protocols
Following the notation introduced by Pingali et al. [17, 18],
we place a superscript A on any variable related to the
sender-initiated protocol, and N1 and N2 on variables related
to the receiver-initiated and RINA protocols, respec-
tively. The maximum throughput of the protocols for a constant
stream of packets to R receivers is the reciprocal of the largest
expected per-packet processing time at the sender and the receivers [17, 18].
Table 1. Analytical bounds (per-packet processing requirements when p is treated as a constant)
Sender-initiated [17, 18]: O(R)
Receiver-initiated with nak avoidance [17, 18]: O(ln R)
Ring-based (unicast retrans.): O(R)
Tree-based: O(B)
Tree-NAPP: O(1)
Even as the probability of packet loss goes to zero, the
throughput of the sender-initiated protocol is inversely dependent
on R, the size of the receiver set, because an ack
must be sent by every receiver to the source once a transmission
is correctly received. In contrast, as p goes to zero,
the throughput of receiver-initiated protocols becomes independent
of the number of receivers. Notice, however, that
the throughput of a receiver-initiated protocol is inversely
dependent on R, the number of receivers, or on ln R,
when the probability of error is not negligible. We note that
this result assumes perfect setting of the timers used in a
RINA protocol without cost and that a single nak reaches
the source, because we are only interested in the maximum
attainable throughput of protocols.
4.3 Tree-based protocols
We denote this class of protocols simply by H1, and use
that superscript in all variables related to the protocol class.
In the following, we derive and bound the expected cost
at each type of node and then consider the overall system
throughput. To make use of symmetry, we assume, without
loss of generality, that there are enough receivers to form a
full tree at each level.
Without loss of generality, we assume that each local
group in the ack tree consists of B children and a group
leader. This allows us to make use of symmetry in our
throughput calculations. We also assume that local acks
advance the mw rather than aggregate acks, because by assumption
no receiver fails in the system. We assume perfect
setting of timers without cost and that a single nak reaches
the source, because we are only interested in the maximum
attainable throughput of protocols.
4.3.1 Source node
We consider first X H1 , the processing costs required by the
source to successfully multicast an arbitrarily chosen packet
B - Branching factor of a tree, the group size.
R - Size of the receiver set.
Xf - Time to feed in a new packet from the higher protocol layer.
Xp - Time to process the transmission of a packet.
Xa, Xn, Xh - Times to process transmission of an ack, nak, or local ack, respectively.
Xt, Yt - Times to process a timeout at a sender or receiver node, respectively.
Yp - Time to process a newly received packet.
Yf - Time to deliver a correctly received packet to a higher layer.
Ya, Yn, Yh - Times to process and transmit an ack, nak, or local ack, respectively.
p - Probability of loss at a receiver; losses at different receivers are assumed to be independent events.
Number of local acks sent by receiver r per packet using a tree-based protocol.
Number of acks sent by a receiver r per packet using a unicast protocol.
L^H1 - Total number of local acks received from all receivers per packet.
Mr - Number of transmissions necessary for receiver r to successfully receive a packet.
M - Number of transmissions for all receivers to receive the packet correctly (for protocols A, N1 and N2).
M^H1, M^H2 - Number of transmissions for all receivers to receive the packet correctly for protocols H1 and H2.
X^w, Y^w - Processing time per packet at sender and receiver, respectively, in protocol w ∈ {A, N1, N2, H1, H2, R}.
H^H1, H^H2 - Processing time per packet at a group leader in tree-based and tree-NAPP protocols, respectively.
Processing time per packet at the token-site in ring-based protocols.
Λ^w_x - Throughput for protocol w ∈ {A, N1, N2, H1, H2, R}, where x is one of the source s, receiver (leaf) r, group leader h, or token-site t. No subscript denotes overall system throughput.
X#, Y# - Times to process the reception and transmission, respectively, of a periodic local ack.
Fig. 5. Notation
to all receivers using the H1 protocol. The processing requirement
for an arbitrary packet can be expressed as a sum
of costs:
(feeding in a new packet) + (transmissions) + (processing timeouts) + (receiving local acks),  (4)
where X f is the time to get a packet from a higher layer,
X_p(m) is the time taken on attempt m at successful transmission
of the packet, X t (m) is the time to process a timeout
interrupt for transmission attempt m, X h (i) is the time to
process local ack i, M H1 is the number of transmissions
that the source will have to make for this packet using the
H1 protocol, and L H1 is the number of local acks received
using the H1 protocol. Taking expectations, we obtain Eq. 5.
What we have derived so far is extremely similar to Eqs. 1
and 2 in the analysis by Pingali et al. [17, 18]. In fact, we
can use all of their analysis, with the understanding that B is
the size of the receiver subset from which the source collects
local acks. Therefore, the expected number of local acks
received at the sender is given by Eq. 6.
Substituting Eq. 6 into Eq. 5, we can rewrite the expected
cost at the source node as in Eq. 7.
Pingali et al. [17, 18] have shown that the expected number
of transmissions per packet in A, N1, and N2 equals
E[M] = Σ_{m=0}^{∞} [1 − (1 − p^m)^R].  (8)
Because in H1 the number of receivers R = B, the expected
number of transmissions per packet in the H1 protocol is
E[M^H1] = Σ_{m=0}^{∞} [1 − (1 − p^m)^B],  (9)
which can be simplified further [17, 18, 19] (Eq. 10).
Pingali et al. [17, 18] provide a bound on E[M] that we
apply to E[M^H1] in Eq. 11.
Using Eq. 11, we can bound Eq. 7 as in Eq. 12.
It then follows that, when p is a constant, E[X^H1] is O(B).
4.3.2 Leaf nodes
Let Y H1 denote the requirement on nodes that do not have to
forward packets (leaves). Notice that leaf nodes in the H1
protocol will process fewer retransmissions and thus send
fewer acknowledgements than receivers in the A protocol.
We can again use an analysis similar to the one by Pingali
et al. [17, 18] for receivers using a sender-initiated protocol.
(receiving transmissions)
(sending local acks)
where Y p (i) is the time it takes to process (re)transmission i,
Y h (i) is the time it takes to send local ack i, Y f is the time
to deliver a packet to a higher layer, and L^H1_h is the number
of local acks generated by this node h (i.e., the number
transmissions correctly received). Since each receiver is sent
transmissions with probability p that a packet will be
lost, we obtain
Taking expectations of Eq. 13 and substituting Eq. 14, we
have
Again, noting the bound of E[M H1 ] given in Eq. 11,
When p is treated as a constant, E[Y H1
4.3.3 Group leaders
To evaluate the processing requirement at a group leader, h,
we note that a node caught between the source and a node
with no children has two jobs: to receive and to retransmit
packets. Because it is convenient, and because a group leader
is both a sender and receiver, we will express the costs in
terms of X and Y . Our sum of costs is
(receiving transmissions)
(sending local acks)
(collecting local acks)
Just as in the case for the source node, L^H1 is the expected
number of local acks received from node h's children for
this packet, and L^H1_h is the number of local acks generated
by node h.
We can substitute Eqs. 6 and 14 into this sum of costs to obtain
The first two terms are equivalent to the processing requirements
of a leaf node. The last two are almost the cost for
a source node. Substituting and subtracting the difference
yields Eq. 20.
In other words, the cost on a group leader is the same as
a source and a leaf, without the cost of receiving the data
from higher layers and one less transmission (the original
one). Substituting Eqs. 12 and 16 into Eq. 20, we obtain the bound in Eq. 21.
When p is a constant, E[H^H1] is O(B), which is the
dominant term in the throughput analysis of the overall system.
4.3.4 Overall system analysis
Let the throughput at the sender, Λ^H1_s, be 1/E[X^H1]; at the
group leaders, Λ^H1_h, be 1/E[H^H1]; and at the leaf nodes, Λ^H1_r, be
1/E[Y^H1]. The throughput of the overall system is
Λ^H1 = min{Λ^H1_s, Λ^H1_h, Λ^H1_r}.  (22)
From Eqs. 12, 16, and 21 it follows that the system throughput satisfies Eq. 23.
If p is a constant, or as p → 0, we obtain Eq. 24.
Therefore, the maximum throughput of this protocol, as well
as the throughput with non-negligible packet loss, is independent
of the number of receivers. This is the only class
of reliable multicast protocols that exhibits such degree of
scalability with respect to the number of receivers.
4.4 Tree-based protocols with local nak avoidance
and periodic polling
To bound the overall system throughput in the generic Tree-
NAPP protocol, we repeat the method used for the tree-based
class; we first derive and bound the expected cost at the
source, group leaders, and leaves. As we did for the case
of tree-based protocols, we assume that there are enough
receivers to form a full tree at each level. We place a superscript
H2 on any variables relating to the generic Tree-NAPP
protocol.
4.4.1 Source node
We consider first X H2 , the processing costs required by the
source to successfully multicast an arbitrarily chosen packet
to all receivers using the H2 protocol. The processing requirement
for an arbitrary packet can be expressed as a sum
of costs:
(feeding in a new packet) + (transmissions) + (receiving local naks) + (receiving periodic local acks),
where X f is the time to get a packet from a higher layer,
X_p(i) is the time for (re)transmission attempt i, X n (m) is
the time for receiving local nak m from the receiver set,
X# is the amortized time to process the periodic local ack
associated with the current congestion window, and M H2 is
the number of transmission attempts the source will have to
make for this packet. Taking expectations, we obtain Eq. 28.
Using Eq. 11, the bound on E[M^H1], we can bound Eq. 28
as in Eq. 29.
It then follows that, when p is a constant, E[X^H2] is bounded by a constant independent of the number of receivers.
4.4.2 Leaf nodes
Let Y H2 denote the processing requirement on nodes that
do not have to forward packets (leaves). The sum of costs
can be expressed as
(receiving transmissions)
(sending periodic local acks)
(sending local naks)
(receiving local naks)
Let Y_p(i) be the time it takes to process (re)transmission i,
M_r be the number of transmissions required for the packet
to be received by receiver r, Y n (j) be the time it takes to
send local nak j, X_n(j) be the time it takes to receive
local nak j (from another receiver), Y t (k) be the time to
set timer k, Y f be the time to deliver a packet to a higher
layer, and Y # be the amortized cost of sending a periodic
local ack for a group of packets of which this packet is a
member. Taking expectations of Eq. 30, we obtain Eq. 31.
It follows from the distribution of M_r that [17, 18]
E[M_r] = 1/(1 − p).  (32)
Therefore, noting Eq. 32 and that Prob{M_r > 1} = p, we
derive from Eq. 31 the expected cost as given in Eq. 33.
Again, using the bound on E[M^H1] given in Eq. 11, we can
bound Eq. 33 as in Eq. 34.
When p is treated as a constant, E[Y^H2] is O(1).
4.4.3 Group leaders
The sum of costs for group leaders, which have the job of
both sender and receiver, is
(receiving transmissions)
(sending periodic local acks)
(receiving periodic local acks)
(receiving local naks)
(sending local naks)
(retransmissions to children)
Taking expectations and substituting Eq. 32, we obtain Eq. 36.
Similar to group leaders in the H1 protocol, the processing
cost at a group leader is the same as a source and a leaf,
without the cost of receiving the data from a higher layer
and one less transmission. Substituting Eq. 28 and Eq. 33
into Eq. 36 and subtracting the difference, the expected cost
can be expressed as in Eq. 37.
Therefore, Eq. 36 can be bounded as in Eq. 38.
When p is a constant, E[H^H2] is O(1). Therefore, all nodes
in the Tree-NAPP protocol have a constant amount of work
to do with regard to the number of receivers.
4.4.4 Overall system analysis
The overall system throughput for the H2 protocol is the
minimum throughput attainable at each type of node in the
tree, that is,
Λ^H2 = min{Λ^H2_s, Λ^H2_h, Λ^H2_r}.  (39)
From Eqs. 29, 34, and 38, it follows that the system throughput is bounded as in Eq. 40.
Accordingly, if either p is constant or as p → 0, we obtain
from Eq. 40 that
Λ^H2 = O(1).  (41)
Therefore, the maximum throughput of the Tree-NAPP pro-
tocol, as well as the throughput with non-negligible packet
loss, is independent of the number of receivers.
4.5 Ring-based protocols
In this section, we analyze the throughput of ring-based pro-
tocols, which we denote by a superscript R, using the same
assumptions as in Sects. 4.3 and 4.4. Because we are interested
in the maximum attainable throughput, we are assuming
a constant stream of packets, which means we can
ignore the overhead that occurs when there are no acks on
which to piggyback token-passing messages.
4.5.1 Source
Source nodes practice a special form of unicast with a roaming
token site. The sum of costs incurred is
(feeding in a new packet) + (transmissions) + (processing timeouts) + (processing acks),  (42)
where M_r is the number of transmissions required for the
packet to be received by the token site, and has a mean of
1/(1 − p). Let the number of acks from a receiver r (in this case
the token site) sent unicast, i.e., the number of packets correctly
received at r, be as defined in Fig. 5. This number
is always 1; accordingly, its expectation equals 1 (Eq. 43).
Taking expectations of Eq. 42, we obtain Eq. 44.
If we again assume constant costs for all operations, it can
be shown that E[X^R] is bounded as in Eq. 45,
which, when p is a constant, is O(1) with regard to the size
of the receiver set.
4.5.2 Token site
The current token site has the following costs (note that both
TRP and RMP specify that retransmissions are sent unicast
to the other R − 1 receivers):
(multicasting the ack/token) + (processing naks) + (unicasting retransmissions),  (46)
where L R is the number of naks received at the token site
when using a ring protocol. To derive L R , consider M r , the
number of transmissions necessary for receiver r to successfully
receive a packet. M r has an expected value of 1/(1-p),
and the last transmission is not nak'ed. Because there are
(R − 1) other receivers sending naks to the token site, we
obtain
E[L^R] = (R − 1) p / (1 − p).  (47)
Therefore, the mean processing time at the token site is given by Eq. 48,
whose dominant term is proportional to (R − 1)p/(1 − p).
The expected cost at the token site can thus be bounded by a function
that grows linearly with the number of receivers (Eq. 49).
When p is a constant, this cost is O(R).
4.5.3 Receivers
Receivers practice a receiver-initiated protocol with the current
token site. We assume there is only one packet for the
ack, token, and time stamp multicast from the token site
per data packet. The cost associated with an arbitrary packet
is therefore
(receiving the ack/token/time stamp) + (receiving the first transmission) + (delivering the packet to a higher layer) + (receiving retransmissions) + (sending naks) + (processing timer interrupts).  (50)
The first term in the above equation is the cost of receiving
the ack/token/time stamp packet from the token site; the
second is the cost of receiving the first transmission sent
from the sender, assuming it is received error free; the third
is the cost of delivering an error-free transmission to a higher
layer; the fourth is the cost of receiving the retransmissions
from the token site, assuming that the first failed; and the
last two terms consider that a nak is sent only if the first
transmission attempt fails and that an interrupt occurs only
if a nak was sent. Taking expectations, we obtain Eq. 51.
As shown previously [17, 18], the expected number of naks sent by a
receiver is p/(1 − p) (Eq. 52).
Substituting Eqs. 43, 52, and 32 into Eq. 51, we obtain Eq. 53.
Assuming all operations have constant costs, it can be shown
that E[Y^R] is bounded as in Eq. 54, which is O(1)
with regard to the size of the receiver set. If we consider p
as a constant, then E[Y^R] remains O(1).
4.5.4 Overall system analysis
The overall system throughput of R, the generic token ring
protocol, is equal to the minimum attainable throughput at
each of its parts:
Λ^R = min{Λ^R_s, Λ^R_t, Λ^R_r}.  (55)
From Eqs. 45, 49 and 54 it follows that, if p is a constant
or as p → 0, the overall throughput is limited by the token-site cost,
whose dominant term grows as (R − 1)p/(1 − p) (Eq. 56).
5 Numerical results
To compare the relative performance of the various classes
of protocols, all mean processing times are set equal to 1,
except for the periodic costs X # and Y # which are set to
0.1.
Figure
6 compares the relative throughputs of the protocols
A, N1, N2, H1, H2, and R as defined in Sect. 2.
The graph represents the inverse of Eqs. 19, 36, and 48, re-
spectively, which are the throughputs for the tree-based, tree-
NAPP, and ring-based protocols, as well as the inverse of the
throughput equations derived previously [17, 18] for sender-
and receiver-initiated protocols. The top, middle and bottom
graphs correspond to increasing probabilities of packet loss,
10%, and 25%, respectively. Exact values of E[M H1
were calculated using a finite version of Eq. 9; Exact values
of E[M ] were similarly calculated [22].
The performance of nak avoidance protocols, especially
tree-NAPP protocols, is clearly superior. However, our assumptions
place these two subclasses at an advantage over
their base classes. First, we assume that no acknowledgements
are lost or are received in error. The effectiveness
of nak avoidance is dependent on the probability of naks
reaching all receivers, and thus, without our assumption, the
effectiveness of nak avoidance decreases as the number of
receivers involved increases. Accordingly, tree-NAPP protocols
have an advantage that is limited by the branching
factor of the ack tree, while RINA protocols have an advantage
that increases with the size of the entire receiver
set. Second, we assume that the timers used for nak avoidance
are set perfectly. In reality, the messages used to set
timers would be subject to end-to-end delays that exhibit no
regularity and can become arbitrarily large.
Fig. 6. The throughput graph from the exact equations for each protocol, plotted against the number of receivers. The probability of packet loss increases from the top panel to the bottom panel; the branching factor for trees is set at 10
We conjecture that the relative performance of the nak avoidance
subclasses would actually lie closer to that of their respective
base classes, depending on the effectiveness of the nak
avoidance scheme; in other words, the curves shown are
upper bounds of nak avoidance performance. Our results
show that, when considering only the base classes (since
not one has an advantage over another), the tree-based class
performs better than all the other classes. When considering
only the subclasses that use nak avoidance, tree-NAPP
protocols perform better than RINA protocols, even though
our model provides an unfair advantage to RINA protocols.
It is the hierarchical organization of the receiver
set in tree-based protocols that guarantees scalability and improves
performance over other protocols. Using nak avoidance
on a small scale increases performance even further.
In addition, if nak avoidance failed for a tree-NAPP protocol
(e.g., due to incorrect setting of timers), the performance
would still be independent of the size of the receiver set.
RINA protocols do not have this property. Failure of the
nak avoidance for RINA protocols would result in unscalable
performance like that of a receiver-initiated protocol,
which degrades quickly with increasing packet loss.
Any increase in processor speed, or a smaller branching
factor would also increase throughput for all tree-based pro-
tocols. However, for the same number of receivers, a smaller
branching factor implies that some retransmissions must traverse
a larger number of tree-hops towards receivers expecting
them further down the tree. For example, if a packet is
lost immediately at the source, the retransmission is multi-cast
only to its children and all other nodes in the tree must
wait until the retransmission trickles down the tree struc-
ture. This poses a latency problem that can be addressed
by taking advantage of the dependencies in the underlying
multicast routing tree. Retransmissions could be multicast
only toward all receivers attached to routers on the subtree
of the router attached to the receiver which has requested
the missing data. The number of tree-hops from the receiver
to the source is also a factor in how quickly the source can
release data from memory in the presence of node failures,
as discussed by Levine et al. [12].
Fig. 7. Number of supportable receivers for each protocol. The probability of packet loss increases across the panels; the branching factor for trees is set at 10
Figure
7 shows the number of supportable receivers by
each of the different classes, relative to processor speed
requirements. This number is obtained by normalizing all
classes to a baseline processor, as described by Pingali et
al. [17, 18]. The baseline uses protocol A and can support exactly
one receiver; the speed of the processor that can support at most R
receivers under each protocol is normalized so that the protocol-A
processor supporting a single receiver has unit speed. The baseline
cost is given by Eq. 58 [17, 18].
Using Eqs. 18, 36, 48, and 58, we can derive the corresponding
processor-speed requirements for tree-based, tree-NAPP, and ring-based protocols,
respectively (Eqs. 59-61); the ring-based requirement contains a term
proportional to (R − 1)p.
The number of supportable receivers derived for sender- and
receiver-initiated protocols is as shown previously [17, 18] (Eqs. 62-64).
From Figs. 6 and 7, it is clear that tree-based protocols
can support any number of receivers for the same processor
speed bound at each node, and that tree-NAPP protocols attain
the highest maximum throughput. It is also important to
note that the maximum throughput that RINA protocols can
attain becomes more and more insensitive to the size of the
receiver set as the probability of error decreases. Because
we have assumed that a single nak reaches the source, that
naks are never lost, and that session messages incur no processing
load, we implicitly assume the optimum behavior of
RINA protocols. The simulation results reported for SRM
by Floyd et al. [8] agree with our model and result from
assuming no nak losses and a single packet loss in the ex-
periments. Figure 7 shows that tree-NAPP protocols can be
made to perform better than the best possible RINA protocol
by limiting the size of the local groups.
Because of the unicast nature of retransmissions in ring-based
protocols, these protocols approach sender-initiated
protocols; this indicates that allowing only multicast retransmissions
would improve performance greatly.
6 Conclusions
We have compared and analyzed the four known classes of
reliable multicast protocols. Of course, our model constitutes
only a crude approximation of the actual behavior of reliable
multicast protocols. In the Internet, an ack or a nak
is simply another packet, and the probability of an ack or
nak being lost or received in error is much the same as
the error probability of a data packet; our assumption that acks and
naks are never lost therefore gives protocols that use nak avoidance
an advantage over other classes. Accordingly, it is more reasonable to compare them
separately: our results show that tree-based protocols without
nak avoidance perform better than other classes that do not
use nak avoidance, and that tree-NAPP protocols perform
better than RINA protocols, even though RINA protocols
have an artificial advantage over every other class. We conjecture
that, once the effects of ack or nak failure, and the
correlation of failures along the underlying multicast routing
trees are accounted for, the same relative performance
of protocols will be observed.
The results are summarized in Table 1. It is already
known that sender-initiated protocols are not scalable because
the source must account for every receiver listening.
Receiver-initiated protocols are far more scalable, particularly when
nak avoidance schemes are used to avoid overloading the
source with retransmission requests. However, because of
the unbounded-memory requirement, this protocol class can
only be used efficiently with application-layer support, and
only for a limited set of applications. Furthermore, to set the
timers needed for nak avoidance, existing instantiations of
RINA protocols require all group members to transmit session
messages periodically, which makes them unscalable.
Ring-based protocols were designed for atomic and total ordering
of packets. TRP and RMP limit their throughput by
requiring retransmissions to be unicast. It would be possible
to reduce the cost bound to O(ln R), assuming p to be
a constant, if the nak avoidance techniques presented by
Ramakrishnan and Jain [19] were used.
Our analysis shows that ack trees are a good answer
to the scalability problem for reliable multicasting. Practical
implementations of tree-based protocols maintain the
anonymity of the receiver set, and only the tree-based and
tree-NAPP classes have throughputs that are constant with
respect to the number of receivers, even when the probability
of packet loss is not negligible (which would preclude
accurate setting of nak avoidance timers). Because
tree-based protocols delegate responsibility for retransmission
to receivers and because they employ techniques applicable
to either sender- or receiver-initiated protocols within
local groups (i.e., a node and its children in the tree) of the
ack tree only, any mechanism that can be used with all the
receivers of a session in a receiver-initiated protocol can be
adopted in a tree-based protocol, with the added benefit that
the throughput and number of supportable receivers is completely
independent of the size of the receiver set, regardless
of the likelihood with which packets, acks, and naks are
received correctly.
On the other hand, while the scope of naks and retransmissions
can be reduced without establishing a structure in
the receiver set [8], limiting the scope of the session messages
needed to set nak avoidance timers and to contain the
scope of naks and retransmissions requires the aggregation
of these messages. This leads to organizing receivers into
local groups that must aggregate session messages sent to
the source (and local groups). Doing this efficiently, how-
ever, leads to a hierarchical structure of local groups much
like what tree-based protocols require. Hence, it appears that
organizing the receivers hierarchically (in ack trees or oth-
erwise) is a necessity for the scaling of a reliable multicast
protocol.
--R
Core based trees (CBT): An architecture for scalable inter-domain multicast routing
Data Networks
Reliable broadcast protocols.
NETBLT: A high-throughput transport protocol
Multicast routing in a datagram internetwork.
Multicast routing in datagram internetworks and extended lans.
An architecture for wide-area multicast routing
A reliable multicast framework for light-weight sessions and application level framing
Transmission control protocol.
RMTP: A reliable multicast transport protocol.
Multicast transport protocols for high-speed networks
Reliable multicast transport protocol (RMTP).
Protocol and Real-Time Scheduling Issues for Multi-media Applications
A comparison of sender-initiated and receiver-initiated reliable multicast protocols
A negative acknowledgement with periodic polling protocol for multicast over lan.
XTP: The Xpress Transfer Protocol.
A reliable dissemination protocol for interactive collaborative applications.
--TR
Data networks
NETBLT: a high throughput transport protocol
Multicast routing in datagram internetworks and extended LANs
XTP: the Xpress Transfer Protocol
Multicast routing in a datagram internetwork
Core based trees (CBT)
A comparison of sender-initiated and receiver-initiated reliable multicast protocols
An architecture for wide-area multicast routing
A reliable dissemination protocol for interactive collaborative applications
Log-based receiver-reliable multicast for distributed interactive simulation
A reliable multicast framework for light-weight sessions and application level framing
Protocol and real-time scheduling issues for multimedia applications
The case for reliable concurrent multicasting using shared ACK trees
Reliable broadcast protocols
A High Performance Totally Ordered Multicast Protocol
The Ordered Core Based Tree Protocol
A Comparison of Known Classes of Reliable Multicast Protocols
--CTR
V. Ramakrishna , Max Robinson , Kevin Eustice , Peter Reiher, An Active Self-Optimizing Multiplayer Gaming Architecture, Cluster Computing, v.9 n.2, p.201-215, April 2006
Christian Maihfer, A bandwidth analysis of reliable multicast transport protocols, Proceedings of NGC 2000 on Networked group communication, p.15-26, November 08-10, 2000, Palo Alto, California, United States
Shuju Wu , Sujata Banerjee , Xiaobing Hou, A Comparison of Multicast Feedback Control Mechanisms, Proceedings of the 38th annual Symposium on Simulation, p.80-87, April 04-06, 2005
Shuju Wu , Sujata Banerjee , Xiaobing Hou, Performance Evaluation and Comparison of Multicast Feedback Control Mechanisms, Simulation, v.82 n.5, p.345-362, May 2006
Brian Neil Levine , Sanjoy Paul , J. J. Garcia-Luna-Aceves, Organizing multicast receivers deterministically by packet-loss correlation, Proceedings of the sixth ACM international conference on Multimedia, p.201-210, September 13-16, 1998, Bristol, United Kingdom
Maurice Herlihy , Srikanta Tirthapura , Roger Wattenhofer, Ordered Multicast and Distributed Swap, ACM SIGOPS Operating Systems Review, v.35 n.1, p.85-96, January 1, 2001
Athina P. Markopoulou , Fouad A. Tobagi, Hierarchical reliable multicast: performance analysis and placement of proxies, Proceedings of NGC 2000 on Networked group communication, p.27-35, November 08-10, 2000, Palo Alto, California, United States
Ryan G. Lane , Scott Daniels , Xin Yuan, An empirical study of reliable multicast protocols over Ethernet-connected networks, Performance Evaluation, v.64 n.3, p.210-228, March, 2007
Suman Banerjee , Seungjoon Lee , Ryan Braud , Bobby Bhattacharjee , Aravind Srinivasan, Scalable resilient media streaming, Proceedings of the 14th international workshop on Network and operating systems support for digital audio and video, June 16-18, 2004, Cork, Ireland
Suman Banerjee , Seungjoon Lee , Bobby Bhattacharjee , Aravind Srinivasan, Resilient multicast using overlays, ACM SIGMETRICS Performance Evaluation Review, v.31 n.1, June
Philip K. McKinley , Ravi T. Rao , Robin F. Wright, H-RMC: a hybrid reliable multicast protocol for the Linux kernel, Proceedings of the 1999 ACM/IEEE conference on Supercomputing (CDROM), p.8-es, November 14-19, 1999, Portland, Oregon, United States
Brian Neil Levine , Sanjoy Paul , J. J. Garcia-Luna-Aceves, Organizing multicast receivers deterministically by packet-loss correlation, Multimedia Systems, v.9 n.1, p.3-14, July
Suman Banerjee , Seungjoon Lee , Bobby Bhattacharjee , Aravind Srinivasan, Resilient multicast using overlays, IEEE/ACM Transactions on Networking (TON), v.14 n.2, p.237-248, April 2006
Carlos A. S. Oliveira , Panos M. Pardalos, A survey of combinatorial optimization problems in multicast routing, Computers and Operations Research, v.32 n.8, p.1953-1981, August 2005 | ACK implosion;multicast transport protocols;reliable multicast;tree-based protocols |
287532 | Using the Matrix Sign Function to Compute Invariant Subspaces. | The matrix sign function has several applications in system theory and matrix computations. However, the numerical behavior of the matrix sign function, and its associated divide-and-conquer algorithm for computing invariant subspaces, are still not completely understood. In this paper, we present a new perturbation theory for the matrix sign function, the conditioning of its computation, the numerical stability of the divide-and-conquer algorithm, and iterative refinement schemes. Numerical examples are also presented. An extension of the matrix-sign-function-based algorithm to compute left and right deflating subspaces for a regular pair of matrices is also described. | Introduction
. Since the matrix sign function was introduced in the early 1970s,
it has been the subject of numerous studies and used in many applications. For
example, see [30, 31, 11, 26, 23] and references therein. Our main interest here is
to use the matrix sign function to build parallel algorithms for computing invariant
subspaces of nonsymmetric matrices, as well as their associated eigenvalues. It is
a challenge to design a parallel algorithm for the nonsymmetric eigenproblem that
uses coarse grain parallelism effectively, scales for larger problems on larger machines,
does not waste time dealing with the parts of the spectrum in which the user is not
interested, and deals with highly nonnormal matrices and strongly clustered spectra.
In the work of [2], after reviewing the existing approaches, we proposed a design of a
parallel nonsymmetric eigenroutine toolbox, which includes the basic building blocks
(such as LU factorization, matrix inversion and the matrix sign function), standard
eigensolver routines (such as the QR algorithm) and new algorithms (such as spectral
divide-and-conquer using the sign function). We discussed how these tools could be
used in different combinations on different problems and architectures, for extracting
all or some of the eigenvalues of a nonsymmetric matrix, and/or their corresponding
invariant subspaces. Rather than using "black box" eigenroutines such as provided
by EISPACK [32, 21] and LAPACK [1], we expect the toolbox approach to allow us
more flexibility in developing efficient problem-oriented eigenproblem solvers on high
performance machines, especially on parallel distributed memory machines.
However, the numerical accuracy and stability of the matrix sign function and
divide-and-conquer algorithms based on it are poorly understood. In this paper, we
will address these issues. Much of this work also appears in [3].
Let us first restate some of the basic definitions and ideas to establish notation. The
matrix sign function of a matrix A is defined as follows [30]: let
A = T diag(J_+, J_-) T^{-1}
be the Jordan canonical form of a matrix A ∈ C^{n×n}, where the eigenvalues of J_+ lie
in the open right half plane (C_+) and those of J_- lie in the open left half plane (C_-).
Then the matrix sign function of A is
sign(A) = T diag(I, -I) T^{-1}.
We assume that no eigenvalue of A lies on the imaginary axis; otherwise, sign(A)
is not defined. It is easy to show that the spectral projection corresponding to the
eigenvalues of A in the open right and left half planes are P
respectively. Let the leading columns of an orthogonal matrix Q span the range space
of P_+ (for example, Q may be computed by the rank-revealing QR decomposition of P_+). Then we obtain
the spectral decomposition
Q^H A Q = [ A_{11}  A_{12} ; 0  A_{22} ],   (1)
where \lambda(A_{11}) are the eigenvalues of A in C_+, and \lambda(A_{22}) are the eigenvalues of A
in C_-. The algorithm proceeds in a divide-and-conquer fashion, by computing the
eigenvalues of A_{11} and A_{22}.
Rather than using the Jordan canonical form to compute sign(A), it can be shown
that sign(A) is the limit of the following Newton iteration
A_{k+1} = (A_k + A_k^{-1}) / 2,   A_0 = A.   (2)
The iteration is globally and ultimately quadratic convergent. There exist different
scaling schemes to speedup the convergence of the iteration, and make it more suitable
for parallel computation. By computing the matrix sign function of a Möbius
transformation of A, the spectrum can be divided along arbitrary lines and circles,
rather than just along the imaginary axis. See the report [2] and the references therein
for more details.
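As a concrete illustration of iteration (2), the following Python sketch runs the (optionally scaled) Newton iteration. It is not the authors' implementation; the determinantal scaling used here is only one of the acceleration schemes alluded to above, chosen for the sketch.

    import numpy as np

    def sign_newton(A, tol=1e-12, max_iter=70, scaled=True):
        """Newton iteration A_{k+1} = (A_k + A_k^{-1}) / 2 for sign(A).

        Sketch only: 'scaled' applies determinantal scaling, one common
        acceleration scheme; the authors' exact choices may differ."""
        X = np.array(A, dtype=float)
        n = X.shape[0]
        for _ in range(max_iter):
            Xinv = np.linalg.inv(X)
            c = np.exp(-np.linalg.slogdet(X)[1] / n) if scaled else 1.0
            X_new = 0.5 * (c * X + Xinv / c)
            if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X, 1):
                return X_new
            X = X_new
        return X

The spectral projector for the right-half-plane eigenvalues is then (I + sign(A))/2, and a rank-revealing QR factorization of it yields the orthogonal matrix Q used in (1).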
Unfortunately, in finite precision arithmetic, the ill conditioning of a matrix A k
with respect to inversion and rounding errors, may destroy the convergence of the
Newton iteration (2), or cause convergence to the wrong answer. Consequently, the
left bottom corner block of the matrix Q^T A Q in (1) may be much larger than O(u)\|A\|,
where u denotes machine precision. This means that it is not numerically stable to
approximate the eigenvalues of A by the eigenvalues of A 11 and A 22 , as we would like.
In this paper, we will first study the perturbation theory of the matrix sign func-
tion, its conditioning, and the numerical stability of the overall divide-and-conquer
algorithm based on the matrix-sign function. We realize that it is very difficult to give
a complete and clear analysis. We only have a partial understanding of when we can
expect the Newton iteration to converge, and how accurate it is. In a coarse analysis,
we can also bound the condition numbers of intermediate matrices in the Newton iter-
ation. Artificial and possibly very pathological test matrices are constructed to verify
our theoretical analysis. Besides these artificial tests, we also test a large number of
eigenvalue problems of random matrices, and a few eigenvalue problems from appli-
cations, such as electrical power system analysis, numerical simulation of chemical
reactions, and areodynamics stability analysis. Through these examples, we conclude
that the most bounds for numerical sensitivity and stability of matrix sign function
computation and its based algorithms are reachable for some very pathological cases,
but they are often very pessimistic. The worst cases happen rarely.
In addition, we discuss iterative refinement of an approximate invariant subspace,
and outline an extension of the matrix sign function based algorithms to compute both
left and right deflating subspaces for a regular matrix pencil A \Gamma -B.
The rest of this paper is organized as follows: Section 2 presents a new
perturbation bound for the matrix sign function. Section 3 discusses the numerical
conditioning of the matrix sign function. The backward error analysis of computed
invariant subspace and remarks on the matrix sign function based algorithm versus the
QR algorithm are presented in section 4. Section 5 presents some numerical examples
for the analysis of sections 2, 3 and 4. Section 6 describes the iterative refinement
scheme to improve an approximate invariant subspace. Section 7 outlines an extension
of the matrix sign function based algorithms for the generalized eigenvalue problem.
Concluding remarks are presented in section 8.
2. A perturbation bound for the matrix sign function. When a matrix A
has eigenvalues on the imaginary axis, its matrix sign function is not defined. In other
words, the set of ill-posed problems for the matrix sign function, is the set of matrices
with at least one pure-imaginary eigenvalue. Computationally, we have observed that
when there are eigenvalues of A close to the imaginary axis, the Newton iteration
and its variants converge very slowly, and may even converge to a wrong answer. Moreover,
even when the iteration converges, the error in the computed matrix sign function
could be too large to use. It is desirable to have a perturbation analysis of the matrix
sign function related to the distance from A to the nearest ill-posed problem.
Perturbation theory and condition number estimation of the matrix sign function
are discussed in [25, 23, 29]. However, none of the existing error bounds explicitly
reveals the relationship between the sensitivity of the matrix sign function and the
distance to the nearest ill-posed problem. In this section, we will derive a new perturbation
bound which explicitly reveals such a relationship. We will denote the set of all eigenvalues
of A with positive real part by \lambda_+(A), i.e., \lambda_+(A) = \{ \lambda \in \lambda(A) : Re(\lambda) > 0 \}, and \sigma_{min}(A)
denotes the smallest singular value of A. In addition, we recall the well-known
inequality
is the matrix 2-norm.
Theorem 2.1. Suppose A has no pure imaginary and zero eigenvalues,
is a perturbation of A and ffl j kffiAk. Let
Then
3:
Furthermore, if
then
Fig. 1. The semi-circle \Gamma of radius r in the right half plane.
Proof. We only prove the bound (7). The bound (5) can be proved by using a
similar technique. Following Roberts [30] (or Kato [24]), the matrix sign function can
also be defined using the Cauchy integral representation
sign(A) = 2P_+ - I,   P_+ = (1 / (2 \pi i)) \int_\Gamma (zI - A)^{-1} dz,   (8)
where \Gamma is any simple closed curve with positive direction enclosing \lambda_+(A), and P_+ is
the spectral projector for \lambda_+(A). Here, without loss of generality, we take \Gamma to be a
semi-circle with radius r (see Figure 1). From the definition
(8) of sign(A), it is seen that to study the stability of the matrix sign function of A
to the perturbation ffi A, it is sufficient to just study the sensitivity of the projection
be the projection corresponding to -+ A), from the
condition (6), no eigenvalues of A are perturbed across or on the pure imaginary axis,
and the semi-circle \Gamma also encloses -+ Therefore we have
Z
Z r
\Gammar
Z -=2
\Gamma-=2
[(re
where the first integral, denoted I 1 , is the integral over the straight line of the semi-circle
\Gamma, the second integral, denoted I 2 , is the integral over the curved part of the
semi-circle \Gamma. Now, by taking the spectral norm of the first integral term, and noting
the definition of !, the condition (6) and the inequality (3), we have
Z r
\Gammar
Z r
\Gammar
Z r
\Gammar
Z r
\Gammar
By taking the spectral norm of the second integral term I 2 , we have
Z -=2
\Gamma-=2
k(re
Z -=2
\Gamma-=2
re i'
re i'
r
r
where the third inequality follows from (3) and the fourth from the choice of the
radius r of the semi-circle \Gamma. The desired bound (7) follows from the bounds on kI 1 k
and kI 2 k and the identity
A few remarks are in order:
1. In the language of pseudospectra [35], the condition (6) means that the kffiAk-
pseudospectra of A do not cross the pure imaginary axis.
2. From the perturbation bound (7), we see that the stability of the matrix sign
function to the perturbation requires not only the kffiAk-pseudospectra of the
A to be bounded away from the pure imaginary axis, but also
A to
be small (recall that dA is the distance from A to the nearest matrix with a
pure-imaginary eigenvalue).
3. It is natural to take
A as the condition number of the matrix sign
function. Algorithms for computing dA and related problems can be found
in [14, 9, 8, 12].
4. The bound (7) is similar to the bound of the norm of the Fréchet derivative
of the matrix sign function of A at X given by Roberts [30]:
is the length of the closed contour \Gamma.
Recently, an asymptotic perturbation bound of sign(A) was given by Byers, He
and Mehrmann [13]. They show that to first order in ffi A,
kffiAk;
where A is assumed to have the form of (1), kffiAk is sufficiently small and
the separation of the matrices A 11 and A 22
[33].\Omega is the Kronecker product. Comparing
the bounds (7) and (9), we note first that the bound (7) is a global bound while (9)
is an asymptotic bound. Second, the assumption (6) for the bound (7) has a simple
geometric interpretation (see Remark 2 above). It is unspecified how to interpret
the assumption of sufficiently small kffiAk for the bound (9).
3. Conditioning of matrix sign function computation. In [2], we point out
that it may be much more efficient to compute S = sign(A) to half machine precision
only, i.e., to compute S with an absolute error bounded by u 1=2 kSk. To avoid ill
conditioning in the Newton iteration and achieve the half machine precision, we believe
that the matrix A must have condition number less than u \Gamma1=2 . If A is ill conditioned,
say having singular values less than u 1=2 kAk, we need to use a preprocessing step to
deflate small singular values by a unitary similarity transformation, and obtain a
submatrix having condition number less than u \Gamma1=2 , and then compute the matrix
sign function of this submatrix. Such a deflation procedure may be also needed for
the intermediate matrices in the Newton iteration in the worst case.
We now look more closely at the situation of near convergence of the Newton
iteration, and relate the error to the distance to the nearest ill-posed problem [18].
As before, the ill-posed problems are those matrices with pure-imaginary eigenvalues.
Without loss of generality, let us assume A is of the form
A = [ A_{11}  A_{12} ; 0  A_{22} ],   (11)
where \lambda(A_{11}) = \lambda_+(A) and \lambda(A_{22}) = \lambda_-(A). Otherwise, for any matrix B, by the Schur
decomposition, we can write B = Q A Q^H, where A has the above form, and then sign(B) = Q sign(A) Q^H. Let
R be the solution of the Sylvester equation
A_{11} R - R A_{22} + A_{12} = 0,   (12)
which must exist and be unique since A_{11} and A_{22} have no common eigenvalues. Then
it is known that the spectral projector P corresponding to the eigenvalues of A_{11} is
P = [ I  -R ; 0  0 ],
and \|P\| = \sqrt{1 + \|R\|^2}. The following lemma relates R and the norm of the projection
P to sign(A) and its condition number.
Lemma 3.1. Let A and R be as above. Let ae
1.
I \Gamma2R
2. ae, and therefore
Proof.
1. Let
' I R
I
. It is easy to verify that if R satisfies (12), then
I \Gamma2R
2. Using the singular value decomposition (SVD) of R: URV
one can reduce computing the SVD of S to computing the SVD of
I \Gamma2\Sigma
which, by permutations, is equivalent to computing the SVDs of the 2 by 2
matrices
. This is in turn a simple calculation.
We note that for the solution R of the Sylvester equation (12), we have
\|R\| \le \|A_{12}\| / sep(A_{11}, A_{22}),
where the equality is attainable [33]. From Lemma 3.1, we see that the conditioning of
the matrix sign function computation is closely related to the norm of the projection P,
and therefore to the norm of R, which in turn is closely related to the quantity sep(A_{11}, A_{22}).
Specifically, when \|R\| is large,
\|S\| \approx 2\|R\| \le 2\|A_{12}\| / sep(A_{11}, A_{22})
and
\kappa(S) \approx 4\|R\|^2.
If \|A_{12}\| is moderate, an ill conditioned matrix sign function means large \|R\|, which
in turn means small sep(A_{11}, A_{22}). Following Stewart [33], it means that it is harder
to separate the invariant subspaces corresponding to the matrices A_{11} and A_{22}.
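The relationship between R, the projector P, and the conditioning of S can be checked numerically. The sketch below is not from the paper; it assumes the sign convention A_{11} R - R A_{22} + A_{12} = 0 used above, solves the Sylvester equation with scipy, compares the block formula for sign(A) against scipy.linalg.signm, and shows that \|P\| and \kappa(S) grow together as A_{12} (and hence R) is made larger.

    import numpy as np
    from scipy.linalg import signm, solve_sylvester

    rng = np.random.default_rng(0)
    k = 4
    A11 = np.triu(rng.standard_normal((k, k)), 1) + np.diag(rng.uniform(0.5, 2.0, k))
    A22 = np.triu(rng.standard_normal((k, k)), 1) - np.diag(rng.uniform(0.5, 2.0, k))
    A12 = 5.0 * rng.standard_normal((k, k))            # larger A12 -> larger R
    A = np.block([[A11, A12], [np.zeros((k, k)), A22]])

    # A11 R - R A22 = -A12, i.e. A11 R + R (-A22) = -A12 for solve_sylvester.
    R = solve_sylvester(A11, -A22, -A12)
    S_block = np.block([[np.eye(k), -2.0 * R], [np.zeros((k, k)), -np.eye(k)]])
    S = signm(A)

    P = 0.5 * (np.eye(2 * k) + S)                       # spectral projector for C_+
    print(np.linalg.norm(S - S_block) / np.linalg.norm(S))   # agreement with Lemma 3.1
    print(np.linalg.norm(R, 2), np.linalg.norm(P, 2), np.linalg.cond(S))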
The following theorem discusses the conditioning of the eigenvalues of sign(A),
and the distance from sign(A) to the nearest ill-posed problem.
Theorem 3.2. Let A and R be as in Lemma 3.1. Then
1. Let \delta S have the property that S + \delta S has a pure imaginary eigenvalue. Then
\|\delta S\| \ge 1/\|S\|, and a \delta S with \|\delta S\| = 1/\|S\| may be chosen. In the language of
[35], the \epsilon-pseudospectrum of S excludes the imaginary axis for \epsilon < 1/\|S\|,
and intersects it for \epsilon \ge 1/\|S\|.
2. The condition number of the eigenvalues of S is kPk. In other words, perturbing
S by a small ffi S perturbs the eigenvalues by at most kPk kffiSk+O(kffiSk 2 ).
3. If A is close to S and \kappa(S) < u^{-1/2}, then the Newton iteration (2) in floating
point arithmetic will compute S with an absolute error bounded by u^{1/2}\|S\|.
Proof.
1. The problem is to minimize oe min (S \Gamma iiI) over all real i, where oe min is
the smallest singular value of S \Gamma iiI. Using the same unitary similarity
transformation and permutation as in the part 1 of Lemma 3.1, we see that
this is equivalent to minimizing
oe min
over all oe j and real i. This is a straightforward calculation, with the minimum
being obtained for
2. The condition number of a semi-simple eigenvalue is equal to the secant of the
acute angle between its left and right eigenvectors [24, 17]. Using the above
reduction to 2 by 2 subproblems (this unitary transformation of coordinates
does not changes angles between vectors), this is again a straightforward
calculation.
3. Since the absolute error ffi S in computing 1
essentially by the error in computing S
For the Newton iteration to converge, ffi S cannot be so large that S + ffi S has
pure imaginary eigenvalues; from the result 1, this means kffiSk
Therefore, if u iteration (2)
will compute S with an absolute error bounded by u 1=2 kSk.
It is naturally desired to have an analysis from which we know the conditioning
of the intermediate matrices A k in the Newton iteration. It will help us in addressing
the question of how to detect possible appearance of pure imaginary eigenvalues and
to modify or terminate the iteration early if necessary. Unfortunately, it is difficult to
make a clean analysis far from convergence, because we are unable to relate the error
in each step of the iteration to the conditioning of the problem. We can do a coarse
analysis, however, in the case that the matrix is diagonalizable.
Theorem 3.3. Let A have eigenvalues - j (none pure imaginary or zero), right
eigenvectors x j and left eigenvectors y j , normalized so
Let A k be the matrix obtained at the kth Newton iteration (2). Then for all k,
oe
oe min
Proof. We may express the eigen-decomposition of A as
We
wish to bound j- j;k j from above and below for all k. This is easily done by noting
that
so that all - j;k lie inside a disk defined by
This disk is symmetric about the real axis, so its points of minimum and maximum
absolute value are both real. Solving for these extreme points yields
This means
oe
Similarly
oe
which proves the bound (15).
As we know, the error introduced at each step of the iteration is mainly caused
by the computation of matrix inverse, which is approximately bounded in norm by
when oe - 1. If uoe \Gamma3 ! oe min error cannot make an intermediate
A k become singular and so cause the iteration to fail. Our analysis shows that if
uoe \Gamma3 ! oe, or oe ? u 1=4 , then the iteration will not fail. This very coarse bound
generalizes result 3 of Theorem 3.2.
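The growth of \kappa(A_k) along the iteration can be observed directly. The sketch below is my own experiment, not the authors': it builds a symmetric matrix whose eigenvalues have distance \delta to the imaginary axis and records the condition numbers of the Newton iterates. The coarse bound above scales like 1/\delta^2, while the observed peak in this symmetric example is closer to 1/\delta, in line with the referee's prediction recorded in the footnote below.

    import numpy as np

    def newton_cond_history(A, iters=12):
        """Condition numbers of the iterates of the unscaled Newton iteration (2)."""
        X = np.array(A, dtype=float)
        hist = []
        for _ in range(iters):
            hist.append(np.linalg.cond(X))
            X = 0.5 * (X + np.linalg.inv(X))
        return hist

    delta = 1e-3                      # distance of the spectrum to the imaginary axis
    lam = np.array([delta, -delta, 1.0, -1.0, 2.0])
    Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((5, 5)))
    A = Q @ np.diag(lam) @ Q.T        # symmetric test matrix
    print(max(newton_cond_history(A)), 1.0 / delta, 1.0 / delta ** 2)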
We note that if A is symmetric, by the orthonormal eigendecomposition of
then from Theorem 3.3, we have
Therefore,
if
It shows that when A is symmetric, the condition number of the intermediate matrices
A k , which affects the numerical stability of the Newton iteration, is essentially
determined by the square of the distance of the eigenvalues to the imaginary axis. 1
When A is nonsymmetric and diagonalizable, from Theorem 3.3, we also see
that the condition number of the intermediate matrices A k is related to the norms
of the spectral projectors
corresponding to the eigenvalues - j
and the quantities of the form
~
by a simple algebraic manipulation,
we have
A referee predicted that in the symmetric case, the condition number of A_k might be determined
only by the distance, not the square of the distance. We were not able to prove this prediction.
From this expression, we see that if there is an eigenvalue - j of A very near to the
pure imaginary axis, i.e, ff j is small, then by the first order Taylor expansion of ~
oe j in
term of ff j , we have
Therefore, to first order in ff j , the condition numbers of the intermediate matrices A k
O
This implies that even if the eigenvalues of A are well-conditioned (i.e., the kP j k are
not too large), if there are also eigenvalues of A closer to the imaginary axis than u 1=2 ,
then the condition number of A k could be large, so the Newton
iteration could fail to converge.
4. Backward Stability of Computed Invariant Subspace. As discussed in
the previous section, because of possible ill conditioning of a matrix with respect to
inversion and rounding errors during the Newton iteration, we generally only expect
to be able to compute the matrix sign function to the square root of the machine
precision, provided that the initial matrix A has condition number smaller than u \Gamma1=2 .
This means that when the Newton iteration converges, the computed matrix sign function
u)kSk:
Under this assumption, b
is an approximate spectral projection corresponding
to -+ (A). Therefore, if
P ), the first ' columns b
in the rank revealing QR decomposition of b
span an approximate invariant
subspace. b
Q has the form
A 11
A 12
A 22
with -( b
being the approximate eigenvalues of A in C+ , and -( b
A 22 ) being the
approximate eigenvalues of A in C \Gamma . Since we expect the computed matrix sign
function to be of half machine precision, it is reasonable to expect computing the
invariant subspace to half precision too. This in turn means that the backward error
in the computed decomposition b
Q is bounded by O(
that the problem is not very ill conditioned. In this section, we will try to justify this
expectation.
To this end, we first need to bound the error in the space spanned by the leading
columns of the transformation matrix Q, i.e., we need to know how much
a right singular subspace of the exact projection matrix perturbed
when P is perturbed by a matrix of norm j. Since P is a projector, the subspace is
spanned by the right singular vectors corresponding to all nonzero singular values of
P (call the set of these singular values S). In practice, of course, this is a question of
rank determination. From the well-known perturbation theory of the singular value
decomposition [34, page 260], the space spanned by the corresponding singular vectors
is perturbed by at most O(j)=gap S , where gap S is defined by
To compute gap S , we note that there is always a unitary change of basis in which
a projector is of the form
I \Sigma
, where diagonal with
straightforward calculation, we find that the singular values
of the projector are f
where the number
of ones in the set of singular values is equal to maxf2' \Gamma n; 0g. Since
f
Thus, the error ffi Q in Q is bounded by
O(
Hence, the backward error in the computed spectral decomposition is bounded by
is the second order perturbation term of kffiQk. Therefore, if 2' - n, we
have the following first order bound on the backward stability of computed invariant
subspace:
O(
u)kSk
O(
u)kSk
If we use the bound (5) of the matrix sign function S, then from (21), we have
O(
dA
where dA , defined in (4), is the distance to the ill-posed problem. On the other hand,
if we use the bound (13) for the matrix sign function S, then from (21) again, we have
O(
22 ) is the separation of the matrices A 11 and A 22 , if A is assumed
to have the form (11). We note that the error bound (23) is essentially the same as
the error bound given by Byers, He and Mehrmann [13], although we use a different
approach. In [13], it is assumed that in (19), where F 21 is the (2,1)
block of the matrix F . Therefore, O(
u) term in (23) is replaced by O(u).
The bounds (22) and (23) reveal two important features of the matrix sign function
based algorithm for computing the invariant subspace. First, they indicate that
the backward error in the computed approximate invariant subspace appears no larger
than the absolute error in the computed matrix sign function, provided that the spectral
decomposition problem is not very ill conditioned (i.e., dA or ffi is not tiny).
Second, if 2' - n, the backward error is a decreasing function of oe l . If oe ' is large, this
means oe 1 and so
are large, and this in turn means the eigenvalues
close to the imaginary axis are ill conditioned. It is harder to divide these eigenvalues.
Of course as they become ill conditioned, dA decreases at the same time, which must
counterbalance the increase in oe ' in a certain range.
It is interesting to ask which error bound (22) and (23) is sharper, i.e., which
one of the quantities dA and 22 ) is larger. In [13], an example of a
2 by 2 matrix is given to show that the quantity ffi is larger than the quantity dA .
However, we can also devise simple examples to show that dA can be larger than
More generally, by choosing A_{11} to be a large Jordan block with a tiny eigenvalue, we can make
d_A close to the square root of \delta. Here d_A is computed using "numerical
brute force": we plot the function \sigma_{min}(A - i\tau I) over a wide range of \tau \in R and search for
Note that by modifying A to be A \Gamma oeI, where oe is a (sufficiently small) real
number, dA will change, but ffi will not. Thus dA and ffi are not completely comparable
quantities. We believe dA to be a more natural quantity to use than ffi , since ffi does not
always depend on the distance to the nearest ill-posed problem. This is reminiscent
of the difference between the quantities
In practice, we will use the a posteriori bound kE 21 k=kAk anyway, since if we
block upper-triangularize \hat{Q}^T A \hat{Q} by setting the (2,1) block to zero, \|E_{21}\|/\|A\| is
precisely the backward error we introduce.
Before ending of this section, let us comment on stability of the matrix sign function
based algorithm versus the QR algorithm. The QR algorithm is a numerical
backward stable method for computing the Schur decomposition of a general non-symmetric
matrix A. The computed Schur form b
T and Schur vectors b
Q by the QR
algorithm satisfy
where E is of the order of ukAk. Numerical software for the QR algorithm is available
in EISPACK [32] and LAPACK [1]. Although nonconvergent examples have been
found, they are quite rare in practice [6, 16]. We note that the eigenvalues on the
(block)-diagonal of b
may appear in any order. Therefore, if an application requires
an invariant subspace corresponding to the eigenvalues in a specific region in complex
plane, a second step of reordering eigenvalues on the diagonal of b
T is necessary. A
guaranteed stable implementation of this reordering is described in [7].
The matrix sign function based algorithm can be regarded as an algorithm to
combine these two steps into one. If the matrix sign function can be computed
within the order of ukSk, then the analysis in this section shows that the matrix
sign function based algorithm could be as stable as the QR algorithm plus reordering.
Unfortunately, if the matrix is ill conditioned with respect to matrix inversion (which
does not affect the QR algorithm), numerical instability is anticipated in the computed
matrix sign function. Therefore, in general, the matrix sign function is less stable
than the QR algorithm plus reordering.
5. Numerical Experiments. In this section, we will present numerical examples
to verify the above analysis. We will see how the numerical stability of the
Newton iteration (2) and the backward accuracy of the computed spectral decomposition
(1) are influenced by the conditioning of the matrix A with respect to inversion,
the condition number \kappa(S) of S = sign(A), and the distance \Delta(A) of the eigenvalues
of A to the imaginary axis, where \Delta(A) = min_j |Re \lambda_j(A)|. We use the easily
computed quantity \Delta(A) as a surrogate for the quantity d_A in (4).
Let us recall that the analysis of sections 3 and 4 essentially claims that
(1) If \Delta(A) < u^{1/2}, then the Newton iteration may fail to converge or fail to
compute the matrix sign function within the absolute error u^{1/2}\|S\|, even
when the matrix sign function is well-conditioned. See (18).
(2) If \kappa(S) > u^{-1/2}, then even if the distance \Delta(A) is not small, the Newton
iteration may still fail to compute the matrix sign function to an absolute
error of O(u^{1/2}\|S\|). See part 3 of Theorem 3.2.
(3) In general, the backward error in the computed spectral decomposition will
be smaller than the absolute error in the computed matrix sign function. See
(21).
The following numerical examples will illustrate these claims. Our numerical experiments
were performed on a SUN workstation 10 with machine precision u \approx 2.2 \times 10^{-16}.
All the algorithms are implemented in Matlab 4.0a. We use the
simple Newton iteration (2) to compute the matrix sign function with the stopping
criterion
The maximal number of iterations is set to be 70. At convergence, we have
\hat{S}, the computed matrix sign function. We use the QR decomposition
with column pivoting as the rank-revealing scheme: (1/2)(\hat{S} + I)\Pi = \hat{Q}\hat{R}, and finally
compute
\hat{Q}^T A \hat{Q} = [ \hat{A}_{11}  \hat{A}_{12} ; E_{21}  \hat{A}_{22} ],
where the first \ell = rank(\hat{R}) columns of \hat{Q} span the approximate invariant subspace corresponding
to \lambda(\hat{A}_{11}), which are the approximate eigenvalues of A in C_+. \|E_{21}\|/\|A\| is the
backward error committed by the algorithm.
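A sketch of this rank-revealing step in Python (QR with column pivoting, as in the text; the rank tolerance below is my own choice, not the authors'):

    import numpy as np
    from scipy.linalg import qr

    def spectral_divide(A, S_hat):
        """Given a computed sign function S_hat of A, return Q, the split size l,
        and the backward error ||E21|| / ||A||."""
        n = A.shape[0]
        P_hat = 0.5 * (S_hat + np.eye(n))            # approximate spectral projector
        Q, Rf, piv = qr(P_hat, pivoting=True)        # rank-revealing QR
        tol = n * np.finfo(float).eps * abs(Rf[0, 0])
        l = int(np.sum(np.abs(np.diag(Rf)) > tol))   # numerical rank of P_hat
        T = Q.T @ A @ Q
        return Q, l, np.linalg.norm(T[l:, :l]) / np.linalg.norm(A)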
All our test matrices are constructed in the form
A = U^T [ A_{11}  A_{12} ; 0  A_{22} ] U,   (24)
where U is an orthogonal matrix generated from the QR decomposition of a random
matrix with entries from the normal distribution with mean 0.0 and variance 1.0. We choose
different submatrices A_{11}, A_{22} and A_{12} so that the generated matrices A have different
specific features, in order to observe our theoretical results in practice.
Table 1. Numerical Results for Example 1.
The exact matrix sign function of A and the condition number of S
are computed as described in Lemma 3.1. The condition number of A is computed
by Matlab function cond.
In the following tables, iter is the number of iterations of the Newton iteration.
A number 10^\alpha in parentheses next to an iteration number iter indicates that the
convergence of the Newton iteration stagnated at about O(10^\alpha) from the iter-th
iteration onward, and failed to satisfy the stopping criterion even after the allowed
maximal number of iterations.
We have experimented numerous matrices with different pathologically ill conditioning
in terms of the distance to the pure imaginary axis, the condition numbers
of -(A) and -(S), and the different values of sep(A 11 ; A 22 ) and so on. Two selected
examples presented here are of typical behaviors we observed.
Example 1. In this example, the matrices A are of the form (24), where A_{12} is a random
2 by 2 matrix with normally distributed entries multiplied by a parameter c. The generated
matrices A have two complex conjugate eigenpairs s \pm i and -s \pm i. As s \to 0, the distance
\Delta(A) \to 0 too. The size of the parameter c adjusts the conditioning of the resulting matrix A and of its
matrix sign function.
Table 1 reports the computed results for different values of these parameters. From the
table, we see that when the matrices are well conditioned and the corresponding
matrix sign function is also well-conditioned, as stated in the claim (1), the convergence
rate and accuracy of the Newton iteration is clearly determined by the distance
\Delta(A). When the distance becomes smaller, there is a steady increase in the number
of Newton iterations required for convergence and a loss of accuracy in the
computed matrix sign function and therefore the desired invariant subspace. From the
table, we also see that when both \Delta(A) and -(S) are moderate, the Newton iteration
fails to compute the matrix sign function in half machine precision. Nevertheless, the
computed invariant subspace still seems to have half machine precision; see claim (3).
Table 2. Numerical Results of Example 2.
Example 2. In this example, the test matrices A are of the form (24). The submatrices A_{11} and A_{22}
are first set to 5 by 5 random upper triangular matrices with (0,1)-normally distributed entries, and then
the diagonal elements of A_{11} and A_{22} are replaced by d|a_{ii}| and -d|a_{ii}|, respectively,
where the a_{ii} (1 \le i \le 5) are random numbers with normal distribution (0,1) and d is a
positive parameter. A_{12} is a 5 by 5 random matrix with (0,1)-normally distributed entries.
The numerical results are reported in Table 2. For the given parameter d, the
eigenvalues are well separated from the imaginary axis (\Delta(A) is not small);
however, as stated in claim (2), we see the influence of the condition number \kappa(S)
on the convergence of the Newton iteration and therefore on the accuracy of the computed
matrix sign function and the invariant subspace.
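For reference, the following sketch generates test matrices in the spirit of Example 2 and reports \Delta(A); the random streams and exact details of the authors' construction are not reproduced here.

    import numpy as np

    def example2_matrix(d, k=5, rng=None):
        """Test matrix of the form (24) as described in Example 2 (sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        A11 = np.triu(rng.standard_normal((k, k)))
        A22 = np.triu(rng.standard_normal((k, k)))
        np.fill_diagonal(A11,  d * np.abs(rng.standard_normal(k)))
        np.fill_diagonal(A22, -d * np.abs(rng.standard_normal(k)))
        A12 = rng.standard_normal((k, k))
        T = np.block([[A11, A12], [np.zeros((k, k)), A22]])
        U, _ = np.linalg.qr(rng.standard_normal((2 * k, 2 * k)))
        return U.T @ T @ U

    A = example2_matrix(d=0.01, rng=np.random.default_rng(2))
    print(np.min(np.abs(np.linalg.eigvals(A).real)))    # Delta(A)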
6. Refining Estimates of Approximate Invariant Subspaces. When we
use the matrix sign function based algorithm to deflate an invariant subspace of matrix
A, we end up with the form
A 11
A 12
A 22
where the size of kE 21 k=kAk reveals the accuracy and backward stability of computed
invariant subspace spanning by b
of A. If higher accuracy is desired, we may use
iterative refinement techniques to improve the accuracy of computed invariant sub-
space. The methods are due to Stewart [33], Dongarra, Moler and Wilkinson [20], and
Chatelin [15]. Even though these methods all apparently solve different equations,
as shown by Demmel [19], after changing variables, they all solve the same Riccati
equation in the inner loop.
Let us follow Stewart's approach to present the first class of methods. From
(25), we know that b
spans an approximate invariant subspaces and b
spans an
orthogonal complementary subspace. If we let the true invariant subspace be represented
as b
its orthogonal complementary subspace as b
Then Y is derived as follows: b
will be an invariant subspace if and only if
the lower left block of
is zero, i.e. if the lower left corner of
I \GammaY H
Y I
A 11
A 12
A 22
\GammaY I
is zero. Thus, Y must satisfy the equation
A 12 Y;
which is the well-known algebraic Riccati equation. We may use the following two
iterative methods to solve it:
1. The simple Newton iteration:
A 22 Y
with
2. The modified Newton iteration:
with
Therefore, we only need to solve a Sylvester equation in the inner loop of the iterative
refinement.
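A minimal sketch of such a refinement loop is given below. It assumes the convention that the refined subspace is spanned by Q[:, :l] + Q[:, l:] Y (the paper's exact sign conventions in (26) and (27) may differ), and it solves one Sylvester equation per step with scipy.linalg.solve_sylvester.

    import numpy as np
    from scipy.linalg import solve_sylvester

    def refine_invariant_subspace(A, Q, l, steps=5):
        """Newton-type refinement of the subspace spanned by the first l columns of Q."""
        T = Q.T @ A @ Q
        A11, A12 = T[:l, :l], T[:l, l:]
        E21, A22 = T[l:, :l], T[l:, l:]
        Y = np.zeros_like(E21)
        for _ in range(steps):
            # Solve  Y A11 - A22 Y = E21 - Y A12 Y  for the new Y.
            Y = solve_sylvester(-A22, A11, E21 - Y @ A12 @ Y)
        Z, _ = np.linalg.qr(Q[:, :l] + Q[:, l:] @ Y)   # refined orthonormal basis
        return Z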
In the following numerical example, we use only the simple Newton iteration (26) to
refine the approximate invariant subspace computed by the matrix sign function based
algorithm, with the following stopping criterion:
Example 3. We continue Example 2. Table 3 lists sep(\hat{A}_{11}, \hat{A}_{22}), the number of
iterative refinement steps, and the backward accuracy of the improved invariant subspace.
As shown in the convergence analysis for the iterative solvers (26) and (27) of the
Riccati equation by Stewart [33] and Demmel [18], if we let
A 22 );
then under the assumptions k < 1/4 and k < 1/12, the iterations (26) and (27)
converge, respectively. Therefore, sep( b
A 22 ) is a key factor to the convergence of
the iterative refinement schemes. The above examples verify such analysis. From the
analysis of section 3, we recall that sep( b
A 22 ) also affects the backward stability
of the computed invariant subspace by the matrix sign function based algorithm in
the first place (before iterative refinement).
7. Extension to the Generalized Eigenproblem. In this section, we outline
a scheme to extend the matrix sign function based algorithm to solve the generalized
eigenvalue problem of a regular matrix pencil A \Gamma -B. A matrix pencil A \Gamma -B is
regular if A\Gamma-B is square and det(A\Gamma-B) is not identically zero. In [22], Gardiner and
Laub have considered an extension of the Newton iteration for computing the matrix
sign function to a matrix pencil for solving generalized algebraic Riccati equations.
Here we discuss another possible approach, which includes the computation of both
left and right deflating subspaces.
For the given matrix pencil A \Gamma -B, the problem of the spectral decomposition
is to seek a pair of left and right deflating subspaces L and R corresponding to the
eigenvalues of the pencil in a specified region D in complex plane. In other words,
we want to find a pair of unitary matrices QL and QR so that if
Table 3. Iterative Refinement Results for Example 2 (for each parameter d: sep, refinement steps, and backward error).
Q_L^H A Q_R = [ A_{11}  A_{12} ; 0  A_{22} ],   Q_L^H B Q_R = [ B_{11}  B_{12} ; 0  B_{22} ],   (28)
where the eigenvalues of A 11 \Gamma -B 11 are the eigenvalues of A \Gamma -B in a selected region
D in complex plane. Here, we will only discuss the region D to be the open right half
complex plane. As in the standard eigenproblem, by employing
Möbius transformations of the pencil and divide-and-conquer, D can be
the union of intersections of arbitrary half planes and (complemented) disks, and so
a rather general region.
To this end, by directly applying the Newton iteration to AB^{-1}, we have
X_{k+1} = (X_k + X_k^{-1}) / 2,   X_0 = A B^{-1}.
At convergence, X_\infty = sign(AB^{-1}). In practice, we do not want to invert B if it is ill
conditioned. Hence, by letting Z_k = X_k B, the above iteration becomes the following iteration:
Z_{k+1} = (Z_k + B Z_k^{-1} B) / 2,   Z_0 = A,
which converges quadratically to a matrix Z_\infty = sign(AB^{-1}) B. Then,
to find the desired deflating subspace, we use
the rank-revealing QR decomposition to calculate the range space of the projection
P_+ = (I + sign(AB^{-1}))/2 corresponding to the spectrum in the open right half plane, which
has the same range space as Z_\infty + B. Thus by computing the rank-revealing
QR decomposition of Z_\infty + B we obtain the invariant subspace of AB^{-1}
without inverting B, i.e., with (Z_\infty + B)\Pi = Q_L R_L we get
Q_L^H (A B^{-1}) Q_L = [ C_R  * ; 0  C_L ],
where -(CR ) are the eigenvalues of the pencil A \Gamma -B in the open right half plane,
-(CL ) are the ones of A \Gamma -B in the open left half plane. Therefore, we have obtained
the left deflating subspace of A \Gamma -B.
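The iteration just described can be sketched as follows; this is my reading of the derivation above, not verified against the authors' code, and B is assumed nonsingular.

    import numpy as np
    from scipy.linalg import qr, solve

    def left_deflating_basis(A, B, iters=50, tol=1e-12):
        """Z_{k+1} = (Z_k + B Z_k^{-1} B)/2 with Z_0 = A, so Z_inf = sign(A B^{-1}) B;
        the leading columns of a rank-revealing QR of Z_inf + B span the left
        deflating subspace for the eigenvalues of A - lambda*B in C_+."""
        Z = np.array(A, dtype=float)
        for _ in range(iters):
            Z_new = 0.5 * (Z + B @ solve(Z, B))
            if np.linalg.norm(Z_new - Z, 1) <= tol * np.linalg.norm(Z, 1):
                Z = Z_new
                break
            Z = Z_new
        Q, R, piv = qr(Z + B, pivoting=True)
        tol_rank = A.shape[0] * np.finfo(float).eps * abs(R[0, 0])
        l = int(np.sum(np.abs(np.diag(R)) > tol_rank))
        return Q[:, :l]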
To compute the right deflating subspace of A \Gamma -B, we can apply the above idea
to A H \Gamma -B H , since transposing swaps right and left spaces. The Newton iteration
implicitly applied to A H B \GammaH turns out to be
~
converges quadratically to a matrix ~
Z1 . Using
the same arguments as above, after computing the rank revealing QR decomposition
of ~
QRRR \Pi R , we have
~
R A H B \GammaH ~
where -(DL ) are the eigenvalues of the pencil A\Gamma-B in the open left half plane, -(DR )
are the ones of A \Gamma -B in the open right half plane. Note that for the desired spectral
decomposition, after transposing, we need to first compute the deflating subspace
corresponding to the eigenvalues in the open left half plane. Let
\Pi, where
~
\Pi is anti-diagonal identity matrix 2 , then we have
From (29) and (30), we immediately have
R D H0 D H
L AQR and Q H
BQR have the partitions
A 21 A 22
we have
R D H0 D H
Note that -(CL ) are the eigenvalues of the pencil A \Gamma -B in the open left half plane,
-(DR ) are the eigenvalues of the pencil A \Gamma -B in the open right half plane. Therefore,
the above homogeneous Sylvester equation has only the zero solution, i.e., B_{21} = 0. From (31) or
(32), we then have A_{21} = 0. The computed unitary matrices Q_L and Q_R give
the desired spectral decomposition (28).
2 The permutation ~
\Pi can be avoided if we use the rank revealing QL decomposition.
8. Closing Remarks. In this paper, we have presented a number of new results
and approaches to further analyze the numerical behavior of the matrix sign function
and algorithms using it to compute spectral decompositions of nonsymmetric matri-
ces. From this analysis and numerical experiments, we conclude that if the spectral
decomposition problem is not ill conditioned, the algorithm is a practical approach
to solve the nonsymmetric eigenvalue problem. Performance evaluation of the matrix
sign function based algorithm on parallel distributed memory machines, such as the
Intel Delta and CM-5, is reported in [4].
During the course of this work, we have discovered a new approach which essentially
computes the same spectral projection matrix as the matrix sign function
approach does, and also uses basic matrix operations, namely, matrix multiplication
and the QR decomposition. However, it avoids the matrix inverse. From the point of
view of accuracy, this is a more promising approach. The new approach is based on
the work of Bulgakov, Godunov and Malyshev [10, 27, 28]. In [5], we have improved
their results in several important ways, and made it a truly practical and inverse free
highly parallel algorithm for both the standard and generalized spectral decomposition
problems. In brief, the difference between the matrix sign function and inverse
free methods is as follows. The matrix sign function method is significantly faster
than the inverse free method when it converges, but there are some very difficult problems
where the inverse free algorithm gives a more accurate answer than the matrix
sign function algorithm. The interested reader may see the paper [5] for details.
Acknowledgements
. The first author was supported in part by ARPA grant
DM28E04120 and P-95006 via a subcontract from Argonne National Laboratory and
by an NSF grant ASC-9313958 and in part by an DOE grant DE-FG03-94ER25219 via
subcontracts from University of California at Berkeley. The second author was funded
in part by the ARPA contract DAAH04-95-1-0077 through University of Tennessee
subcontract ORA7453.02, ARPA contract DAAL03-91-C-0047 through University of
Tennessee subcontract ORA4466.02, NSF contracts ASC-9313958 and ASC-9404748,
contracts DE-FG03-94ER25219, DE-FG03-94ER25206, DOE contract No. W-
through subcontract No. 951322401 with Argonne National Labora-
tory, and NSF Infrastructure Grant Nos. CDA-8722788 and CDA-9401156.
The information presented here does not necessarily reflect the position or the
policy of the Government and no official endorsement should be inferred.
The authors would like to acknowledge Ralph Byers, Chunyang He, Nick Higham
and Volker Mehrmann for fruitful discussions on the subject. We would also like to
thank the referees for their valuable comments on the manuscript.
--R
Design of a parallel nonsymmetric eigenroutine toolbox
Design of a parallel nonsymmetric eigenroutine toolbox
The spectral decomposition of nonsymmetric matrices on distributed memory parallel computers.
Inverse free parallel spectral divide and conquer algorithms for nonsymmetric eigenproblems.
Convergence of the shifted QR algorithm on 3 by 3 normal matrices.
Reordering diagonal blocks in real schur form.
A regularity result for the singular values of a transfer matrix and a quadratically convergent algorithm for computing its L1-norm
A bisection method for computing the H1 norm of a transfer matrix and related problems.
Circular dichotomy of the spectrum of a matrix.
Solving the algebraic Riccati equation with the matrix sign function.
A bisection method for measuring the distance of a stable matrix to the unstable matrices.
The matrix sign function method and the computation of invariant subspaces.
On the stability radius of a generalized state-space system
Simultaneous Newton's iteration for the eigenproblem.
How the QR algorithm fails to converge and how to fix it.
The condition number of equivalence transformations that block diagonalize matrix pencils.
On condition numbers and the distance to the nearest ill-posed problem
Three methods for refining estimates of invariant subspaces.
Improving the accuracy of computed eigenvalues and eigenvectors.
Matrix Eigensystem Routines - EISPACK Guide Extension
A generalization of the matrix-sign function solution for algebraic Riccati equations
The matrix sign decomposition and its relation to the polar decomposition.
Perturbation Theory for Linear Operators.
Polar decomposition and matrix sign function condition estimates.
Matrix sign function algorithms for Riccati equa- tions
Guaranteed accuracy in spectral problems of linear algebra
Parallel algorithm for solving some spectral problems of linear algebra.
Condition estimation for the matrix function via the Schur decomposition.
Linear model reduction and solution of the algebraic Riccati equation.
Separation of matrix eigenvalues and structural decomposition of large-scale systems
Matrix Eigensystem Routines - EISPACK Guide
and perturbation bounds for subspaces associated with certain eigenvalue problems.
Matrix Perturbation Theory.
Pseudospectra of matrices.
--TR
--CTR
Daniel Kressner, Block algorithms for reordering standard and generalized Schur forms, ACM Transactions on Mathematical Software (TOMS), v.32 n.4, p.521-532, December 2006 | newton's method;matrix sign function;eigenvalue problem;deflating subspaces;invariant subspace |
287537 | Parameter Estimation in the Presence of Bounded Data Uncertainties. | We formulate and solve a new parameter estimation problem in the presence of data uncertainties. The new method is suitable when a priori bounds on the uncertain data are available, and its solution leads to more meaningful results, especially when compared with other methods such as total least-squares and robust estimation. Its superior performance is due to the fact that the new method guarantees that the effect of the uncertainties will never be unnecessarily over-estimated, beyond what is reasonably assumed by the a priori bounds. A geometric interpretation of the solution is provided, along with a closed form expression for it. We also consider the case in which only selected columns of the coefficient matrix are subject to perturbations. | Introduction
. The central problem in estimation is to recover, to good ac-
curacy, a set of unobservable parameters from corrupted data. Several optimization
criteria have been used for estimation purposes over the years, but the most im-
portant, at least in the sense of having had the most applications, are criteria that
are based on quadratic cost functions. The most striking among these is the linear
least-squares criterion, which was first developed by Gauss (ca. 1795) in his work on
celestial mechanics. Since then, it has enjoyed widespread popularity in many diverse
areas as a result of its attractive computational and statistical properties (see, e.g.,
[4, 8, 10, 13]). Among these attractive properties, the most notable are the facts
that least-squares solutions can be explicitly evaluated in closed forms, they can be
recursively updated as more input data is made available, and they are also maximum
likelihood estimators in the presence of normally distributed measurement noise.
Alternative optimization criteria, however, have been proposed over the years
including, among others, regularized least-squares [4], ridge regression [4, 10], total
addresses: shiv@ece.ucsb.edu, golub@sccm.stanford.edu, mgu@math.ucla.edu, and
sayed@ee.ucla.edu.
least-squares [2, 3, 4, 7], and robust estimation [6, 9, 12, 14]. These different formulations
allow, in one way or another, incorporation of further a priori information about
the unknown parameter into the problem statement. They are also more effective in
the presence of data errors and incomplete statistical information about the exogenous
signals (or measurement errors).
Among the most notable variations is the total least-squares (TLS) method, also
known as orthogonal regression or errors-in-variables method in statistics and system
identification [11]. In contrast to the standard least-squares problem, the TLS formulation
allows for errors in the data matrix. But it still exhibits certain drawbacks that
degrade its performance in practical situations. In particular, it may unnecessarily
over-emphasize the effect of noise and uncertainties and can, therefore, lead to overly
conservative results.
More specifically, assume A \in R^{m \times n} is a given full rank matrix with m \ge n, b \in R^m
is a given vector, and consider the problem of solving the inconsistent linear
system A\hat{x} \approx b in the least-squares sense. The TLS solution assumes data uncertainties
in A and proceeds to correct A and b by replacing them by their projections, \hat{A} and \hat{b},
onto a specific subspace and by solving the consistent linear system of equations \hat{A}\hat{x} = \hat{b}.
The spectral norm of the correction (A - \hat{A}) in the TLS solution is
bounded by the smallest singular value of [ A  b ]. While this norm might be small
for vectors b that are close enough to the range space of A, it need not always be so.
In other words, the TLS solution may lead to situations in which the correction term
is unnecessarily large.
Consider, for example, a situation in which the uncertainties in A are very small,
say A is almost known exactly. Assume further that b is far from the column space of
A. In this case, it is not difficult to visualize that the TLS solution will need to rotate
A towards b, and may therefore end up with an overly corrected approximant for
A, despite the fact that A is almost exact.
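This behavior is easy to reproduce numerically. The sketch below uses the standard SVD-based TLS formula (not this paper's method) with an exactly known A and a b far from its range: the TLS correction, whose size equals the smallest singular value of [A b], is not small, and the TLS estimate is (generically) larger in norm than the least-squares one.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 2))              # A is known exactly here
    b = rng.standard_normal(50)                   # b is far from the range of A

    x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

    C = np.hstack([A, b[:, None]])                # TLS via the SVD of [A b]
    _, s, Vt = np.linalg.svd(C)
    v = Vt[-1]                                    # right singular vector for s[-1]
    x_tls = -v[:2] / v[2]

    print(s[-1])                                  # size of the TLS correction
    print(np.linalg.norm(x_ls), np.linalg.norm(x_tls))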
These facts motivate us to introduce a new parameter estimation formulation with
prior bounds on the size of the allowable corrections to the data. More specifically, we
formulate and solve a new estimation problem that is more suitable for scenarios in
which a-priori bounds on the uncertain data are known. The solution leads to more
meaningful results in the sense that it guarantees that the effect of the uncertainties
will never be unnecessarily over-estimated, beyond what is reasonably assumed by the
a-priori bounds.
We note that while preparing this paper, the related work [1] has come to our
attention, where the authors have independently formulated and solved a similar estimation
problem by using (convex) semidefinite programming techniques and interior-point
methods. The resulting computational complexity of the proposed solution is
is the smaller matrix dimension.
The solution proposed in this paper proceeds by first providing a geometric formulation
of the problem, followed by an algebraic derivation that establishes that the
optimal solution can in fact be obtained by solving a related regularized problem. The
parameter of the regularization step is further shown to be obtained as the unique
positive root of a secular equation and as a function of the given data. In this sense,
the new formulation turns out to provide automatic regularization and, hence, has
some useful regularization properties: the regularization parameter is not selected by
the user but rather determined by the algorithm. Our solution involves an SVD step
and its computational complexity amounts to O(mn^2), where n is again the
smaller matrix dimension. A summary of the problem and its solution is provided in
Sec. 3.4 at the end of this paper. [Other problem formulations are studied in [15].]
2. Problem Formulation. Let A \in R^{m \times n} be a given matrix with m \ge n and let b \in R^m
be a given vector, which are assumed to be linearly related via an unknown
vector of parameters x \in R^n,
b = A x + v.
The vector v \in R^m denotes measurement noise and it explains the mismatch between
Ax and the given vector (or observation) b.
We assume that the "true" coefficient matrix is A + \delta A, and that we only know
an upper bound on the 2-induced norm of the perturbation \delta A:
\|\delta A\| \le \eta,   (2.2)
with \eta being known. Likewise, we assume that the "true" observation vector is b + \delta b,
and that we know an upper bound \eta_b on the Euclidean norm of the perturbation \delta b:
\|\delta b\| \le \eta_b.
We then pose the problem of finding an estimate that performs "well" for any allowed
perturbation (\delta A, \delta b). More specifically, we pose the following min-max problem:
Problem 1. Given A \in R^{m \times n}, with m \ge n, b \in R^m, and nonnegative real
numbers (\eta, \eta_b), determine, if possible, an \hat{x} that solves
\min_{\hat{x}} \max_{\|\delta A\| \le \eta, \|\delta b\| \le \eta_b} \|(A + \delta A)\hat{x} - (b + \delta b)\|.   (2.4)
x
The situation is depicted in Fig. 2.1. Any particular choice for "
x would lead to
many residual norms,
one for each possible choice of A in the disc in the disc (b
second choice for "
x would lead to other residual norms, the maximum value of which
need not be the same as the first choice. We want to choose an estimate " x that
minimizes the maximum possible residual norm. This is depicted in Fig. 2.2 for two
choices, say "
. The curves show the values of the residual norms as a function
of
A
Fig. 2.1. Geometric interpretation of the new least-squares formulation.
We note that if problem (2.4) reduces to a standard least squares
problem. Therefore we shall assume throughout that j ? 0. [It will turn out that the
solution to the above min-max problem is independent of j b ].
kresidualk
Fig. 2.2. Two illustrative residual-norm curves.
2.1. A Geometric Interpretation. The min-max problem admits an interesting
geometric formulation that highlights some of the issues involved in its solution.
For this purpose, and for the sake of illustration, assume we have a unit-norm
vector b, kbk uncertainties in it (j Assume further that A is
simply a column vector, say a, with j 6= 0. That is, only A is assumed to be uncertain
with perturbations that are bounded by j in magnitude (as in (2.2)). Now consider
problem (2.4) in this context, which reads as follows:
x
This situation is depicted in Fig. 2.3. The vectors a and b are indicated in thick
black lines. The vector a is shown in the horizontal direction and a circle of radius j
around its vertex indicates the set of all possible vertices for a
a
Fig. 2.3. Geometric construction of the solution for a simple example.
For any " x that we pick, the set f(a+ ffia)"xg describes a disc of center a"x and radius
j"x. This is indicated in the figure by the largest rightmost circle, which corresponds
to a choice of a positive "
x that is larger than one. The vector in f(a + ffia)"xg that
is furthest away from b is the one obtained by drawing a line from b through the
center of the rightmost circle. The intersection of this line with the circle defines a
A NEW METHOD FOR PARAMETER ESTIMATION WITH UNCERTAIN
residual vector r 3 whose norm is the largest among all possible residual vectors in the
set f(a ffia)"xg.
Likewise, if we draw a line from b that passes through the vertex of a, it will
intersect the circle at a point that defines a residual vector r 2 . This residual will have
the largest norm among all residuals that correspond to the particular choice "
More generally, any "
x that we pick will determine a circle, and the corresponding
largest residual is obtained by finding the furthest point on the circle from b. This is
the point where the line that passes through b and the center of the circle intersects
the circle on the other side of b.
We need to pick an "
x that minimizes the largest residual. For example, it is clear
from the figure that the norm of r 3 is larger than the norm of r 2 . The claim is that
in order to minimize the largest residual we need to proceed as follows: we drop a
perpendicular from b to the lower tangent line denoted by ' 1 . This perpendicular
intersects the horizontal line in a point where we draw a new circle (the leftmost
circle) that is tangent to both ' 1 and ' 2 . This circle corresponds to a choice of " x
such that the furthest point on it from b is the foot of the perpendicular from b to ' 1 .
The residual indicated by r 1 corresponds to the desired solution (it has the minimum
norm among the largest residuals).
To verify this claim, we refer to Fig. 2.4 where we have only indicated two circles;
the circle that leads to a largest residual that is orthogonal to ' 1 and a second circle
to its left. For this second leftmost circle, we denote its largest residual by r 4 . We
also denote the segment that connects b to the point of tangency of this circle with ' 1
by r. It is clear that r is larger than r 1 since r and r 1 are the sides of a right triangle.
It is also clear that r 4 is larger than r, by construction. Hence, r 4 is larger than r 1 .
A similar argument will show that r 1 is smaller than residuals that result from circles
to its right.
The above argument shows that the minimizing solution can be obtained as fol-
lows: drop a perpendicular from b to ' 1 . Pick the point where the perpendicular
meets the horizontal line and draw a circle that is tangent to both ' 1 and ' 2 . Its
radius will be j"x, where "
x is the optimal solution. Also, the foot of the perpendicular
on ' 1 will be the optimal " b.
The projection " b (and consequently the solution " x) will be nonzero as long as b
is not orthogonal to the direction ' 1 . This imposes a condition on j. Indeed, the
direction ' 1 will be orthogonal to b only when j is large enough. This requires that
the circle centered around a has radius a T b, which is the length of the projection of
a onto the unit norm vector b. This is depicted in Fig. 2.5.
Hence, the largest value that can be allowed for j in order to have a nonzero
solution "
x is
Indeed, if j were larger than or equal to this value, then the vector in the set (a
that would always lead to the maximum residual norm is the one that is orthogonal
to b, in which case the solution will be zero again. The same geometric argument will
lead to a similar conclusion had we allowed for uncertainties in b as well.
For a non-unity b, the upper bound on j would take the form
We shall see that in the general case a similar bound holds, for nonzero solutions, and
r
Fig. 2.4. Geometric construction of the solution for a simple example.
a
Fig. 2.5. Geometric condition for a nonzero solution.
is given by \eta < \|A^T b\| / \|b\|.
We now proceed to an algebraic solution of the min-max problem. A final statement
of the form of the solution is given further ahead in Sec. 3.4.
3. Reducing the Minimax Problem to a Minimization Problem. We
start by showing how to reduce the min-max problem (2.4) to a standard minimization
problem. To begin with, we note that
\|(A + \delta A)\hat{x} - (b + \delta b)\| \le \|A\hat{x} - b\| + \eta \|\hat{x}\| + \eta_b,
which provides an upper bound for \|(A + \delta A)\hat{x} - (b + \delta b)\|. But this upper bound
is in fact achievable, i.e., there exist (\delta A, \delta b) for which equality holds.
To see that this is indeed the case, choose \delta A as the rank one matrix
\delta A = \eta (A\hat{x} - b)\hat{x}^T / (\|A\hat{x} - b\| \|\hat{x}\|),
and choose \delta b as the vector
\delta b = -\eta_b (A\hat{x} - b) / \|A\hat{x} - b\|.
For these choices of perturbations in A and b, it follows that A\hat{x} - b, \delta A \hat{x}, and -\delta b
are collinear vectors that point in the same direction. Hence,
\|(A + \delta A)\hat{x} - (b + \delta b)\| = \|A\hat{x} - b\| + \eta \|\hat{x}\| + \eta_b,
which is the desired upper bound. We therefore conclude that
\max_{\|\delta A\| \le \eta, \|\delta b\| \le \eta_b} \|(A + \delta A)\hat{x} - (b + \delta b)\| = \|A\hat{x} - b\| + \eta \|\hat{x}\| + \eta_b,   (3.1)
which establishes the following result.
Lemma 3.1. The min-max problem (2.4) is equivalent to the following minimization
problem: given A \in R^{m \times n}, with m \ge n, b \in R^m, and nonnegative real numbers (\eta, \eta_b), determine, if
possible, an \hat{x} that solves
\min_{\hat{x}} \left( \|A\hat{x} - b\| + \eta \|\hat{x}\| + \eta_b \right).   (3.2)
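The identity behind Lemma 3.1 is easy to check numerically. In the sketch below, random feasible perturbations never exceed the closed-form value, while the worst-case rank-one choice described above attains it; the explicit formulas for \delta A and \delta b are my reconstruction of that construction.

    import numpy as np

    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((20, 4)), rng.standard_normal(20)
    x, eta, eta_b = rng.standard_normal(4), 0.3, 0.1

    r = A @ x - b
    bound = np.linalg.norm(r) + eta * np.linalg.norm(x) + eta_b

    worst = 0.0                                    # random sampling stays below the bound
    for _ in range(2000):
        dA = rng.standard_normal(A.shape); dA *= eta / np.linalg.norm(dA, 2)
        db = rng.standard_normal(20);      db *= eta_b / np.linalg.norm(db)
        worst = max(worst, np.linalg.norm((A + dA) @ x - (b + db)))

    dA_star = eta * np.outer(r, x) / (np.linalg.norm(r) * np.linalg.norm(x))
    db_star = -eta_b * r / np.linalg.norm(r)
    attained = np.linalg.norm((A + dA_star) @ x - (b + db_star))
    print(worst <= bound + 1e-12, np.isclose(attained, bound))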
3.1. Solving the Minimization Problem. To solve (3.2), we define the cost
function
L(\hat{x}) = \|A\hat{x} - b\| + \eta \|\hat{x}\|.
It is easy to check that L(\hat{x}) is a convex continuous function in \hat{x} and hence any local
minimum of L(\hat{x}) is also a global minimum. But at any local minimum of L(\hat{x}), either
L(\hat{x}) is not differentiable or its gradient \nabla L(\hat{x}) is 0. In particular,
note that L(\hat{x}) is not differentiable only at \hat{x} = 0 and at any \hat{x} that satisfies A\hat{x} = b.
We first consider the case in which L(\hat{x}) is differentiable and, hence, the gradient
of L(\hat{x}) exists and is given by
\nabla L(\hat{x}) = \frac{1}{\|A\hat{x} - b\|} \left[ (A^T A + \alpha I)\hat{x} - A^T b \right],
where we have introduced the positive real number
\alpha = \eta \|A\hat{x} - b\| / \|\hat{x}\|.   (3.3)
By setting \nabla L(\hat{x}) = 0, we obtain that any stationary solution \hat{x} of L(\hat{x}) is given by
\hat{x} = (A^T A + \alpha I)^{-1} A^T b.   (3.4)
We still need to determine the parameter ff that corresponds to "
x, and which is defined
in (3.3).
To solve for \alpha, we introduce the singular value decomposition (SVD) of A:
A = U [ \Sigma ; 0 ] V^T,
where U \in R^{m \times m} and V \in R^{n \times n} are orthogonal, and \Sigma = diag(\sigma_1, ..., \sigma_n) is diagonal,
with \sigma_1 \ge \sigma_2 \ge ... \ge \sigma_n > 0 being the singular values of A. We further partition the vector U^T b into
U^T b = [ b_1 ; b_2 ],   b_1 \in R^n,  b_2 \in R^{m-n}.
In this case, the expression (3.4) for \hat{x} can be rewritten in the equivalent form
\hat{x} = V (\Sigma^2 + \alpha I)^{-1} \Sigma b_1,
and hence,
\|\hat{x}\|^2 = \sum_{i=1}^{n} \sigma_i^2 b_{1i}^2 / (\sigma_i^2 + \alpha)^2.
Likewise,
A\hat{x} - b = -U [ \alpha (\Sigma^2 + \alpha I)^{-1} b_1 ; b_2 ],
which shows that
\|A\hat{x} - b\|^2 = \sum_{i=1}^{n} \alpha^2 b_{1i}^2 / (\sigma_i^2 + \alpha)^2 + \|b_2\|^2.
Therefore, equation (3.3) for \alpha reduces to the following nonlinear equation that is
only a function of \alpha and the given data (A, b, \eta):
\alpha^2 \sum_{i=1}^{n} \frac{\sigma_i^2 b_{1i}^2}{(\sigma_i^2 + \alpha)^2} = \eta^2 \left( \sum_{i=1}^{n} \frac{\alpha^2 b_{1i}^2}{(\sigma_i^2 + \alpha)^2} + \|b_2\|^2 \right).   (3.8)
Note that only the norm of b 2 , and not b 2 itself, is needed in the above expression.
Remark. We have assumed in the derivation so far that A is full rank. If this were not
the case, i.e., if A (and hence \Sigma) were singular, then equation (3.8) can be reduced to
an equation of the same form but with a non-singular \Sigma of smaller dimension. Indeed,
if we partition \Sigma = diag(\Sigma_1, 0), where \Sigma_1 \in R^{k \times k} is nonsingular, let u \in R^k be the first k components of b_1,
and let w \in R^{n-k} be the last components of b_1, then equation (3.8) reduces to an equation of
the same form as (3.8), with \Sigma_1, u, and \sqrt{\|w\|^2 + \|b_2\|^2} in place of \Sigma, b_1, and \|b_2\|.
From now on, we assume that A is full rank and, hence, \Sigma is nonsingular.
A full rank A is a standing assumption in the sequel.
3.2. The Secular Equation. Define the nonlinear function in \alpha,
G(\alpha) = \sum_{i=1}^{n} \frac{\sigma_i^2 b_{1i}^2}{(\sigma_i^2 + \alpha)^2} - \frac{\eta^2}{\alpha^2} \left( \sum_{i=1}^{n} \frac{\alpha^2 b_{1i}^2}{(\sigma_i^2 + \alpha)^2} + \|b_2\|^2 \right) = \|\hat{x}(\alpha)\|^2 - \frac{\eta^2}{\alpha^2} \|A\hat{x}(\alpha) - b\|^2.   (3.10)
It is clear that \alpha is a positive solution to (3.8) if, and only if, it is a positive root of
G(\alpha). Following [4], we refer to the equation G(\alpha) = 0
as a secular equation.
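Putting the pieces together, the following sketch solves the generic case of the problem by bracketing and bisecting an equivalent scalar form of the secular equation; degenerate cases treated later in the paper (for example b in the column span of A, or \|A^T b\| \le \eta \|b\|, where the minimizer is \hat{x} = 0) are only partially handled here.

    import numpy as np

    def bdu_solution(A, b, eta):
        """min_x ||Ax - b|| + eta ||x|| via the secular equation (generic case only)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        b1 = U.T @ b
        b2_norm2 = max(np.linalg.norm(b) ** 2 - np.linalg.norm(b1) ** 2, 0.0)

        if np.linalg.norm(A.T @ b) <= eta * np.linalg.norm(b):
            return np.zeros(A.shape[1])             # no positive root: x_hat = 0

        def g(alpha):                               # alpha*||x(alpha)|| - eta*||A x(alpha) - b||
            xnorm = np.sqrt(np.sum((s * b1) ** 2 / (s ** 2 + alpha) ** 2))
            rnorm = np.sqrt(np.sum((alpha * b1) ** 2 / (s ** 2 + alpha) ** 2) + b2_norm2)
            return alpha * xnorm - eta * rnorm

        lo, hi = 1e-14, 1.0
        while g(hi) < 0.0:                          # bracket the unique positive root
            hi *= 2.0
        for _ in range(200):                        # bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
        alpha = 0.5 * (lo + hi)
        return Vt.T @ (s * b1 / (s ** 2 + alpha))   # x = (A^T A + alpha I)^{-1} A^T b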
The function G(ff) has several useful properties that will allow us to provide
conditions for the existence of a unique positive root ff. We start with the following
result.
Lemma 3.2. The function G(\alpha) in (3.10) can have at most one positive root. In
addition, if \bar{\alpha} > 0 is a root of G(\alpha), then
\bar{\alpha} is a simple root and G'(\bar{\alpha}) > 0.
Proof. We prove the second conclusion first. Partition
where the diagonal entries of \Sigma 1 2 R k\Thetak are those of \Sigma that are larger than j, and the
diagonal entries of \Sigma 2 2 R (n+1\Gammak)\Theta(n+1\Gammak) are the remaining diagonal entries of \Sigma
and one 0. It follows that (in terms of the 2\Gammainduced norm for the diagonal matrices
for all ff ? 0.
Let u 2 R k be the first k components of
the last components of
It follows that we can rewrite G(ff) as the difference
and, consequently,
ff ? 0 be a root of G(ff). This means that
ffI
ffI
which leads to the following sequence of inequalities:
ffI
ffI
ffI
ffI
ffI
ffI
ffI
Combining this relation with the expression for G 0 (ff), it immediately follows that
ff must be a simple root of G(ff).
Furthermore, we note that G(ff) is a sum of n rational functions in ff and
hence can have only a finite number of positive roots. In the following we show by
contradiction that G(ff) can have no positive roots other than "
ff. Assume to the
contrary that "
ff 1 is another positive root of G(ff). Without loss of generality, we
further assume that "
ff 1 and that G(ff) does not have any root within the open
It follows from the above proof that
But this implies that G(ff) ? 0 for ff slightly larger than "
ff and G(ff) ! 0 for ff slightly
smaller than "
consequently, G(ff) must have a root in the interval ("ff; "
contradiction to our assumptions. Hence G(ff) can have at most one positive root.
Now we provide conditions for G(ff) to have a positive root. [The next result was
in fact suggested earlier by the geometric argument of Fig. 2.3]. Note that A"x can be
written as
Therefore solving possible, is equivalent to solving
This shows that a necessary and sufficient condition for b to belong to the column
span of A is b
Lemma 3.3. Assume $\eta > 0$ (a standing assumption) and $b_2 \ne 0$, i.e., $b$ does not belong to the column span of $A$. Then the function $G(\alpha)$ in (3.10) has a unique positive root if, and only if,
$$ \eta\,\|b\| \;<\; \|A^T b\| . \qquad (3.13) $$
Proof. We note that
$$ \lim_{\alpha \to 0^{+}} G(\alpha) \;=\; -\infty \qquad (\text{since } b_2 \ne 0), $$
and that
$$ \lim_{\alpha \to \infty} \alpha^2\, G(\alpha) \;=\; \|A^T b\|^2 - \eta^2\,\|b\|^2 . \qquad (3.14) $$
First we assume that condition (3.13) holds. It follows then that G(ff) changes
sign on the interval (0; +1) and therefore has to have a positive root. By Lemma 3.2
this positive root must also be unique.
On the other hand, assume that
This condition implies, in view of (3.14), that G(ff) ! 0 for sufficiently large ff. We
now show by contradiction that G(ff) does not have a positive root. Assume to the
contrary that "
ff is a positive root of G(ff). It then follows from Lemma 3.2 that G(ff)
is positive for ff slightly larger than "
ff since G 0 ("ff) ? 0, and hence G(ff) must have a
root in ("ff; +1), which is a contradiction according to Lemma 3.2. Hence G(ff) does
not have a positive root in this case.
Finally, we consider the case
We also show by contradiction that G(ff) does not have a positive root. Assume to
the contrary that "
ff is a positive root of G(ff). It then follows from Lemma 3.2 that
ff must be a simple root, and a continuous function of the coefficients in G(ff). In
ff is a continuous function of j. Now we slightly increase the value of j so
that
By continuity, G(ff) has a positive root for such values of j, but we have just shown
that this is not possible. Hence, G(ff) does not have a positive
root in this case either.
We now consider the case b lies in the column span of A. This case
arises, for example, when A is a square invertible matrix
and
It follows from b
Now note that
Therefore, by using the Cauchy-Schwarz inequality, we have
and we obtain, after applying the Cauchy-Schwarz inequality one more time, that
Lemma 3.4. Assume j ? 0 (a standing assumption) and b lies in
the column span of A. Then the function G(ff) in (3.10) has a positive root if, and
only, if
Proof. It is easy to check that
lim
and that
lim
lim
Arguments similar to those in the proof of Lemma 3.3 show that G(ff) does not have
a positive root. Similarly G(ff) does not have a positive root if
arguments similar to those in the proof of Lemma 3.3 show that G(ff) does not have
a positive root if
However, if
lim
Hence G(ff) must have a positive root. By Lemma 3.2 this positive root is unique.
3.3. Finding the Global Minimum. We now show that whenever G(ff) has a
positive root "
ff, the corresponding vector " x in (3.4) must be the global minimizer of
L("x).
Lemma 3.5. Let "
ff be a positive root of G(ff) and let " x be defined by (3.4) for
ff.
x is the global minimum of L("x).
Proof. We first show that
where 4L("x) is the Hessian of L at " x. We take the gradient of L,
Consequently,
A T A \GammakA"x \Gamma bk 3\Gamma
k"xk 3"
We now simplify this expression. It follows from (3.4) that
ffI
and hence
Substituting this relation into the expression for the Hessian matrix 4L("x), and
simplifying the resulting expression using equation (3.3), we obtain
ffI
x
Observe that the matrix
ffI
is positive definite since "
can have at most one non-positive eigenvalue. This implies that 4L("x) is
positive definite if and only if det (4L("x)) ? 0. Indeed,
det
ffI
x
x
ffI
x
The last expression can be further rewritten using the SVD of A and (3.8):
det
x
ffI
x
ffI
="
x
ffI
x
ffI
ff
x
ffI
Comparing the last expression with the function G(ff) in (3.10), we finally have
det
ff
By Lemma 3.2, we have that G 0 ("ff) ? 0. Consequently, 4L("x) must be positive
definite, and hence " x must be a local minimizer of L("x). Since L("x) is a convex
function, this also means that " x is a global minimizer of L("x).
We still need to consider the points at which L("x) is not differentiable. These
include " any solution of
Consider first the case b 2 6= 0. This means that b does not belong to the column
span of A and, hence, we only need to check "
follows from Lemma 3.3 that G(ff) has a unique positive root "
ff and it follows from
Lemma 3.5 that
ffI
is the global minimum. On the other hand, if condition (3.13) does not hold, then it
follows from Lemma 3.3 that G(ff) does not have any positive root and hence
is the global minimum.
Now consider the case b which means that b lies in the column span of
A. In this case L("x) is not differentiable at both "
condition (3.16) holds, then it follows from Lemma 3.4 that G(ff) has a unique positive
ff and it follows from Lemma 3.5 that
ffI
is the global minimum. On the other hand, if
where we have used the Cauchy-Schwarz inequality. It follows that
is the global minimum in this case. Similarly, if j 2 , then
is the global minimum.
We finally consider the degenerate case j. Under this condition, it
follows from (3.15) that
Hence,
This shows that L
L(0). But since L("x) is a convex function in "
x, we
conclude that for any "
x that is a convex linear combination of 0 and
we also obtain Therefore, when there are many solutions " x
and these are all scaled multiples of V \Sigma as in (3.17).
3.4. Statement of the Solution of the Constrained Min-Max Problem.
We collect in the form of a theorem the conclusions of our earlier analysis.
Theorem 3.6. Given $A \in R^{m\times n}$, with $m \ge n$ and $A$ full rank, $b \in R^m$, and nonnegative real numbers $(\eta, \eta_b)$, the following optimization problem
$$ \min_{\hat{x}} \;\; \max_{\|\delta A\| \le \eta,\ \|\delta b\| \le \eta_b} \big\| (A + \delta A)\hat{x} - (b + \delta b) \big\| $$
always has a solution "
x. The solution(s) can be constructed as follows.
ffl Introduce the SVD of A,
where U 2 R m\Thetam and V 2 R n\Thetan are orthogonal, and
is diagonal, with
being the singular values of A.
ffl Partition the vector U T b into
m\Gamman .
ffl Introduce the secular function
and
First case: b does not belong to the column span of A.
1. If j 2 then the unique solution is "
2. If j ! 2 then the unique solution is "
is the
unique positive root of the secular equation
Second case: b belongs to the column span of A.
1. If j 2 then the unique solution is "
2. If then the unique solution is "
ff is
the unique positive root of the secular equation
3. If j 1 then the unique solution is "
4. If then there are infinitely many solutions that are given by
The above solution is suitable when the computation of the SVD of A is feasible.
For large sparse matrices A, it is better to reformulate the secular equation as follows.
Squaring both sides of (3.3) we obtain
After some manipulation, we are led to
\Theta
where we have defined Therefore, finding ff reduces to finding
the positive-root of
\Theta
In this form, one can consider techniques similar to those suggested in [5] to find ff
efficiently.
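To show how the pieces fit together in the generic case, here is a simplified Python/NumPy solver. It assumes $b_2 \ne 0$ and $\eta\|b\| < \|A^T b\|$ (so that, by Lemma 3.3, a unique positive root exists), brackets the secular-type function used above, bisects for $\alpha$, and then forms $\hat{x}$ from (3.4). It is an illustrative sketch, not the implementation of Theorem 3.6 in full: the special cases of the theorem are omitted, and plain bisection replaces the more efficient root finders alluded to in the text.

import numpy as np

def bdu_solve(A, b, eta, tol=1e-12):
    # assumes b has a component outside range(A) and eta*||b|| < ||A^T b||
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    c = U.T @ b
    b1, nb2 = c[:n], np.linalg.norm(c[n:])
    def g(a):                      # root of g <=> coupling condition (3.3)
        d = (s**2 + a)**2
        return np.sum(s**2 * b1**2 / d) - eta**2 * (np.sum(b1**2 / d) + nb2**2 / a**2)
    lo, hi = 1e-14, 1.0
    while g(hi) < 0:               # g(a) > 0 for large a when eta*||b|| < ||A^T b||
        hi *= 2.0
    while g(lo) > 0:               # g -> -infinity as a -> 0+ when b2 != 0
        lo /= 2.0
    for _ in range(200):           # bisection on the unique positive root
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol * hi:
            break
    alpha = 0.5 * (lo + hi)
    x = Vt.T @ (s * b1 / (s**2 + alpha))     # x = V (Sigma^2 + alpha I)^{-1} Sigma b1
    return x, alpha

# hypothetical test data
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 6)); b = rng.standard_normal(30); eta = 0.05
x, alpha = bdu_solve(A, b, eta)
print(alpha, eta * np.linalg.norm(A @ x - b) / np.linalg.norm(x))   # the two values match

The final print statement checks the coupling condition (3.3): the two printed numbers agree to roughly the bisection tolerance.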
4. Restricted Perturbations. We have so far considered the case in which all
the columns of the A matrix are subject to perturbations. It may happen in practice,
however, that only selected columns are uncertain, while the remaining columns are
known precisely. This situation can be handled by the approach of this paper as we
now clarify.
Given A 2 R m\Thetan , we partition it into block columns,
\Theta
and assume, without loss of generality, that only the columns of A 2 are subject to
perturbations while the columns of $A_1$ are known exactly. We then pose the following problem:
Given $A \in R^{m\times n}$, with $m \ge n$ and $A$ full rank, $b \in R^m$, and nonnegative real numbers $(\eta_2, \eta_b)$, determine $\hat{x}$ that solves
$$ \min_{\hat{x}} \;\; \max_{\|\delta A_2\| \le \eta_2,\ \|\delta b\| \le \eta_b} \Big\| \begin{bmatrix} A_1 & A_2 + \delta A_2 \end{bmatrix} \hat{x} \;-\; (b + \delta b) \Big\| . \qquad (4.1) $$
If we partition $\hat{x}$ accordingly with $A_1$ and $A_2$, say $\hat{x} = \begin{bmatrix} \hat{x}_1 \\ \hat{x}_2 \end{bmatrix}$, then we can write
$$ \begin{bmatrix} A_1 & A_2 + \delta A_2 \end{bmatrix} \hat{x} - (b + \delta b) \;=\; \big( A_1 \hat{x}_1 + A_2 \hat{x}_2 - b \big) \;+\; \delta A_2\, \hat{x}_2 \;-\; \delta b . $$
Therefore, following the argument at the beginning of Sec. 3, we conclude that the maximum over $(\delta A_2, \delta b)$ is achievable and is equal to
$$ \big\| A_1 \hat{x}_1 + A_2 \hat{x}_2 - b \big\| \;+\; \eta_2\, \|\hat{x}_2\| \;+\; \eta_b . $$
In this way, statement (4.1) reduces to the minimization problem
$$ \min_{\hat{x}_1, \hat{x}_2} \; \Big( \big\| A_1 \hat{x}_1 + A_2 \hat{x}_2 - b \big\| + \eta_2\, \|\hat{x}_2\| \Big) . \qquad (4.2) $$
This statement can be further reduced to the problem treated in Theorem 3.6 as
follows. Introduce the QR decomposition of A, say
R 11 R 12
where we have partitioned R accordingly with the sizes of A 1 and A 2 . Define4
Then (4.2) is equivalent to
min
R 11 R 12
\Gamma4
which can be further rewritten as
min
This shows that once the optimal $\hat{x}_2$ has been determined, the optimal choice for $\hat{x}_1$ is necessarily the one that annihilates the leading block row of the transformed residual, namely $R_{11}\hat{x}_1 + R_{12}\hat{x}_2$ minus the leading block of $Q^T b$. That is,
$$ \hat{x}_1 \;=\; R_{11}^{-1} \Big( (Q^T b)_1 \;-\; R_{12}\, \hat{x}_2 \Big) , \qquad (4.5) $$
where $(Q^T b)_1$ denotes the leading block of $Q^T b$ conformal with $R_{11}$, and $(Q^T b)_2$, $(Q^T b)_3$ below denote the remaining blocks. The optimal $\hat{x}_2$ is the solution of
$$ \min_{\hat{x}_2} \; \left( \left\| \begin{bmatrix} R_{22} \\ 0 \end{bmatrix} \hat{x}_2 \;-\; \begin{bmatrix} (Q^T b)_2 \\ (Q^T b)_3 \end{bmatrix} \right\| + \eta_2\, \|\hat{x}_2\| \right) . $$
This optimization is of the same form as the problem stated earlier in Lemma 3.1 with $\hat{x}$ replaced by $\hat{x}_2$, $\eta$ replaced by $\eta_2$, $A$ replaced by $\begin{bmatrix} R_{22} \\ 0 \end{bmatrix}$, and $b$ replaced by $\begin{bmatrix} (Q^T b)_2 \\ (Q^T b)_3 \end{bmatrix}$. Therefore, the optimal $\hat{x}_2$ can be obtained by applying the result of Theorem 3.6. Once $\hat{x}_2$ has been determined, the corresponding $\hat{x}_1$ follows from (4.5).
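To make the reduction concrete, the following Python/NumPy sketch carries out the QR-based transformation described above. It is an illustration under our own naming conventions: solve_subproblem is a hypothetical placeholder for a solver of the full-perturbation problem (e.g., the construction of Theorem 3.6) applied to the smaller data.

import numpy as np

def restricted_bdu(A1, A2, b, solve_subproblem):
    # reduce the problem with perturbations confined to A2 to a smaller full-perturbation problem
    m, n1 = A1.shape
    n2 = A2.shape[1]
    Q, R = np.linalg.qr(np.hstack([A1, A2]), mode='complete')   # [A1 A2] = Q [R; 0]
    R11, R12 = R[:n1, :n1], R[:n1, n1:n1 + n2]
    R22 = R[n1:n1 + n2, n1:n1 + n2]
    c = Q.T @ b
    # smaller problem in x2 only: coefficient [R22; 0], right-hand side = trailing part of Q^T b
    A_sub = np.vstack([R22, np.zeros((m - n1 - n2, n2))])
    x2 = solve_subproblem(A_sub, c[n1:])
    # (4.5): choose x1 to annihilate the leading block row of the transformed residual
    x1 = np.linalg.solve(R11, c[:n1] - R12 @ x2)
    return x1, x2

# e.g., solve_subproblem = lambda At, bt: np.linalg.lstsq(At, bt, rcond=None)[0]
# recovers the plain least-squares special case (eta_2 = 0).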
5. Conclusion. In this paper we have proposed a new formulation for parameter
estimation in the presence of data uncertainties. The problem incorporates a-priori
bounds on the size of the perturbations and admits a nice geometric interpretation. It
also has a closed form solution that is obtained by solving a regularized least-squares
problem with a regression parameter that is the unique positive root of a secular
equation.
Several other interesting issues remain to be addressed. Among these, we state
the following:
1. A study of the statistical properties of the min-max solution is valuable for a
better understanding of its performance in stochastic settings.
2. The numerical properties of the algorithm proposed in this paper need also
be addressed.
3. Extensions of the algorithm to deal with perturbations in submatrices of A
are of interest and will be studied elsewhere.
We can also extend the approach of this paper to other variations that include
uncertainties in a weighting matrix, multiplicative uncertainties, etc. (see, e.g., [15]).
--R
Robust solutions to least-squares problems with uncertain data
Some modified matrix eigenvalue problems
An analysis of the total least squares problem
Generalized cross-validation for large scale problems
Linear estimation in Krein spaces - Part I: Theory
The Total Least Squares Problem: Computational Aspects and Analysis
Fundamentals of
Filtering and smoothing in an H 1
Society for Industrial and Applied Mathematics
Fundamental Inertia Conditions for the Minimization of Quadratic Forms in Indefinite Metric Spaces
"Parameter estimation in the presence of bounded modeling errors,"
--TR
--CTR
Arvind Nayak , Emanuele Trucco , Neil A. Thacker, When are Simple LS Estimators Enough? An Empirical Study of LS, TLS, and GTLS, International Journal of Computer Vision, v.68 n.2, p.203-216, June 2006
Mohit Kumar , Regina Stoll , Norbert Stoll, Robust Solution to Fuzzy Identification Problem with Uncertain Data by Regularization, Fuzzy Optimization and Decision Making, v.3 n.1, p.63-82, March 2004
Pannagadatta K. Shivaswamy , Chiranjib Bhattacharyya , Alexander J. Smola, Second Order Cone Programming Approaches for Handling Missing and Uncertain Data, The Journal of Machine Learning Research, 7, p.1283-1314, 12/1/2006
Ivan Markovsky , Sabine Van Huffel, Overview of total least-squares methods, Signal Processing, v.87 n.10, p.2283-2302, October, 2007 | least-squares estimation;total least-squares;regularized least-squares;ridge regression;secular equation;modeling errors;robust estimation |
287637 | Computing rank-revealing QR factorizations of dense matrices. | We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS/6000 and SGI R8000 platforms show that this approach performs up to three times faster than the less reliable QR factorization with column pivoting as it is currently implemented in LAPACK, and comes within 15% of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in many circumstances where numerical rank deficiency cannot be ruled out, but currently has been ignored because of the computational cost of dealing with it. | INTRODUCTION
We briefly summarize the properties of a rank-revealing QR factorization (RRQR
factorization). Let $A$ be an $m \times n$ matrix (w.l.o.g. $m \ge n$) with singular values
$$ \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0 , $$
and define the numerical rank $r$ of $A$ with respect to a threshold $\tau$ as follows:
$$ \frac{\sigma_1}{\sigma_r} \;\le\; \tau \;<\; \frac{\sigma_1}{\sigma_{r+1}} . $$
Also, let A have a QR factorization of the form
R 11 R 12
where P is a permutation matrix, Q has orthonormal columns, R is upper triangular
and R 11 is of order r. Further, let -(A) denote the two-norm condition number of
a matrix A. We then say that (2) is an RRQR factorization of A if the following
properties are satisfied:
Whenever there is a well-determined gap in the singular value spectrum between oe r
and oe r+1 , and hence the numerical rank r is well defined, the RRQR factorization
(2) reveals the numerical rank of A by having a well-conditioned leading submatrix
$R_{11}$ and a trailing submatrix $R_{22}$ of small norm. We also note that the matrix
$$ W \;=\; P \begin{bmatrix} R_{11}^{-1} R_{12} \\ -I \end{bmatrix} , $$
which can be easily computed from (2), is usually a good approximation of the nullvectors, and a few steps of subspace iteration suffice to compute nullvectors that are correct to working precision [Chan and Hansen 1992].
The RRQR factorization is a valuable tool in numerical linear algebra because
it provides accurate information about rank and numerical nullspace. Its main
use arises in the solution of rank-deficient least-squares problems, for example,
in geodesy [Golub et al. 1986], computer-aided design [Grandine 1989], nonlinear
least-squares problems [Mor'e 1978], the solution of integral equations [Eld'en and
Schreiber 1986], and the calculation of splines [Grandine 1987]. Other applications
arise in beamforming [Bischof and Shroff 1992], spectral estimation [Hsieh et al.
1991], and regularization [Hansen 1990; Hansen et al. 1992; Wald'en 1991].
Stewart [1990] suggested another alternative to the singular value decomposition
(SVD), a complete orthogonal decomposition called URV decomposition. This
factorization decomposes
R 11 R 12
where U and V are orthogonal and both kR 12 k 2 and kR 22 k 2 are of the order oe r+1 .
In particular, compared with RRQR factorizations, URV decompositions employ a
general orthogonal matrix V instead of the permutation matrix P . URV decompositions
are more expensive to compute, but they are well suited for nullspace updat-
ing. RRQR factorizations, on the other hand, are more suited for the least-squares
setting, since one need not store the orthogonal matrix V (the other orthogonal
matrix is usually applied to the right-hand side "on the fly"). Of course, RRQR
factorizations can be used to compute an initial URV decomposition, where
We briefly review the history of RRQR algorithms. From the interlacing theorem
for singular values [Golub and Loan 1983, Corollary 8.3.3], we have
$$ \sigma_{\min}(R_{11}) \;\le\; \sigma_r(A) \qquad \text{and} \qquad \sigma_{\max}(R_{22}) \;\ge\; \sigma_{r+1}(A) . $$
Hence, to satisfy condition (3), we need to pursue two tasks:
Task 1. Find a permutation P that maximizes oe min (R 11 ).
Task 2. Find a permutation P that minimizes oe max (R 22 ).
Businger and Golub [1965] suggested what is commonly called the "QR factorization
with column pivoting." Given a set of already selected columns, this algorithm
chooses as the next pivot column the one that is "farthest away" in the Euclidean
norm from the subspace spanned by the columns already chosen [Golub and Loan
1983, p.168, P.6.4-5]. This intuitive strategy addresses task 1.
While this greedy algorithm is known to fail on the so-called Kahan matrices
[Golub and Loan 1989, p. 245, Example 5.5.1], it works well in practice and
forms the basis of the LINPACK [Dongarra et al. 1979] and LAPACK [Anderson
et al. 1992a; Anderson et al. 1994b] implementations. Recently, Quintana-Ort'i,
Sun, and Bischof [1995] developed an implementation of the Businger/Golub algorithm
that allows half of the work to be performed with BLAS-3 kernels. Bischof
also had developed restricted-pivoting variants of the Businger/Golub strategy to
enable the use of BLAS-3 type kernels [1989] for almost all of the work and to
reduce communication cost on distributed-memory machines [1991].
One approach to task-2 is based, in essence, on the following fact, which is proved
in [Chan and Hansen 1992].
Lemma 1. For any $R \in R^{n\times n}$ and any $W = \begin{bmatrix} W_1 \\ W_2 \end{bmatrix} \in R^{n\times p}$ with a nonsingular $W_2 \in R^{p\times p}$,
$$ \|R_{22}\|_2 \;\le\; \frac{\|R\,W\|_2}{\sigma_{\min}(W_2)} . $$
This means that if we can determine a matrix $W$ with $p$ linearly independent columns, all of which lie approximately in the nullspace of $R$ (i.e., $\|RW\|_2$ is small), and if $W_2$ is well conditioned such that $(\sigma_{\min}(W_2))^{-1}$ is not large, then we are guaranteed that the elements of the bottom right $p \times p$ block of $R$ will be small.
Algorithms based on computing well-conditioned nullspace bases for A include
these by Golub, Klema, and Stewart [1976], Chan [1987], and Foster [1986]. Other
algorithms addressing task-2 are these by Stewart [1984] and Gragg and Stewart
[1976]. Algorithms addressing task 1 include those of Chan and Hansen [1994]
and Golub, Klema, and Stewart [1976]. In fact, the latter achieves both task 1 and
task 2 and, therefore, reveals the rank, but it is too expensive in comparison with
the others.
Bischof and Hansen combined a restricted-pivoting strategy with Chan's algorithm
[Chan 1987] to arrive at an algorithm for sparse matrices [Bischof and Hansen
1991] and also developed a block-variant of Chan's algorithm [Bischof and Hansen
1992]. A Fortran 77 implementation of Chan's algorithm was provided by Reichel
and Gragg [1990].
Chan's algorithm [Chan 1987] guaranteed
and
That is, as long as the rank of the matrix is close to n, the algorithm is guaranteed
to produce reliable bounds, but reliability may decrease with the rank of the matrix.
Hong and Pan [1992] then showed that there exists a permutation matrix $P$ such that for the triangular factor $R$ partitioned as in (2) we have
$$ \sigma_{\min}(R_{11}) \;\ge\; \frac{\sigma_r(A)}{p_1(r, n)} \qquad (8) \qquad \text{and} \qquad \sigma_{\max}(R_{22}) \;\le\; \sigma_{r+1}(A)\, p_2(r, n) , \qquad (9) $$
where $p_1(r,n)$ and $p_2(r,n)$ are low-order polynomials in $n$ and $r$ (versus an exponential factor in Chan's algorithm).
Chandrasekaran and Ipsen [1994] were the first to develop RRQR algorithms that
satisfy (8) and (9). Their paper also reviews and provides a common framework
for the previously devised strategies. In particular, they introduce the so-called
unification principle, which says that running a task-1 algorithm on the rows of the
inverse of the matrix yields a task-2 algorithm. They suggest hybrid algorithms that
alternate between task-1 and task-2 steps to refine the separation of the singular
values of R.
Pan and Tang [1992] and Gu and Eisenstat [1992] presented different classes of
algorithms for achieving (8) and (9), addressing the possibility of nontermination
of the algorithms because of floating-point inaccuracies.
The goal of our work was to develop an efficient and reliable RRQR algorithm
and implementation suitable for inclusion in a numerical library such as LAPACK.
Specifically, we wished to develop an implementation that was both reliable and
close in performance to the QR factorization without any pivoting. Such an implementation
would provide algorithm developers with an efficient tool for addressing
potential numerical rank deficiency by minimizing the computational penalty for
addressing potential rank deficiency. Our strategy involves the following ingredients:
-An efficient block-algorithm for computing an approximate RRQR factorization
based on the work by Bischof [1989], and
-efficient implementations of RRQR algorithms well suited for triangular matrices
based on the work by Chandrasekaran and Ipsen [1994] and Pan and Tang
[1992]. These algorithms seemed better suited for triangular matrices than those
suggested by Gu and Eisenstat [1992].
We expect that
1. P := I
2. foreach j in {1, ..., n} do res_j := ||A(1:m, j)||_2 end foreach
3. for i := 1 to min(m, n) do
4.    Let i <= pvt <= n be such that res_pvt is maximal
5.    Exchange columns i and pvt of A, entries i and pvt of res, and update P accordingly
6.    [u_i, A(i:m, i)] := genhh(A(i:m, i))
7.    A(i:m, i+1:n) := apphh(u_i, A(i:m, i+1:n))
8.    foreach j in {i+1, ..., n} do
9.       res_j := sqrt(res_j^2 - A(i, j)^2)
10.   end foreach
11. end for
Fig. 1. The QR Factorization Algorithm with Traditional Column Pivoting
-in most cases the approximate RRQR factorization computed by the block algorithm
is very close to the desired RRQR factorization, requiring little postpro-
cessing, and
-the almost entirely BLAS-3 based preprocessing algorithm performs considerably
faster than the QR factorization with column pivoting and close to the performance
of the QR factorization without pivoting.
The paper is structured as follows. In the next section, we review the block
algorithm for computing an approximate RRQR factorization based on a restricted-
pivoting approach. In Section 3, we describe our modifications to Chandrasekaran
and Ipsen's ``Hybrid-III'' algorithm and Pan and Tang's "Algorithm 3." Section 4
presents our experimental results on IBM RS/6000, and SGI R8000 platforms. In
Section 5, we summarize our results.
2. A BLOCK QR FACTORIZATION WITH RESTRICTED PIVOTING
In this section, we describe a block QR factorization algorithm which employs
a restricted pivoting strategy to approximately compute a RRQR factorization,
employing the ideas described in Bischof [1989].
We compute $Q$ by a sequence of Householder matrices
$$ H_i \;=\; H(u_i) \;=\; I - 2\,u_i u_i^T , \qquad \|u_i\|_2 = 1 . \qquad (10) $$
For any given vector $x$, we can choose a vector $u$ so that $H(u)\,x = \alpha e_1$, where $e_1$ is the first canonical unit vector and $|\alpha| = \|x\|_2$ (see, for example, [Golub and Loan 1989, p. 196]). The application of a Householder matrix $B := H(u)A$ involves a matrix-vector product $z := A^T u$ and a rank-one update $B := A - 2 u z^T$.
Figure 1 describes the Businger/Golub Householder QR factorization algorithm with traditional column pivoting [Businger and Golub 1965] for computing the QR decomposition of an $m \times n$ matrix $A$. The primitive operation $[u, y] := \mathrm{genhh}(x)$ computes $u$ such that $y = H(u)x$ is a multiple of $e_1$, while the primitive operation $B := \mathrm{apphh}(u, A)$ overwrites $A$ with $H(u)A$. After step $i$ is completed, the values $\mathrm{res}_j$, $j = i+1, \ldots, n$, are the lengths of the projections of the $j$th column of the currently permuted $AP$ onto the orthogonal complement of the subspace spanned by the first $i$ columns of $AP$. The values $\mathrm{res}_j$
can be updated easily and do not have to be recomputed at every step, although
e
e
e
e
e
e
@
@
Fig. 2. Restricting Pivoting for a Block Algorithm
roundoff errors may make it necessary to recompute res
periodically [Dongarra et al. 1979, p. 9.17] (we suppressed this detail
in line 9 of Figure 1).
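As a concrete, runnable rendering of Figure 1, here is an unblocked Python/NumPy version (our own illustration, not the LINPACK/LAPACK code). It keeps squared column norms and downdates them as in line 9, clipping at zero to guard against rounding; only R and the permutation are returned, and Q is left implicit.

import numpy as np

def qrcp(A):
    # Householder QR with Businger/Golub column pivoting (unblocked sketch)
    A = np.array(A, dtype=float)
    m, n = A.shape
    piv = np.arange(n)
    res = np.sum(A * A, axis=0)            # squared residual column norms
    for i in range(min(m, n)):
        pvt = i + int(np.argmax(res[i:]))  # column farthest from the current span
        A[:, [i, pvt]] = A[:, [pvt, i]]
        res[[i, pvt]] = res[[pvt, i]]
        piv[[i, pvt]] = piv[[pvt, i]]
        # generate a unit Householder vector u so that H(u) maps A[i:, i] to alpha*e_1
        x = A[i:, i].copy()
        alpha = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else -np.linalg.norm(x)
        u = x.copy()
        u[0] -= alpha
        unorm = np.linalg.norm(u)
        if unorm > 0:
            u /= unorm
            # apply H(u) = I - 2 u u^T to the trailing block
            A[i:, i:] -= 2.0 * np.outer(u, u @ A[i:, i:])
        # downdate the squared residual norms (line 9 of Figure 1)
        res[i + 1:] = np.maximum(res[i + 1:] - A[i, i + 1:] ** 2, 0.0)
    return np.triu(A[:min(m, n), :]), piv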
The bulk of the computationalwork in this algorithm is performed in the apphh ker-
nel, which relies on matrix-vector operations. However, on today's cache-based
architectures (ranging from workstations to supercomputers) matrix-matrix operations
perform much better. Matrix-matrix operations are exploited by using
so-called block algorithms, whose top-level unit of computation is matrix blocks
instead of vectors. Such algorithms play a central role, for example, in the LAPACK
implementations [Anderson et al. 1992a; Anderson et al. 1994b]. LAPACK
employs the so-called compact WY representation of products of Householder matrices
[Schreiber and Van Loan 1989], which expresses the product
of a series of m \Theta m Householder matrices (10) as
where Y is an m \Theta nb matrix and T is an nb \Theta nb upper triangular matrix. Stable
implementations for generating Householder vectors as well as forming and applying
compact WY factors are provided in LAPACK.
To arrive at a block QR factorization algorithm, we would like to avoid updating
part of A until several Householder transformations have been computed. This
strategy is impossible with traditional pivoting, since we must update res j before
we can choose the next pivot column. While we can modify the traditional approach
to do half of the work using block transformations, this is the best we can do (these
issues are discussed in detail in [Quintana-Ort'i et al. 1995]). Therefore, we instead
limit the scope of pivoting as suggested in [Bischof 1989], Thus, we do not have
to update the remaining columns until we have computed enough Householder
transformations to make a block update worthwhile.
The idea is graphically depicted in Figure 2. At a given stage we are done with
the columns to the left of the pivot window. We then try to select the next pivot
columns exclusively from the columns in the pivot window, not touching the part of
A to the right of the pivot window. Only when we have combined the Householder
vectors defined by the next batch of pivot columns into a compact WY factor, do
we apply this block update to the columns on the right.
Since the leading block of R is supposed to approximate the large singular values
of A, we must be able to guard against pivot columns that are close to the span
of columns already selected. That is, given the upper triangular matrix $R_i$ defined by the first $i$ columns of $Q^T A P$ and a new column $\begin{bmatrix} v \\ \gamma \end{bmatrix}$ determined by the new candidate pivot column, we must determine whether
$$ R_{i+1} \;=\; \begin{bmatrix} R_i & v \\ 0 & \gamma \end{bmatrix} $$
has a condition number that is larger than a threshold $\tau$, which defines what we consider a rank-deficient matrix.
We approximate $\sigma_{\max}(R_{i+1})$ by an estimate $\hat{\sigma}_{\max}(R_{i+1})$, which is easy to compute. To cheaply estimate $\sigma_{\min}(R_{i+1})$, we employ incremental condition estimation (ICE) [Bischof 1990; Bischof and Tang 1991]. Given a good estimate $\hat{\sigma}_{\min}(R_i)$, defined by a large-norm solution $x$ of $R_i^T x = d$ with $\|d\|_2 = 1$, and a new column $\begin{bmatrix} v \\ \gamma \end{bmatrix}$, incremental condition estimation, with only 3k flops, computes $s$ and $c$, $s^2 + c^2 = 1$, such that the extended vector built from $s\,x$ and $c$ determines the new estimate $\hat{\sigma}_{\min}(R_{i+1})$.
A stable implementation of ICE based on the formulation in [Bischof and Tang 1991] is provided by the LAPACK routine xLAIC1. 1 ICE is an order of magnitude cheaper than other condition estimators (see, for example, [Higham 1986]). Moreover, it is considerably more reliable than simply using $|\gamma|$ as an estimate for $\sigma_{\min}(R_{i+1})$ (see, for example, [Bischof 1991]). We also define the condition estimate
$$ \hat{\kappa}(R_{i+1}) \;:=\; \frac{\hat{\sigma}_{\max}(R_{i+1})}{\hat{\sigma}_{\min}(R_{i+1})} . \qquad (14) $$
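To convey the flavor of such an incremental estimate, the sketch below maintains a large-norm solution x of R^T x = d with ||d|| = 1 and, for each appended column [v; gamma], picks (s, c) on the unit circle that maximizes the norm of the extended solution by solving a 2-by-2 symmetric eigenvalue problem. This is our own derivation of the idea, not the xLAIC1 formulation, and is meant only as an illustration.

import numpy as np

def ice_update(x, v, gamma):
    # one incremental sigma_min update for R_new = [[R, v], [0, gamma]];
    # x solves R^T x = d with ||d|| = 1, so 1/||x|| is the current estimate
    a = float(v @ x)
    M = np.array([[x @ x + a * a / gamma**2, -a / gamma**2],
                  [-a / gamma**2, 1.0 / gamma**2]])
    lam, W = np.linalg.eigh(M)          # choose (s, c) maximizing the extended solution norm
    s, c = W[:, -1]
    y = np.concatenate([s * x, [(c - s * a) / gamma]])
    return y, 1.0 / np.linalg.norm(y)

# illustration on a random triangular matrix (hypothetical data)
rng = np.random.default_rng(2)
R = np.triu(rng.standard_normal((8, 8))) + 4 * np.eye(8)
x = np.array([1.0 / R[0, 0]])           # 1-by-1 start: the estimate equals |R(1,1)|
for i in range(1, 8):
    x, est = ice_update(x, R[:i, i], R[i, i])
print(est, np.linalg.svd(R, compute_uv=False)[-1])   # estimate vs. true sigma_min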
The restricted block pivoting algorithm proceeds in four phases:
Phase 1: Pivoting of largest column into first position. This step is motivated by
the fact that the norm of the largest column of $A$ is usually a good estimate for $\sigma_{\max}(A)$.
Phase 2: Block QR factorization with restricted pivoting. Given a desired block
size nb and a window size ws, ws - nb, we try to generate nb Householder transformations
by applying the Businger/Golub pivoting strategy only to the columns
in the pivot window, using ICE to assess the impact of a column selection on the
condition number via ICE. When the pivot column chosen from the pivot window
would lead to a leading triangular factor whose condition number exceeds - , we
mark all remaining columns in the pivot window (k, say) as "rejected," pivot them
to the end of the matrix, generate a block transformation (of width not more than
nb), apply it to the remainder of the matrix, and then reposition the pivot window
1 Here as in the sequel we use the convention that the prefix "x" generically refers to the appropriate
one of the four different precision instantiations: SLAIC1, DLAIC1, CLAIC1, or ZLAIC1.
to encompass the next ws not yet rejected columns. When all columns have been
either accepted as part of the leading triangular factor or rejected at some stage of
the algorithm, this phase stops.
Assuming we have included $r_2$ columns in the leading triangular factor, we have at this point computed an $r_2 \times r_2$ upper triangular matrix $R(1\!:\!r_2, 1\!:\!r_2)$ that satisfies
$$ \hat{\kappa}\big(R(1\!:\!r_2, 1\!:\!r_2)\big) \;\le\; \tau . $$
That is, $r_2$ is our estimate of the numerical rank with respect to the threshold $\tau$ at this point.
In our experiments, we chose
This choice tries to ensure a suitable pivot window and "loosens up" a bit as matrix
size increases. A pivot window that is too large will reduce performance because of
the overhead in generating block orthogonal transformations and the larger number
of unblocked operations. On the other hand, a pivot window that is too small will
reduce the pivoting flexibility and thus increase the likelihood that the restricted
pivoting strategy will fail to produce a good approximate RRQR factorization. In
our experiments, the choice of w had only a small impact (not more than 5%) on
overall performance and negligible impact on the numerical behavior.
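In code, the acceptance test applied to the best candidate in the pivot window can be written as follows. This simplified stand-in computes the condition number of the would-be leading block exactly with an SVD, whereas the actual implementation uses the cheap incremental estimates sigma-hat_max and sigma-hat_min discussed above.

import numpy as np

def accept_candidate(R_lead, v, gamma, tau):
    # would appending the candidate column [v; gamma] keep the leading block's condition <= tau?
    i = R_lead.shape[0]
    R_new = np.zeros((i + 1, i + 1))
    R_new[:i, :i] = R_lead
    R_new[:i, i] = v
    R_new[i, i] = gamma
    s = np.linalg.svd(R_new, compute_uv=False)
    return s[0] <= tau * s[-1]

R0 = np.triu(np.random.default_rng(5).standard_normal((4, 4))) + 3 * np.eye(4)
print(accept_candidate(R0, np.ones(4), 1e-8, tau=1e5))   # tiny gamma: candidate rejected

When this test fails for the chosen candidate, all remaining columns in the window are marked as rejected and swapped to the rear, as described above.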
Phase 3: Traditional pivoting strategy among "rejected" columns. Since phase 2
rejects all remaining columns in the pivot window when the pivot candidate is
rejected, a column may have been pivoted to the end that should not have been
rejected. Hence, we now continue with the traditional Businger/Golub pivoting
strategy on the remaining updating (14) as an estimate of the
condition number. This phase ends at column r 3 , say, where
and the inclusion of the next pivot column would have pushed the condition number
beyond the threshold. We do not expect many columns (if any) to be selected in
this phase. It is mainly intended as a cheap safeguard against possible failure of
the initial restricted-pivoting strategy.
Phase 4: Block QR factorization without pivoting on remaining columns. The
columns not yet factored (columns r 3 are with great probability linearly
dependent on the previous ones, since they have been rejected in both phase 2
and phase 3. Hence, it is unlikely that any kind of column exchanges among the
remaining columns would change our rank estimate, and the standard BLAS-3
block QR factorization as implemented in the LAPACK routine xGEQRF is the
fastest way to complete the triangularization.
After the completion of phase 4, we have computed a QR factorization that satisfies $\hat{\kappa}\big(R(1\!:\!r_3, 1\!:\!r_3)\big) \le \tau$, and for any column $y$ in $R(:, r_3\!+\!1\!:\!n)$, appending $y$ to the leading triangular factor $R(1\!:\!r_3, 1\!:\!r_3)$ would push the condition estimate beyond the threshold.
This result suggests that this QR factorization is a good approximation to a RRQR
factorization and r 3 is a good estimate of the rank.
However, this QR factorization does not guarantee to reveal the numerical rank
correctly. Thus, we back up this algorithm with the guaranteed reliable RRQR
implementations introduced in the next two sections.
3. POSTPROCESSING ALGORITHMS FOR AN APPROXIMATE RRQR FACTOR-
IZATION
In 1991, Chandrasekaran and Ipsen [1994] introduced a unified framework for
RRQR algorithms and developed an algorithm guaranteed to satisfy (8) and (9), and thus to properly reveal the rank. Their algorithm assumes that the initial matrix is triangular and thus is well suited as a postprocessing step to the algorithm presented in the preceding section. Shortly thereafter, Pan and Tang [1992] introduced
another guaranteed reliable RRQR algorithm for triangular matrices. In the
following subsections, we describe our improvements and implementations of these
algorithms.
3.1 The RRQR Algorithm by Pan and Tang
We implement a variant of what Pan and Tang [1992] call "Algorithm 3." Pseudocode
for our algorithm is shown in Figure 3. It assumes as input an upper
triangular matrix R. $\Pi_R(i, j)$ denotes a right cyclic permutation of columns $i$ through $j$, i.e., column $j$ is moved into position $i$ and columns $i, \ldots, j-1$ are each shifted one position to the right; $\Pi_L(i, j)$ denotes a left cyclic permutation of columns $i$ through $j$, i.e., column $i$ is moved into position $j$ and columns $i+1, \ldots, j$ are each shifted one position to the left. In the algorithm, triu(A) denotes the upper triangular factor
R in a QR factorization A = QR of A. As can be seen from Figure 3, we use this
notation as shorthand for retriangularizations of R after column exchanges.
Given a value for k, and a so-called f-factor
1, the algorithm is
guaranteed to halt and produce a triangular factorization that satisfies
oe min (R 11
oe (R 22
f
Our implementation incorporates the following features:
(1) Incremental condition estimation (ICE) is used to arrive at estimates for smallest
singular values and vectors. Thus, $\sigma$ (line 5) and $v$ (line 9) of Figure 3
can be computed inexpensively from u (line 2). The use of ICE significantly
reduces implementation cost.
(2) The QR factorization update (line 4) must be performed only when the if-test
(line 6) requires it. Thus, we delay it whenever possible.
(3) For the algorithm to terminate, all columns need to be checked, and no new
permutations must occur. In Pan and Tang's algorithm, rechecking of columns
Algorithm
1.
2. u := left singular vector corresponding to oe min (R(1: k; 1: k))
3. while ( accepted col -
4. R := triu(R \Delta \Pi R
5. Compute
7. accepted col := accepted col
8. else
9. v := right singular vector corresponding to oe
10. Find index q, 1 - q
11. R := triu(R \Delta \Pi L
12. u := left singular vector corresponding to oe min (R(1:k; 1: k))
13. end if
14. if (i == n) then i
15. end while
Fig. 3. Variant of Pan/Tang RRQR Algorithm
after a permutation always starts at column k + 1. We instead begin checking
at the column right after the one that just caused a permutation. Thus, we
first concentrate on the columns that have not just been "worked over."
(4) The left cyclic shift permutes the triangular matrix into an upper Hessenberg
form, which is then retriangularized with Givens rotations. Applying Givens
rotations to rows of R in the obvious fashion (as done, for example, in [Re-
ichel and Gragg 1990]) is expensive in terms of data movement, because of the
column-oriented nature of Fortran data layout. Thus, we apply Givens rotations
in an aggregated fashion, updating matrix strips (R(1 : jb; (j \Gamma1)b+1 : jb))
of width b with all previously computed Givens rotations.
Similarly, the right cyclic shift introduces a "spike" in column j, which is eliminated
with Givens rotations in a bottom-up fashion. To aggregate Givens
rotations, we first compute all rotations only touching the "spike" and the diagonal
of R, and then apply all of them one block column at a time. In our
experiments, we choose the width b of the matrix strips to be the same as the
blocksize nb of the preprocessing.
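For illustration, the following Python/NumPy sketch performs the retriangularization after a left cyclic shift in the straightforward, column-at-a-time manner (that is, without the aggregation of rotations described in item (4)); it is our own rendering, not the paper's kernel.

import numpy as np

def left_cyclic_and_retriangularize(R, i, j):
    # apply Pi_L(i, j) (0-based, i < j) to upper triangular R, then restore
    # triangular form with Givens rotations on rows k, k+1 for k = i, ..., j-1
    R = R.copy()
    R[:, i:j + 1] = np.hstack([R[:, i + 1:j + 1], R[:, i:i + 1]])   # column i -> position j
    for k in range(i, j):
        a, b = R[k, k], R[k + 1, k]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])
        R[k:k + 2, k:] = G @ R[k:k + 2, k:]     # zero out the subdiagonal entry in column k
    return R

# hypothetical usage
rng = np.random.default_rng(3)
R = np.triu(rng.standard_normal((6, 6)))
R2 = left_cyclic_and_retriangularize(R, 1, 4)
print(np.allclose(R2, np.triu(R2)))             # True: triangular again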
Compared with a straightforward implementation of Pan and Tang's "Algorithm
3," improvements (1) through (3) on average decreased runtime by a factor of five on
200 \Theta 200 matrices on an Alliant FX/80. When retriangularizations were frequent,
improvement (4) had the most noticeable impact, resulting in a twofold to fourfold
performance gain on matrices of order 500 and 1000 on an IBM RS/6000-370.
Pan and Tang introduced the f-factor to prevent cycling of the algorithm. The
higher f is, the tighter the bounds in (18) and (19), and the better the approximations
to the k and k 1st singular values of R. However, if f is too large, it
introduces more column exchanges and therefore more iterations, and, because of
round-off errors, it might present convergence problems. We used
in our work.
Algorithm
1.
2. repeat
3. Golub-I-sf(f,k)
4. Golub-I-sf(f,k+1)
5. Chan-II-sf(f,k+1)
6.
7. until none of the four subalgorithms modified the column ordering
Fig. 4. Variant of Chandrasekaran/Ipsen Hybrid-III algorithm
Algorithm Golub-I-sf(f,k)
1. Find smallest index j, k - j - n, such that
2. kR(k: j; j)k
3.
4. R := triu(R \Delta \Pi R
5. end if
Fig. 5. "f-factor" Variant of Golub-I Algorithm
3.2 The RRQR Algorithm by Chandrasekaran and Ipsen
Chandrasekaran and Ipsen introduced algorithms that achieve bounds (18) and (19)
with We implemented a variant of the so-called Hybrid-III algorithm,
pseudocode for which is shown in Figures 4 - 6.
Compared with the original Hybrid-III algorithm, our implementation incorporates
the following features:
(1) We employ the Chan-II strategy (an O(n 2 ) algorithm) instead of the so-called
Stewart-II strategy (an O(n 3 ) algorithm because of the need for the inversion of
that Ipsen and Chandrasekaran employed in their experiments.
(2) The original Hybrid-III algorithm contained two subloops, with the first one
looping over Golub-I(k) and Chan-II(k) till convergence, the second one looping
over Golub-I(k+1) and Chan-II(k+1). We present a different loop ordering in
our variant, one that is simpler and seems to enhance convergence. On matrices
that required considerable postprocessing, the new loop ordering required about
7% fewer steps for $1000 \times 1000$ matrices (one step being a call to Golub-I or Chan-
II) than Chandrasekaran and Ipsen's original algorithm. In addition, the new
ordering speeds up detection of convergence, as shown below.
Algorithm
1. v := right singular vector corresponding to oe min (R(1:k; 1: k)).
2. Find largest index
3. if f \Delta jv
4. R := triu(R \Delta \Pi L
5. end if
Fig. 6. "f-factor" Variant of Chan-II Algorithm
(3) As in our implementation of the Pan/Tang algorithm, we use ICE for estimating
singular values and vectors, and the efficient "aggregated" Givens scheme for
the retriangularizations.
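The following Python/NumPy sketch conveys the structure of one Chan-II-style step as in Figure 6: compute the right singular vector of the leading k-by-k block belonging to its smallest singular value, locate its largest component, and, if an f-factor test indicates a worthwhile gain, cycle that column into position k and retriangularize. The particular test used below and the SVD-based retriangularization are our own simplifications, not the paper's exact formulation.

import numpy as np

def chan_ii_step(R, k, f=0.5):
    # one illustrative Chan-II-style refinement step on the leading k columns (1 <= k <= n)
    U, s, Vt = np.linalg.svd(R[:k, :k])
    v = Vt[-1]                              # right singular vector for sigma_min
    j = int(np.argmax(np.abs(v)))
    if j == k - 1 or f * np.abs(v[j]) <= np.abs(v[k - 1]):
        return R, False                     # plausible stand-in for the f-factor test: no permutation
    # left cyclic shift: move column j to position k-1 (0-based), then retriangularize
    perm = list(range(j)) + list(range(j + 1, k)) + [j] + list(range(k, R.shape[1]))
    Q, Rnew = np.linalg.qr(R[:, perm])      # simple full retriangularization for clarity
    return Rnew, True

# usage: R, changed = chan_ii_step(R, k); repeat together with Golub-I-style steps until no change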
We employ a generalization of the f-factor technique to guarantee termination
in the presence of rounding errors. The pivoting method assigns to every column
a "weight," namely, kR(k: in Golub-I(k) and v i in Chan-II(k), where
v is the right singular vector corresponding to the smallest singular value of
To ensure termination, Chandrasekaran and Ipsen suggested pivoting
a column only when its weight exceeded that of the current column by at
least n 2 ffl, where ffl is the computer precision; they did not analyze the impact of
this change on the bounds obtained by the algorithm. In contrast, we use a multiplicative
tolerance factor f like Pan and Tang; the analysis in [Quintana-Ort'i
and Quintana-Ort'i 1996] proves that our algorithm achieves the bounds
oe min (R 11
oe k (A); and (20)
oe (R 22
These bounds are identical to (18) and (19), except that an f 2 instead of an
$f$ enters into the equation and that now $0 < f \le 1$. We used a fixed value of $f$ in our
implementation.
We claimed before that the new loop ordering can avoid unnecessary steps when
the algorithm is about to terminate. To illustrate, consider the situation where we
apply Chandrasekaran and Ipsen's original ordering to a matrix that almost reveals
the rank:
1. Golub-I(k) Final permutation occurs here.
Now the rank is revealed.
2. Chan-II(k)
3. Golub-I(k) Another iteration of inner k-loop
since permutation occurred.
4. Chan-II(k)
5. Golub-I(k+1) Inner loop for
7. Golub-I(k) Another iteration of the main loop
since permutation occurred in last pass.
8.
9. Golub-I(k+1)
10. Chan-II(k+1) Termination
In contrast, the Hybrid-III-sf algorithm terminates in four steps:
1. Golub-I-sf(k) Final permutation
2. Golub-I-sf(k+1)
3. Chan-II-sf(k+1)
4. Chan-II-sf(k) Termination
Algorithm RRQR(f,k)
repeat
call Hybrid-III-sf(f,k) or PT3M(f,k)
ff
if
rank := k; stop
else if ( ( ff - ) and (fi - ) )then
else if ( ( ff - ) and ( fi - ) )then
Fig. 7. Algorithm for Computing Rank-Revealing QR Factorization
3.3 Determining the Numerical Rank
As Stewart [1993] pointed out, both the Chandrasekaran/Ipsen and Pan/Tang al-
gorithms, as well as our versions of those algorithms, do not reveal the rank of
a matrix per se. Rather, given an integer $k$, they compute tight estimates for $\sigma_k(A)$ and $\sigma_{k+1}(A)$.
To obtain the numerical rank with respect to a given threshold $\tau$, given an initial estimate for the rank (as provided, for example, by the algorithm described in Section 2), we employ the algorithm shown in Figure 7. In our actual implementation, $\alpha$ and $\beta$ are computed in Hybrid-III-sf or PT3M.
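The rank determination loop of Figure 7 can be summarized as follows. In this schematic rendering, refine stands for a call to Hybrid-III-sf or PT3M for the current k, and the quantities alpha (an estimate of sigma_k via sigma_min of the leading block) and beta (an estimate of sigma_{k+1} via sigma_max of the trailing block) are computed with exact SVDs instead of the cheap estimates used in the real implementation; the acceptance test below is our reading of the figure's logic.

import numpy as np

def find_rank(R, tau, k0, refine, max_iter=50):
    # schematic rank search: adjust k until sigma_1/alpha <= tau < sigma_1/beta
    n = R.shape[1]
    k = k0
    for _ in range(max_iter):
        R = refine(R, k)
        alpha = np.linalg.svd(R[:k, :k], compute_uv=False)[-1]                  # ~ sigma_k
        beta = np.linalg.svd(R[k:, k:], compute_uv=False)[0] if k < n else 0.0  # ~ sigma_{k+1}
        sigma1 = np.linalg.svd(R, compute_uv=False)[0]
        if sigma1 <= tau * alpha and (k == n or sigma1 > tau * beta):
            return R, k                    # rank revealed at this k
        if sigma1 > tau * alpha:
            k = max(1, k - 1)              # leading block too ill conditioned
        else:
            k = min(n, k + 1)              # trailing part still carries a large singular value
    return R, k

# usage: Rout, r = find_rank(R, tau=1e5, k0=initial_estimate, refine=lambda R, k: R)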
4. EXPERIMENTAL RESULTS
We report in this section experimental results with the double-precision implementations
of the algorithms presented in the preceding section. We consider the
following codes:
DGEQPF. The implementation of the QR factorization with column pivoting
currently provided in LAPACK.
DGEQPB. An implementation of the "windowed" QR factorization scheme described
in Section 2.
DGEQPX. DGEQPB followed by an implementation of the variant of the Chan-
drasekaran/Ipsen algorithm described in subsections 3.2 and 3.3.
DGEQPY. DGEQPB followed by an implementation of the variant of the
Pan/Tang algorithm described in subsections 3.1 and 3.3.
DGEQRF. The block QR factorization without any pivoting provided in LAPACK
In the implementation of our algorithms, we make heavy use of available LAPACK
infrastructure. The code used in our experiments, including test and timing
drivers and test matrix generators, is available as rrqr.tar.gz in pub/prism on
ftp.super.org.
We tested matrices of size 100, 150, 250, 500, and 1000 on an IBM RS/6000 Model
370 and SGI R8000. In each case, we employed the vendor-supplied BLAS in the
ESSL and SGIMATH libraries, respectively.
4.1 Numerical Reliability
We employed different matrix types to test the algorithms, with various singular
value distributions and numerical rank ranging from 3 to full rank. Details of the
test matrix generation are beyond the scope of this paper, and we give only a brief
synopsis here. For details, the reader is referred to the code.
The first few test matrices were designed to exercise column pivoting. Matrix
6 was designed to test the behavior of the condition estimation in the presence
of clusters for the smallest singular value. For the other cases, we employed the
LAPACK matrix generator xLATMS, which generates random symmetric matrices by
multiplying a diagonal matrix with prescribed singular values by random orthogonal
matrices from the left and right. For the break1 distribution, all singular values are
1.0 except for one. In the arithmetic and geometric distributions, they decay from
1.0 to a specified smallest singular value in an arithmetic and geometric fashion,
respectively. In the "reversed" distributions, the order of the diagonal entries was
reversed. For test cases 7 though 12, we used xLATMS to generate a matrix of
smallest singular value 5.0e-4, and then interspersed random
linear combinations of these "full-rank" columns to pad the matrix to order n. For
test cases 13 through 18, we used xLATMS to generate matrices of order n with the
smallest singular value being 2.0e-7. We believe this set to be representative of
matrices that can be encountered in practice.
We report in this section on results for matrices of size 1000, noting that
identical qualitative behavior was observed for smaller matrix sizes. We decided
to report on the largest matrix sizes because the possibility for failure in general
increases with the number of numerical steps involved. Numerical results obtained
on the three platforms agreed to machine precision. For this case, we list in Table 1
the numerical rank $r$ with respect to a condition threshold of 1.0e5, the largest singular value $\sigma_{\max}$, the $r$-th singular value $\sigma_r$, the $(r+1)$st singular value $\sigma_{r+1}$, and the smallest singular value $\sigma_{\min}$ for our test cases.
Figures
8 and 9 display the ratio
$$ \Theta \;:=\; \frac{\sigma_1 / \sigma_r}{\hat{\kappa}\big(R(1\!:\!r, 1\!:\!r)\big)} , $$
where $\hat{\kappa}(R)$ as defined in (14) is the computed estimate of the condition number of
R after DGEQPB (Figure 8) and DGEQPX and DGEQPY (Figure 9). Thus, \Theta
is the ratio between the ideal condition number and the estimate of the condition
number of the leading triangular factor identified in the RRQR factorization. If this
ratio is close to 1, and b- is a good condition estimate, our RRQR factorizations do
a good job of capturing the "large" singular values of A. Since the pivoting strategy
and hence the numerical behavior of DGEQPB is potentially affected by the block
size chosen, Figures 8 and 9 contain seven panels, each of which shows the results
obtained with the test matrices and a block size ranging from 1 to 24 (shown in
the top of each panel).
We see that except for matrix type 1 in Figure 8, the block size does not play
much of a role numerically, although close inspection reveals subtle variations in
both
Figure
8 and 9. With block size 1, DGEQPB just becomes the standard
Businger/Golub pivoting strategy. Thus, the first panel in Figure 8 corroborates
the experimentally robust behavior of this algorithm. We also see that except for
Table
1. Test Matrix Types
Description r oe max oe r oe r+1 oe min
Matrix with rank min(m;n)
has full rank
3 Full rank 1000 1.0e0 5.0e-4 5.0e-4 5.0e-4
small in norm
n) of full rank
small in norm
smallest sing. values clustered 1000 1.0e0 7.0e-4 7.0e-4-3 7.0e-4
7 Break1 distribution 501 1.0e0 5.0e-4 1.7e-15 1.0e-26
Reversed break1 distribution 501 1.0e0 5.0e-4 1.7e-15 1.2e-27
9 Geometric distribution 501 1.0e0 5.0e-4 3.3e-16 1.9e-35
Reversed geometric distribution 501 1.0e0 5.0e-4 3.2e-16 5.4e-35
11 Arithmetic distribution 501 1.0e0 5.0e-4 9.7e-16 1.4e-34
Reversed arithmetic distribution 501 1.0e0 5.0e-4 9.7e-16 1.2e-34
13 Break1 distribution 999 1.0e0 1.0e0 2.0e-7 2.0e-7
14 Reversed break1 distribution 999 1.0e0 1.0e0 2.0e-7 2.0e-7
Geometric distribution 746 1.0e0 5.0e-5 9.9e-6 2.0e-7
Reversed geometric distribution 746 1.0e0 5.0e-5 9.9e-6 2.0e-7
17 Arithmetic distribution 999 1.0e0 1.0e-1 2.0e-7 2.0e-7
Reversed arithmetic distribution 999 1.0e0 1.0e-1 2.0e-7 2.0e-7
Tests
Optimal
cond_no.
Estimated
cond_no.
Fig. 8. Ratio between Optimal and Estimated Condition Number for
s
Optimal
cond_no.
Estimated
cond_no.
. QPY
Fig. 9. Ratio between Optimal and Estimated Condition Number for DGEQPX (solid line) and
DGEQPY (dashed)
type 1, the restricted pivoting strategy employed in DGEQPB does not
have much impact on numerical behavior. For matrix type 1, however, it performs
much worse. Matrix 1 is constructed by generating n\Gamma 1 independent columns and
generating the leading n+1 as random linear combinations of those columns, scaled
by ffl 1
4 , where ffl is the machine precision. Thus, the restricted pivoting strategy, in
its myopic view of the matrix, gets stuck, so to speak, in these columns.
The postprocessing of these approximate RRQR factorizations, on the other
hand, remedies potential shortcomings in the preprocessing step. As can be seen
from
Figure
9, the inaccurate factorization of matrix 1 is corrected, while the other,
in essence correct, factorizations get improved only slightly. Except for small vari-
ations, DGEQPX and DGEQPY deliver identical results.
We also computed the exact condition number of the leading triangular submatrices
identified in the triangularizations by DGEQPB, DGEQPX, and DGEQPY,
and compared it with our condition estimate. Figure 10 shows the ratio of the exact
condition number to the estimated condition number of the leading triangular
factor. We observe excellent agreement, within an order of magnitude in all cases.
Hence, the "spikes" for test matrices 13 and 14 in Figures 8 and 9 are not due
to errors in our estimators. Rather, they show that all algorithms have difficulties
when confronted with dense clusters of singular values. We also note that in this
context, the notion of rank is numerically ill-defined, since there is no sensible place
to draw the line. The "rank" derived via the SVD is 746 in both cases, and our
algorithms deliver estimates between 680 and 710, with minimal changes in the
condition number of their corresponding leading triangular factors.
In summary, these results show that DGEQPX and DGEQPY are reliable algorithms
for revealing numerical rank. They produce RRQR factorizations whose
s
Exact
Estimated
-. QPX
. QPY
Fig. 10. Ratio between Exact and Estimated Condition Number of Leading Triangular Factor
for DGEQPB (dashed), DGEQPX (dashed-dotted) and DGEQPY (dotted)
leading triangular factors accurately capture the desired part of the spectrum of A,
and thus reliable and numerically sensible rank estimates. Thus, the RRQR factorization
takes advantage of the efficiency and simplicity of the QR factorization,
yet it produces information that is almost as reliable as that computed by means
of the more expensive singular value decomposition.
4.2 Computing Performance
In this section we report on the performance of the LAPACK codes DGEQPF and
DGEQRF as well as the new DGEQPB, DGEQPX, and DGEQPY codes. For these
codes, as well as all others presented in this section, the Mflop rate was obtained by
dividing the number of operations required for the unblocked version of DGEQRF
by the runtime. This normalized Mflop rate readily allows for timing comparisons.
We report on matrix sizes 100, 250, 500, and 1000, using block sizes (nb) of 1, 5,
Figures
show the Mflop performance (averaged over the
versus block size on the IBM and SGI platforms. The dotted line denotes
the performance of DGEQPF, the solid one that of DGEQRF and the dashed one that
of DGEQPB; the 'x' and '+' symbols indicate DGEQPX and DGEQPY, respectively.
On all three machines, the performance of the two new algorithms for computing
RRQR is robust with respect to variations in the block size. The two new block
algorithms for computing RRQR factorization are, except for small matrices on the
SGI, faster than LAPACK's DGEQPF for all matrix sizes. We note that the SGI
has a data cache of 4 MB, while the IBM platform has a much smaller data cache.
Thus, matrices up to order 500 fit into the SGI cache, but matrices of order 1000 do
not. Therefore, for matrices of size 500 or less we observe limited benefits from the
Block size
Performance
(in
Performance
(in
Block size
Performance
(in
Performance
(in
Fig. 11. Performance versus Block Size on IBM RS/6000-370: DGEQPF (\Delta \Delta \Delta), DGEQRF (-),
Block size
Performance
(in
Block size
Performance
(in
Performance
(in
Block size
Performance
(in
Fig. 12. Performance versus Block Size on SGI R8000: DGEQPF (\Delta \Delta \Delta), DGEQRF (-), DGE-
Performance
(in
Fig. 13. Performance versus Matrix Type on an IBM RS/6000-370 for
better inherent data locality of the BLAS 3 implementation in this computer. These
results also show that DGEQPX and DGEQPY exhibit comparable performance.
Figures
13 through 14 offer a closer look at the performance of the various test
matrices. We chose nb = 16 and as a representative example. Similar
behavior was observed in the other cases.
We see that on the IBM platforms (Figure 13), the performance of DGEQRF
and DGEQPF does not depend on the matrix type. We also see that, except for
matrix types 1, 5, 15, and 16, the postprocessing of the initial approximate RRQR
factorization takes up very little time, with DGEQPX and DGEQPY performing
similarly. For matrix type 1, considerable work is required to improve the initial
QR factorization. For matrix types 5 and 15, the performance of DGEQPX and
DGEQPY differ noticeably on the IBM platform, but there is no clear winner.
We also note that matrix type 5 is suitable for DGEQPB, since the independent
columns are up front and thus are revealed quickly, and the rest of the matrix is
factored with DGEQRF.
The SGI platform (Figure 14) offers a different picture. The performance of all
algorithms shows more dependence on the matrix type, and DGEQPB performs
worse on matrix type 5 than on all others. Nonetheless, except for matrix 1,
DGEQPX and DGEQPY do not require much postprocessing effort.
The pictures for other matrix sizes are similar. The cost for DGEQPX and
DGEQPY decreases as the matrix size increases, except for matrix type 1, where it
increases as expected. We also note that Figures 11 though 12 would have looked
even more favorable for our algorithm had we omitted matrix 1 or chosen the
median (instead of the average) performance.
Figure
15 shows the percentage of the actual amount of flops spent in monitoring
e
Performance
(in
Fig. 14. Performance versus Matrix Type on an SGI R8000 for
the rank in DGEQPB and in postprocessing the initial QR factorization for different
matrix sizes on the IBM RS/6000. We show only matrix types 2 through 18, since
the behavior of matrix type 1 is rather different: in this special case, roughly
50% of the overall flops is expended in the postprocessing. Note that the actual
performance penalty due to these operations is, while small, still considerably higher
than the flop count suggests. This is not surprising given the relatively fine-grained
nature of the condition estimation and postprocessing operations.
Lastly, one may wonder whether the use of DGEQRF to compute the initial
QR factorization would lead to better results, since DGEQRF is the fastest QR
factorization algorithm. This is not the case, since DGEQRF does not provide
any rank preordering, and thus performance gains from DGEQRF are annihilated
in the postprocessing steps. For example, for matrices of order 250 on an IBM
RS/6000-370, the average Mflop rate, excluding matrix 5, was 4.5, with a standard
deviation of 1.4. The percentage of flops spent in postprocessing in these cases was
on average 76.8 %, with a standard deviation of 6.7. For matrix 5, we are lucky,
since the matrix is of low rank and all independent columns are at the front of the
matrix. Thus, we spend only 3% in postprocessing, obtaining a performance of 49.1
Mflops overall. In all other cases, though, considerable effort is expended in the
postprocessing phase, leading to overall disappointing performance. These results
show that the preordering done by DGEQPB is essential for the efficiency of the
overall algorithm.
5. CONCLUSIONS
In this paper, we presented rank-revealing QR factorization (RRQR) algorithms
that combine an initial QR factorization employing a restricted pivoting scheme
in
flops
of
pivoting
in
flops
of
pivoting
Fig. 15. Cost of Pivoting (in % of flops) versus Matrix Types of Algorithms DGEQPX and DGEQPY
on an IBM RS/6000-370 for Matrix Sizes 100 (+), 250 (x), 500 (*) and 1000 (o).
with postprocessing steps based on variants of algorithms suggested by Chandrasekaran
and Ipsen and Pan and Tang.
The restricted-pivoting strategy results in an initial QR factorization that is
almost entirely based on BLAS-3 kernels, yet still achieves at a good approximation
of an RRQR factorization most of the time. To guarantee the reliability of the
initial RRQR factorization and improve it if need be, we improved an algorithm
suggested by Pan and Tang, relying heavily on incremental condition estimation and
"blocked" Givens rotation updates for computational efficiency. As an alternative,
we implemented a version of an algorithm by Chandrasekaran and Ipsen, which
among other improvements uses the f-factor technique suggested by Pan and Tang
to avoid cycling in the presence of roundoff errors.
Numerical experiments on eighteen different matrix types with matrices ranging
in size from 100 to 1000 on IBM RS/6000 and SGI R8000 platforms show that this
approach produces reliable rank estimates while outperforming the (less reliable)
QR factorization with column pivoting, the currently most common approach for
computing an RRQR factorization of a dense matrix.
ACKNOWLEDGMENTS
We thank Xiaobai Sun, Peter Tang and Enrique S. Quintana-Ort'i for stimulating
discussions on the subject.
--R
Incremental condition estimation.
A parallel QR factorization algorithm with controlled local pivoting.
SIAM Journal on Scientific and Statistical Computing
A block algorithm for computing rank-revealing QR factorizations
On updating signal subspaces.
Robust incremental condition estimation.
Preprint MCS-P225-0391
Linear least squares solution by Householder transformation.
Rank revealing QR factorizations.
Some applications of the rank revealing QR factor- ization
On rank-revealing QR factorizations
An application of systolic arrays to linear discrete ill-posed problems
Rank and null space calculations using matrix decomposition without column interchanges.
Rank degeneracy and least squares problems.
Matrix Computations.
Matrix Computations (2nd
A comparison between some direct and iterative methods for certain large scale geodetic least-squares problem
A stable variant of the secant method for solving nonlinear equations.
An iterative method for computing multivariate C 1 piecewise polynomial interpolants.
Rank deficient interpolation and optimal design: An example.
Technical Report SCA-TR-113
A stable and efficient algorithm for the rank-one modification of the symmmetric eigenproblem
Truncated SVD solutions to discrete ill-posed problems with ill- determined numerical rank
Efficient algorithms for computing the condition number of a tridiagonal matrix.
The rank revealing QR decomposition and SVD.
Mathematics of Computation
Comparisons of truncated QR and SVD methods for AR spectral estimations.
The Levenberg-Marquardt algorithm: Implementation and theory
Bounds on singular values revealed by QR factorization
Guaranteeing termination of Chandrasekaran
Fortran subroutines for updating the QR factorization.
ACM Transactions on Mathematical Software
A storage efficient WY representation for products of Householder transformations.
Rank degeneracy.
An updating algorithm for subspace tracking.
Determining rank in the presence of error.
Using a fast signal processor to solve the inverse kinematic problem with special emphasis on the singularity problem.
--TR
Efficient algorithms for computing the condition number of a tridiagonal matrix
A comparison between some direct and iterative methods for certain large scale geodetic least squares problems
An application of systolic arrays to linear discrete ill-posed problems
A storage-efficient WY representation for products of householder transformations
A block QR factorization algorithm using restricted pivoting
Truncated singular value decomposition solutions to discrete ill-posed problems with ill-determined numerical rank
Incremental condition estimation
Algorithm 686: FORTRAN subroutines for updating the QR decomposition
An updating algorithm for subspace tracking
A parallel QR factorization algorithm with controlled local pivoting
Structure-preserving and rank-revealing QR-factorizations
LAPACK Users' Guide
Some applications of the rank revealing QR factorization
Determining rank in the presence of error
The modified truncated SVD method for regularization in general form
On Rank-Revealing Factorisations
Matrix computations (3rd ed.)
A BLAS-3 Version of the QR Factorization with Column Pivoting
Rank degeneracy and least squares problems
Working Note 33: Robust Incremental Condition Estimation
--CTR
C. H. Bischof , G. Quintana-Ort, Algorithm 782: codes for rank-revealing QR factorizations of dense matrices, ACM Transactions on Mathematical Software (TOMS), v.24 n.2, p.254-257, June 1998
Enrique S. Quintana-Ort , Gregorio Quintana-Ort , Maribel Castillo , Vicente Hernndez, Efficient Algorithms for the Block Hessenberg Form, The Journal of Supercomputing, v.20 n.1, p.55-66, August 2001
Peter Benner , Maribel Castillo , Enrique S. Quintana-Ort , Vicente Hernndez, Parallel Partial Stabilizing Algorithms for Large Linear Control Systems, The Journal of Supercomputing, v.15 n.2, p.193-206, Feb.1.2000 | rank-revealing orthogonal factorization;block algorithm;QR factorization;least-squares systems;numerical rank |
287639 | An object-oriented framework for block preconditioning. | General software for preconditioning the iterative solution of linear systems is greatly lagging behind the literature. This is partly because specific problems need specific matrix and preconditioner data structures in order to be solved efficiently; i.e., multiple implementations of a preconditioner with specialized data structures are required. This article presents a framework to support preconditioning with various, possibly user-defined, data structures for matrices that are partitioned into blocks. The main idea is to define data structures for the blocks, and an upper layer of software which uses these blocks transparently of their data structure. This transparency can be accomplished by using an object-oriented language. Thus, various preconditioners, such as block relaxations and block incomplete factorizations, only need to be defined once and will work with any block type. In addition, it is possible to transparently interchange various approximate or exact techniques for inverting pivot blocks, or solving systems whose coefficient matrices are diagonal blocks. This leads to a rich variety of preconditioners that can be selected. Operations with the blocks are performed with optimized libraries or fundamental data types. Comparisons with an optimized Fortran 77 code on both workstations and Cray supercomputers show that this framework can approach the efficiency of Fortran 77, as long as suitable block sizes and block types are chosen. | INTRODUCTION
In the iterative solution of the linear system
Ax = b,
a preconditioner M is often used to transform the system into one which has better convergence properties, for example, in the left-preconditioned case,
M^{-1}Ax = M^{-1}b.
M^{-1} is referred to as the preconditioning operator for the matrix A and, in general, is a sequence of operations that somehow approximates the effect of A^{-1} on a vector.
Unfortunately, general software for preconditioning is seriously lagging behind
methods being published in the literature. Part of the reason is that many methods
do not have general applicability: they are not robust on general problems, or they
are specialized and need specific information (e.g., general direction of flow in a
fluids simulation) that cannot be provided in a general setting.
Another reason, one that we will deal with in this article, is that specific linear
systems need specific matrix and preconditioner data structures in order to be
solved efficiently; i.e., there need to be multiple implementations of a preconditioner
with specialized data structures. For example, in some finite element applications,
diagonal blocks have a particular but fixed sparse structure. A block SSOR preconditioner
that needs to invert these diagonal blocks should use an algorithm suited
to this structure. A block SSOR code that treats these diagonal blocks in a general
way is not ideal for this problem.
When we encounter linear systems from different applications, we need to determine
suitable preconditioning strategies for their iterative solution. Rather than
code preconditioners individually to take advantage of the structure in each appli-
cation, it is better to have a framework for software reuse. Also, a wide range of
preconditionings should be available so that we can choose a method that matches
the difficulty of the problem and the computer resources available.
This article presents a framework to support preconditioning with various, possibly
user-defined, data structures for matrices that are partitioned into blocks. The
main idea is to define data structures (called block types) for the blocks, and an
upper layer of software which uses these blocks transparently of their data struc-
ture. Thus various preconditioners, such as block relaxations and block incomplete
factorizations, only need to be defined once, and will work with any block type.
These preconditioners are called global preconditioners for reasons that will soon
become apparent. The code for these preconditioners is almost as readable as the
code for their pointwise counterparts. New global preconditioners can be added in
the same fashion.
Global preconditioners need methods (called local preconditioners) to approximately
or exactly invert pivot blocks, or solve systems whose coefficient matrices
are diagonal blocks. For example, a block stored in a sparse format might be in-
verted exactly, or an approximate inverse might be computed. Our design permits
a variety of these inversion or solution techniques to be defined for each block type.
The transparency of the block types and local preconditioners can be implemented
through polymorphism in an object-oriented language. Our framework,
called currently implements block incomplete factorization and block relaxation
global preconditioners, a dense and a sparse block type, and a variety of
local preconditioners for both block types. Users of BPKIT will either use the block
types that are available, or add block types and local preconditioners that are appropriate
for their applications. Users may also define new global preconditioners
that take advantage of the existing block types and local preconditioners. Thus
BPKIT is not intended to be complete library software; rather it is a framework
under which software can be specialized from relatively generic components.
It is appropriate to make some comments about why we use block preconditioning.
Many linear systems from engineering applications arise from the discretization
of coupled partial differential equations. The blocking in these systems may be
imposed by ordering together the equations and unknowns at a single grid point, or
those of a subdomain. In the first case, the blocks are usually dense; in the latter
case, they are usually sparse. Experimental tests suggest it is very advantageous for
preconditionings to exploit this block structure in a matrix [Chow and Saad 1997;
Fan et al. 1996; Jones and Plassmann 1995; Kolotilina et al. 1991]. The relative
robustness of block preconditioning comes partly from being able to solve accurately
for the strong coupling within these blocks. From a computational point of view,
these block matrix techniques can be more efficient on cached and hierarchical
memory architectures because of better data locality. In the dense block case,
block matrix data structures also require less storage. Block data structures are
also amenable to graph-based reorderings and block scalings.
When approximations are also used for the diagonal or pivot blocks (i.e., approximations
with local preconditioners are used), these techniques are specifically
called two-level preconditioners [Kolotilina and Yeremin 1986], and offer a middle-ground
between accuracy and simpler computations. Beginning with [Underwood]
in 1976 and then [Axelsson et al. 1984] and [Concus et al. 1985] more than a decade
ago, these preconditioners have been motivated and analyzed in the case of block
tridiagonal incomplete factorizations combined with several types of approximate
inverses, and have recently reached a certain maturity. Most implementations of
these methods, however, are not flexible: they are often coded for a particular block
size and inversion technique, and further, they are almost always coded for dense
blocks.
The software framework presented here derives its flexibility from the use of an
object-oriented language. We chose to use C++ [Stroustrup 1991], with all computations performed in real, 64-bit arithmetic. Other object-oriented languages are also appropriate. The framework
is computationally efficient, since all operations involving blocks are performed with
code that employs fundamental types, or with optimized Fortran 77 libraries such
as the Level 3 BLAS [Dongarra et al. 1990], LAPACK [Demmel 1989], and the
sparse BLAS toolkit [Carney et al. 1994]. By the same token, users implementing
block types and local preconditioners may do so in practically any language, as long
as the language can be linked with C++ by their compilers. BPKIT also has an
interface for Fortran 77 users.
BPKIT is available at http://www.cs.umn.edu/~chow/bpkit.html. Other C++
efforts in the numerical solution of linear equations include LAPACK++ [Dongarra
et al. 1993] for dense systems, and Diffpack [Bruaset and Langtangen 1997], ISIS++
[Clay 1997], SparseLib++ and IML++ [Dongarra et al. 1994] for sparse systems.
It is also possible to use an object-oriented style in other languages [Eijkhout 1996;
Machiels and Deville 1997; Smith et al. 1995].
In Section 2, we discuss various issues that arise when designing interfaces for
block preconditioning and for preconditioned iterative methods in general. We
describe the specification of the block matrix, the global and local preconditioners,
the interface with iterative methods, and the Fortran 77 interface. In Section 3,
we describe the internal design of BPKIT, including the polymorphic operations
on blocks that are needed by global preconditioners. In Section 4, we present the
results of some numerical tests, including a comparison with an optimized Fortran
77 code. Section 5 contains concluding remarks.
2. INTERFACES FOR BLOCK PRECONDITIONING
We have attempted to be general when defining interfaces (to allow for extensions
of functionality), and we have attempted to accept precedents where we overlap
with related software (particularly in the interface with iterative methods). For
concreteness, we describe several methods which will be used in the numerical tests.
Section 2 brings to light various issues in the software design of preconditioned
iterative methods.
2.1 Block matrices
A matrix that is partitioned into blocks is called a block matrix. Although with
BPKIT any storage scheme may be used to store the blocks that are not zero, the
locations of these blocks within the block matrix must still be defined. The block
matrix class (data type) that is available in BPKIT, called BlockMat, contains a
pointer to each block in the block matrix. The pointers for each row of blocks (block
row) are stored contiguously, with additional pointers to the first pointer for each
block row. This is the analogy to the compressed sparse row data structure [Saad
1990] for pointwise matrices; pointers point to blocks instead of scalar entries.
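As a rough illustration of this layout (a minimal sketch, not BPKIT's actual declarations; the struct and member names here are hypothetical), a pointer-based block compressed sparse row structure could look as follows.
#include <vector>

class LocalMat;                        // polymorphic block type (see Section 3)

// Hypothetical sketch of a pointer-based block CSR structure.
struct BlockCSRSketch {
    int nblock_rows;                   // number of block rows
    std::vector<int> row_ptr;          // start of each block row in col_index/blocks
    std::vector<int> col_index;        // block column index of each stored block
    std::vector<LocalMat*> blocks;     // pointer to each stored (nonzero) block
};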
The global preconditioners in BPKIT assume that the BlockMat class is being
used. It is possible for users to design new block matrix classes and to code new
global preconditioners for their problems, and still use the block types and local
preconditioners in BPKIT.
For the block matrix data structure described above, BPKIT provides conversion
routines to that data structure from the Harwell-Boeing format [Duff et al. 1989].
There is one conversion routine for each block type (e.g., one routine will convert
a Harwell-Boeing matrix into a block matrix whose blocks are dense). However,
these routines are provided for illustration purposes only. In practice, a user's
matrix that is already in block form (i.e., the nonzero entries in each block are
stored contiguously) can usually be easily converted by the user directly into the
BlockMat form.
To be general, the conversion routines allow two levels of blocking. In many prob-
lems, particularly linear systems arising from the discretization of coupled partial
differential equations, the blockings may be imposed by ordering together the equa-
tions and unknowns at a single grid point and those of a subdomain. The latter
blocking produces coarse-grain blocks, and the smaller, nested blocks are called
fine-grain blocks. Figure 1 shows a block matrix of dimension 24 with coarse blocks
of dimension 6 and fine blocks of dimension 2.
Fig. 1. Block matrix with coarse and fine blocks.
The blocks in BPKIT are the coarse blocks. Information about the fine blocks
should also be provided to the conversion routines because it may be desirable to
store blocks such that the coarse blocks themselves have block structure. For ex-
ample, the variable block row (VBR) [Saad 1990] storage scheme can store coarse
blocks with dense fine blocks in reduced space. Optimized matrix-vector product
and triangular solve kernels for the VBR and other block data structures are provided
in the sparse BLAS toolkit [Carney et al. 1994; Remington and Pozo 1996].
No local preconditioners or block operations, however, are defined for fine blocks
(i.e., there are not two levels of local preconditioners).
It is apparent that the use of very small coarse blocks will degrade computing
performance due to the overhead of procedure calls. Larger blocks can give better
computational efficiency and convergence rate in preconditioned iterative methods,
and computations with large dense blocks can be vectorized. In this article, we will
rarely have need to mention fine blocks; thus, when we refer to "blocks" with no
distinction, we normally mean coarse blocks.
To be concrete, we give an example of how a conversion routine is called when a
block matrix is defined. The statement
BlockMat B("HBfile", 6, DENSE);
defines B to be a square block matrix where the blocks have dimension 6, and the
blocks are stored in a format indicated by DENSE (which is of a C++ enumerated
type). The other block type that is implemented is CSR, which stores blocks in
the compressed sparse row format. The matrix is read from the file HBfile, which
must be encoded in the standard Harwell-Boeing format [Duff et al. 1989]. (The
dimension of the matrix does not need to be specified in the declaration since it is
stored within the file.) To specify a variable block partitioning (with blocks with
different sizes), other interfaces are available which use vectors to define the coarse
and fine partitionings.
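To illustrate what such partition vectors contain, the matrix of Figure 1 could be described by listing the starting index of each coarse and fine block, as below; the commented constructor call is only an assumed form of the variable-partitioning interface, which is not spelled out in this article.
int coarse_partit[] = {0, 6, 12, 18, 24};          // start of each coarse block
int fine_partit[]   = {0, 2, 4, 6, 8, 10, 12,      // start of each fine block
                       14, 16, 18, 20, 22, 24};
// BlockMat B("HBfile", coarse_partit, fine_partit, DENSE);   // assumed interface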
2.2 Specifying the preconditioning
A preconditioning for a block matrix is specified by choosing
(1) a global preconditioner, and
(2) a local preconditioner for each diagonal or pivot block to exactly or approximately
invert the block or solve the corresponding set of equations.
For example, to fully define the conventional block Jacobi preconditioning, one must
specify the global preconditioner to be block Jacobi and the local preconditioner to
be LU factorization.
In addition, the block size of the matrix has a role in determining the effect of
the preconditioning. At one extreme, if the block size is one, then the preconditioning
is entirely determined by the global preconditioner. At the other extreme, if
there is only one block, then the preconditioning is entirely determined by the local
preconditioner. The block size parameterizes the effect and cost between the selected
local and global preconditioners. The best method is likely to be somewhere
between the two extremes.
For example, suppose symmetric successive overrelaxation (SSOR) is used as the
global preconditioner, and complete LU factorization is used as the local precondi-
tioner. For linear systems that are not too difficult to solve, SSOR may be used with
a small block size. For more challenging systems, larger block sizes may be used,
giving a better approximation to the original matrix. In the extreme, the matrix
may be treated as a single block, and the method is equivalent to LU factorization.
A global preconditioner M is specified with a very simple form of declaration. In the case of block SSOR, the declaration is
BSSOR M;
Two functions are used to specify the local preconditioner and to provide parameters to the global preconditioner:
M.localprecon(LP_LU);    // LU factorization for the blocks
M.setup(B, 0.5, 3);      // BSSOR(omega=0.5, iterations=3)
Here B is the block matrix defined as in Section 2.1. The setup function provides
the real data to the preconditioner, and performs all the computations necessary
for setting up the global preconditioner, for example, the computation of the LU
factors in this case. Therefore, localprecon must be called before setup. The
setup function must be called again if the local preconditioner is changed. In these
interfaces, the same local preconditioner is specified for all the diagonal blocks.
In general, however, the local preconditioners are not required to be the same. In
some applications, different variables (e.g., velocity and pressure variables in a fluids
simulation) may be blocked together. It may then make sense to write a specialized
global preconditioner with an interface that allows different local preconditioners
to be specified for each block.
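For instance, switching to a different local preconditioner, such as the threshold-based ILUT listed in Table 2 below, requires calling setup again; the way the extra ILUT arguments are passed to localprecon in this sketch is an assumption.
M.localprecon(LP_ILUT, 2, 0.0);   // assumed argument order: lfil = 2, threshold = 0.0
M.setup(B, 0.5, 3);               // recompute the factorizations of the diagonal blocks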
2.2.1 Global preconditioners. The global preconditioners that we have implemented
in BPKIT are listed in Table 1, along with the arguments of the setup
function, and any default argument values. General reference works describing
these global preconditioners and many of the local preconditioners described later
are [Axelsson 1994; Barrett et al. 1994; Saad 1995]. See also the BPKIT Reference
Manual [Chow and Heroux 1996]. Here we briefly specify these preconditioners and
make a few comments on how they may be applied.
Table
1. Global preconditioners.
Global preconditioner    setup arguments
BJacobi                  none
BSOR                     omega, iterations
BSSOR                    omega, iterations
BILUK                    level
BTIF                     none
BJacobi, BSOR and BSSOR are block versions of the diagonal, successive overrelax-
ation, and symmetric successive overrelaxation preconditioners. BILUK is a block
version of level-based incomplete LU (ILU) factorization. BTIF is an incomplete
factorization for block tridiagonal matrices.
A preconditioner for a matrix A is often expressed as another matrix M which is
somehow an approximation to A. However, M does not need to be explicitly formed,
but instead, only the operation of M^{-1} on a vector is required. This operation
is called the preconditioning operation, or the application of the preconditioner.
For iterative methods based on biorthogonalization, the transposed preconditioning
operator M^{-T} is also needed.
It is also possible to apply the preconditioner in a split fashion when the preconditioner
has a factored form. For example, if M is factored as LU , then the
preconditioned matrix is L^{-1}AU^{-1}, and the operations of L^{-1} and U^{-1} on a vector
are required.
Many preconditioners M can be expressed in factored form. Consider the splitting of a block matrix A,
A = D_A − L_A − U_A,
where D_A is the block diagonal of A, −L_A is the strictly lower block triangular part, and −U_A is the strictly upper part. The block SSOR preconditioner in the case of one iteration is defined by
M = [1/(ω(2−ω))] (D_A − ωL_A) D_A^{-1} (D_A − ωU_A).
The scale factor 1/(ω(2−ω)) is important if the iterative method is not scale invariant. When used as a preconditioner, the relaxation parameter ω is usually chosen to be 1, since selecting a value is difficult. However, if more than one iteration is used and the matrix is far from being symmetric and positive definite, underrelaxation may be necessary to prevent divergence. Also, the simpler block SOR preconditioner (with one iteration)
M = (1/ω) (D_A − ωL_A)
may be preferable over block SSOR if A is nonsymmetric. If k iterations of block SOR are used, the preconditioner has the form
M^{-1} = [I − (I − M_1^{-1}A)^k] A^{-1},
where M_1 is the one-iteration block SOR preconditioner above. This is not how it is implemented, however. Instead, the preconditioner is applied to a vector v by performing k SOR iterations on the system Ax = v, starting from the zero vector.
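As a generic illustration of this application-by-sweeps idea (this is not BPKIT code; the sweep itself is passed in as a callable so that the fragment stays self-contained), the preconditioning operation can be written as follows.
#include <algorithm>
#include <functional>
#include <vector>

// Apply M^{-1} to v by k relaxation sweeps on A x = v, starting from x = 0.
void apply_k_sweeps(int k,
                    const std::function<void(const std::vector<double>&,
                                             std::vector<double>&)>& sweep,
                    const std::vector<double>& v,
                    std::vector<double>& x)
{
    std::fill(x.begin(), x.end(), 0.0);   // zero initial guess
    for (int it = 0; it < k; it++)
        sweep(v, x);                      // one block SOR (or SSOR) sweep in place
}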
The level-0 block ILU preconditioner for certain structured matrices, including block 5-point matrices, can be written in a very similar form,
M = (D − L_A) D^{-1} (D − U_A),
called the generalized block SSOR form. Here, D is the block diagonal matrix
resulting from the incomplete factorization. In general, however, a level-based block
ILU preconditioner is computed by performing Gaussian elimination and neglecting
elements in the factors that fall out of a predetermined sparsity pattern. Level-based
ILU preconditioners are much more accurate than relaxation preconditioners, but
for general sparse matrices, have storage costs at least that of the original matrix.
Incomplete factorization of block tridiagonal matrices is popular for certain structured
matrices where the blocks have banded structure. It is a special case of the
generalized block SSOR form, and thus only a sequence of diagonal blocks needs to
be computed and stored. The block partitioning may be along lines of a 2-D grid,
or along planes of a 3-D grid. In general, any "striped" partitioning will yield a
block tridiagonal matrix. The block tridiagonal factorization has the form
M = (D − L_A) D^{-1} (D − U_A),
where D is a block diagonal matrix whose blocks D_i are defined by the recurrence
D_i = A_{ii} − A_{i,i−1} D_{i−1}^{-1} A_{i−1,i},
starting with D_1 = A_{11}. In the inverse-free form of block tridiagonal factorization, (approximate) inverses of the blocks D_i are stored explicitly, so that the preconditioning operation only requires matrix-vector multiplications. However, the blocks are typically very large, and an approximate inverse is used in place of the exact inverse in the above recurrence to make the factorization incomplete. Many techniques for computing
approximate inverses are available [Chow and Saad 1998].
2.2.2 Local preconditioners. Local preconditioners are either explicit or implicit
depending on whether (approximate) inverses of blocks are explicitly formed. An
example of an implicit local preconditioner is LU factorization.
The global preconditioners that involve incomplete factorization require the inverses
of pivot blocks. For large block sizes, the use of approximate or exact dense
inverses usually requires large amounts of storage and computation. Thus sparse
approximate inverses should be used in these cases. Implicit local preconditioners
produce inverses that are usually dense, and are therefore usually not computationally
useful for block incomplete factorizations. This use of implicit local preconditioners
is disallowed within BPKIT. We also apply this rule for small block
sizes, since dense exact inverses are usually most efficient in these cases. (Note
that the explicit local preconditioner LP INVERSE for the CSR block type is meant
to be used for testing purposes only. Also, if an exact factorization is sought, it
is usually most efficient to use an LU factorization on the whole matrix.) The
global preconditioners that involve block relaxation may use either explicit or implicit
local preconditioners, but usually the implicit ones are used. Explicit local
preconditioners can be appropriate for block relaxation when the blocks are small.
Local preconditioners are also differentiated by the type of the blocks on which
they operate. Not all local preconditioners exist for all block types; incomplete
factorization, for example, is only meaningful for sparse types. Thus, a local preconditioner
must be chosen that matches the type of the block.
BPKIT requires the user to be aware of the restrictions in the above two paragraphs
when selecting a local preconditioner. Due to the dynamic binding of C++
virtual functions, violations of these restrictions will only be detected at run-time.
Table
2 lists the local preconditioners that we have implemented, along with their
localprecon arguments, their block types, and whether the local preconditioner
is explicit or implicit. In contrast to the setup function, localprecon takes no
default arguments. We have included an explicit exact inverse local preconditioner
for the CSR format for comparison purposes (it would be inefficient to use it in block
tridiagonal incomplete factorizations, for example).
Table
2. Local preconditioners.
localprecon arguments Block type Expl./Impl.
LP LU none DENSE implicit
LP INVERSE none DENSE explicit
LP SVD alpha1, alpha2 DENSE explicit
LP LU none CSR implicit
LP INVERSE none CSR explicit
LP ILUT lfil, threshold CSR implicit
LP APINV TRUNC semibw CSR explicit
LP APINV BANDED semibw CSR explicit
LP APINV0 none CSR explicit
LP APINVS lfil CSR explicit
LP DIAG none CSR explicit
LP TRIDIAG none CSR implicit
LP SOR omega, iterations CSR implicit
LP SSOR omega, iterations CSR implicit
LP GMRES restart, tolerance CSR implicit
LP LU is an LU factorization with pivoting. LP INVERSE is an exact inverse com-
puted via LU factorization with pivoting. LP RILUK is level-based relaxed incomplete
LU factorization. LP ILUT is a threshold-based ILU with control over the
number of fill-ins [Saad 1994], which may be better for indefinite blocks. The local
preconditioners prefixed with LP APINV are new approximate inverse techniques; see
[Chow and Saad 1998] and [Chow and Heroux 1996] for details.
LP DIAG is a diagonal approximation to the inverse, using the diagonal of the
original block, and LP TRIDIAG is a tridiagonal implicit approximation, ignoring
all elements outside the tridiagonal band of the original block. LP SVD uses the
singular value decomposition of the block to produce a dense approximate inverse: if the block is UΣV^T, the approximate inverse is V Σ_t^{-1} U^T, where Σ_t is Σ with its singular values thresholded at α_1σ_1 + α_2, a constant α_2 plus a factor α_1 of the largest singular value σ_1. This may produce a
more stable incomplete factorization if there are many blocks to be inverted that are
close to being singular [Yeremin 1995]. LP SOR, LP SSOR and LP GMRES are iterative
methods used as local preconditioners.
2.3 Interface with iterative methods
An object-oriented preconditioned iterative method requires that matrix and preconditioner
objects define a small number of operations. In BPKIT, these operations
are defined polymorphically, and are listed in Table 3.
For left and right preconditionings, the functions apply and applyt may be
used to apply the preconditioning operator (M^{-1}, or its transpose) on a vector.
Split (also called two-sided, or symmetric) preconditionings use applyl and applyr
to apply the left and right parts of the split preconditioner, respectively. For an
incomplete factorization A ≈ LU, applyl is the L^{-1} operation, and applyr is the U^{-1} operation. To anticipate all possible functionality, the applyc function defines
a combined matrix-preconditioner operator to be used, for example, to implement
the Eisenstat trick [Eisenstat 1981]. If the Eisenstat trick is used with flexible
preconditionings (described at the end of this section), the right preconditioner
apply also needs to be used.
Two functions not listed here are matrix member functions that return the row
and column dimensions of the matrix, which are useful for the iterative method
code to help preallocate any work-space that is needed.
Not all the operations in Table 3 may be defined for all matrix and preconditioner
objects, and many iterative methods do not require all these operations. The
GMRES iterative method, for example, does not require the transposed operations,
and the relaxation preconditioners usually do not define the split operations. This
is a case where we violate an object-oriented programming paradigm, and give the
parent classes all the specializations of their children (e.g., a specific preconditioner
may not define applyl although the generic preconditioner does). This will be seen
again in Section 3.2.
The argument lists for the functions in Table 3 use fundamental data types so
that iterative methods codes are not forced to adopt any particular data structure
for vectors. The interfaces use blocks of vectors to support iterative methods that
use multiple right-hand sides. The implementation of these operations use Level 3
BLAS whenever possible. All the interfaces have the following form:
void mult(int nr, int nc, const double *u, int ldu, double* v, int ldv) const;
Table
3. Operations required by iterative methods.
Matrix operations
mult matrix-vector product
trans mult transposed matrix-vector product
Preconditioner operations
apply apply preconditioner
applyt apply transposed preconditioner
applyl apply left part of a split preconditioner
applyr apply right part of a split preconditioner
applyc apply a combined matrix-preconditioner operator
applyct above, transposed
where nr and nc are the row and column dimensions of the (input) blocks of vectors,
u and v are arrays containing the values of the input and output vectors, respec-
tively, and ldu and ldv are the leading dimensions of these respective arrays. The
preconditioner operations are not defined as const functions, in case the preconditioner
objects need to change their state as the iterations progress (and spectral
information is revealed, for example).
When a non-constant operator is used in the preconditioning, a flexible iterative
method such as FGMRES [Saad 1993] must be used. In BPKIT, this arises
whenever GMRES is used as a local preconditioner. Users may wish to write advanced
preconditioners that work with the iterative methods, and which change, for
example, when there is a lack of convergence. This is a simple way of enhancing
the robustness of iterative methods. In this case, the iterative method should be
written as a class function whose class also provides information about convergence
history and possibly approximate spectral information [Wu and Li 1995].
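As a minimal illustration of how a solver can be written against these operations alone, the sketch below implements a preconditioned Richardson iteration. It is not part of BPKIT, and it assumes that apply shares the argument convention shown above for mult.
#include <vector>

template <class Matrix, class Preconditioner>
void prec_richardson(const Matrix& A, Preconditioner& M, int n,
                     const double* b, double* x, int maxit)
{
    std::vector<double> r(n), z(n);
    for (int it = 0; it < maxit; it++) {
        A.mult(n, 1, x, n, r.data(), n);             // r = A x
        for (int i = 0; i < n; i++) r[i] = b[i] - r[i];
        M.apply(n, 1, r.data(), n, z.data(), n);     // z = M^{-1} r
        for (int i = 0; i < n; i++) x[i] += z[i];    // x = x + z
    }
}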
2.4 Fortran 77 interface
Many scientific computing users are unfamiliar with C++. It is usually possible,
however, to provide an interface which is callable from any other language. BPKIT
provides an object-oriented type of Fortran 77 interface. Objects can be created,
and pointers to them are passed through functions as Fortran 77 integers. Consider
the following code excerpt (most of the parameters are not important to this discussion):
call blockmatrix(bmat, n, a, ja, ia, num_block_rows, partit, btype)
call preconditioner(precon, bmat, BJacobi, 0.d0, 0.d0, LP_LU, 0.d0, 0.d0)
call flexgmres(bmat, sol, rhs, precon, 20, 600, 1.d-8)
The call to blockmatrix above creates a block matrix from the compressed sparse
row data structure, given a number of arguments. This "wrapper" function is
actually written in C++, but all its arguments are available to a Fortran 77 pro-
gram. The integer bmat is actually a pointer to a block matrix object in C++.
The Fortran 77 program is not meant to interpret this variable, but to pass it to
other functions, such as preconditioner which defines a block preconditioner with
a number of arguments, or flexgmres which solves a linear system using flexible
GMRES. Similarly, precon is a pointer to a preconditioner object. The constant
parameters BJacobi and LP LU are used to specify a block Jacobi preconditioner,
using LU factorization to solve with the diagonal blocks.
The matrix-vector product and preconditioner operations of Table 3 also have
"wrapper" functions. This makes it possible to use BPKIT from an iterative solver
written in Fortran 77. This was also another motivation to use fundamental types
to specify vectors in the interface for operations such as mult (see Section 2.3).
Calling Fortran 77 from C++ is also possible, and this is done in BPKIT when
it calls underlying libraries such as the BLAS. BPKIT illustrates how we were able
to mix the use of different languages.
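The wrapper mechanism can be sketched as follows; the function names and the choice of a 64-bit integer for the handle are illustrative assumptions, not BPKIT's actual source.
#include <cstdint>

struct Handle { /* in BPKIT this would be a BlockMat or a preconditioner object */ };

extern "C" void object_create(std::int64_t* handle)    // callable from Fortran 77
{
    *handle = reinterpret_cast<std::int64_t>(new Handle());
}

extern "C" void object_destroy(std::int64_t handle)
{
    delete reinterpret_cast<Handle*>(handle);
}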
3. LOCAL MATRIX OBJECTS
A block matrix may contain blocks of more than one type. The best choice for the
types of the blocks depends mostly on the structure of the matrix, but may also
depend on the proposed algorithms and the computer architecture. For example,
if a matrix has been reordered so that its diagonal blocks are all diagonal, then a
diagonal storage scheme for the diagonal blocks is best. Inversion of these blocks
would automatically use the appropriate algorithm. (The diagonal block type and
the local preconditioners for it would have to be added by the user.)
To handle different block types the same way, instances of each type are implemented
as C++ polymorphic objects (i.e., a set of related objects whose functions
can be called without knowing the exact type of the object). The block types are
derived from a local matrix class called LocalMat, a class that defines the common
interface for all the block types. The global preconditioners refer to LocalMat
objects. When LocalMat functions are called, the appropriate code is executed,
depending on the actual type of the LocalMat object (e.g., DENSE or CSR).
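In outline, the hierarchy can be pictured as below; this is an abbreviated sketch rather than the actual BPKIT declarations, and only one of the operations of Table 4 is shown.
class LocalMat {
  public:
    virtual ~LocalMat() {}
    // one representative operation; the full interface is listed in Table 4
    virtual void Mat_Vec_Mult(const double* b, double* c,
                              double alpha, double beta) const = 0;
};

class DenseMat : public LocalMat {       // DENSE block type
  public:
    void Mat_Vec_Mult(const double* b, double* c,
                      double alpha, double beta) const
    { /* dense kernel, e.g., a Level 2 BLAS call */ }
};

class CSRMat : public LocalMat {         // CSR block type
  public:
    void Mat_Vec_Mult(const double* b, double* c,
                      double alpha, double beta) const
    { /* sparse matrix-vector kernel */ }
};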
In addition, each block type has a variety of local preconditioners. The explicitness
or implicitness of local preconditioners need to be transparent, since, for
example, either can be used in block SSOR. Thus both types of preconditioners are
derived from the same base class. In particular, local preconditioners for a given
block type are derived from the base class which is that block type (e.g., the LP SVD
local preconditioner for the DENSE type is derived from the DENSE block type). This
gives the user the flexibility to treat explicit local preconditioners as regular blocks.
Implicit local preconditioners are not derived separately because logically they
are related to explicit local preconditioners. All block operations that apply to
explicit preconditioners also apply to implicit preconditioners; however, many of these operations are inefficient for implicit preconditioners, and their use has been disallowed
to prevent improper usage. Implicit preconditioners cannot be derived separately
from explicit preconditioners because of their similarity from the point of view of
global preconditioners. The LocalMat hierarchy is illustrated in Figure 2, showing
the derivation of block types and the subsequent derivation of local preconditioners.
These LocalMat classes form the "kernel" of BPKIT, and allow global preconditioners
to be implemented without knowledge of the type of blocks or local preconditioners
that are used. Users may also add to the kernel by deriving their own
specific classes.
Fig. 2. LocalMat hierarchy.
The challenge of designing the LocalMat class was to determine what operations
are required to implement block preconditioners and to give these operations semantics
that allow an efficient implementation for all possible block types. The
operations are implemented as C++ virtual functions. The following subsections
describe these operations.
3.1 Allocating storage
An important difference between dense and sparse blocks is that the storage requirement
for sparse blocks is not always known beforehand. Thus, in order to
treat dense and sparse blocks the same way, storage is allocated for a block when
it is required. As an optimization, if it is known that dense blocks are used (e.g.,
conversion of a sparse matrix to a block matrix with dense blocks), storage may be
allocated beforehand by the user. Functions are provided to set the data pointers
of the block objects. Thus it is possible to allocate contiguous storage for an array
of dense blocks.
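The idea can be illustrated with the following sketch; the commented set_data call stands in for the pointer-setting functions mentioned above, whose exact names are not shown in this article.
#include <cstddef>
#include <vector>

void preallocate_dense_blocks(int nblocks, int blocksize)
{
    // one contiguous array holding the data of all dense blocks
    std::vector<double> pool(static_cast<std::size_t>(nblocks) * blocksize * blocksize);
    for (int k = 0; k < nblocks; k++) {
        double* slice = &pool[static_cast<std::size_t>(k) * blocksize * blocksize];
        // block[k].set_data(slice, blocksize, blocksize);   // hypothetical call
        (void)slice;   // silences unused-variable warnings in this sketch
    }
}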
3.2 Local matrix functions
Table
4. Functions for LocalMat objects.
A.CreateEmpty()
A.SetToZero(dim1, dim2)
A.MatCopy(B)
A.CreateInv(lprecon)
A.Mat Trans(B)
A.Mat Mat Add(B, C, alpha)
A.Mat Mat Mult(B, C, alpha, beta)
A.Mat Vec Mult(b, c, alpha, beta)
A.Mat Trans Vec Mult(b, c, alpha, beta)
A.Mat Vec Solve(b, c)
A.Mat Trans Vec Solve(b, c)
Table
4 lists the functions that we have determined to be required for implementing
the block preconditioners listed in Table 1. The functions are invoked by a block
object represented by A. B and C are blocks of the same type as A, b and c are
components from a block vector object, and α and β are scalars. The default value for α is 1 and for β is 0.
CreateEmpty() creates an empty block (0 by 0 dimensions) of the same class
as that of A. This function is useful for constructing blocks in the preconditioner
without knowing the types of blocks that are being used. SetToZero(dim1, dim2)
sets A to zero, resetting its dimensions if necessary. This operation is not combined
with CreateEmpty() because it is not always necessary to zero a block when creating
it, and zeroing a block could be relatively expensive for some block types.
MatCopy(B) copies its argument block to the invoking block. The original data
held by the invoking block is released, and if the new block has a different size, the
allocated space is resized. CreateInv(lprecon) provides a common interface for
creating local preconditioners. lprecon is of a type that describes a local preconditioner
with its arguments from Table 2. The exact or approximate inverse (explicit
or implicit) of A is generated. The CreateEmpty and CreateInv functions create
new objects (not just the real data space). These functions return pointers to the
new objects to emphasize this point.
Overloading of the arithmetic operators such as + and * for blocks and local preconditioners has been sacrificed, since chained operations such as A = B + C*D would be inefficient if implemented as a sequence of elementary operations. In addition, these operators are difficult to implement without extra memory copying (for A = B*C, the * operator must first store the result into a temporary before the result is copied into A by the = operator).
These are the functions that we have found to be useful for block preconditioners.
For example, some of these functions are used in constructing BTIF, others in constructing BILUK, and other functions are useful, for example, in matrix-vector product and triangular solve
operations. Note in particular that Mat Trans Mat Mult is not a useful function
here, and has not been defined.
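To show how a global preconditioner is expressed in these terms, the sketch below forms the pivot blocks of the block tridiagonal factorization, D_i = A_{ii} − A_{i,i−1} D_{i−1}^{-1} A_{i−1,i}, using only Table 4 operations. It is not BPKIT source: the minimal interface declared at the top, the assumed semantics C = alpha*A*B + beta*C for A.Mat_Mat_Mult(B, C, alpha, beta) and C = A + alpha*B for A.Mat_Mat_Add(B, C, alpha), and the fact that temporaries are never freed are all simplifications.
#include <cstddef>
#include <vector>

class LocalPrecon;   // descriptor of a local preconditioner (Table 2)

// Minimal subset of the LocalMat interface needed by this sketch.
class LocalMat {
  public:
    virtual ~LocalMat() {}
    virtual LocalMat* CreateEmpty() const = 0;
    virtual LocalMat* CreateInv(const LocalPrecon&) const = 0;
    virtual void Mat_Mat_Add(const LocalMat* B, LocalMat* C, double alpha) const = 0;
    virtual void Mat_Mat_Mult(const LocalMat* B, LocalMat* C,
                              double alpha, double beta) const = 0;
};

// Dinv must be presized to A_diag.size(); it receives the (approximate)
// inverses of the pivot blocks.
void btif_pivots(const std::vector<const LocalMat*>& A_diag,
                 const std::vector<const LocalMat*>& A_lower,   // A_{i,i-1}
                 const std::vector<const LocalMat*>& A_upper,   // A_{i-1,i}
                 const LocalPrecon& lprecon,
                 std::vector<LocalMat*>& Dinv)
{
    Dinv[0] = A_diag[0]->CreateInv(lprecon);
    for (std::size_t i = 1; i < A_diag.size(); i++) {
        LocalMat* T = A_lower[i]->CreateEmpty();
        A_lower[i]->Mat_Mat_Mult(Dinv[i-1], T, 1.0, 0.0);   // T = A_{i,i-1} Dinv_{i-1}
        LocalMat* S = T->CreateEmpty();
        T->Mat_Mat_Mult(A_upper[i-1], S, 1.0, 0.0);         // S = T A_{i-1,i}
        LocalMat* P = A_diag[i]->CreateEmpty();
        A_diag[i]->Mat_Mat_Add(S, P, -1.0);                 // P = A_ii - S
        Dinv[i] = P->CreateInv(lprecon);
    }
}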
Note that local preconditioner objects also inherit these functions, although they
do not need them all. For objects that are implicit local preconditioners, no matrix
is formed, and operations such as addition (Mat Mat Add) do not make sense. For
blocks for which no local preconditioner has been created, solving a system with
that block (Mat Vec Solve) is not allowed. Here, again, we had to give the parent
classes all the specializations of their derived classes. Table 5 indicates when the
functions are allowed. An error condition is raised at run-time if the functions are
used incorrectly.
Given these operations, a one-step block SOR code could be implemented as
shown below. Ap is a pointer to a block matrix object which stores its block structure
in CSR format (the ia array stores the block row pointers, and the ja array
stores the block column indices). The pointers to the diagonal elements in idiag
and the inverses of the diagonal elements diag were computed during the call to
setup. V is a block vector object that allows blocks in a vector to be accessed as
individual entries. The rest of the code is self-explanatory.
for (i=0; i<Ap->numrow(); i++)
{
    for (j=ia[i]; j<idiag[i]; j++)
    {
        // V(i) = V(i) - omega * A(i,ja[j]) * V(ja[j])
        Ap->val(j).Mat_Vec_Mult(V(ja[j]), V(i), -omega, 1.0);
    }
    diag[i]->Mat_Vec_Solve(V(i), V(i));
}
Table
5. The types of objects that may be used with each function.
Explicit Implicit
Coarse local local
Function blocks precon. precon.
CreateEmpty *
MatCopy *
Mat Trans *
Mat Mat Add *
Mat Mat Mult *
Mat Vec Mult *
Mat Trans Vec Mult *
Mat Vec Solve *
Mat Trans Vec Solve *
A block matrix that mixes different block types must be used very carefully. First,
the restrictions for the different block types (Section 2.2.2) must not be violated.
Second, unless we define arithmetic operations between blocks of different types,
the incomplete factorization preconditioners cannot be used.
Our main design alternative was to create a block matrix class for each block
type. The classes would be polymorphic and define a set of common operations
that preconditioners may use to manipulate their blocks. A significant advantage
of this design is that it is impossible to use local preconditioners of the wrong
type (e.g., use incomplete factorization on a dense block). A disadvantage is that
different block types (e.g., specialized types created for a particular application)
cannot be used within the same block matrix.
Another alternative was to implement meta-matrices, i.e., blocks are nested re-
cursively. It would be complicated, however, for users to specify these types of
matrices and the levels of local preconditioners that could be used. In addition,
there is very little need for such complexity in actual applications, and the two-level
design (coarse and fine blocks) described in Section 2.1 should be sufficient.
4. NUMERICAL TESTS
The numerical tests were carried out on the matrices listed in Table 6. SHERMAN1
is a reservoir simulation matrix on a 10 × 10 × 10 grid, with one unknown per grid
point. This is a simple symmetric problem which we solve using partitioning by
planes. WIGTO966 is from an Euler equation model and was supplied by Larry
Wigton of Boeing. FIDAP019 models an axisymmetric 2-D developing pipe flow
with the fully-coupled Navier-Stokes equations using the two-equation k-ε model
for turbulence. The BARTHT1A and BARTHT2A matrices were supplied by Tim
Barth of NASA Ames and are from a 2-D, high Reynolds number aerofoil problem,
with a 1-equation turbulence model. The BARTHT2A model is solved with a
preconditioner based on the less accurate but sparser BARTHT1A model.
Table
6. Test matrices, listed with their dimensions and numbers of nonzeros.
Matrix n no. nonz
Tables
7 and 9 show the results for SHERMAN1 with the block relaxation and
incomplete factorization global preconditioners, using various local preconditioners.
The arguments given for the global and local preconditioners in these tables correspond
to those displayed in Tables 1 and 2 respectively. A block size of 100 was
used. Since the matrix is block tridiagonal, BILUK and BTIF are equivalent. The
tables show the number of steps of GMRES (FGMRES, if appropriate) that were
required to reduce the residual norm by a factor of 10^{-8}. A dagger (†) is used to
indicate that this was not achieved in 600 steps. Right preconditioning, 20 Krylov
basis vectors and a zero initial guess were used. The right-hand side was provided
with the matrix.
Since the local preconditioners have different costs, Tables 8 and 9 show the CPU
timings (system and user times) for BSSOR(1.,3) and BTIF. The tests were run on
one processor of a Sun Sparcstation 10. For this particular problem and choice of
partitioning, the ILU local preconditioners required the least total CPU time with
BSSOR(1.,3). With BTIF, an exact solve was most efficient (i.e., the preconditioner
was an exact solve).
Table
7. Number of GMRES steps for solving the SHERMAN1 problem with block relaxation
global preconditioners and various local preconditioners.
LP RILUK(0,0.) 93 71 53 402 48
Table
8. Number of GMRES steps and timings for solving the SHERMAN1 problem with
BSSOR(1.,3) and various local preconditioners.
precon solve total
LP ILUT(2,0.) 44 0.03 1.73 1.76
Table
9. Number of GMRES steps and timings for solving the SHERMAN1 problem with block
incomplete factorization and various local preconditioners.
precon solve total
Tables
10 and 11 show the number of GMRES steps for the BARTHT2A matrix.
A random right-hand side was used, and the initial guess was zero. The GMRES
tolerance was 10^{-8} and 50 Krylov basis vectors were used. In Table 10, block
incomplete factorization was used as the global preconditioner, and LU factorization
was used as the local preconditioner. In Table 11, block SSOR with one iteration
was used as the global preconditioner, and level-3 ILU was used as the
local preconditioner.
Table
10. Number of GMRES steps for solving the BARTHT2A problem with BILUK-LP LU.
block BILUK level
Tables
12 and 13 show the results for WIGTO966 using block incomplete factor-
ization. The right-hand side was the vector of all ones, and the GMRES tolerance
. The other parameters were the same as those in the previous experiment.
The failures in Table 12 are due to inaccuracy for low fill levels, and instability for
high levels. In Table 13, LP SVD(0.1,0.) used as the local preconditioner gave the
Table
11. Number of GMRES steps for solving the BARTHT2A problem with
block GMRES
size steps
best results. LP SVD(0.1,0.) indicates that the singular values of the pivot blocks
were thresholded at 0.1 times the largest singular value.
Table
12. Number of GMRES steps for solving the WIGTO966 problem with BILUK-LP INVERSE.
block BILUK level
Table
13. Number of GMRES steps for solving the WIGTO966 problem with
block BILUK level
Now we show some results with block tridiagonal incomplete factorization preconditioners
using general sparse approximate inverses. The matrix FIDAP019 was
partitioned into a block tridiagonal system using a constant block size of 161 (the
last block has size 91). Since the matrix arises from a finite element problem, a
more careful selection of the partitioning could have yielded better results.
The rows of the system were scaled by their 2-norms, and then their columns
were scaled similarly, since the matrix contains different equations and variables.
A Krylov subspace size of 50 for GMRES was used. The right-hand side was constructed
so that the solution is the vector of all ones. We compare the result with
the pair of global-local preconditioners BILUK(0)-LP SVD(0.5,0.), using a block
size of 5 (LP SVD(0.5,0.) gave the best result after several trials). Table 14 shows
the number of GMRES steps to convergence, timings for setting up the preconditioner
and for the iterations, and the number of nonzeros in the preconditioner.
The experiments were carried out on one processor of a Sun Sparcstation 10.
The timings show that some combinations of the BTIF global preconditioner with
the APINVS local preconditioner are comparable to BILUK(0)-LP SVD(0.5,0.), but
use much less memory, since only the approximate inverses of the pivot blocks need
to be stored. Although the actual number of nonzeros in the matrix is 259 879,
there were 39 355 block nonzeros required for BILUK, and therefore almost a million
Table
14. Test results for the FIDAP019 problem.
GMRES CPU time
steps precon solve total in precon
entries which were needed to be stored. The APINVS method produced approximate
inverses that were sparser than the original pivot blocks. See [Chow and Saad 1998]
for more details.
There is often heated debate over the use of C++ in scientific computing. Ideally,
C++ and Fortran 77 programs that are coded similarly should perform similarly.
However, by using object-oriented features in C++ to make a program more flexible
and maintainable, researchers usually encounter a 10 to percent performance
penalty [Jiang and Forsyth 1995]. If optimized kernels such as the BLAS are called,
then the C++ performance penalty can be very small for large problems, as a larger
fraction of the time is spent in the kernels.
Since C++ and Fortran 77 programs will usually be coded differently, a practical
comparison is made when a general code such as BPKIT is compared to a
specialized Fortran 77 code. Here we compare BPKIT to an optimized block SSOR
preconditioner with a GMRES accelerator. This code performs block relaxations of the form
x_i ← x_i + ω A_{ii}^{-1} r_i,    r ← r − A_{:,i} (ω A_{ii}^{-1} r_i),
for a block row i, where A_{ii} is the i-th diagonal block of A, A_{:,i} is the i-th block column of A, x_i is the i-th block of the current solution, and r is the current residual vector. Notice that the update of the residual vector is very fast if A is stored by
sparse columns and not by blocks. Since BPKIT stores the matrix A by blocks for
flexibility, it is interesting to see what the performance penalty would be for this
case.
Tables
15 and 16 show the timings for block SSOR on a Sun Sparcstation 10
and a Cray C90 supercomputer, for the WIGTO966 matrix. In this case, the right-hand
side was constructed so that the solution is a vector of all ones; the other
parameters were the same as before. All programs were optimized at the highest
optimization level; clock was used to measure CPU time (user and system) for
the C++ programs, and etime and timef were used to measure the times for the
Fortran 77 programs on the Sun and Cray computers, respectively. One step of
block SSOR with ω = 0.5 was used in the tests. The local preconditioner was an
exact LU factorization. Results are shown for a large range of block sizes, and in
the case of BPKIT, for both DENSE and CSR storage schemes for the blocks. The last
column of each table gives the average time to perform one iteration of GMRES.
The results show that the specialized Fortran 77 code has better performance over
a wide range of block sizes. This is expected because the update of the residual,
which is the dominant computation, is not affected by the blocking.
If dense blocks are used, BPKIT can be competitive on the Cray by using large
block sizes, such as 128. Blocks of this size contain many zero entries which are
treated as general nonzero entries when a dense storage scheme is used. However,
vectorization on the Cray makes operations with large dense blocks much more
efficient.
If sparse blocks are used, BPKIT can be competitive on the workstation with
moderate block sizes of 8 or 16. Operations with smaller sparse blocks are inefficient,
while larger blocks imply larger LU factorizations for the local preconditioner.
This comparison using block SSOR is dramatic since two very different data
structures are used. Comparisons of level-based block ILU in C++ and Fortran
77 show very small differences in performance, since the data structures used are
similar [Jiang and Forsyth 1995].
In conclusion, the types and sizes of blocks must be chosen carefully in BPKIT
to attain high performance on a particular machine. The types and sizes of blocks
should also be chosen in conjunction with the requirements of the preconditioning
algorithm and the block structure of the matrix. Based on the above experiments,
Table
17 gives an idea of the approximate block sizes that should be used for
BPKIT, given no other constraints.
5. CONCLUDING REMARKS
This article has described an object-oriented framework for block preconditioning.
Polymorphism was used to handle different block types and different local precon-
ditioners. Block types and local preconditioners form a "kernel" on which the block
preconditioners are built. Block preconditioners are written in a syntax comparable
to that for non-block preconditioners, and they work for matrices containing
any block type. BPKIT is easily extensible, as an object-oriented code would al-
low. We have distinguished between explicit and implicit local preconditioners, and
deduced the operations and semantics that are useful for polymorphically manipulating
blocks. Timings against a specialized and optimized Fortran 77 code on both
workstations and Cray supercomputers show that this framework can approach the
efficiency of such a code, as long as suitable block sizes and block types are chosen.
We believe we have found a suitable compromise between Fortran 77-like performance
and C++ flexibility. A significant contribution of BPKIT is the collection
of high-quality preconditioners under a common, concise interface.
Block preconditioners can be more efficient and more robust than their non-block
counterparts. The block size parameterizes between a local and global method, and
is valuable for compromising between accuracy and cost, or combining the effect of
two methods. The combination of local and global preconditioners leads to a variety
of useful methods, all of which may be applicable in different circumstances.
ACKNOWLEDGMENTS
We wish to thank Yousef Saad, Kesheng Wu and Andrew Chapman for their codes
and for helpful discussions. We also wish to thank Larry Wigton and Tim Barth
for providing some of the test matrices, and Tim Peck for helping us with editing.
This article has benefited substantially from the comments and suggestions of one
of the anonymous referees, and we are grateful for his time and patience.
Table
15. WIGTO966: BSSOR(0.5,1)-LP LU, Sun Sparcstation 10 timings.
Specialized Fortran 77 program
block GMRES time
size steps precon solve total average
BPKIT, dense blocks
block GMRES time
size steps precon solve total average
128 212 3.66 559.05 562.71 2.6543
BPKIT, sparse blocks
block GMRES time
size steps precon solve total average
128 212 4.42 162.58 167.
Table
16. WIGTO966: BSSOR(0.5,1)-LP LU, Cray C90 timings.
Specialized Fortran 77 program
block GMRES time
size steps precon solve total average
BPKIT, dense blocks
block GMRES time
size steps precon solve total average
BPKIT, sparse blocks
block GMRES time
size steps precon solve total average
128 212 5.39 132.92 138.31 0.6524
Table
17. Recommended block sizes.
Block type Sun Cray
CSR
--R
Iterative Solution Methods.
On some versions of incomplete block-matrix factorization iterative methods
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods.
A revised proposal for a sparse BLAS toolkit.
BPKIT Block preconditioning toolkit.
Approximate inverse techniques for block-partitioned ma- trices
Approximate inverse preconditioners via sparse-sparse iter- ations
Block preconditioning for the conjugate gradient method.
LAPACK: A portable linear algebra library for supercomputers.
IEEE Control Systems Society Workshop on Computer-Aided Control System Design (December 1989)
A set of level 3 basic linear algebra subprograms.
A sparse matrix library in C
An object oriented design for high performance linear algebra on distributed memory architectures.
Sparse matrix test problems.
ParPre: A parallel preconditioners package.
Efficient implementation of a class of preconditioned conjugate gradient methods.
Performance issues for iterative solvers in device simulation.
Robust linear and nonlinear strategies for solution of the transonic Euler equations.
users manual: Scalable library software for the parallel solution of sparse linear systems.
Block SSOR precon- ditionings for high-order 3D FE systems
On a family of two-level preconditionings of the incomplete block factorization type
Fortran 90: An entry to object-oriented programming for solution of partial differential equations
SPARSKIT: A basic tool kit for sparse matrix computations.
A flexible inner-outer preconditioned GMRES algorithm
ILUT: A dual threshold incomplete ILU factorization.
Iterative Methods for Sparse Linear Systems.
PETSc 2.0 users' manual.
An approximate factorization procedure based on the block Cholesky decomposition and its use with the conjugate gradient method.
BKAT: An object-oriented block Krylov accelerator toolkit
Presentation at Cray Research
Private communication.
--TR
Sparse matrix test problems
A set of level 3 basic linear algebra subprograms
The C++ programming language (2nd ed.)
A flexible inner-outer preconditioned GMRES algorithm
Iterative solution methods
Performance issues for iterative solvers in device simulation
Fortran 90
Object-oriented design of preconditioned iterative methods in diffpack
Approximate Inverse Techniques for Block-Partitioned Matrices
Approximate Inverse Preconditioners via Sparse-Sparse Iterations
Iterative Methods for Sparse Linear Systems
--CTR
Michael Gertz , Stephen J. Wright, Object-oriented software for quadratic programming, ACM Transactions on Mathematical Software (TOMS), v.29 n.1, p.58-81, March
Iain S. Duff , Michael A. Heroux , Roldan Pozo, An overview of the sparse basic linear algebra subprograms: The new standard from the BLAS technical forum, ACM Transactions on Mathematical Software (TOMS), v.28 n.2, p.239-267, June 2002
Marzio Sala, An object-oriented framework for the development of scalable parallel multilevel preconditioners, ACM Transactions on Mathematical Software (TOMS), v.32 n.3, p.396-416, September 2006
Glen Hansen , Andrew Zardecki , Doran Greening , Randy Bos, A finite element method for unstructured grid smoothing, Journal of Computational Physics, v.194 n.2, p.611-631, March 2004
Mikel Lujn , T. L. Freeman , John R. Gurd, OoLALA: an object oriented analysis and design of numerical linear algebra, ACM SIGPLAN Notices, v.35 n.10, p.229-252, Oct. 2000
Glen Hansen , Andrew Zardecki , Doran Greening , Randy Bos, A finite element method for three-dimensional unstructured grid smoothing, Journal of Computational Physics, v.202 n.1, p.281-297, January 2005
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | preconditioners;block matrices |
287640 | A combined unifrontal/multifrontal method for unsymmetric sparse matrices. | We discuss the organization of frontal matrices in multifrontal methods for the solution of large sparse sets of unsymmetric linear equations. In the multifrontal method, work on a frontal matrix can be suspended, the frontal matrix can be stored for later reuse, and a new frontal matrix can be generated. There are thus several frontal matrices stored during the factorization, and one or more of these are assembled (summed) when creating a new frontal matrix. Although this means that arbitrary sparsity patterns can be handled efficiently, extra work is required to sum the frontal matrices together, and this can be costly because indirect addressing is required. The (uni)frontal method avoids this extra work by factorizing the matrix with a single frontal matrix. Rows and columns are added to the frontal matrix, and pivot rows and columns are removed. Data movement is simpler, but higher fill-in can result if the matrix cannot be permuted into a variable-band form with small profile. We consider a combined unifrontal/multifrontal algorithm to enable general fill-in reduction orderings to be applied without the data movement of previous multifrontal approaches. We discuss this technique in the context of a code designed for the solution of sparse systems with unsymmetric pattern. | Introduction
. We consider the direct solution of sets of linear equations Ax = b, where
the coefficient matrix A is sparse, unsymmetric, and has a general
nonzero pattern. In a direct approach, a permutation of the matrix A is factorized
into its LU factors, PAQ = LU, where P and Q are permutation matrices chosen
to preserve sparsity and maintain numerical accuracy. Many recent algorithms and
software for direct solution of sparse systems are based on a multifrontal approach
[17, 2, 10, 27]. In this paper, we will examine a frontal matrix strategy to be used
within a multifrontal approach.
Frontal and multifrontal methods compute the LU factors of A by using data
structures that permit regular access of memory and the use of dense matrix kernels in
their innermost loops. On supercomputers and high-performance workstations, this
can lead to a significant increase in performance over methods that have irregular
memory access and which do not use dense matrix kernels.
We discuss frontal methods in Section 2 and multifrontal techniques in Section 3,
before discussing the combination of the two methods in Section 4. We consider the
influence of the principal parameter in our approach in Section 5, and the performance
Computer and Information Science and Engineering Department, University of Florida,
Gainesville, Florida, USA. (904) 392-1481, email: davis@cis.ufl.edu. Technical reports and matrices
are available via the World Wide Web at http://www.cis.ufl.edu/~davis, or by anonymous ftp at
ftp.cis.ufl.edu:cis/tech-reports.
y Rutherford Appleton Laboratory, Chilton, Didcot, Oxon. OX11 0QX, England, and European
Center for Research and Advanced Training in Scientific Computation (CERFACS), Toulouse,
France. email: isd@letterbox.rl.ac.uk. Technical reports, information on HSL, and matrices are
available via the World Wide Web at http://www.cis.rl.ac.uk/struct/ARCD/NUM.html, or by
anonymous ftp at seamus.cc.rl.ac.uk/pub.
Fig. 2.1. Frontal method example
of our new approach in Section 6, before a few concluding remarks and information
on the availability of our codes in Section 7.
2. Frontal methods. In a frontal scheme [15, 26, 31, 32], the factorization
proceeds as a sequence of partial factorizations and eliminations on full submatrices,
called frontal matrices. Although frontal schemes were originally designed for the
solution of finite element problems [26], they can be used on assembled systems [15]
and it is this version that we study in this paper. For general systems, the frontal
matrices can be written as

    [ F11  F12  F13 ]
    [ F21  F22  F23 ]                                                      (2.1)

where all rows are fully summed (that is, there are no further contributions to come
to the rows in (2.1)) and the first two block columns are fully summed. This means
that pivots can be chosen from anywhere in the fully summed block comprising the
first two block columns and, within these columns, numerical pivoting with arbitrary
row interchanges can be accommodated since all rows in the frontal matrix are fully-
summed. We assume, without loss of generality, that the pivots that have been chosen
are in the square matrix F11 of (2.1). F11 is factorized, multipliers are stored over
F21, and the Schur complement

    [ F22  F23 ] - F21 F11^{-1} [ F12  F13 ]

is computed using
full matrix kernels. The submatrix consisting of the rows and columns of the frontal
matrix from which pivots have not yet been selected is called the contribution block.
At the next stage, further entries from the original matrix are assembled with the
Schur complement to form another frontal matrix. The overhead is low since each
equation is only assembled once and there is never any assembly of two (or more)
frontal matrices.
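The arithmetic inside one such step is a dense partial factorization. The following small sketch is ours (NumPy, with the threshold partial pivoting used by the real codes omitted for brevity); it eliminates the fully summed pivot block and forms the contribution block:

import numpy as np

def eliminate_fully_summed(F, k):
    # F is a dense frontal matrix partitioned as [[F11, F12], [F21, F22]],
    # where the k-by-k block F11 holds the chosen (fully summed) pivots.
    F11, F12 = F[:k, :k], F[:k, k:]
    F21, F22 = F[k:, :k], F[k:, k:]
    # Multipliers (the columns of L) are stored over F21: L21 = F21 * inv(F11).
    L21 = np.linalg.solve(F11.T, F21.T).T
    # Schur complement: the contribution block carried to later eliminations.
    S = F22 - L21 @ F12
    return L21, S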
An example is shown in Figure 2.1, where two pivot steps have already been
performed on a 5-by-7 frontal matrix (computing the first two rows and columns of
U and L, respectively). Nonzero entries in L and U are shown in lower case. Row 6
has just been assembled into the current 4-by-7 frontal matrix (shown as a solid box).
Columns 3 and 4 are now fully-summed and can be eliminated. After this step, rows 7
and 8 must both be assembled before columns 5 and 6 can be eliminated (the dashed
box, a 4-by-6 frontal matrix containing rows 5 through 8 and columns 5, 6, 7, 8, 9, and
12 of the active submatrix). The frontal matrices are, of course, stored with columns
packed together so no zero columns are held in the frontal matrix delineated by the
dashed box. The dotted box shows the state of the frontal matrix when the next five
pivots can be eliminated. To factorize the 12-by-12 sparse matrix in Figure 2.1, a
working array of size 5-by-7 is required to hold the frontal matrix. Note that,
in Figure 2.1, the columns are in pivotal order.
One important advantage of the method is that only the frontal matrix need
reside in memory. Entries in A can be read sequentially from disk into the frontal
matrix, one row at a time. Entries in L and U can be written sequentially to disk in
the order they are computed.
The method works well for matrices with small profile, where the profile of a
matrix is a measure of how close the nonzero entries are to the diagonal and is given
by the expression:

    profile(A) = sum over i = 1,...,n of  max { i - j : a_ij ≠ 0 or a_ji ≠ 0 },

where it is assumed the diagonal is nonzero so all terms in the summation are non-
negative. If numerical pivoting is not required, fill-in does not increase the profile
(L + U has the same profile as A). To reduce the profile, the frontal method is
typically preceded by an ordering method such as reverse Cuthill-McKee (RCM) [5,
7, 29], which is typically faster than the sparsity-preserving orderings required by a
more general technique like a multifrontal method (such as nested dissection [24] and
minimum degree [1, 25]). A degree update phase, which is typically the most costly
part of a minimum degree algorithm, is not required in the RCM algorithm. However,
for matrices with large profile, the frontal matrix can be large, and an unacceptable
level of fill-in can occur.
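As a purely illustrative reading of this definition (our sketch, using 0-based NumPy indexing and a dense array standing in for the sparse pattern), the profile can be computed directly:

import numpy as np

def profile(A):
    # Sum over rows i of max(i - j) over all j with A[i, j] != 0 or A[j, i] != 0.
    # With a nonzero diagonal, each term is nonnegative (j = i is always admissible).
    n = A.shape[0]
    total = 0
    for i in range(n):
        cols = np.flatnonzero((A[i, :] != 0) | (A[:, i] != 0))
        total += i - cols.min()
    return total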
3. Multifrontal methods. In a multifrontal scheme for a symmetric matrix
[2, 8, 9, 10, 17, 18, 28], it is normal to use an ordering such as minimum degree to
reduce the fill-in. Such an ordering tends to reduce fill-in much more than profile
reduction orderings. The ordering is combined with a symbolic analysis to generate
an assembly or computational tree, where each node represents the assembly and
elimination operations on a frontal matrix and each edge the transfer of data from child
to parent node. When using the tree to drive the numerical factorization, assembly
and eliminations at any node can proceed as soon as those at the child nodes have been
completed, giving added flexibility for issues such as exploitation of parallelism. As in
the frontal scheme, the complete frontal matrix (2.1) cannot normally be factorized
but only a few steps of Gaussian elimination are possible, after which the remaining
reduced matrix (the Schur complement [ F22  F23 ] - F21 F11^{-1} [ F12  F13 ] of (2.1))
needs to be
summed (assembled) with other data at the parent node.
In the unsymmetric-pattern multifrontal method, the tree is replaced by a directed
acyclic graph (dag) [9], and a contribution block may be assembled into more than
one subsequent frontal matrix.
4. Combining the two methods. Let us now consider an approach that
combines some of the best features of the two methods. Assume we have chosen
a pivot and determined a frontal matrix as in a normal multifrontal method. At this
stage, a normal multifrontal method will select as many pivots as it can from the fully
summed part of the frontal matrix, perform the eliminations corresponding to these
pivots, store the contributions to the factors, and store the remaining frontal matrix
for later assembly at the parent node of the assembly tree. The strategy we use in
our combined method is, after forming the frontal matrix, to hold it as a submatrix
of a larger working array and allow further pivots to be chosen from anywhere within
the frontal matrix. If such a potential pivot lies in the non-fully summed parts of
the frontal matrix then it is necessary to sum its row and column before it could be
used as a pivot. This is possible so long as its fully summed row and column can be
accommodated within the larger working array. In this way, we avoid some of the
data movement and assemblies of the multifrontal method. Although the motivation
is different, the idea of continuing with a frontal matrix for some steps before moving
to another frontal matrix is similar to recent work in implementing frontal schemes
within a domain decomposition environment, for example [23], where several fronts are
used within a unifrontal context. However, in the case of [23], the ordering is done a
priori and no attempt is made to use a minimum degree ordering. Another case where
one continues with a frontal matrix for several assemblies is when relaxed supernode
amalgamation is used [17]. However, this technique delays the selection of pivots and
normally causes more fill-in and operations than our technique. Indeed, the use of our
combined unifrontal/multifrontal approach does not preclude implementing relaxed
supernode amalgamation as well. In comparison with the unifrontal method where
the pivoting is determined entirely from the order of the assemblies, the combined
method requires the selection of pivots to start each new frontal matrix. This implies
either an a priori symbolic ordering or a pivot search during numerical factorization.
The pivot search heuristic requires the degrees of rows and columns, and thus a degree
update phase. The cost of this degree update is clearly a penalty we pay to avoid the
poor fill-in properties of conventional unifrontal orderings.
We now describe how this new frontal matrix strategy is applied in UMFPACK
Version 1.1 [8, 10], in order to obtain a new combined method. However, the new
frontal strategy can be applied to any multifrontal algorithm. UMFPACK does not
use actual degrees but keeps track of upper bounds on the degree of each row and
column. The symmetric analogue of the approximate degree update in UMFPACK
has been incorporated into an approximate minimum degree algorithm (AMD) as
discussed in [1], where the accuracy of our degree bounds is demonstrated.
Our new algorithm consists of several major steps, each of which comprises several
pivot selection and elimination operations. To start a major step, we choose a pivot,
using the degree upper bounds and a numerical threshold test, and then define a
working array which is used for pivoting and elimination operations so long as these
can be performed within it. When this is no longer possible, the working array is
stored and another major step is commenced.
To start a major step, the new algorithm selects a few columns (the number of candidates is a parameter with a small default value)
from those of minimum upper bound degree and computes their patterns and true
degrees. A pivot row is selected on the basis of the upper bound on the row degree
from those rows with nonzero entries in the selected columns. Suppose the pivot row
and column degrees are r and c, respectively. A k-by-l working array is allocated,
where k = Gc and l = Gr (the growth factor G typically has a value between 2 and 3, and is fixed
for the entire factorization). The active frontal matrix is c-by-r but is stored in a
k-by-l working array. The pivot row and column are fully assembled into the working
array and define a submatrix of it as the active frontal matrix. The approximate
degree update phase computes the bounds on the degrees of all the rows and columns
in this active frontal matrix and assembles previous contribution blocks into the active
frontal matrix. A row i in a previous contribution block is assembled into the active
frontal matrix if
1. the row index i is in the nonzero pattern of the current pivot column, and
2. if the column indices of the remaining entries in the row are all present in the
nonzero pattern of the current pivot row.
Columns of previous contribution blocks are assembled in an analogous manner.
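Expressed with index sets, this assembly test is a simple containment check; the following sketch is ours and the argument names are illustrative:

def row_can_be_assembled(i, row_cols, pivot_col_rows, pivot_row_cols):
    # row_cols: column indices of the remaining entries of row i of a previous
    # contribution block; pivot_col_rows / pivot_row_cols: nonzero patterns of the
    # current pivot column and pivot row.  Both conditions of the text must hold.
    return i in pivot_col_rows and set(row_cols) <= set(pivot_row_cols)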
The major step then continues with a sequence of minor steps at each of which
another pivot is sought. These minor steps are repeated until the factorization can
no longer continue within the current working array, at which point a new major
step is started. When a pivot is chosen in a minor step, its rows and columns are
fully assembled into the working array and redefine the active frontal matrix. The
approximate degree update phase computes the bounds on all rows and columns in
this active frontal matrix and assembles previous contribution blocks into the active
frontal matrix. The corresponding row and column of U and L are computed, but
the updates from this pivot are not necessarily performed immediately. For efficient
use of the Level 3 BLAS it is better to accumulate a few updates (typically 16 or 32,
if possible) and perform them at the same time. To find a pivot in this minor step,
a single candidate column from the contribution block is first selected, choosing one
with least value for the upper bound of the column degree, and any pending updates
are applied to this column. The column is assembled into a separate work vector,
and a pivot row is selected on the basis of upper bound on the row degrees and a
numerical threshold test. Suppose the candidate pivot row and column degrees
are r' and c', respectively. Three conditions apply:
1. If r' > l or c' > k, then factorization can no longer continue within the active
frontal matrix. Any pending updates are applied. The LU factors are stored.
The active contribution block is saved for later assembly into a subsequent
frontal matrix. The major step is now complete.
2. If r' and c' are small enough for the candidate pivot to fit into the
active frontal matrix without removing the p pivots already stored there.
Factorization continues within the active frontal matrix by
commencing another minor step.
3. Otherwise, if the candidate pivot can fit,
but only if some of the previous p pivots are shifted out of the current frontal
matrix. Any pending updates are applied. The LU factors corresponding
to the pivot rows and columns are removed from the front and stored. The
active contribution block is left in place. Set p ← 1. Factorization continues
within the active frontal matrix by commencing another minor step.
We know of no previous multifrontal method that considers case 3 (in particular,
UMFPACK Version 1.1 does not). Case 1 does not occur in frontal methods, which are
given a working array large enough to hold the largest frontal matrix (unless numerical
pivoting causes the frontal matrix to grow unexpectedly). Taking simultaneous
advantage of both cases 1 and 3 can significantly reduce the memory requirements and
number of assembly operations, while still allowing the use of orderings that reduce
fill-in.
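Schematically, the decision among the three cases amounts to size tests against the k-by-l working array. The sketch below is ours, and the exact inequalities involving the p stored pivots are only indicative of the kind of test made:

def classify_candidate(r_new, c_new, k, l, p):
    # r_new, c_new: degrees of the candidate pivot row and column;
    # p: pivots whose L/U parts are still held in the k-by-l working array.
    if r_new > l or c_new > k:
        return "case 1: finish this front, store factors and contribution block"
    if r_new + p <= l and c_new + p <= k:
        return "case 2: extend the front in place"
    return "case 3: store the p pivots' rows/columns of L and U, set p = 1, continue"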
Figure 4.1 illustrates how the working array is used in UMFPACK Version 1.1
and the combined method. The matrices L 1 , L 2 , U 1 , and U 2 in the figure are the
columns and rows of the LU factors corresponding to the pivots eliminated within
this frontal matrix. The matrix D is the contribution block. The matrix -
U 2 is U 2
with rows in reverse order, and -
columns in reverse order. The arrows
denote how these matrices grow as new pivots are added. When pivots are removed
from the working array in Figure 4.1(b), for case 3 above, the contribution block does
Fig. 4.1. Data structures for the active frontal matrix: (a) UMFPACK working array; (b) combined method working array.
Table 5.1. Test matrices.
name       n       |A|      sym.    discipline        comments
gre 1107   1107    5664     0.000   discrete simul.   computer system
gemat11    4929    33185    0.001   electric power    linear programming basis
psmigr 1                                              migration
lns 3937   3937    25407    0.850   fluid flow        linearized Navier-Stokes
shyy161    76480   329762   0.726   fluid flow        viscous fully-coupled Navier-Stokes
hydr1      5308    23752    0.004   chemical eng.     dynamic simulation
rdist1     4134    94408    0.059   chemical eng.     reactive distillation
lhr04      4101    82682    0.015   chemical eng.     light hydrocarbon recovery
lhr71                               chemical eng.     light hydrocarbon recovery
not need to be repositioned. A similar data organization is employed by MA42 [22].
5. Numerical experiments. We discuss some experiments on the selection of
a value for G in this section and compare the performance of our code with other
sparse matrix codes in the next section.
We show in Table 5.1 the test matrices we will use in our experiments in this
section and the next. The table lists the name, order, number of entries (jAj),
symmetry, the discipline from which the matrix comes, and additional comments.
The symmetry, or more correctly the structural symmetry, is the number of matched
off-diagonal entries over the total number of off-diagonal entries. An entry a_ij (j ≠ i)
is matched if a_ji is also an entry.
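A direct way to compute this measure from a (dense) pattern is sketched below; the code is ours and not part of any of the packages discussed:

import numpy as np

def structural_symmetry(A):
    # Fraction of off-diagonal entries a_ij (j != i) for which a_ji is also an entry.
    pattern = (A != 0)
    np.fill_diagonal(pattern, False)
    off_diagonal = pattern.sum()
    matched = (pattern & pattern.T).sum()
    return matched / off_diagonal if off_diagonal else 1.0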
We have performed our experiments on the effect of the value of G on a SUN
SPARCstation 20 Model 41, using the Fortran compiler (f77 version 1.4, with -O4
and -libmil options), and the BLAS from [11] and show the results in Table 5.2. The
table lists the matrix name, the growth factor G, the number of frontal matrices
(major steps), the numerical factorization time in seconds, the total time in seconds,
the number of nonzeros in the LU factors, the memory used, and the floating-point
operation count.
The number of frontal matrices is highest for G = 1 and decreases as G increases
although, because the effect is local, this decrease may not be monotonic. Although
the fill-in and operation count are typically lowest when the minimum amount of
memory is allocated for each frontal matrix (G = 1), the factorization time is often
high because of the additional data movement required to assemble the contribution
blocks and the fact that the dense matrix kernels are more efficient for larger frontal
matrices.
These results show that our strategy of allocating more space than necessary for
the frontal matrix and choosing pivots that are not from the initial fully summed
Table 5.2. Effect of G on the performance of the combined method.
Matrix     G    fronts   factor   total   |L+U|   memory   op count
gre 1107 1.0 926 0.51 0.89 0.048 0.160 2.6
2.0 406 0.43 0.74 0.079 0.196 6.7
2.5 323 0.70 1.01 0.100 0.224 9.0
7.0 122 0.75 0.85 0.121 0.235 9.5
2.0 988 0.37 0.70 0.067 0.287 0.8
2.5 873 0.39 0.69 0.077 0.299 1.0
3.0 776 0.33 0.66 0.087 0.312 1.3
5.0 586 0.41 0.70 0.119 0.353 2.0
7.0 538 0.41 0.77 0.148 0.381 2.8
2.0 2324 1.41 3.79 0.122 0.959 7.2
2.5 2326 1.37 3.96 0.122 0.883 7.2
3.0 2322 1.41 3.97 0.123 0.886 8.1
7.0 2280 2.82 5.87 0.153 0.852 16.0
lns 3937 1.0 3208 12.82 17.16 0.474 1.161 95.5
2.0 2011 5.68 9.02 0.494 1.170 84.1
2.5
3.0 1717 7.01 9.29 0.551 1.425 90.0
5.0 1446 6.26 9.30 0.591 1.750 96.4
7.0 1454 11.76 15.14 0.697 2.005 134.1
2.0 3947 0.52 1.57 0.112 0.384 2.7
2.5 3726 0.67 1.46 0.122 0.398 3.3
2.0 306 1.12 1.85 0.278 0.668 10.6
2.5 316 1.21 1.85 0.304 0.694 11.5
3.0 295 1.23 1.69 0.333 0.702 10.4
5.0 160 2.15 2.21 0.493 0.890 14.6
7.0 160 3.17 2.04 0.620 1.038 14.7
2.0 2223 1.90 4.03 0.313 0.824 16.4
2.5 2185 3.09 5.12 0.457 1.051 32.4
5.0 2153 2.51 4.10 0.444 0.995 26.4
7.0 2180 3.70 4.55 0.497 1.195 28.9
block can give substantial gains in execution time with sometimes a small penalty in
storage, although occasionally the storage can be less as well.
6. Performance. In this section, we compare the performance of the combined
unifrontal/multifrontal method (MA38) with the unsymmetric-pattern multifrontal
method (UMFPACK Version 1.1 [8, 10]), a general sparse matrix factorization
algorithm that is not based on frontal matrices (MA48, [19, 20]), the frontal method
(MA42, [15, 22]), and the symmetric-pattern multifrontal method (MUPS [2]). All
methods can factorize general unsymmetric matrices and all use dense matrix kernels
to some extent [12]. We tested each method on a single processor of a CRAY C-98,
although MUPS is a parallel code. Version 6.0.4.1 of the Fortran compiler (CF77)
was used. Each method (except MA42, which we discuss later) was given 95 Mw of
memory to factorize the matrices listed in Table 5.1. Each method has a set of input
parameters that control its behavior. We used the recommended defaults for most of
these, with a few exceptions that we now indicate.
By default, three of the five methods (MA38, UMFPACK V1.1, and MA48)
preorder a matrix to block triangular form (always preceded by finding a maximum
transversal [14]), and then factorize each block on the diagonal [16]. This can reduce
the work for unsymmetric matrices. We did not perform the preordering, since MA42
and MUPS do not provide these options.
One matrix (lhr71) was so ill-conditioned that it required scaling prior to its
factorization. The scale factors were computed by the Harwell Subroutine Library
routine MC19A [6]. Each row was then subsequently divided by the maximum
absolute value in the row (or column, depending on how the method implements
threshold partial pivoting). No scaling was performed on the other matrices.
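The final equilibration step just described amounts to the following (our sketch, dense row version, assuming no zero rows):

import numpy as np

def row_equilibrate(A):
    # After the MC19-style scaling, divide each row by its largest absolute value.
    scale = np.abs(A).max(axis=1)
    return A / scale[:, None]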
By default, MUPS preorders each matrix to maximize the modulus of the smallest
entry on the diagonal (using a maximum transversal algorithm). This is followed by
a minimum degree ordering on the nonzero pattern of A +A T .
For MA42, we first preordered the matrix to reduce its profile using a version
of Sloan's algorithm [30, 21]. MA42 is able to operate both in-core and out-of-
core, using direct access files. It has a finite-element entry option, for which it was
primarily designed. We used the equation entry mode on these matrices, although
some matrices are obtained from finite-element calculations. We tested MA42 both
in-core and out-of-core. The CPU time for the out-of-core factorization was only
slightly higher than the in-core factorization, but the memory usage was much less.
We thus report only the out-of-core results. Note that the CPU time does not include
any additional system or I/O device time required to read and write the direct access
files. The only reliable way to measure this is on a dedicated system. On a CPU-bound
multiuser system, the time spent waiting for I/O would have little effect on
total system throughput.
The symbolic analysis phase in MA42 determines a minimum front size (k-by-l,
which assumes no increase due to numerical pivoting), and minimum sizes of other
integer and real buffers (of total size b, say). We gave the numerical factorization
phase a working array of size 2k-by-2l, and a buffer size of 2b, to allow for numerical
pivoting. In our tables, we show how much space was actually used. The disk space
taken by MA42 is not included in the memory usage.
The results are shown in Table 6.1. For each matrix, the tables list the numerical
factorization time, total factorization time, number of nonzeros in L+U (in millions),
amount of memory used (in millions of words), and floating-point operation count (in
millions of operations) for each method. The total time includes preordering, symbolic
analysis and factorization, and numerical factorization. The time to compute the scale
factors for the lhr71 matrix is not included, since we used the same scaling algorithm
for all methods. For each matrix, the lowest time, memory usage, or operation count
is shown in bold. We compared the solution vectors, x, for each method. We found
that all five methods compute the solutions with comparable accuracy, in terms of
the norm of the residual. We do not give the residual in Table 6.1.
All five codes have factorize-only options that are often much faster than the
combined analysis+factorization phase(s), and indeed the design criterion for some
codes (for example MA48) was to minimize factorize time even if this caused an
increase in the initial analyse time. For MA42, however, the analysis is particularly
simple so normally its overhead is much smaller than for the other codes.
MUPS is shown as failing twice (on shyy161 and lhr71 which are both nearly
singular). In both cases, numerical pivoting caused an increase in storage requirements
above the 95 Mw allocated. MA42 failed to factorize the shyy161 matrix because of
numerical problems. It ran out of disk space during the numerical factorization of
the lhr71 matrix (about 5.8 Gbytes were required). For this matrix, we report the
estimates of the number of entries in the LU factors and the memory usage, as reported
by the symbolic analysis phase of MA42.
Since the codes being compared all offer quite different capabilities and are
designed for different environments, the results should not be interpreted as a
direct comparison between them. However, what we would like to highlight is the
improvements that our new technique brings to the UMFPACK code and that the new
code is at least comparable in performance with other sparse matrix codes. The peak
performance of MA38 is 607 Mflops for the numerical factorization of the psmigr 1
matrix (compared with the theoretical peak of about 1 Gflop for one C-90 processor).
Our new code requires less storage than the original UMFPACK code and sometimes
it requires less than half the storage. In execution time for both analyse/factor and
factorize only, it is generally faster (sometimes by nearly a factor of two) and when
it is slower than the original UMFPACK code it is so by at most about 13% (for the
hydr1 matrix). Over all the codes, MA38 has the fastest analyse+factorize time for
five out of the ten matrices. Except for one matrix (lns 3937) it never takes more than
twice the time of the fastest method. Its memory usage is comparable to the other
in-core methods. MA42 can typically factorize these matrices with the least amount
of core memory. MA42 was originally designed for finite-element entry and so is not
optimized for equation entry. The experiments are run on only one computer which
strongly favors direct addressing and may thus disadvantage MA48 which does not
make such heavy use of the Level 3 BLAS.
7. Summary. We have demonstrated how the advantages of the frontal and
multifrontal methods can be combined. The resulting algorithm performs well for
matrices from a wide range of disciplines. We have shown that the combined
unifrontal/multifrontal method gains over our earlier code because it avoids more
indirect addressing and generally performs eliminations on larger frontal matrices.
Other differences between UMFPACK Version 1.1 and the new code (MA38, or
UMFPACK Version 2.0) include an option of overwriting the matrix A with its LU
factors, printing of input and output parameters, a removal of the extra copy of the
numerical values of A, iterative refinement with sparse backward error analysis [4],
more use of Level 3 BLAS within the numerical factorization routine, and a simpler
calling interface. These features improve the robustness of the code and result in a
modest decrease in memory usage.
The combined unifrontal/multifrontal method is available as the Fortran 77 codes,
UMFPACK Version 2.0 in Netlib [13], 1 and MA38 in Release 12 of the Harwell
1 UMFPACK Version 2.0, in Netlib, may only be used for research, education, or benchmarking
Subroutine Library [3]. 2
8. Acknowledgements. We would like to thank Nick Gould, John Reid, and
Jennifer Scott from the Rutherford Appleton Laboratory for their helpful comments
on a draft of this report.
--R
An approximate minimum degree ordering algorithm
Vectorization of a multiprocessor multifrontal code
Solving sparse linear systems with sparse backward error
A linear time implementation of the reverse Cuthill-McKee algorithm
On the automatic scaling of matrices for Gaussian elimination
Reducing the bandwidth of sparse symmetric matrices
Users' guide to the unsymmetric-pattern multifrontal package (UMFPACK
Distribution of mathematical software via electronic mail
On algorithms for obtaining a maximum transversal
An implementation of Tarjan's algorithm for the block triangularization of a matrix
The design of MA48
The use of profile reduction algorithms with a frontal code
The use of multiple fronts in Gaussian elimination
Computer Solution of Large Sparse Positive Definite Systems
A frontal solution program for finite element analysis
The multifrontal method for sparse matrix solution: Theory and Practice
The multifrontal method for sparse matrix solution: Theory and practice
Comparative analysis of the Cuthill-McKee and the reverse Cuthill-McKee ordering algorithms for sparse matrices
An algorithm for profile and wavefront reduction of sparse matrices
frontal techniques for chemical process simulation on supercomputers
Supercomputing strategies for the design and analysis of complex separation systems
--TR
Distribution of mathematical software via electronic mail
Sparse matrix test problems
The evolution of the minimum degree ordering algorithm
A set of level 3 basic linear algebra subprograms
The multifrontal method for sparse matrix solution
Sparse matrix methods for chemical process separation calculations on supercomputers
The design of a new frontal code for solving sparse, unsymmetric systems
The design of MA48
Stable finite elements for problems with wild coefficients
An Approximate Minimum Degree Ordering Algorithm
An Unsymmetric-Pattern Multifrontal Method for Sparse LU Factorization
The Multifrontal Solution of Indefinite Sparse Symmetric Linear
Computer Solution of Large Sparse Positive Definite
Reducing the bandwidth of sparse symmetric matrices
A Supernodal Approach to Sparse Partial Pivoting
--CTR
Eero Vainikko , Ivan G. Graham, A parallel solver for PDE systems and application to the incompressible Navier-Stokes equations, Applied Numerical Mathematics, v.49 n.1, p.97-116, April 2004
J.-R. de Dreuzy , J. Erhel, Efficient algorithms for the determination of the connected fracture network and the solution to the steady-state flow equation in fracture networks, Computers & Geosciences, v.29 n.1, p.107-111, February
Kai Shen, Parallel sparse LU factorization on different message passing platforms, Journal of Parallel and Distributed Computing, v.66 n.11, p.1387-1403, November 2006
Timothy A. Davis , John R. Gilbert , Stefan I. Larimore , Esmond G. Ng, A column approximate minimum degree ordering algorithm, ACM Transactions on Mathematical Software (TOMS), v.30 n.3, p.353-376, September 2004
Anshul Gupta, Recent advances in direct methods for solving unsymmetric sparse systems of linear equations, ACM Transactions on Mathematical Software (TOMS), v.28 n.3, p.301-324, September 2002
Xiaoye S. Li , James W. Demmel, SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems, ACM Transactions on Mathematical Software (TOMS), v.29 n.2, p.110-140, June
Timothy A. Davis, A column pre-ordering strategy for the unsymmetric-pattern multifrontal method, ACM Transactions on Mathematical Software (TOMS), v.30 n.2, p.165-195, June 2004
J. Candy , R. E. Waltz, An Eulerian gyrokinetic-Maxwell solver, Journal of Computational Physics, v.186
A. Guardone , L. Vigevano, Finite element/volume solution to axisymmetric conservation laws, Journal of Computational Physics, v.224 n.2, p.489-518, June, 2007 | frontal methods;sparse unsymmetric matrices;linear equations;multifrontal methods |
287731 | On Two Interior-Point Mappings for Nonlinear Semidefinite Complementarity Problems. | Extending our previous work ( Monteiro and Pang 1996), this paper studies properties of two fundamental mappings associated with the family of interior-point methods for solving monotone nonlinear complementarity problems over the cone of symmetric positive semidefinite matrices. The first of these maps lead to a family of new continuous trajectories which include the central trajectory as a special case. These trajectories completely "fill up" the set of interior feasible points of the problem in the same way as the weighted central paths do the interior of the feasible region of a linear program. Unlike the approach based on the theory of maximal monotone maps taken by Shida and Shindoh (1996) and Shida, Shindoh, and Kojima (1995), our approach is based on the theory of local homeomorphic maps in nonlinear analysis. | Introduction
In a series of recent papers (see Kojima, Shida and Shindoh 1995a, Kojima, Shida and Shindoh
1995b, Kojima, Shida and Shindoh 1996, Kojima, Shindoh and Hara 1997, Shida and Shindoh
1996, Shida, Shindoh and Kojima 1995, Shida, Shindoh and Kojima 1996), Hara, Kojima, Shida
and Shindoh have introduced the monotone complementarity problem in symmetric matrices, studied
its properties, and developed interior-point methods for its solution. A major source where this
problem arises is a convex semidefinite program which has in the last couple years attracted a great
deal of attention in the mathematical programming literature (see Alizadeh 1995, Alizadeh, Hae-
berly and Overton 1994, Alizadeh, Haeberly and Overton 1995, Goldfarb and Scheinberg 1996, Luo,
Sturm and Zhang 1996a, Luo, Sturm and Zhang 1996b, Monteiro 1995, Nesterov and Nemirovskii
1994, Nesterov and Todd 1995, Nesterov and Todd 1997, Potra and Sheng 1995, Ramana, Tunçel
and Wolkowicz 1997, Shapiro 1997, Vandenberghe and Boyd 1996, Zhang 1995). Our goal in this
paper is to extend our previous work Monteiro and Pang (1996) to the monotone complementarity
problem in symmetric matrices and to apply the results to a convex semidefinite program. Although
the present analysis is significantly more involved (for one thing, a great deal of matrix-theoretic
tools is employed), we obtain a large set of conclusions that extend those in Monteiro and Pang
(1996). Similar to the motivation in this reference (which the reader is advised to consult), we undertook
the present study in order to gain an in-depth understanding of the interior point methods,
in particular the limiting behavior of certain solution trajectories, both new and known, for solving
these important mathematical programs defined on the cone of positive semidefinite matrices.
Let M n denote the vector space of n \Theta n matrices with real entries; let S n denote the subspace
of M n consisting of the symmetric matrices; and let ! m denote the m-dimensional Euclidean space
of real vectors. Let S n
denote the subset of S n consisting of the positive semidefinite matrices.
m be a given mapping which we assume to be continuous on S n
(which is isomorphic to the Euclidean space ! n(n+1)=2 ). The complementarity problem which we
shall study in this paper is to find a triple (X; Y; z) 2 S n \Theta S n \Theta ! m satisfying
    F(X, Y, z) = 0,   X ∈ S^n_+ ,   Y ∈ S^n_+ ,   XY = 0.                    (1)
It is known (see the cited references) that there are several equivalent ways to represent the complementarity
conditions in this problem; namely,
    XY + YX = 0   ⟺   XY = 0   ⟺   tr(XY) = 0   (for X, Y ∈ S^n_+),          (2)
where "tr" denotes the trace of a matrix. Associated with these equivalent conditions, we can define
mappings that will help us understand the limiting behavior of certain path-following interior point
methods for solving the problem (1). In this paper, we focus on the first two conditions and the
associated mappings. Specifically, a main objective of this paper is to examine properties of the
mapping H : S^n_+ × S^n_+ × R^m → S^n × S^n × R^m defined by

    H(X, Y, z) ≡ ( (XY + YX)/2 , F(X, Y, z) ).                               (3)
Associated with this map, we define the set

    U ≡ { (X, Y) ∈ S^n_+ × S^n_+ : XY + YX ∈ S^n_{++} }.                     (4)
We will give conditions on the mapping F which guarantee that the system H(X, Y, z) = (A, B) (5)
has the following properties:
(P1) it has a solution for every (A, B) ∈ S^n_+ × F(U × R^m);
(P2) the solution, denoted (X(A, B), Y(A, B), z(A, B)), is unique when (A, B) ∈ S^n_{++} × F(U × R^m);
if a sequence
++ \Theta F (U \Theta ! m ) converges to a limit (A1
++ \Theta
F (U \Theta ! m ), then the sequence converges to the limit
if a sequence
++ \Theta F (U \Theta ! m ) converges to a limit (A1
then the sequence
implies the existence of a solution of (1) when
We also consider the map ~
~
and prove under suitable conditions that for every - 0 and B 2 F (S n
++ \Theta S n
~
has a solution, which is unique when - ? 0. Clearly, this latter result implies that (1) has a
solution under the weaker assumption that 0 2 F (S n
++ \Theta S n
A major difference between the mappings H and ~H lies in their ranges. Comparing the systems
(5) and (7) with varying right-hand sides, we see that we are able to obtain results for a broader
class of solution trajectories in the case of H than ~H, with the right-side matrix A in (5) being an
arbitrary symmetric matrix versus the restriction to a positive multiple of the identity matrix in
(7).
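For a computational reading of the first of these maps, H can be evaluated as below. The sketch is ours, with F left abstract and NumPy arrays standing in for elements of S^n:

import numpy as np

def H(X, Y, z, F):
    # H(X, Y, z) = ((XY + YX)/2, F(X, Y, z)): the symmetrized product of X and Y
    # together with the value of the map F; the first block is again symmetric.
    return 0.5 * (X @ Y + Y @ X), F(X, Y, z)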
The main tool used to derive the above results is a known theory of local homeomorphic maps
summarized in Monteiro and Pang (1996) that has been applied to a standard mixed complementarity
problem defined on ! n
As judged from the papers on semidefinite programming,
the extension of the previous analysis Monteiro and Pang (1996) to the case of complementarity
problems in symmetric positive semidefinite matrices is nontrivial; this thus necessitates our present
investigation.
The complementarity problem (1) arises as the set of first-order necessary optimality conditions
for the following nonlinear semidefinite program (see Shapiro 1997):
minimize '(x)
subject to G(x) 2 \GammaS n
are given smooth mappings. Indeed, it is
well-known (see for example Shapiro 1997) that, under a suitable constraint qualification, if x is
a local optimal solution of the semidefinite program, then there must exist j
such that
is the Lagrangian function defined by
where
denotes the scalar product of A; Letting
F (U; V; x;
r x L(x; U;
we see that the first-order necessary optimality conditions for problem (8) are exactly in the form
of the complementarity problem (1). For the mapping F in (11), our principal result is Theorem 4
which shows that under some fairly standard assumptions on the functions ', G, and h, the mapping
H defined by (3) maps U \Theta ! m+p homeomorphically onto the convex set S n
++ \Theta F (U \Theta ! m+p );
moreover, H(S n
Before proceeding further, we should relate the formulation (1) with the formulation in Kojima,
Shida and Shindoh (1995a), Kojima, Shida and Shindoh (1995b), Kojima, Shida and Shindoh
(1996), Kojima, Shindoh and Hara (1997), Shida and Shindoh (1996), Shida, Shindoh and Kojima
(1995), Shida, Shindoh and Kojima (1996). Specifically by defining the set
the problem (1) is of the form
which is exactly the central problem studied in the cited references. So in principle the results in
these references (especially Shida and Shindoh 1996, Shida, Shindoh and Kojima 1995 which deal
with nonlinear problems) could be applied to the problem (1), provided that we can demonstrate
that F is a "maximal monotone" subset of S n \Theta S n . By assuming that a certain monotonicity
condition on F holds everywhere on S n \Theta S n \Theta ! m , the maximal monotonicity of F follows from
Theorem 8 in Monteiro and Pang (1996). In this case, the existence and continuity of the solutions
of system (7) follow from the theory presented in Shida, Shindoh and Kojima (1995). However, the
requirement that the function F satisfy the monotonicity condition everywhere on S n \Theta S n \Theta ! m
is quite restrictive; for example, the function F given by (11) satisfies the monotonicity condition
only on the set S n
Our analysis assumes that the monotonicity condition holds only
on the latter set, and thus is valid for the special map F given by (11). Incidentally, under the
weaker monotonicity condition imposed on F , we are able to establish that the set F defined by
(12) is maximal monotone only in a restricted sense, namely with respect to the set U (see Section
6). Section 7 of this paper establishes the results of Shida, Shindoh and Kojima (1995) about the
existence and continuity of the solutions to the system (7) under the weaker monotonicity condition
on F . As far as the system (5) is concerned, properties (P1)-(P4) are shown to be valid here for
the first time.
The following notation is used throughout this paper. The symbols - and - denote, respectively,
the positive semidefinite and positive definite ordering over the set of symmetric matrices; that is,
for positive semidefinite, and X - Y (or Y OE X)
means positive definite. We let M n
++ denote the set of matrices
that respectively. Given U 2 M n and a differentiable function
denote the m-vector whose i-th entry is (@G(x)=@x i
is the partial derivative of G with respect to x i . Finally we let
2 Preliminaries
In this section, we introduce some further notation and describe some background concepts and
results needed for the subsequent developments. The section is divided into two subsections. The
first subsection summarizes a theory of local homeomorphic maps defined on metric spaces; the
discussion is very brief. We refer the reader to Section 2 of Monteiro and Pang (1996), Chapter 5
of Ortega and Rheinboldt (1970), and Chapter 3 of Ambrosetti and Prodi (1993) for a thorough
treatment of this theory. The second subsection introduces the key conditions on the mapping F
in (1) and establishes some basic properties of the set U defined in (4).
2.1 Local homeomorphic maps
If M and N are two metric spaces, we denote the set of continuous functions from M to N by
C(M;N) and the set of homeomorphisms from M onto N by Hom(M;N ). For G 2 C(M;N ),
Eg. Given
such that G(D) ' E, the restricted mapping ~
defined by ~
denoted by Gj (D;E) ; if then we write this ~
G
simply as GjD . We will also refer to Gj (D;E) as "G restricted to the pair (D; E)", and to GjD as
"G restricted to D". The closure of a subset E of a metric space will be denoted by cl E. Any
continuous function from a closed interval of the real line ! into a metric space will be called a
path. We say that partition of the set V if
space M is said to be connected if there exists no partition
both O 1 and O 2 are non-empty and open. A metric space M is said to be path-connected if for any
two points there exists a path M such that It
is well-known that any path-connected metric space is connected; the converse however does not
always hold. A metric space M is said to be simply-connected if it is path-connected and for any
there exists a continuous mapping ff : [0; 1] \Theta [0; 1] !M
such that ff(s;
A subset C of a vector space is said to be star-shaped if there exists c 0 2 C such that the line segment
connecting c 0 to any other point in C is contained entirely in C. Clearly, every star-shaped set in
a normed vector space is simply-connected.
In the rest of this subsection, we will assume that M and N are two metric spaces and that
1 The mapping G 2 C(M;N) is said to be proper with respect to the set E ' N if
is compact for every compact set K ' E. If G is proper with respect to N , we will
simply say that G is proper.
The proofs of the following three results can be found in Section 2 and the Appendix of Monteiro
and Pang (1996).
Proposition 1 Assume that G : M ! N is a local homeomorphism. If M is path-connected and
N is simply-connected then G is proper if and only if G 2 Hom(M;N).
Proposition be given sets satisfying the following conditions: GjM 0
is a local homeomorphism, G(M Assume that G is proper with
respect to some set E such that N 0 ' E ' N . Then H restricted to the pair (M
a proper local homeomorphism. If, in addition, N 0 is connected, then G(M 0
cl N 0 .
Proposition 3 Let M be a path-connected metric space. Assume that is a local
homeomorphism and that G \Gamma1 ([y is compact for any pair of points y
and G(M) is convex.
2.2 Some key concepts
We introduce the key conditions on the map F that will be assumed throughout the paper.
mapping J(X; Y; z) defined on a subset dom (J) of M n \Theta M n \Theta ! m is said to be
Y )-equilevel-monotone on a subset V ' dom (J) if for any (X; Y; z) 2 V and (X
such that J(X; Y; dom (J), we
will simply say that J is (X; Y )-equilevel-monotone.
In the following two definitions, we assume that W , Z and N are three normed spaces and that
OE(w; z) is a function defined on a subset of W \Theta Z with values in N .
Definition 3 The function OE(w; z) is said to be z-bounded on a subset V ' dom (OE) if for every
sequence f(w k ; z k )g ae V such that fw k g and fOE(w k ; z k )g are bounded, the sequence fz k g is also
bounded. When dom (OE), we will simply say that OE is z-bounded.
Definition 4 The function OE(w; z) is said to be z-injective on a subset V ' dom (OE) if the following
implication holds: (w;
dom (OE), we will simply say that OE is z-injective.
In the next result, we collect a few technical facts which will be used later. Parts (a) and (b)
are well-known consequences of the self-dual property of the cone S n
. Their proofs are omitted.
Parts (c), (d), and (e) pertain to properties of the set U .
Lemma 1 The following statements hold:
(a) U - 0 if and only if U ffl V - 0 for every V - 0;
(b) if U - 0,
(c)
(d) if (X; Y
then
(e) the set U is star-shaped, thus simply-connected.
Proof. We give a simple proof of (c). Let X; Y 2 S n
be such that XY +Y X - 0. Then XY must
be a P-matrix, and thus it has positive determinant. Hence X and Y are both nonsingular, and
thus positive definite.
By applying the proof of Theorem 3.1(iii) in Shida, Shindoh and Kojima (1996), we can establish
part (d). The details are omitted.
For part (e), observe that (I ; I) 2 U . Now let (X; Y ) be an arbitrary element in U . We will
show that the line segment connecting (I ; I) to (X; Y ) is contained in U , from which part (e)
follows. Indeed, any point on this segment is of the form (X
It is easy to see
++ is a convex cone, it follows that (X t We have thus shown that
U is star-shaped.
The next lemma gives a consequence of the concepts introduced in the previous definitions.
m be a continuous map and let H
m be the map defined by (3). Assume that the map F is (X; Y )-equilevel-monotone and
z-bounded. If the map H restricted to U \Theta ! m is a local homeomorphism, then H is proper with
respect to S n \Theta F (U \Theta ! m ).
Proof. Let K be a compact subset of S n \Theta F (U \Theta ! m ). We will show that H \Gamma1 (K) is compact,
from which the result follows. The continuity of H implies that H \Gamma1 (K) is a closed set. Hence, it
remains to show that H \Gamma1 (K) is bounded. Indeed, suppose for contradiction that there exists a
sequence is compact and
we may assume without loss of generality that there exists F 1 2 F (U \Theta ! m )
such that
Clearly, we have F
such that the set
contains is an open set and every local homeomorphism maps open sets
onto open sets, it follows from Lemma 4 that H(N1 ), and hence F (N1 ), is an open set. Thus, by
(13), we conclude that for all k sufficiently large, say k
hence that F (X
monotone, we have (X
This inequality together with fact
that ( ~
imply
for every k - k 0 . Using the fact that fH(X k ; Y k ; z k )g ae K and K is bounded, we conclude that
g, is bounded. This fact together with the above inequality
implies that the sequences fX k g and fY k g are bounded. Since lim k!1 k(X
must have lim k!1 kz k is z-bounded, we conclude that lim k!1
1, thereby contradicting (13).
3 The Affine Case
Beginning in this section, we develop our main theory for the complementarity problem (1) and
the associated mapping H defined in (3). This section pertains to the case where F is affine; the
treatment of the general case of a nonlinear map F is given in the next section. Apart from the
fact that an affine map F considerably simplifies the analysis, through this case, we will be able to
obtain a technical result (Lemma 5) that will play an important role in the analysis of a nonlinear
F , which is the subject of the next section. The derivation of this technical lemma is definitely
the main reason for us to consider the case of an affine map separately.
We begin with a lemma that contains some elementary properties of affine maps.
Lemma 3 Assume that F is an affine map and let F
its linear part. Then the following statements hold:
(a) F is (X; Y )-equilevel-monotone if and only if
(b) F is z-injective if and only if
(c) F is z-injective if and only if F is z-bounded.
Proof. The proofs of (a) and (b) are straightforward. We next prove (c). Assume first that F is
z-injective. To show that F is z-bounded assume for contradiction that f(X k ; Y k ; z k )g is a sequence
in S n \Theta S n \Theta ! m such that f(X k are bounded and lim k!1 kz k
By passing to a subsequence if necessary, we may assume that lim k!1 z k =kz k
1. Hence, we obtain
z k
Since \Deltaz 6= 0, this contradicts the fact that F is z-injective. Hence, the "only if " part of (c) follows.
Assume now that F is not z-injective. Then there exists \Deltaz
and \Deltaz 6= 0. The sequence f(X defined by X
has the property that F 0 (X are
bounded and lim_{k→∞} ||z^k|| = ∞, showing that F is not z-bounded. The "if" part of (c) follows.
The next lemma is an important step toward the main result in the affine case.
Lemma 4 Assume that F is an affine map which is (X; Y )-equilevel-
monotone and z-injective. Then the map H restricted to U \Theta ! m is a local homeomorphism.
Proof. Since U \Theta ! m is an open set, it is sufficient to show that the derivative map H 0 (X;
m is a isomorphism for every (X; Y; z) 2 U \Theta ! m . For this purpose,
fix any (X; Y; z) 2 U \Theta ! m . Since H 0 (X; Y; z) is linear and is a map between identical spaces, it is
enough to show that
Indeed, assume that the left-hand side of the above implication holds. By the definition H , we have
By (17) and Lemma 3(a), we have \DeltaX ffl \DeltaY - 0. In view of Lemma 1(d), this relation together
with (18) imply that This conclusion together with (17) and Lemma 3(b) imply
that We have thus shown that the implication (16) holds and the result follows.
We are now ready to present the main result of this section.
Theorem 1 Assume that F is an affine map which is (X; Y )-equilevel-
monotone and z-injective. Then, the following statements hold:
(a) H maps U \Theta ! m homeomorphically onto S n
(b) H(S n
Proof. The proof of the theorem follows from Proposition 2 as follows. Let M j S n
show
that these sets together with the map G j H j M satisfy the assumptions of Proposition 2. Indeed,
first observe that GjM 0
is a local homeomorphism due to Lemma 4. The assumption that
F is (X; Y )-equilevel-monotone and z-injective together with Lemma 3(c) and Lemma 4 imply
that the assumptions of Lemma 2 are satisfied. Hence, the conclusion of this lemma implies that
is proper with respect to E. Also the set H(M due to the fact that
To show that H(MnM 0 assume for contradiction that there
exists (X; Y; z) 2 MnM 0 such that H(X; Y; z) 2 N 0 . Then, by definition of H and the sets M , M 0
and N 0 , it follows that (XY +Y X)=2 2 S n
++ and (X; Y
. But this contradicts Lemma
1(c), and hence we must have Lemma 1, the set U , and hence
U star-shaped. Since the image of a star-shaped set under an affine map is star-shaped,
it follows that F (U \Theta ! m ), and hence N 0 , is star-shaped. Since every star-shaped set is simply-
connected, we conclude that N 0 is simply-connected. Hence, by Proposition 2 and the fact that
restricted to the pair (M 0
is a proper local homeomorphism and that H(cl U \Theta ! m
The theorem now follows from the two last conclusions, the fact that M 0 is path-connected, N 0 is
simply-connected, and Proposition 1.
We use the above result to prove the following very important technical lemma.
be elements of U such that
Proof. Let \DeltaX assume for contradiction that either \DeltaX 6= 0
or \DeltaY 6= 0. Without loss of generality, we may assume that \DeltaX 6= 0. We claim that there exists a
linear \DeltaY and is monotone, that is W ffl M(W
every W 2 S n . Indeed, if then the zero map satisfies the conditions of the claim. Assume
then that \DeltaY 6= 0. We consider two cases depending on whether \DeltaX ffl \DeltaY ? 0 or \DeltaX ffl
Consider first the case in which \DeltaX ffl \DeltaY ? 0. In this case, the subspace generated by \DeltaX and
the subspace L orthogonal to \DeltaY span S n . Clearly, there exists a unique linear
such that M (\DeltaX can be uniquely written
as
for every W 2 S n . Hence, M is a monotone map. Consider now the case in which \DeltaX ffl
Let L 1 denote the subspace orthogonal to both \DeltaX and \DeltaY . Clearly, there exists a unique map
\DeltaX; and M(V
Any W 2 S n can be uniquely written as
Then, we obtain
\DeltaX
for every W 2 S n . Hence, M is a monotone map. We have thus shown that the claim holds.
Consider now the defined by F (X; Y
. Using the fact that M is a monotone map and that M (\DeltaX
see that F satisfies the assumptions of Theorem 1 (with
By Theorem 1(a), it follows that the associated map H (with restricted to U is one-to-one.
Moreover, (19) and the relation F (X imply that H(X
last two conclusions together with the fact that (X imply that (X
4 The Nonlinear Case
In this section we establish results for nonlinear maps F which are similar to Theorem 2. We
also consider the case of a nonlinear map that arises from the mixed nonlinear complementarity
problems in symmetric matrices.
Theorem 2 Assume that F : S n
m is a continuous map which is (X; Y )-
equilevel-monotone, z-bounded (on S n
z-injective on S n
++ \Theta S n
Then, the
following statements hold for the mapping H given by (3):
(a) H is proper with respect to S n \Theta F (U \Theta ! m );
homeomorphically onto S n
(c) H(S n
Proof. The proof is close to the one given for Theorem 1. It consists of showing that the sets
together with the map G j H j M satisfy the assumptions of Proposition 2. But
instead of using Lemma 4 to show that GjM 0
is a local homeomorphism, we use Lemma
5 to prove that H j M 0
maps M 0 homeomorphically onto H(M 0 ). Since
is a continuous map
from an open subset of the vector space S n \Theta S n \Theta ! m into the same space, by the domain
invariance theorem it suffices to show that H j M 0
is one-to-one. For this purpose, assume that
z) for some ( -
Then, by the
definition of H , we have F ( -
z) and -
Y ~
X. Since F is
(X; Y )-equilevel-monotone, we conclude that ( -
Hence, by Lemma 5, we
have
Y ). This implies that F ( -
and by the z-injectiveness
of F on S n
++ \Theta S n
++ \Theta S n
We have thus
proved that H j M 0 maps M 0 homeomorphically onto H(M 0 ). To prove (b), it suffices to show that
. As in Theorem 1, we can verify that H(M 0
Using the assumption that F is (X; Y )-equilevel-monotone and z-bounded and the fact that H j M 0
is a homeomorphism onto H(M 0 ), it follows from Lemma 2 that H is proper with respect to
holds. Using the fact that U 0 is star-shaped and thus path-
connected, we easily see that N 0 is path-connected, and hence connected. Hence, by Proposition 2
and the fact that H(M 0
and H(cl U 0 \Theta ! m
Remark. The z-injectiveness of F is assumed only on S^n_{++} × S^n_{++} × ℝ^m (and not on the larger set).
In the application to convex semidefinite programming to be discussed in the next section, we show
that under appropriate convexity assumptions, the mapping F defined by (11) satisfies the former
z-injectiveness property; thus Theorem 2 is applicable. Nevertheless, we do not know
if this special map F is z-injective on the larger set.
Theorem 2 establishes the claimed properties (P1)-(P4) of the map H stated in the Introduction.
Indeed, (P1) follows from conclusion (c); (P2) and (P3) follow from (b); and (P4) follows from
(a). In what follows, we give two important consequences of the above theorem, assuming that
first one, Corollary 1, has to do with the central path for the semidefinite
complementarity problem (1); the second one, Corollary 2, is a solution existence result for the
same problem.
Corollary 1 Assume that F : S n
m is a continuous map which is (X; Y )-
equilevel-monotone, z-bounded (on S n
z-injective on S n
++ \Theta S n
further that
that
++ for every t 2 (0; 1]. Then there exist (unique) paths
++ and z : (0;
Moreover, every accumulation point of (X(t); Y (t); z(t)) as t tends to 0 is a solution of the complementarity
problem (1).
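The displayed equations defining these paths are lost in this extraction. A plausible reconstruction (our sketch, not the paper's verbatim display) assumes the symmetrized map H(X, Y, z) = ((XY + YX)/2, F(X, Y, z)) from (3) together with weight data A ∈ S^n_{++} and b ∈ F(U × ℝ^m), both of which are assumptions on our part:

    % Hedged reconstruction; A and b are assumed weight data, not taken from the paper.
    \frac{X(t)\,Y(t) + Y(t)\,X(t)}{2} \;=\; t\,A, \qquad
    F\bigl(X(t),\,Y(t),\,z(t)\bigr) \;=\; b, \qquad
    X(t),\,Y(t) \in S^n_{++}, \quad t \in (0,1].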
Corollary 2 Assume that F : S n
m is a continuous map which is (X; Y )-
equilevel-monotone, z-bounded (on S n
z-injective on S n
++ \Theta S n
. If there
exists m such that F (X
, the system2 (XY
has a solution, which is unique when A 2 S n
++ .
Next, we introduce a strengthening of the equilevel-monotonicity concept and show that under
this strengthened monotonicity property, the image F(U × ℝ^m) is a convex set in S^n × ℝ^m.
Definition 5 The
)-everywhere-monotone if there
exist continuous functions OE
that c(R;
(R
where
Obviously, if F is (X; Y )-everywhere-monotone then F is (X; Y )-equilevel-monotone.
Theorem 3 Suppose that F : S n
m is a continuous map which is (X; Y )-
everywhere monotone, z-bounded (on S n
z-injective on S n
++ \Theta S n
in addition to statements (a), (b) and (c) of Theorem 2, it holds that the set F (U \Theta ! m ) is convex.
Proof. It suffices to establish the convexity of F(U × ℝ^m). The proof is based on Proposition 3.
Consider the set M ≡ U × ℝ^m and the map G ≡ H restricted to U × ℝ^m. With M and G defined this way, the
convexity of F(U × ℝ^m) follows from Proposition 3, provided that M and G satisfy the hypotheses of
the corollary. The rest of the proof is devoted to the verification of these hypotheses. First observe that
G is a local homeomorphism due to Lemma 4. It remains to show that for any two triples ( ~
, the set G \Gamma1 (E) is compact, where E is the line segment connecting the
points H( ~
z). It is enough to show that any sequence f(X
has an accumulation point in G \Gamma1 (E). Indeed, let f(X
every k - 0 we have
Y ~
for some sequence f- k g ae [0; 1], where ~
z) and F k j F (X
By the (X; Y )-everywhere monotonicity of F and (21), it follows that
Hence,
Similarly, we can show that
Multiplying (22) by - k and (23) by and adding, we obtain
By (20) and (21) and the fact that f- k g is bounded, we see that the sequences fX k ffl Y k g and
are bounded. This observation, the fact that f- k g is bounded and the function c(\Delta; \Delta) is
continuous imply that the right hand side of (24), and hence its left hand side, is bounded. A
simple argument now shows that fX k g and fY k g are also bounded. This conclusion, the fact that
is bounded and the map F is z-bounded imply that fz k g is bounded. Hence, the sequence
is bounded and must have an accumulation point ( -
z). It remains to show that
closed since
H is continuous and both the domain of H and the set E are closed. The closeness of H
and the fact that f(X k imply that ( -
Using (20) and the
fact that ( ~
Y ) are in U , we easily see that ( -
++ . Moreover, since
we see that ( -
. By Lemma 1(c), we conclude
that ( -
We have thus shown that ( -
Consider now the special case in which the mapping F : S n
has the
following structure:
F (X; Y; z) jB @
for some continuous mapping /
. The following result gives
conditions on / for the mapping F to satisfy all the assumptions of Theorem 3. This result will be
used in the next section for the map F given by (11).
Proposition 4 The following statements hold:
(a) F is (X; Y )-everywhere-monotone on S n
is a monotone mapping, that
is
\Theta
for every (X; z); (X
(b) F is z-injective on S n
++ \Theta S n
is z-injective on S n
(c) F is z-bounded on S n
is z-bounded on S n
Proof. We first prove (a). To prove that the mapping H is (X; Y )-everywhere-monotone, define
OE(X; Y; z) j (X; \Gammaz); and c j 0:
Using these
two equalities and the fact that / is a monotone mapping, we obtain
\Theta
which implies
(R
The proofs of (b) and (c) are straightforward and therefore we omit the details.
5 Convex Semidefinite Programming
In this section, we discuss the application of Theorem 3 to the mapping F given by (11). We wish
to specify some conditions on the functions ', G, and h in order for the resulting function F to
satisfy the assumptions of this theorem, and thus for the conclusions of the theorem to hold. The
following correspondence of variables should be used to cast the mapping F defined by (11) into
the form of the general mapping F considered in Section 4: (U; V )
Our first goal is to give a sufficient condition for the mapping F to be (X; Y )-monotone on
As in Shapiro (1997), we say that the mapping G is positive semidefinite
convex (psd-convex) if
The following technical result is useful for the analysis of this section.
Lemma 6 Assume that G is an affine function. Then
the following statements hold:
(a) for every W 2 S n
, the function x
(b) the function x
(c) if the set X j fx nonempty and bounded then, for every
A 2 S n and fi 2 !, the set fx
Proof. (a) Using (25) and Lemma 1(b), we obtain
(b) Using (25), the fact that - max (\Delta) is a homogeneous convex function over the set of symmetric
matrices and the implication U
(c) Consider the function defined by v(x) j maxf- max (G(x)); kh(x)kg for x
Using (b), we see that v is a convex function. Moreover, the level set v \Gamma1 (\Gamma1; 0] is nonempty and
bounded since it coincides with the set X . By a well-known property of convex functions (Corollary
8.7.1 of Rockafellar 1970), it follows that every level set of v is bounded. Since fx
In the next three lemmas we study properties of the mapping /
defined by
\GammaG(x)
r x L(x; U;
where the map L is defined in (10).
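The displays for L in (10) and for ψ in (26) are garbled above. Under the standard Lagrangian convention for problem (8), which is an assumption on our part since only the fragments −G(x) and ∇_x L(x, U, ·) are legible, they would read as follows (φ denotes the objective function, written as ' in the garbled text; the third component of ψ and its sign are our guess):

    % Assumed reconstruction of the Lagrangian of (8) and of the map psi in (26).
    L(x,U,\eta) \;=\; \varphi(x) + U \bullet G(x) + \eta^{\mathsf T} h(x),
    \qquad
    \psi(U,x,\eta) \;=\;
    \begin{pmatrix} -G(x) \\ \nabla_x L(x,U,\eta) \\ -h(x) \end{pmatrix}.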
Lemma 7 Suppose that continuously differentiable and convex, G
continuously differentiable and psd-convex and is an affine function. Then, the map
defined by (26) is monotone in the sense of Proposition 4(a).
Proof. We have to show that for any (U; x; j); (U
m+p , there holds
0: (27)
The assumption of the proposition implies that the functions L(\Delta; U; are convex
Hence, we have
Adding these two inequalities, using the definition of L and simplifying, we obtain (27).
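The two inequalities referred to above are omitted by the extraction; presumably they are the first-order convexity inequalities for L(·, U, η) and L(·, U', η') (our reconstruction, under the Lagrangian convention sketched before Lemma 7):

    L(x',U,\eta)  \;\ge\; L(x,U,\eta)   + \nabla_x L(x,U,\eta)^{\mathsf T}(x'-x),
    \qquad
    L(x,U',\eta') \;\ge\; L(x',U',\eta') + \nabla_x L(x',U',\eta')^{\mathsf T}(x-x'),

    % Adding the two inequalities and expanding L as above yields
    \bigl(\nabla_x L(x,U,\eta)-\nabla_x L(x',U',\eta')\bigr)^{\mathsf T}(x-x')
    +\bigl(G(x')-G(x)\bigr)\bullet(U-U')
    +\bigl(h(x')-h(x)\bigr)^{\mathsf T}(\eta-\eta') \;\ge\; 0,

which, under the assumed sign conventions, is exactly the monotonicity inequality (27) for ψ.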
Lemma 8 Suppose that continuous differentiable and convex, G
continuously differentiable and psd-convex and is an affine function such that the
gradient matrix rh(x) has full column rank. Then, / defined by (26) is (x; j)-injective
on S n
any one of the conditions below is satisfied:
(a) ' is strictly convex;
(b) for every U 2 S n
++ , the map x strictly convex;
(c) the feasible set X j fx bounded and each G ij (x) is an analytic
function.
Proof. Let U 2 S n
++ and (x; j); m+p be such that /(U; x;
We have to show that (x;
since rh(x) has full column rank, we have
Hence, it suffices to show that Note that by (28) and the definition of L, we have
The assumption of the lemma implies that the functions L(\Delta; U; are convex on ! m .
Hence, the two bracketed expressions in the right hand side of (29) are nonnegative and, since their
sum is zero, both of them must be equal to zero. In particular, we have
If (a) or (b) holds then the function L(\Delta; U; j) is strictly convex and, by (30), we must have
Consider case (c). Since h is affine, we have h(x
together with (30) and the definition of L imply
0:
Since '(\Delta) and U ffl G(\Delta) are convex functions on that the two bracketed expressions
in the above relation are nonnegative. Hence, we conclude that U ffl G(x
It is now easy to see that this relation together with the convexity of U ffl G(x)
and the fact that imply that U ffl using
the fact that G is psd-convex and
Thus, we have
we must have is an analytic function, this implies
that every y in the set L j f-x !g. Clearly, since
and h is affine, we also have that Hence, L is contained in the set
fy which is bounded in view of Lemma 6. This implies that
Lemma 9 Suppose that the function '
is continuously differentiable and psd-convex and is an affine function such that
the (constant) gradient matrix rh(x) has full column rank. Suppose also that the set X defined
in Lemma 8(c) is nonempty and bounded. Then the map / defined by (26) is (x; j)-bounded on
Proof. Assume that f(U k is a sequence in S n
m+p such that fU k g and f/(U k
are bounded. By the definition of /, it follows that fG(x k )g, fr x are
bounded. Hence, there exists ff ? 0 such that
or equivalently, fx k g ae fx ffg. By Lemma 6(c), we conclude that
is bounded. This fact together with the fact that fU k g and fr x are bounded
imply that frh(x k )j k g is bounded. Since rh(x) is constant and has full column rank, we conclude
that is bounded. We have thus shown that f(x k ; j k )g is bounded, and hence that / is (x; j)-
bounded on S n
Combining the above lemmas with Theorem 3, we obtain the following theorem which is the
main result of this section.
Theorem 4 Suppose that the function ' continuously differentiable and convex,
continuously differentiable and psd-convex, is an affine function such
that the (constant) gradient matrix rh(x) has full column rank and the feasible set X defined in
Lemma 8(c) is bounded. If any one of the following conditions holds:
(a) ' is strictly convex;
(b) for every U 2 S n
++ , the map x strictly convex;
(c) each G ij is an analytic function,
then the following statements hold for the maps F and H given by (11) and (3), respectively:
(i) H is proper with respect to S n \Theta F (U \Theta ! m+p );
homeomorphically onto S n
++ \Theta F (U \Theta ! m+p );
(iii) the set F (U \Theta ! m+p ) is convex;
(iv) H(S n
Proof. In view of Lemmas 7, 8 and 9, the map / given by (26) is monotone on S n
(x; j)-injective on S n
j)-bounded on S n
By Proposition 4, it follows that
the map F given by (11) is (U; V )-everywhere-monotone on S n
j)-injective on
++ \Theta S n
j)-bounded on S n
. The result now follows immediately
from Theorem 3.
By Theorem 4, if 0 2 F (U \Theta ! m+p ) then the system
has a solution for every A 2 S n
this solution is unique when A 2 S n
++ . Clearly, a solution
of this system when yields a feasible solution of (8) satisfying the stationary condition (9).
We henceforth give conditions on problem (8) for 0 to be an element of F (U \Theta ! m+p ). It turns out
that the existence of a Slater point for (8) is one of the conditions required by the next result.
Suppose that the function '
is continuously differentiable and psd-convex, is an affine function. Suppose also
that the feasible set X defined in Lemma 8(c) is bounded and there exists a vector ~ x 2 X such that
is given by (11).
Proof. consider the problem
subject to G(x) - \Gamma"I
By Lemma 6(c) and the assumption that X is bounded, it follows that the set of feasible solutions
of (32) is compact. Since its objective function is defined and continuous over the whole feasible
region, we conclude that (32) has an optimal solution x . Observe that (32) satisfies the Slater
constraint qualification since G(~x) OE \Gamma"I , is psd-convex and h is affine. Hence, there
exist multipliers U 2 S n
Letting it follows from (33) and (34) that
or equivalently, F (U It remains to show that (U Clearly, we have
++ \Theta S n
++ due to the fact that G(x ) OE 0. Moreover, using (34) we obtain
Clearly, this implies that (U
6 Maximal Monotonicity
In this section, we show that a mapping F satisfying the assumptions of Theorem 2 defines a family
of set-valued maps from S n
into itself which are maximal monotone in a restricted sense. For this
purpose, we introduce some definitions and notation.
If M and N are two metric spaces, we shall denote a set-valued map A from M into subsets of
N by A
the graph of A is the set
is said to be a monotone set if for every (V; W
there holds
!W , with V and W being subsets
of M n , is called a monotone map if Gr(A) is a monotone set. The monotone map A
or its graph Gr(A), is said to be maximal monotone with respect to a subset T ' V \Theta W if there
exists no monotone set \Gamma ' T that properly contains T " Gr(A). If simply say
that the map A
!W is maximal monotone.
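The displayed condition in the definition of a monotone set is lost in the extraction; the standard condition, which we take to be the intended one, is:

    (V - V') \bullet (W - W') \;\ge\; 0
    \qquad \text{for all } (V,W),\,(V',W') \in \Gamma .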
The following result establishes the maximal monotonicity with respect to the set U of certain
set-valued maps associated with a mapping F : S n
satisfying the assumptions
of Theorem 2.
Theorem 5 Suppose that F : S n
m is a continuous map which is (X; Y )-
equilevel-monotone, z-bounded (on S n
z-injective on S n
++ \Theta S n
defined by
is monotone, and maximal monotone with respect to U .
Proof. Using the fact that F is (X; Y )-equilevel-monotone, we easily see that AB is monotone.
To show that AB is maximal monotone with respect to U , let \Gamma ' U be a monotone set containing
We have to show that
Theorem 2(b) implies the existence of a triple
m such that
are in \Gamma, we
conclude that
In view of Lemma 5, the last relation together with the first equality of (35) and the fact that both
are in U imply that (X
have thus shown that and the result follows.
Observe that the set F given by (12) is equal to the set Gr(AB ) with hence F is a
maximal monotone set with respect to set U . We end this section by giving the following close
version of Theorem 5 for maps F defined over the whole space S n \Theta S n \Theta ! m .
Theorem 6 Suppose that F m is a continuous map which is (X; Y )-
monotone, z-bounded and z-injective. Then, for every B 2 F (S n \Theta S n \Theta ! m ), the set-valued map
AB defined in Theorem 5 is maximal monotone.
Proof. This follows immediately from Theorem 8 in Monteiro and Pang (1996) by using the fact
that S^n is isomorphic to ℝ^{n(n+1)/2}.
7 An Alternative Map
Given a continuous map
consider in this section the map
defined by
and present conditions on the map F which guarantee the existence of a (unique) solution of the
system
-I
for every B 2 F (M n
++ \Theta S n
first establish two useful lemmas.
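The displays defining this map and the system above are garbled. Judging from the later text, where the product XY is required to be a positive multiple of the identity and H is proper with respect to (ℝ_{++}·I) × F(M^n_{++} × S^n_{++} × ℝ^m), a plausible reconstruction (ours, not verbatim) is H(X, Y, z) = (XY, F(X, Y, z)), so that the system asks, for a given scalar λ ≥ 0 and a given B:

    XY \;=\; \lambda I, \qquad F(X,Y,z) \;=\; B .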
Lemma 11 Suppose that
m is a continuous map which is (X; Y )-
equilevel-monotone and z-injective on M n
++ \Theta S n
Then, the map H defined by (36) maps
++ \Theta S n
homeomorphically onto H(M n
++ \Theta S n
Proof. Since M n
++ \Theta S n
is an open subset of M n \Theta S n \Theta ! m , by the domain invariance
theorem it suffices to show that that H j M n
assume that (X; Y; z)
are two triples in M n
++ \Theta S n
m such that H(X; Y;
have
F (X; Y;
Y \GammaY . Using (38) and the fact that F is (X; Y )-equilevel-monotone, we
conclude that \DeltaX ffl \DeltaY - 0. Moreover, it is easy to see that (37) implies that (\DeltaX ) -
or equivalently \DeltaX = \GammaX \DeltaY -
Y \Gamma1 . Hence, we obtain
where the penultimate equality follows from the fact that tr
the matrix \DeltaY (X This observation, the fact that
imply that \DeltaY (X
Hence, we have We thus shown that
Y . This conclusion
together with (38) and the assumption that F is z-injective on M n
++ \Theta S n
imply that
z.
Lemma 12 Suppose that
m is a continuous map which is (X; Y )-
equilevel-monotone, z-bounded on S n
and z-injective on M n
++ \Theta S n
. Then the
defined by (36) satisfies the following two properties:
(a) H is proper with respect to (!++ \Delta I) \Theta F (M n
++ \Theta S n
(b) H restricted to S n
proper with respect to (!+ \Delta I) \Theta F (M n
++ \Theta S n
Proof. The proof of part (a) is very close to the one given for Lemma 2. Let K be a compact
subset of (!++ \Delta I) \Theta F (M n
++ \Theta S n
show that H \Gamma1 (K) is compact, from which
the result follows. The continuity of H implies that H \Gamma1 (K) is a closed set. Hence, it remains
to show that H \Gamma1 (K) is bounded. Indeed, suppose for contradiction that there exists a sequence
each Y k is in S n
and the
product is a positive multiple of the identity matrix, it follows that both sequences fX k g
and fY k g must belong to S n
++ . Since K is compact and fH(X we may assume
without loss of generality that there exists F 1 2 F (M n
++ \Theta S n
Clearly, we have F
++ \Theta S n
such that the set
++ \Theta S n
contains is an open set of M n
++ \Theta S n
m and every homeomorphism
maps open sets onto open sets, it follows from Lemma 11 that H(N1 ), and hence F (N1 ), is
an open set. Thus, by (39), we conclude that for all k sufficiently large, say k
Since F is (X; Y )-equilevel-monotone, we have (X
This
inequality together with fact that ( ~
imply
for every k - k 0 . Using the fact that fH(X k ; Y k ; z k )g ae K and K is bounded, we conclude that
g, and hence fX k ffl Y k g, is bounded. This fact together with the above inequality implies
that the sequences fX k g and fY k g are bounded. Since lim k!1 k(X must have
which implies that lim k!1 due to the fact that F is
z-bounded. But this last conclusion contradicts (39). This establishes part (a).
The proof of part (b) requires only a slight modification of a couple of steps in the above proof. Indeed,
let K be a compact subset of (!+ \Delta I) \Theta F (M n
++ \Theta S n
let the sequence f(X
be such that (X
We follow the above
argument. Although we cannot deduce that (X
++ \Theta S n
++ , but utilizing the fact that
, we can still establish the boundedness of the sequence f(X )g. The
details are not repeated. Thus (b) holds.
Based on the above two lemmas, we can establish two additional properties of the mapping H .
Theorem 7 Suppose that
m is a continuous map which is (X; Y )-
equilevel-monotone, z-bounded on S n
and z-injective on M n
++ \Theta S n
addition to the conclusions of Lemmas 11 and 12, we have:
(a) H(S n
++ \Theta S n
++ \Theta S n
(b) H(S n
++ \Theta S n
Proof. Note that if we can show
++ \Theta S n
++ \Theta S n
then part (a) follows. In turn the proof of the displayed inclusion consists of showing that the sets
++ \Theta S n
++ \Theta S n
together with the map G j H satisfy the assumptions of Proposition 2.
Clearly, is a local homeomorphism by Lemma 11. Also H(I ; I ;
claim that H(MnM 0 assume that (X; Y; z) 2 M is such that H(X; Y; z) 2 N 0 .
, we conclude that both X and Y are in S n
++ .
Hence, (X; Y; z) 2 M 0 and the claim follows. By Lemma 12, we know that proper with
respect to E. Using Lemma 1(e), it is easy to see that the set N 0 is path-connected, and thus
connected. Hence, it follows from Proposition 2 that H(M 0 ) ' N 0 . This is precisely the inclusion
(40).
To prove (b), it suffices to show that (0; B) 2 H(S n
++ \Theta S n
For each scalar - ? 0, there exists (X
++ \Theta S n
must be symmetric
positive definite. By part (b) of Lemma 12, we conclude that lim sup -!0 k(X
Consequently, a simple limit argument completes the proof.
As the final result of this paper, we present a corollary of the above theorem which summarizes
the essential properties of the mapping ~
H defined in (6). This corollary is the analog of Theorem
2 for the alternative interior-point map ~
H .
Corollary 3 Suppose that ~
m is a continuous map which is (X; Y )-
equilevel-monotone, z-bounded on S n
and z-injective on S n
++ \Theta S n
Then, the
defined by
~
~
satisfies the following statements:
(a) ~
H is proper with respect to (!+ \Delta I) \Theta ~
F (S n
++ \Theta S n
(b) ~
H maps S n
++ \Theta S n
homeomorphically onto ~
++ \Theta S n
(c) ~
F (S n
++ \Theta S n
Proof. Consider the map
defined by F (X; Y;
. Using the assumptions about the map ~
F , it is
easy to see that F is (X; Y )-equilevel-monotone, z-bounded on S n
and z-injective on
++ \Theta S n
Hence, it follows that the associated
defined by (36) satisfies the conclusions of Lemma 11, Lemma 12 and Theorem 7, which easily
imply the desired conclusions (a), (b), and (c).
Corollary 4 Suppose that ~
m is a continuous map which is (X; Y )-
equilevel-monotone, z-bounded on S n
and z-injective on S n
++ \Theta S n
~
F (S n
++ \Theta S n
Proof. Let
F (S n
++ \Theta S n
Corollary 3(c), we know that
H is the map defined in Corollary 3. Hence, there exists (X; Y; z) 2 S n
such that I and ~
the relation obviously implies that
we conclude that B 2 ~
Acknowledgement
During this research, Dr. Monteiro was supported by the National Science Foundation under grants
INT-9600343 and CCR-970048 and the Office of Naval Research under grant N00014-94-1-0234; Dr.
Pang was supported by the National Science Foundation under grants CCR-9213739 and by the
Office of Naval Research under grant N00014-93-1-0228.
--R
Interior point methods in semidefinite programming with application to combinatorial optimization.
Complementarity and nondegeneracy in semidefinite programming.
A Primer of Nonlinear Analysis
Interior point trajectories in semidefinite programming
A predictor-corrector interior-point methods for the semidefinite linear complementarity problem using the Alizadeh-Haeberly-Overton search direction
Properties of an interior-point mapping for mixed complementarity problems
Interior Point Methods in Convex Programming: Theory and Application
Iterative Solution of Nonlinear Equations in Several Variables
A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming
Strong duality for semidefinite programming
Convex Analysis
First and second order analysis of nonlinear semidefinite programs.
Monotone semidefinite complementarity problems
Centers of monotone generalized complementarity problems
Existence of search directions in interior-point algorithms for the SDP and the monotone SDLCP
programming.
On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming
--TR
--CTR
Jong-Shi Pang , Defeng Sun , Jie Sun, Semismooth homeomorphisms and strong stability of semidefinite and Lorentz complementarity problems, Mathematics of Operations Research, v.28 n.1, p.39-63, February | interior point methods;maximal monotonicity;problems;mixed nonlinear complementarity problems;generalized complementarity;nonlinear semidefinite programming;weighted central path;monotone mappings;continuous trajectories |
287884 | Engineering Software Design Processes to Guide Process Execution. | AbstracttUsing systematic development processes is an important characteristic of any mature engineering discipline. In current software practice, Software Design Methodologies (SDMs) are intended to be used to help design software more systematically. This paper shows, however, that one well-known example of such an SDM, Booch Object-Oriented Design (BOOD), as described in the literature is too imprecise and incomplete to be considered as a fully systematic process for specific projects. To provide more effective and appropriate guidance and control in software design processes, we applied the process programming concept to the design process. Given two different sets of plausible design process requirements, we elaborated two more detailed and precise design processes that are responsive to these requirements. We have also implemented, experimented with, and evaluated a prototype (called Debus-Booch) that supports the execution of these detailed processes. | Introduction
If software engineering is to make solid progress
towards becoming a mature discipline, then it must
move in the direction of establishing standardized, disciplined
methods and processes that can be used systematically
by practitioners in carrying out their routine
software development tasks. We note that such
standardized methods and processes should not be totally
inflexible, but indeed must be tailorable and flexible
to enable different practitioners to respond to what
is known to be a very wide range of software development
situations in correspondingly different ways.
(This research is supported by the Advanced Research Projects Agency, through ARPA Order #6100, Program Code 7E20, which was funded through grant #CCR-8705162 from the National Science Foundation. This work is also sponsored by the Advanced Research Projects Agency under Grant Number MDA972-91-J-1012.)
Regardless of this, however, the basis of a mature discipline
of software engineering seems to us to entail
being able to systematically execute a clearly defined
process in carrying out these tasks. In this paper we
refer to a process as being "systematic" if it provides
precise and specific guidance for software practitioners
to rationally carry out the routine parts of their work.
As design is perhaps the most crucial task in software
development, it seems particularly crucial that
software design processes be clearly defined in such a
way as to be more systematic. Humphrey [Hum93]
says that "one of the great misconceptions about creative
work is that it is all creative. Even the most
advanced research and development involves a lot of
routine. The role of a process is to make the routine
aspects of a job truly routine." We agree with
this, and believe that design as a creative activity still
contains a lot of routine which can be systematized.
For example, making each design decision is probably
creative (e.g., deciding if an entity should be a class
when using an object oriented design method). How-
ever, the order of making each of these related design
decisions can be relatively more systematic (e.g.,
identify each class first and then define its semantics
and relations). We also anticipate that with progress
in design communities, design methodologies will provide
more routine which can be systematized. This
will help adapt SDMs into practice more easily and
thus improve productivity and software quality.
This paper describes our work that is aimed at this
goal-namely to make SDM processes more systematic
and thus more effective in guiding designers. This
work begins with the assumption that the large diversity
of Software Design Methodologies (SDM's) provides
at least a starting point in efforts to provide
the software engineering community with such well-defined
and systematic design processes. This paper
concentrates on the Booch Object Oriented Design
Methodology (BOOD) [Boo91] in order to provide
specificity and focus. The paper shows, however,
that BOOD, as described and defined in the litera-
ture, is far too vague to provide specific guidance to
designers, and is too imprecise and incomplete to be
considered a very systematic process for the needs of
specific projects. On the other hand, we did find that
BOOD could be considered to be a methodological
framework for a family of such processes.
Our work builds upon the basic ideas of process programming
[Ost87], which suggest that software processes
should be thought of as software themselves,
and that software processes should be designed, coded,
and executed. That being the case, we found that
BOOD, as described in the literature, is far closer to
the architecture, or high-level design, of a design process
than to the code of such a process. As such,
BOOD is seen to be amenable to a variety of detailed
designs and encodings, each representing an elaboration
of the BOOD architecture, and each sufficiently
detailed and specific that it can be systematic in a way
that is consistent with superior engineering practice in
older, more established engineering disciplines.
In the remainder of this paper we indicate how and
why we believe that BOOD should be considered a
software design process architecture. We then suggest
two significantly different detailed designs that can be
elaborated from BOOD, each of which can be viewed
as a more detailed elaboration upon the basic BOOD
architecture. We show that these elaborations can be
defined very precisely through the use of such accepted
software design representations as OMT [RBP
and through the use of process coding languages such
as APPL/A. Indeed, this paper shows that the use
of such formalisms is exactly what is needed in order
to render these elaborations sufficiently complete and
precise that they can be considered to be systematic.
Thus, the paper indicates a path that needs to be
traveled in order to take the work of software design
methodologists and render it an adequate basis for a
software engineering discipline.
First (in Section 2), we define the process architecture
provided by BOOD, and then describe two
processes elaborated from the architecture. Second
(in Section 3), we describe a prototype that supports
designers in carrying out the execution of these pro-
cesses, illustrating how these differently elaborated
processes support different execution requirements.
Third (in Section 4), we describe our experience of
using the prototype and summarize some of the main
issues that have arisen in our efforts to take the design
process architectures that are described in the
literature to the level of encoded, systematic design
processes.
2 The BOOD Architecture and Two
Elaborations
2.1 Overview of BOOD
We decided to experiment with BOOD because
BOOD is widely used, and provides a few application
examples that are very useful in helping us to identify
the key issues in elaborating BOOD to the level of ex-
ecutable, systematic processes. A detailed description
of BOOD can be found in [Boo91]. In this section, we
present only a brief description of the architecture of
the BOOD process. We believe that it can be summarized
as consisting of the following steps:
1. Identify Classes/Objects: Designers must
first analyze the application requirements specification
to identify the most basic classes and
objects, which could be entities in the problem
domain or mechanisms needed to support the re-
quirement. This step produces a set of candidate
classes and objects.
2. Determine Semantics of Classes/Objects:
Designers must next determine which of the candidate
classes should actually be defined in the
design specification. If a class is to be defined,
designers will determine its semantics, specifying
its fields and operations.
3. Define Relations among Classes: This step
is an extension of step 2. Designers must now
define the relationships among classes, which include
use, inheritance, instantiation and meta re-
lationships. Steps 2 and 3 produce a set of class
and object diagrams and templates, which might
be grouped into class categories.
4. Implement Classes: Designers must finally select
and then use certain mechanisms or programming
constructs to implement the classes/objects.
This step produces a set of modules, which might
be grouped into subsystems.
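The four steps above can be read as a small pipeline of artifact-producing activities. The following data-structure sketch (hypothetical Python, purely illustrative; the names are ours, not Booch's or Debus-Booch's) records each step with the artifacts it consumes and produces:

    # Hypothetical summary of the BOOD process architecture; illustrative only.
    BOOD_STEPS = [
        {"step": "Identify classes and objects",
         "consumes": ["requirements specification"],
         "produces": ["candidate classes and objects"]},
        {"step": "Determine semantics of classes/objects",
         "consumes": ["candidate classes and objects"],
         "produces": ["class/object diagrams and templates"]},
        {"step": "Define relations among classes",
         "consumes": ["class/object diagrams and templates"],
         "produces": ["class categories", "updated diagrams and templates"]},
        {"step": "Implement classes and objects",
         "consumes": ["class/object diagrams and templates"],
         "produces": ["modules", "subsystems"]},
    ]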
BOOD provides more hints and guidelines on how
to carry out these steps. However, BOOD provides
no further explicit elaboration on the details of these
steps. Thus designers are left to fill in important details
of how these complex, major activities are to be
done. As a result, there is a considerable range of variation
and success in carrying out BOOD. Further, the
process carried out by those who are relatively more
successful is not documented, defined or described in
a way that helps them to repeat it effectively, or for
them to pass on so that others can reuse it. We believe
that this is the sense in which BOOD, as described in
the literature, is a process architecture. It provides
the broad features and outlines of how to produce a
design. It supplies elements that can be thought of
and used as building blocks for specific approaches to
design creation. On the other hand, it provides no specific
guidance, details or procedures. These are to be
filled in by others who, we claim, then become design
process designers (e.g., the authors of [HKK93]) and
implementors when the method is applied to specific
projects or organizations.
2.2 Process Definition Formalism
Earlier experiences [KH88, CKO92] have shown
that the State-Charts formalism [HLN + 90] is a powerful
vehicle for modeling software processes. Thus,
we use a variant of State-Charts [HLN + 90], the dynamic
modeling notation of the Object Modeling
Technique (OMT) [RBP+91], to model the processes
that we will elaborate from BOOD. As shown later,
we believe these dynamic models of BOOD processes
are sufficient to demonstrate our point.
Generally, our approach is to use the notion of a
state (denoted as a labeled rounded box) to represent a
step of a BOOD process, the notion of an activity (the
text inside a rounded box and after "Do:") to represent
a step which does not contain any other steps. According
to OMT, the order for performing these activities
can be sequential, parallel or some other forms. We
use the order in which the activities are listed to recommend
a plausible order for performing those activities.
A transition (denoted as a solid arc) denotes moving
from one design step to another. The text labels on
a transition denote the events which cause the transi-
tion. The text within brackets indicates guarding conditions
for this transition. The text within parentheses
denotes attributes passed along with the transition. A
state could have sub-states, each of which denotes a
sub-step of the step.
Indeed, a modelling formalism is generally inadequate
for characterizing certain details of processes.
We found that sometimes it was necessary to specify
these details in order to render the process we were attempting
to specify sufficiently precisely that it could
realistically be considered to be systematic. For ex-
ample, OMT does not provide a capability for specifying
the sequencing of two events which are sent
by the same transition. Specification of this order
might well be the basis for important guidance to a
designer about which design issues ought to be considered
before which others. Thus, we found it necessary
to supplement OMT, by using a process coding
language called APPL/A [SHO90b] to model such
details. APPL/A is a superset of Ada that supports
many features that we found to be useful. Some examples
of APPL/A code are also provided in subsequent
sections of this paper.
Note that the goal of this work is to use these process
models and codes to demonstrate the diversity
and details of the processes that can be elaborated
from an SDM. As shown later, these dynamic models
of BOOD processes are sufficient to demonstrate this
point. Thus, we did not develop OMT's object models
and function models for BOOD processes.
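To make the modeling conventions just described concrete, the following sketch (hypothetical Python, not OMT or APPL/A; all names are ours) represents a design step as a state with an ordered activity list and guarded transitions that carry events and passed attributes:

    # Illustrative encoding of the dynamic-model conventions; not part of Debus-Booch.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Transition:
        event: str                                          # text label on the arc
        target: str                                         # step the transition leads to
        guard: Callable[[dict], bool] = lambda design: True # text within brackets
        attributes: List[str] = field(default_factory=list) # text within parentheses

    @dataclass
    class Step:
        name: str
        activities: List[str]                               # "Do:" items, in recommended order
        transitions: List[Transition] = field(default_factory=list)

    step2 = Step(
        name="Identify the Semantics of Classes/Objects",
        activities=["Browse Candidate Classes/Objects", "Edit Class Diagram",
                    "Edit Object Diagram", "Edit State Transition Diagram"],
        transitions=[Transition(
            event="User initiates the transition",
            target="Implement Classes and Objects",
            guard=lambda design: design.get("class_diagram_defined", False),
            attributes=["Class/Object diagrams"])])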
2.3 Modeling the BOOD Architecture
Fig. 1 represents an OMT model of the original
BOOD process architecture, as described in [Boo91].
In this architecture, we merged step 2 and 3 of
the original BOOD process because our experience
shows that it is hard to separate those steps in practice
(Booch himself also considers that step 3 is an
extension of step 2 [Boo91]). We believe that this
model is considerably more precise than the informal
description originally provided. It is still quite vague
and imprecise on many important issues, however.
Booch [Boo91] claims that this vagueness is necessary
in order to assure that users will be able to tailor and
modify it as dictated by the specifics of particular design
situations. For example, step 2 of Fig. 1 does not
define the order for editing various BOOD diagrams
and templates. It does not define clearly which of the
diagrams or templates must be specified in order to
move from step 2 to step 3. Booch claims that different
designers might have important and legitimate needs
to elaborate these details in different ways (Chapters
8-12 of [Boo91] provide a few examples).
We found that there are indeed many ways in which
these details might be elaborated precisely and that
many of these different variants might offer better
guidance. The differences might well arise from differences
in application, differences in organization, differences
in personnel expertise, and differences in the
nature of specific project constraints. Once these differences
have been understood and analyzed, however,
the design process to be carried out should be defined
with suitable precision. Such precise definitions are
needed in order to support adequate improvements of
the efforts of novices. In addition, we believe that
there are expert designers who have internalized very
[Figure 1: A Process Architecture of BOOD. The OMT dynamic model shows the top-level steps (1: read the requirements and list candidate classes/objects; 2: Identify the Semantics of Classes/Objects, editing the class, object, state transition, timing, module and process diagrams and templates; 3: Implement Classes and Objects), together with user-initiated transitions, guard conditions such as [Candidate Classes/Objects are defined], and change-management transitions for changes to candidates and for inconsistencies found in the class/object diagrams and templates.]
specific and very effective elaborations of the BOOD
architecture and that the more these are defined pre-
cisely, the more important design expertise may be
understood, reused, automated, and improved.
In order to make the above remarks specific, we
now discuss two possible elaborations of the BOOD
architecture. In addition, using these examples we can
show the need for, and power of, both design and code
representations as vehicles for making design processes
clear and thereby providing more effective guidance.
2.4 Two Examples of BOOD Process Refinement
2.4.1 Examples of Software Project Types
First, we characterize two different types of projects
for which we will elaborate variants of design pro-
cesses, within the outlines of the BOOD architecture
(see Project Properties columns in Table 1). The parameters
of these characterizations are 1) implementation
language, documentation requirements,
project schedule, designer skill, 5) software operation
domain, software domain, and 7) maturity of
the software domain.
Based upon our experiences, we identified these two
project types as representatives of projects commonly
encounted in software engineering practice (see Table
1). For example, an instance of project type 1
could be a defense-related or a medical systems project
while an instance of project type 2 could be a civilian
project. We expected that elaborating processes
to fit the requirements of these two different types of
projects would help us understand the range of processes
that could be elaborated from BOOD.
The seven characterization parameters were chosen
because our earlier work indicated that these parameters
are likely to have major and interesting effects
upon design process elaboration. For example, when
consulting with Siemens medical companies, we found
that the U.S. Food and Drug Administration (FDA)
has specific documentation requirements, and requires
control and monitoring of corrective actions on the
product design. [FDA89] says that "when corrective
action is required, the action should be appropriately
monitored. Schedule should be established for completing
corrective action. Quick fixes should be pro-
hibited." This certainly affects how an SDM should be
applied to a specific project.
The application examples described in [Boo91] also
provide us with some details that seemed likely to be
useful in employing these parameters to help us to derive
these BOOD-based design processes. For exam-
ple, one of Booch's examples indicates that if C++ is
to be the eventual application coding language, then
class/object diagrams would not need to be translated
into module diagrams. In addition, Booch's problem
report application example [Boo91] helps us to understand
the process requirements for developing an information
processing system. For instance, that example
shows that the method must be tailored to support the
design of database schemas. His traffic control example
helps us to understand the process requirements
for developing a large scale, device-embedded system.
2.4.2 The Processes Elaborated from BOOD
In this section we present portions of the OMT diagrams
used to define details of each of these two
elaborations on the basic BOOD architecture. We
then further refine parts of them down to the level
of executable code. Each of these processes is clearly
a "Booch Design Process", each represents what we
consider to be a completely plausible design process,
and each is quite completely and precisely defined-to
the point of being systematic for the specific kinds of
projects. These two processes demonstrate the point
that there is a great deal of imprecision in the current
definition of "Booch Object-Oriented Design." They
also indicate how BOOD can be elaborated, and what
the range of elaboration might be when it is applied
to specific projects.
We will refer to our first elaborated process as
the Template Oriented Process (TOP). It emphasizes
defining various BOOD templates (e.g., the class tem-
plate) as it hypothesizes the importance of carrying
1. A Template Oriented Example
Project Properties | Process Requirements
Must be coded in Ada | Specify Module Diagram
Must incorporate very complete documentation | Requires specification of all templates
Long-term | Allow full documentation
Skilled design team | Less process guidance; more process flexibility
Safety-critical (e.g., Medical Systems) | More change control needed to satisfy FDA's requirements
Large scale, device-embedded system | Use structured analysis; support partitioning domain
State of the art project | Need to support prototyping

2. A Diagram Oriented Example
Project Properties | Process Requirements
Must be coded in C++ | Guide designers not to specify Module Diagram since it is not needed in this case
Only minimum documentation required | No need to enforce specifying all templates
Short-term | Encourage use of existing code
Inexperienced design team | More process guidance; less process flexibility
Non safety-critical | Less change control needed
Information processing system | Single, familiar domain; need to support schema design
Well-understood | No support for prototyping needed; need to support code reuse

Table 1: Project Characteristics and Process Requirements
out a design activity that delivers very complete doc-
umentation. The TOP's emphasis on complete documentation
can be seen by noting that we have refined
steps 2 and 3 of Fig 1 into the more detailed model
defined in Fig. 2.
We further hypothesized in designing the TOP that
the software to be developed is to be safety-critical,
and that, therefore, the TOP should enforce more control
over design change as this is often required by
government agencies to ensure product quality. Accordingly
note that the high level design of the TOP
incorporates an approval cycle for all changes to previously
defined artifacts.
On the other hand, we hypothesized that the TOP
is to be executed by skilled and experienced designers.
Because of this, we did not refine the detailed design
activities into lower level steps. Our expectation here
is that such designers would insist upon freedom and
flexibility, and that this would be given to them. This also
illustrates that it is possible to define a design process
precisely, yet still provide considerable freedom and
flexibility to practitioners. In addition we designed
the TOP to allow for a certain degree of flexibility
in making transitions from one step to another. We
have also included the possibility of incorporating a
prototyping subprocess into this process.
We refer to our second elaborated process as the
Diagram Oriented Process (DOP), as it emphasizes
specifying BOOD diagrams. We derived this process
from Booch's Home Heating System example [Boo91].
In the DOP we hypothesized that there are only weak
requirements in the area of documentation, and we,
therefore, do not design in the need for designers to
specify BOOD's templates (see Figures 4). We also
hypothesized that the product being designed will be
coded in a language that provides direct support for
programming classes and objects. For this reason, the
DOP omits step 3 of the general model shown in Fig. 1
as part of its elaboration, leaving the model defined in
Fig. 4. Note that this elaboration incorporates fewer
top-level steps than the general BOOD model does.
We also hypothesized that the DOP is aimed at
supporting novice designers, and so the DOP provides
detailed guidelines for identifying classes/objects (see
Figures
5, 6, and 7). In addition, the DOP assumes
that a great deal of importance is placed upon reuse.
In response, the DOP incorporates steps that guide
designers to reuse existing software components (see
Fig. 7).
The job of creating more specific and detailed elaborations
of BOOD is not limited solely to modification
of the processing steps of BOOD. It also entails specifying
the flow of control between these steps and their
substeps. A good example of the importance of these
specifications can be seen by examining how change
management is handled in these design processes.
We use the term forward change management to
denote a transition used to maintain consistency between
a changed artifact and its dependent artifacts,
that are normally specified at a later stage of the pro-
cess. For example, a designer may add a class to a
candidate class list (in step 1 of Fig. 2). This results
in forcing designers to redo step 2 to consider adding
a corresponding class to the class diagram. There is
virtually no guidance in BOOD about precisely how
this is to be done, or how the critical and tricky issues
of consistency management are to be addressed. Thus
there is a clear need for more detailed guidance on
automatic change control. One way this can be done
is to refine this high-level transition further as shown
in Fig. 8. In Fig. 8, a dotted line from a transition
to a class represents an event sent by the transition.
For example, the transition from Selected Class A to
Rejected Class A, which is caused by updating candidate
class A's field Needed to FALSE (i.e. class A
is no longer needed), sends event Delete Class A to
class Class. Clearly this refinement is simply one of a
very large assortment of possible refinements. We do
not claim that it is the only one or the "right" one.
We do claim, however, that supplying details such as
these provide specific guidance that is important for
designers-especially for novice designers and for large
design teams. Should it turn out that such a specifically
designed process is shown to be particularly useful
and desirable, then the detailed specification will
also render it more amendable to computer support.
We should also note that we did not stop at the level
of design diagrams in refining the meaning of forward
change management, but that we went further and defined
it as actual executable process code. Our code
was written in the APPL/A process coding language.
Fig. 9 shows the APPL/A code for the process defined
in Fig. 8. Note that this code provides even more de-
tails. For example, note that this code specifies that
changing a candidate class to a candidate object will
cause an ordered sequence of events: 1) the insertion
of an object template, 2) the removal of the class template
and 3) the forwarding of that template to step
3 for editing of the object template. Again, we stress
that these specific details are not to be considered the
only feasible elaboration of BOOD-only one possible
elaboration. We do believe, however, that in specifying
the design process to this level of detail deeper
understandings result, and the process becomes more
systematic. In addition, by reducing the process to
executable APPL/A code, it becomes possible to use
the computer to provide a great deal of automated
support (e.g., some types of automatic updating and
consistency maintenance) to human designers.
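For readers who do not follow APPL/A, the following sketch re-expresses the forward change-management rule of Figs. 8 and 9 in plain Python (hypothetical and simplified; the store and callback names are ours, not Debus-Booch APIs):

    # Illustrative forward change management: propagate a candidate update to templates.
    class ClassTemplateStore:
        """Toy stand-in for the class-template relation of step 2."""
        def __init__(self):
            self.templates = {}                      # class name -> template
        def insert(self, name):
            self.templates[name] = {"name": name, "fields": [], "operations": []}
        def delete(self, name):
            self.templates.pop(name, None)

    def on_candidate_update(candidate, classes, objects, notify_step3):
        """React to an update of one candidate record (fields: name, needed, kind)."""
        name = candidate["name"]
        if not candidate["needed"]:
            # Rejected candidate: drop any dependent templates.
            classes.delete(name)
            objects.pop(name, None)
        elif candidate["kind"] == "object":
            # Candidate reclassified as an object: insert an object template,
            # remove the class template, then forward it to step 3 for editing.
            objects[name] = {"name": name}
            classes.delete(name)
            notify_step3(name)
        elif candidate["kind"] == "class" and name not in classes.templates:
            classes.insert(name)

The ordering of the three actions in the "object" branch mirrors the ordered sequence of events that the APPL/A trigger enforces.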
Another kind of control flow in BOOD is backward
change management, which is aimed at maintaining
consistency between a specified artifact and all the
artifacts upon which the specified artifact should de-
pend. These artifacts are normally defined at earlier
stages of the process. For example, in step 2 of Fig. 2,
designers may need to define a class in a class diagram
and find that this class does not correspond to
any candidate class because of an incomplete or faulty
analysis of the application requirements. Thus, designers
have to go back to earlier steps, reviewing the
requirements and possibly redoing step 1 to add this
class to the candidate class list. This transition can
be refined and coded in a manner similar to what was
described in the case of forward change management.
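A backward change-management check can be sketched in the same illustrative style (hypothetical Python; not part of the prototype): every class appearing in the class diagram is traced back to a selected candidate, and any orphan is reported so that the designer is routed back to step 1.

    # Illustrative backward change management: find classes with no candidate.
    def backward_check(class_diagram_names, candidates):
        """Return names of classes that have no corresponding selected candidate."""
        selected = {c["name"] for c in candidates if c["needed"]}
        return [name for name in class_diagram_names if name not in selected]

    # Example: class "Sensor" was added directly in step 2 without a candidate.
    orphans = backward_check(
        ["Sensor", "Heater"],
        [{"name": "Heater", "needed": True, "kind": "class"}])
    # orphans == ["Sensor"]; the process would prompt a review of the requirements.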
[Figure 2: Top-level Process Definition of the Template Oriented Process. The model refines BOOD into four steps (Step 1: identify candidate classes/objects; Step 2: Determine Semantics of Class; Step 3: Specify Class/Object Templates; Step 4: Develop Module Diagram and Specify Module Template), with guarded, user-initiated transitions between steps and change-management transitions that require changes to candidates, diagrams, and templates to be approved.]
These process definitions, including both main flow
and change management transitions, explicitly and
clearly demonstrate how the published Booch Object
Oriented Design description can be elaborated into
a precisely defined process to provide more effective
guidance for specific projects. Our research indicates
that this observation is quite generally applicable to
the range of SDM's that are currently being espoused
widely in the community. There are a number of reasons
for this imprecision. We have already noted that
the imprecision is there intentionally to permit wide
variation in design processes to match similarly wide
design process contexts and requirements. While we
neither doubt nor dispute this need, we believe that
our work has shown that it can be met more effectively
through tailoring SDMs for specific needs of projects.
These processes resulting from the tailoring, and supported
by the appropriate tools, provide more effective
guidance and help implement various recommended
practices (e.g, those recommended by FDA [FDA89]).
In the next sections, we discuss how to support the execution
of the elaborations of the BOOD architecture
that we have just described.
[Figure 3: Second-level Process Definition of the Template Oriented Process: Refinement of Step 1. Step 1 (Identify Candidate Class/Object) is refined into Step 1.1 Define Problem Boundary, Step 1.2 Structured Analysis (editing data flow diagrams), Step 1.3 Domain Analysis (editing the candidate classes/objects), and Step 1.4 Prototyping, with transitions for approved problem definitions and for inappropriate problem definitions.]
[Figure 4: Top-level Process Definition of the Diagram Oriented Process. The model has two top-level steps (1: identify candidate classes/objects; 2: Identify the Semantics of Classes/Objects, editing the class, object, state transition, and timing diagrams), with transitions for changes to candidates and for inconsistencies found between the candidates and the class/object diagrams.]
[Figure 5: Second-level Process Definition of the Diagram Oriented Process. Step 1 (Identify Candidate Class/Object) is refined into Step 1.1 Define Problem Boundary, Step 1.2 Domain Analysis, and Step 1.3 Reuse-based Design, with transitions for inappropriate problem definitions and for passing candidate classes/objects and candidate abstract classes forward.]
[Figure 6: Third-level Process Definition of the Diagram Oriented Process: Refinement of Domain Analysis. Step 1.2.1 browses the requirements and searches for nouns, verbs, and adjectives (the key abstractions); Step 1.2.2 identifies classes and objects from the nouns and decides operations from the verbs, producing the candidate classes/objects.]
[Figure 7: Third-level Process Definition of the Diagram Oriented Process: Refinement of Reuse-based Design, which is based on Booch's Home Heating System example. The step identifies abstract classes and reusable components, develops an object diagram for them, instantiates the abstract classes to concrete classes, and edits the concrete classes, with change-management transitions for changes to the reusable components, to the semantics of the abstract classes, and to the object diagram, and a transition for newly found sharable objects.]
3 Support for Executing BOOD Processes
To experiment with our ideas and demonstrate how
these processes should be supported appropriately, we
have developed a research prototype, called Debus-
Booch, to support the execution of design processes
of the sort that have just been described. Execution of
such processes is possible as a result of their encoding
in APPL/A, a superset of Ada that can be translated
into Ada, and then compiled into executable code.
We note that BOOD addresses only issues concerned
with supporting single users working on a single
design project. As most designers must work in teams,
and are often engaged in multiple projects simultane-
ously, a practical system for support of such users must
do more than simply execute straightforward encodings
of BOOD elaborations. Our Debus-Booch prototype
adapts an architecture used in a previous research
prototype (Rebus [SHDH 91]). The architecture lets
developers post (to be done) and submit (finished)
tasks to a whiteboard to coordinate their task assign-
ments. Since this work has been published and is not
directly related to the topic of this paper, we will not
describe it here.
Figure 8: A Refinement of Forward Change Management: illustrates more precisely how a change in the candidate list might affect the class diagram/templates. A candidate is recorded with three fields: name, needed (indicating if it is selected as a candidate), and kind (indicating whether it is a class or an object).
In addition, there are a variety of difficult user interface
issues to be faced in implementing a system
such as this. Exhaustive treatment of all of these issues
is well beyond the scope and limitations of this
paper. An indication of our approaches to these and
related problems can be seen from the following brief
implementation discussion.
3.1 System Overview
Debus-Booch provides four levels of process guidance
and support to its end-users (see Fig. 11 for their
user interface representations):
1. Process Selection (Accessed through a Console): This enables users to select
any of a range of elaborations of the BOOD ar-
chitecture, or any non-atomic step of any such
elaboration (as shown in Fig. 10). This is done
by selecting a driver to perform a constrained sequence
of steps at a certain level of the selected
process step hierarchy. Debus-Booch helps users
with this selection by furnishing users with access
to information about the nature of these various
processes and steps.
with candidate-rel, class-template;
trigger maintain-candidate;
-- maintain the product of step 1
trigger body maintain-candidate is
begin
  loop
    trigger select
      upon candidate-rel.update (needed, new-name, update-needed, new-needed, new-kind)
      completion do
        -- change management is necessary only when
        -- candidate is selected or being updated
        case kind is
          when class =>
            -- the candidate is no longer needed
            class-template.delete(name => name);
          else
            -- the candidate becomes needed
            class-template.insert (name => name, ...);
            query (pname, plen, sname, slen);
            define-class-proc(pname, plen, sname, slen);
            class-template.update (name => name,
                                   update-name => TRUE,
                                   new-name => new-name);
            or (new-needed = TRUE and ...)
            case new-kind is
              when object =>
                object-rel.insert (name => name);
                class-template.delete (name);
              when operation => ...
              when abstract-class => ...
            end case;
        end case;
      end upon;
    or
      ...
    end select;
  end loop;
Figure 9: APPL/A code for defining Forward Change Management between candidate class list and class diagram/template definitions
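To make the intent of the (fragmentary) APPL/A trigger above easier to follow, the following is a minimal Python sketch of the same forward change-management idea: a change to a candidate record (name, needed, kind) is propagated to the class-template store. The class and function names are ours for illustration only; they are not part of Debus-Booch or APPL/A.

class TemplateStore:
    def __init__(self):
        self.templates = {}                      # class templates indexed by class name

    def insert(self, name):
        self.templates.setdefault(name, {"name": name})

    def delete(self, name):
        self.templates.pop(name, None)

    def rename(self, name, new_name):
        if name in self.templates:
            self.templates[new_name] = self.templates.pop(name)
            self.templates[new_name]["name"] = new_name

def on_candidate_update(store, candidate, new_needed=None, new_name=None):
    """Forward change management: keep class templates consistent with a candidate."""
    if new_needed is False and candidate["kind"] == "class":
        store.delete(candidate["name"])          # candidate is no longer needed
    elif new_needed is True and candidate["kind"] == "class":
        store.insert(candidate["name"])          # candidate becomes needed
    if new_name is not None:
        store.rename(candidate["name"], new_name)
        candidate["name"] = new_name

store = TemplateStore()
cand = {"name": "Controller", "needed": True, "kind": "class"}
on_candidate_update(store, cand, new_needed=True)
on_candidate_update(store, cand, new_name="ElevatorController")
print(sorted(store.templates))

The sketch mirrors only the insert/delete/rename propagation rule; the real trigger also handles kind changes (e.g., a class becoming an object) and template field updates.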
Figure 10: An SDM definition and support model (the Debus-Booch system interface architecture of console, drivers, panels, and tool-buttons)
2. Process Step Execution (Accessed through
a Panel): The user can obtain support for the
sequencing and coordination of the driver activities
to be performed in an elaborated design pro-
cess. These activities can be divided into two cat-
egories: required and optional activities. For ex-
ample, in the step used to determine the semantics
of classes, designers must use Class Diagram
Editor, which therefore supports a required activ-
ity; in the same step, designers may use a requirements
browser, which therefore supports an optional
activity. Designers can invoke all the tools
that support the required activities by clicking
on the Set Environment button. In using this access
method, we help designers to set up a design
environment more easily. Note that different processes
may have different required activities. For
example, in the template oriented process (TOP),
editing the class template is a required activity.
However, in contrast, using the diagram oriented
process (DOP), the user cannot even access this
editor.
3. Atomic activity/support (Ac-
cessed through a Tool-Button): The user can
obtain support for a specific activity in an atomic
step. For example, the user can request access to
a Class Diagram Editor in order to obtain support
for defining a class diagram, which is an activity
performed in determining the semantics of
classes.
4. Documentation and Help Support (Ac-
cessed through Displays): This support can
be obtained in conjunction with the use of tools
that support atomic activities. The displays that
are made available convey a variety of informa-
tion, such as the criteria, guidelines, examples,
and measures [SO92] to be used to help designers
understand how to carry out the activity.
Debus-Booch provides the flexibility that is needed
for experienced designers. Designers can use a console
display to access all of the supports listed above. For
example, a designer can click on the Console's Steps
button to execute any step of any elaborated BOOD
process (as long as the guarding condition for this step
is satisfied, otherwise, the invocation will be rejected).
Figure
shows how these four types of support
are made available to the designers who use Debus-
Booch. In particular the figure indicates the degrees
of interactions that are allowed among the supports
for processes, steps, and activities. In particular, note
that support for process execution will be provided on
an exclusive basis only, as we believe it is reasonable to
use only one process at a time to design any given sys-
tem, or any major part of a system. Similarly, there
are constraints on furnishing support for the simultaneous
execution of process steps. This is because
there are often data dependencies between steps. On
the other hand, support for simultaneous execution of
activities is unconstrained as many design process activities
must often be highly cooperative in practice.
Some sets of activities must indeed be carried out in
constrained orders. In this case it is necessary to group
them into composite steps. The decisions about allowable
degrees of concurrency were made based on our
observations of the nature and structure of the process
models defined in Section 2.4.2.
3.2 Scenario for Use of Debus-Booch
Here is a general scenario, which indicates how designers
might use Debus-Booch (see Fig. 11):
1. Designers select a specific elaborated BOOD process
from the menu popped up after pressing the
Process button. They may select Process Selector
to retrieve information about these processes.
For each process, the Process Selector describes
the most appropriate situations (e.g., the documentation
requirements, project deadline) under
which the process should be used.
2. Upon clicking on the menu item (i.e. a selected
process), the corresponding driver will be initi-
ated. Then, designers must enter the name of the
subsystem to be designed. This subsystem can be
assigned to them from a management process or
a high-level system decomposition process (e.g.,
in our case, it is on the whiteboard [SHDH 91]).
3. When the subsystem name has been entered, the
driver will check what design steps have been performed
on this subsystem, and then automatically
set the current sub-step in order to continue
with the design of this subsystem. (This is tantamount
to the process of restarting a suspended
execution of the process from a previously stored
checkpoint.) Then, the designer can click the Run
button to invoke the corresponding sub-driver or
atomic step support.
4. If a sub-driver is initiated, step 3 will be repeated
except designers will not need to enter the sub-system
name again.
5. If atomic step support is invoked, a panel appears
and designers can click on its tool-buttons
to invoke the tools to support the activities that
should be carried out in this atomic step.
6. Having finished this step, designers can click on
the next step using the Steps buttons of the driver
to move the process forward. If the guarding condition
(e.g., see Fig. 2) for the next step is true,
the move will succeed, otherwise, the move will
be rejected. After finishing the final step in the
elaborated process, the designer may go back to
the first step to start another iteration on the
same subsystem, reviewing and revising the artifacts
produced in the previous iteration. Thus,
Debus-Booch also provides supports for process
iteration.
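The guarded step transitions mentioned in the scenario can be pictured with a small Python sketch. This is our own illustration, not Debus-Booch code: step names, the guard predicates, and the Driver class are hypothetical stand-ins for the driver/guarding-condition mechanism described above.

class Driver:
    def __init__(self, steps, guards):
        self.steps = steps                      # ordered step names of an elaborated process
        self.guards = guards                    # step name -> predicate over the design state
        self.current = steps[0]

    def run_step(self, target, state):
        """Attempt a transition to 'target'; reject it if its guarding condition is false."""
        guard = self.guards.get(target, lambda s: True)
        if not guard(state):
            return False                        # move rejected, stay at the current step
        self.current = target
        return True

state = {"candidates_defined": False}
driver = Driver(
    ["identify_candidates", "determine_semantics"],
    {"determine_semantics": lambda s: s["candidates_defined"]},
)
print(driver.run_step("determine_semantics", state))   # rejected: guard is false
state["candidates_defined"] = True
print(driver.run_step("determine_semantics", state))   # accepted

The point of the sketch is simply that the driver, not the designer, evaluates the guarding condition before a step change is allowed.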
As this scenario illustrates, Debus-Booch provides
different supports for users who are using different process
elaborations. For example, using the template oriented
process, the user will be guided by the driver,
(with enforcement provided by the guarding condi-
tion), to specify the module diagram as is useful when
Ada is used as the implementation language. In con-
trast, using the diagram oriented process, the user will
be directed to not define the module diagram as it is
not considered to be of value when an object-oriented
language is used.
4 Experience and Evaluation
In the past year, we have carried out two experiments
and one evaluation with Debus-Booch.
Figure 11: A Stack of Debus-Booch Windows Supporting the Booch Method
In the first experiment, we used the prototype to develop a
design example: an elevator control system. This is a
real-time system that controls the moving of elevators
in response to requests of users [RC92]. It was used as
an example for demonstrating how the Arcadia consortium
supports the whole software development life-
cycle. The system requires full documentation, and is
to be implemented in Ada. It is safety-critical and
device-embedded. The design team was to include
the lead author and students who had finished the
software design course. Thus, this project has most of
the characteristics described in the Template Oriented
Example (see Table 1).
Our experience with this experiment shows that the
Template Oriented Process (TOP) supported our design
development quite effectively. The process represented
through the drivers and panels guided us
to define the BOOD templates and the module di-
agrams. For example, the designers were guided to
define the problem boundary first and then identify
candidate classes such as Controller, Button, Floor,
and Door. In this experiment, we found that the Set-
Environment button was most frequently used and
was effective in guiding designers to define those required
diagrams and templates. The flexibility offered
by the process allowed the designers to modify some
intermediate design specifications. For example, the
designers often moved back to Step 1 from Step 2 (i.e.,
the Determine Semantics of Class step of Fig. 2) to
modify the candidate classes. However, to ensure system
safety, this process enforced stricter control over
the other backward changes which directly affect the
actual design documentation. For example, the transition
from Step 3 to Step 2 of Fig. 2 was more strictly
monitored. In using the prototype, we found the current
implementation to be too restrictive. Thus, we
think that Debus-Booch needs to provide a number of methods, rather than one, that can be selected for controlling the transition. Examples may include: 1)
The modification triggers revision history recording,
2) The modification triggers a change notification mechanism, and 3) the modification triggers a change approval
process. These example methods support different
degrees of the control over the design process.
In the second experiment we used Debus-Booch to
develop a design for the problem reporting system as
described in [Boo91]. This project fits five characteristics
of the Diagram Oriented Example (See Table 1).
The system is to be coded in C++, has minimum document
requirements, and is not safety-critical. It is an
information processing system and well-understood.
The design team, including the lead author and a software
engineer, however, is more experienced than that
described in the Diagram Oriented Example.
In this experiment our experiences were similar to
those in the first experiment. One additional, interesting
experience is that for this well-understood domain
(e.g., design of a relational database schema),
the process (the Diagram Oriented Process (DOP))
could have been designed to be even more specific and
therefore to provide more effective guidance. For ex-
ample, Steps 1.2.2 and 2 should provide guidance to
the normalization of the classes. This seems to indicate
that for building a large system, an SDM might
need to be tailored into a set of different processes,
each of which is most effective for designing certain
kinds of components of the system. For example, a
large system might contain both an embedded system
and a data processing system. That being the case,
both DOP and TOP processes might need to be applied
to developing this system.
We have installed a version of Debus-Booch at
Siemens Corporate Research (SCR). Some technologists
there have used the prototype and evaluated it.
These technologists are specialized and experienced in
evaluating CASE tools and making recommendations
to Siemens operating companies. During their evalu-
ation, the technologists executed the tool and examined
all its important features. Based upon their ex-
perience, the technologists believe that Debus-Booch
should be particularly useful for novice designers because
the tool explicitly supports BOOD's concepts
and processes. Their experience tells us that novice
designers are much more interested in using a well de-
fined, detailed process to guide their design. A tool,
such as Debus-Booch, that explicitly supports an SDM
process should help them to learn the SDM quickly.
Some experiences coming out of these experiments
and evaluation are:
1. Process execution hierarchy (the tree of drivers
and panels in Fig. 10) cannot be too deep: There
are two main reasons for having this sugges-
tion: 1) A deep execution hierarchy needs too
much effort in tracking the detailed process states.
This problem is similar to the "getting lost in hyperspace" problem found in hypertext systems [Con87]. 2) The need to minimize the time overhead of transitioning between the various tools that support the various design steps.
These suggestions clearly reinforce our observations
about the problem of mental and resource
overhead [SO93]. Novice designers are more willing
to accept the overhead to trade for more guidance
while skilled designers are not. However,
the evaluation seems to indicate that even for the
novice designers, the process execution tree cannot
be too deep. The evaluation suggested that
three levels seem to be maximal.
2. Designers had difficulty in selecting processes:
Users need stronger support for selecting pro-
cesses. The textual help message associated with
each process seems to be not sufficient. A more
readable and illustrative method must be developed
to help users to understand the process requirements
quickly, and thereby help users to select
appropriate processes.
3. Support the coordination of designers working at
different steps: Our model focuses on supporting
designers to work in parallel in designing different
software components, or supporting an individual
designer to work in parallel on multiple software
components. However, the current model
is weak in coordinating two designers working on
the same software component at different process
steps. For example, we found that a finished class
diagram might need to be passed to another designer
for defining its module diagram. This often
helps in utilizing the different skills of designers.
4. Need to have stronger support for tracking and
coordinating processes: This suggestion is closely
related to the first suggestion. The evaluation
indicates that the process tracking mechanism is
even more important when the process guides designers
at the relatively low levels of the process.
The process tracking must emphasize indicating
the current state of the process and help designers
understand the rationales and goal for performing
the step.
5 Summary
Our work in developing elaborations of the BOOD
architecture into more precise design process designs
and code has brought a number of technical issues into
sharpened focus. Generally, we have found that it is
quite feasible and rewarding to develop design processes
down to the level of executable code. Doing so
raises a number of key issues that are all too easily
swept under the rug by process architectures and process
models. Many of these issues have tended to be
resolved informally and in ad hoc ways in the past.
This has stood in the way of putting into widespread
practice superior software design processes. The following
summarizes some of the more important and
interesting findings of this work.
5.1 The Advantages of Detail in Process
Definition
Process modelers often struggle to choose between
general process definitions and specific process defini-
tions. Processes that are too general are often criticized
for providing no useful guidelines. Processes
that are too specific are often criticized as leaving no
freedom to designers. We found that starting with
a specific SDM such as BOOD, and then elaborating
it and making it more specific to the needs of a particular
situation represents a good blending of these
two strategies. Doing this serves to make the resulting
process sharper and more deterministic, and thus
helps to make it more systematic and susceptible to
computerized support. It seems worthwhile to note
that taking this approach is tantamount to pursuing
the process of developing a software design process as
a piece of software, guided by a set of process requirements
and an assumed architectural specification (in this case, the BOOD architecture).
We are therefore convinced of the importance of
dealing with the details when elaborating design process
architectures into designs and code. Here we summarize these process design issues, and describe how we addressed them using our approach:
1. Step selection: An SDM often describes many
"you could do" activities in its process descrip-
tion. In our work we turned many of them
into "you should/must do" or "you should not
do" activities in order to provide more effective
guidance. For example, BOOD suggests specifying
module diagrams. However, when using an
implementation language that directly supports
programming classes and objects, Debus-Booch
guides designers to not specify these diagrams
because they are useless in this specific application
(see Fig. 4).
With our process programming approach to the
elaboration of specific processes we also found it
straightforward to specify how to incorporate various
other related processes (e.g., reuse, proto-
typing) into the design process (see figures 3, 5
and 7 for example).
2. Refinement selection: An SDM generally provides
its guidance as a set of high-level steps.
Each high-level step has a set of guidelines. Designers
are often left free to follow the guidelines
closely or rely more upon their experience. Novices would tend to follow guidelines while skilled designers would rely more on their experience, with some support from the guidelines. With our approach, we provide support to both novice and skilled designers. Novices can use the detailed
process support to guide their design activities,
while more skilled designers use only high-level
process support.
3. Control condition selection: An SDM usually
does not specify strictly how design changes
should be managed. It usually does not specify
precisely the conditions under which a step can
be considered to be finished. With our approach
of tailoring SDMs for specific projects, we can define
the conditions quite precisely. For example,
for a medical system, which is often safety-critical and regulated by the FDA, we decided to provide stricter control (see Fig. 2) to ensure system consistency and reliability. However, our experience in using Debus-Booch shows that such a control mechanism should not be enabled until the specifications
(e.g., class diagrams) are stable and have
been used by other software components.
4. Control flow selection: An SDM usually does
not specify all the possible transitions between
steps, instead, it only specifies those that are
likely to be done most frequently. Transitions
that are the most crucial ones may also be the
most difficult to explain, and thus not specified
sufficiently precisely. Our approach makes it far
easier to add precision to the specification of tran-
sitions. For example, Fig. 8 shows the various
transitions needed for modifying classes.
5. Concurrency specification: As noted earlier,
most SDM's are intended only to specify how to
support the efforts of a single designer working
on one project at a time. It is clearly unrealistic
to assume that this is the mode in which most
designers work, and that, therefore, support for
this mode of work is sufficient. In our work we
adapted an architecture [SHDH + 91] that is capable
of supporting group development. The activities
which can be performed at each step allow
individual designers to work on the same design
in parallel.
5.2 Related Work
We have not seen any work that is similar to our approach
of developing design processes as software, then
analyzing and contrasting the elaborated processes,
and illustrating explicitly why currently existing SDM
descriptions cannot be taken directly as a completely
systematic process for specific projects. Our work is
unique in that it indicates how one might use the process
programming approach to modeling and coding
an SDM into a family of more systematic processes
used for a corresponding family of projects.
It demonstrates how SDM processes can be defined
more precisely. A more precisely defined SDM process
is more likely to be effectively supported and
thus provides more effective guidance. This experiment
encourages us to be more confident in using
the project-domain-specific process programming
approach to solving many problems in sharpening
and supporting software processes. Some previous work studied mechanisms for supporting generic software processes. However, without studying specific
generic and instantiated processes as we did in
this work, the value of these mechanisms is hard to
evaluate.
This work is related to other projects aimed at
developing a process-centered software environment,
like those reported in [MS92, KF87, MR88, Phi89,
ACM90, FO91, MGDS90, DG90]. The most significant
difference between these efforts and our work is
that our work, targeted at specific process require-
ments, provides very specific strategies for supporting
specific processes that emerge from the work of
other acknowledged experts (in this case, these experts
are in the domain of software design). For ex-
ample, we provide very specific interface architecture
and tool access methods for supporting SDMs and
their various users. In contrast, most work in developing
process-centered environments is aimed at developing
general-purpose software development envi-
ronments. For instance, [MR88] supports specifying
any software development rules. Marvel [KF87] is a
general purpose programming environment. It does
not describe specifically how to provide effective guidance
for using specific development method on specific
kinds of projects. Another difference is that our
work focuses on evaluating varied external behaviors
of the system while other work focuses on the study
of implementation mechanisms and process representation
formalisms (e.g., [FO91]). The study of these
mechanisms and formalisms is not the focus of our pa-
per. Comparisons of our formalisms (e.g., APPL/A)
to others can be found in [SHDH 91].
6 Status and Future Work
The current prototype version of Debus-Booch
is implemented using C++, Guide (a user interface
development tool), and APPL/A. It incorporates
StP [AWM89] and Arcadia prototypes. The whole
prototype consists of about 34 UNIX processes. Each
of them supports a console, driver, panel, and other
tools. It was also demonstrated at the tools fair of the
Fifth International Conference on Software Development
Environments. At present, this prototype is
being enhanced by the conversion of more of its code
to APPL/A and by the incorporation of new features,
new design process steps, and new design processes.
We plan to carry out the following future work:
1. Focusing on more specific project domains, to
elaborate still more specific process models and
support environments. This should help deepen
our understanding of the project domain's influences
on process requirements and SDM elaborations
2. Collecting data about how these elaborated processes
are used. Based on the analysis of these
data, we would be able to adjust the processes
more scientifically.
3. Developing a project-domain-specific process gen-
erator. With the specification of project proper-
ties, the corresponding process definitions and its
support environment might eventually be automatically
generated, at least in part.
Acknowledgments
We thank the members of the Arcadia software environment
research consortium for their comments,
particularly Stanley M. Sutton and Mark Maybee for
their useful comments on the APPL/A code.
We also thank those SCR researchers, particularly
Wenpao Liao, who experimented with and evaluated
our prototype. We are also very grateful to Tom Murphy
and Dan Paulish for supporting us to continue
this work at SCR. We thank Bill Sherman and Wen-
pao Liao for reviewing the final version of this paper.
--R
Software process enactment in Oikos.
Wasserman and R.
Mechanisms for generic process support.
The booch method: Process and pragmatics.
Process modeling.
An introduction and sur- vey
Managing software processes in the environment melmac.
Preproduction quality assurance planning: Recommendations for medical device manufacturers.
Integration needs in process enacted environments.
Formalizing specification modeling in ooa.
Statemate: a working environment for the development of complex reactive sys- tems
Using the personal software pro- cess
A formalization of a design process.
An architecture for intelligent assistence in software development.
Software process modeling.
Software development environment for law-governed systems
Process integration in CASE environments.
Software processes are software too.
State change architecture: A prototype for executable process models.
Object Oriented Modeling and Design.
Rebus requirements on elevator control system.
Object behavior analysis.
Real time recursive design.
Debus: a software design process program.
Towards objective
Challenges in executing design process.
--TR
--CTR
Stanley Y. P. Chien, An object pattern for computer user interface systems, Information processing and technology, Nova Science Publishers, Inc., Commack, NY, 2001 | software design process;design methodology;design methods |
288204 | Applications of non-Markovian stochastic Petri nets. | nets represent a powerful paradigm for modeling parallel and distributed systems. Parallelism and resource contention can easily be captured and time can be included for the analysis of system dynamic behavior. Most popular stochastic Petri nets assume that all firing times are exponentially distributed. This is found to be a severe limitation in many circumstances that require deterministic and generally distributed firing times. This has led to a considerable interest in studying non-Markovian models. In this paper we specifically focus on non-Markovian Petri nets. The analytical approach through the solution of the underlying Markov regenerative process is dealt with and numerical analysis techniques are discussed. Several examples are presented and solved to highlight the potentiality of the proposed approaches. | Introduction
Over the past decade, stochastic and timed Petri
nets of several kinds have been proposed to overcome
limitations on the modeling capabilities of
Petri nets (PNs). Although very powerful in capturing
synchronization of events and contention for system resources, the original paradigm was not complete enough to capture other elements indispensable for dependability and performance modeling of systems.
(Author affiliations: R.M. Fricks is with the SIMEPAR Laboratory and Pontificia Universidade Catolica do Parana, Curitiba/PR, Brazil. A. Puliafito is with the Istituto di Informatica, Universita di Catania, Catania, Italy. M. Telek is with the Department of Telecommunications, Technical University of Budapest, Budapest, Hungary. K.S. Trivedi is with the Department of Electrical and Computer Engineering, Duke University, Durham/NC, USA. E-mails: fricks@simepar.br, ap@iit.unict.it, telek@hit.bme.hu, and kst@ee.duke.edu.)
Thus, new extensions allowing
for time and randomness abstractions became nec-
essary. Despite the consensus on which elements
to add, a certain uncertainty existed on where to
aggregate the proposed extensions. From among
several alternatives, a dominant one was soon established
where the Petri nets could have transitions
that once enabled would fire according to exponential distributions with different rates (EXP
transitions). This led to well known net types:
Generalized Stochastic Petri Nets (GSPNs) [1] and
Stochastic Reward Nets (SRNs) [2].
The resulting modeling framework allowed the
definition and solution of stochastic problems enjoying
the Markov property [3]: the probability
of any particular future behavior of the process,
when its current state is known exactly, is not altered
by additional knowledge concerning its past
behavior. These Markovian stochastic Petri nets
(MSPNs) were very well accepted by the modeling
community since a wide range of real dependability
and performance models fall in the class of
Markov models. Besides the ability to capture various
types of system dependencies intrinsic to the
underlying Markov models, other advantages of the
Petri net framework also contributed to the popularity
of the MSPNs. Among these reasons, we
point out the power of concisely specifying very
large Markov models, and the equal ease with which
steady-state, transient, cumulative transient, and
sensitivity measures could be computed. One of the
restrictions, however, is that only exponentially
distributed firing times are captured. This led to
the development of non-Markovian stochastic Petri
nets.
Non-Markovian stochastic Petri nets (NMSPNs)
were then proposed to allow for the high level description
of non-Markovian models. Likewise in
the original evolutive chain, several alternative approaches
to extend the Markovian Petri nets were
proposed. Their distinctive feature was the underlying
analytical technique used to solve the non-Markovian
models. Candidate solution methods
considered included the deployment of supplementary variables [4], the use of phase-type expansion approximations [5, 6], and the application of
Markov renewal theory [7, 8]. Representative non-Markovian
Petri nets proposed, listed according
to the underlying solution techniques, are the Extended
Stochastic Petri Nets (ESPNs) [9], the Deterministic
and Stochastic Petri Nets (DSPNs) [10],
the Stochastic Petri Nets with Phase-Type Distributed
Transitions (ESPs) [11], and the Markov
Regenerative Stochastic Petri Nets (MRSPNs) [12].
As a consequence of these evolutive steps, we observe
that the restriction imposed on the distribution
functions regulating the firing of timed transitions
was progressively relaxed from exponential
distributions to a combination of exponential and
deterministic distributions, then to any distribution
represented by phase type approximations,
and finally to any general distribution function
(GEN transitions).
However, this flexibility also brought a new requirement with it. If an enabled GEN transition is disabled before firing, a scheduling policy is needed to complete the model definition. Consider the
generic client/server NMSPN model in Fig. 1 for
instance. Requests from clients arrive according to
a Poisson process (EXP transition t 1 ). Tokens in the queueing place represent clients already in the system. In a single-server configuration only one of the queued requests is serviced at a given time. The service requirement γ_g of each request is sampled from a general distribution function G g (t) that coordinates the firing of the GEN transition t 2 . An age variable a g associated with a request keeps track of the amount of service actually received by the request. Service is completed (i.e., transition t 2 fires) as soon as the age variable a g of the active request (the one receiving the server's attention) reaches the value of its service requirement γ_g. After that, the request leaves the system and its associated age variable is destroyed.
Figure 1: Fault-tolerant client/server model.
Furthermore, suppose that the server is failure-prone with constant failure and repair rates. A
token in place P 2 represents the active state of the
server while a token in place P 3 indicates server
being down (undergoing repair). Consequently, r-
ing of the EXP transitions t 3 and t 4 correspond
to the failure and end-of-repair events associated
with the server. Whenever down, the server cannot
service new clients or complete the service requirement
of the current request, as shown by the
inhibitor arc from place P 3 to transition t 2 . Clearly
a scheduling policy is then necessary to precisely
dene how the server must proceed when brought
up again. In MSPNs with EXP transitions this was
not a problem because of the memoryless property
of the exponential distributions [3] 1 . The remaining
processing time of an interrupted request is also
represented by the EXP transition t 2 .
In the favorable case, the server is able to completely
service the current request before a failure
occurs (as shown in Fig. 2a). Otherwise the system
behavior depends on the amount of remaining service
at the time of the interruption, and whether
the service already received by the request will be
discarded. The service requirement γ_g may increase or decrease as an indirect consequence of the system events responsible for the server interruption. For
instance, the failure of the server in Fig. 1 may
render certain activities of the client unnecessary,
which would then reduce its service requirement
to a lower
value. Likewise, the age variable
a g related to the active request may also be affected
by the server interruption since the amount
of service already provided to the request may be
1 If the scheduling policy is non-work-conserving and the
service requirement of the client needs to be preserved then
even the EXP transition has to be dealt with like a GEN
transition.
preserved or lost. We distinguish both situations, calling the first a work-conserving scheme, and the second non-work-conserving. With these four conditions we constructed the table in Fig. 2b. Note that, although the service requirement is shown to be increasing after the interruption in the illustration in the bottom row of the table, the situation where the modified requirement is smaller than the original one is also possible 2 .
2 Naturally, the constraint that the modified requirement is not smaller than the age a g already reached at the time of the interruption always needs to be imposed.
Figure 2: Different scheduling policies (work-conserving vs. non-work-conserving, with the service requirement preserved or modified, for the prd, pri, and prs cases).
Fig. 2b can be interpreted from two distinct perspectives. From the clients' perspective, all curves correspond to the same client whose service is momentarily interrupted between the two time instants marked in the figure. From the server's perspective, client requests live only from interruption to interruption. There is a single age variable associated with the server, and what happens after interruptions is defined by the scheduling policy, which may be preemptive or non-preemptive, depending on whether the server swaps clients before finishing service or not. Preemptive policies are usually based on a hierarchical organization of requests (e.g., priority scheduling) or on an allocation of service based on time quotas (e.g., round-robin scheduling). In this case, system behavior is strongly affected by the preemptive policy and the overall performance will depend on the strategy adopted to deal with the preempted requests, as described in the following:
The work done on the request prior to interruption is discarded, so that the amount of work a g is lost. The server starts processing a new request which has a work requirement γ'_g; i.e., a new sample is drawn from the service time distribution of the client. The server then starts serving this new request from the beginning (i.e., a g = 0), as shown in the bottom-right sketch in Fig. 2b.
The server returns back to the preempted request with the original service requirement γ_g. No work is lost, so that the age variable retains its value a g prior to the interruption. The request is resumed from the point of interruption, as shown in the top-left sketch in Fig. 2b.
The server also returns to the same request with the original service requirement γ_g. But the work done prior to the interruption is lost and the age variable a g is set to zero. The request processing starts from the beginning, as shown in the top-right sketch in Fig. 2b.
As in [13], the above policies are referred to as preemptive repeat different (prd), preemptive resume (prs) and preemptive repeat identical (pri), respectively 3 . The case shown in the bottom-left sketch in Fig. 2b is not considered in the literature as it is unrealistic. Note that in [15], the authors referred to the prd and prs type policies as enabling and age type. The pri policy of Petri net transitions was introduced for the first time in [16]. The prd and prs (with phase-type distributed firing times) policies are the only ones considered in the available tools modeling NMSPNs [17, 11, 18, 19].
Note that when the scheduling is preemptive: (i) the prs and prd policies produce the same results with EXP transitions, but pri is different; (ii) the prd and pri policies have the same effect for transitions firing according to a deterministic random variable, but prs is different; and (iii) otherwise, all three policies will produce distinct results for otherwise identical NMSPNs [14].
3 The prd, prs and pri names were borrowed from queueing
theory [14].
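To make the behavioral difference between the three policies concrete, here is a minimal simulation sketch of our own (not taken from the paper): a generally distributed activity is repeatedly preempted, and the completion time is computed under prd, prs, and pri. The "up" and "down" period generators and all parameter values are hypothetical and only serve the illustration.

import random

def completion_time(policy, sample_work, up, down, rng):
    """Total elapsed time until the activity completes under a given preemption policy."""
    work = sample_work(rng)          # gamma_g: sampled work requirement
    age = 0.0                        # a_g: work already performed
    elapsed = 0.0
    while True:
        u = up(rng)                  # length of the next uninterrupted service period
        if age + u >= work:          # service finishes before the next preemption
            return elapsed + (work - age)
        elapsed += u + down(rng)     # preempted: wait for the server to come back
        if policy == "prs":
            age += u                 # work is preserved
        elif policy == "pri":
            age = 0.0                # same requirement, restarted from scratch
        elif policy == "prd":
            age = 0.0
            work = sample_work(rng)  # new requirement resampled from the distribution

rng = random.Random(1)
sample = lambda r: r.uniform(4.0, 6.0)       # generally distributed work requirement
up = lambda r: r.expovariate(1.0 / 3.0)      # exponential up periods
down = lambda r: r.expovariate(1.0)          # exponential repair periods
for p in ("prs", "pri", "prd"):
    mean = sum(completion_time(p, sample, up, down, rng) for _ in range(2000)) / 2000
    print(p, round(mean, 2))

As expected, prs yields the shortest mean completion time since no accumulated work is ever lost.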
In this paper, we deal with the general class of
non-Markovian Petri nets using examples of MR-
SPNs, which can be analyzed by means of Markov
regenerative processes. The remaining sections of
the paper are organized as follows. The next section
introduces Markov Regenerative Petri nets and
describes how to deal with the underlying Markov
Regenerative Process. Section 3 shows how to
model a failure/repair process in a parallel machine
through MRSPN. Section 4 further extends
this model by adopting a different repair facility scheduling scheme. Preemption in a multi-tasking environment is analyzed in Section 5 through the WebSPN tool; the resulting model contains several concurrently enabled general transitions and different memory policies. Conclusions are finally presented in Section 6.
2 Markov Regenerative Petri Nets
MRSPNs allow transitions with zero firing times (immediate transitions), exponentially distributed or generally distributed firing times. The dynamic
behavior of an MRSPN is modeled by the execution
of the underlying net, which is controlled by the position
and movement of tokens. At any given time,
the state of an MRSPN is defined by the number
of tokens in each of its places, and is represented
by a vector called its marking. The set of markings
reachable from a given initial marking (i.e., the initial
state of the system) by means of a sequence of
transition firings defines the reachability set of the Petri net. This set, together with arcs joining its markings and indicating the transitions that cause the state changes, is called the reachability graph.
Two types of markings can be distinguished in
the reachability graph. In a vanishing marking at
least one immediate transition is enabled to fire,
while in a tangible marking no immediate transitions
are enabled. Vanishing markings are eliminated
before analysis of the MRSPN using elementary
probability theory [12]. The resultant
reduced reachability graph is a right-continuous,
piecewise constant, continuous-time stochastic process Z = {Z_t ; t ≥ 0}, where Z_t represents the tangible marking of the MRSPN at time t. Choi, Kulkarni,
and Trivedi [12] showed that this marking process
is a Markov Regenerative Process (MRGP) (if the
GEN transitions are of prd type and at most one
GEN transition is enabled at a time), a member of
a powerful paradigm generally grouped under the
name Markov renewal theory [7, 8]. Mathematical
denition and solution techniques for MRGP are
summarized next.
2.1 Markov Renewal Sequence
Assume a given system we are modeling is described by a stochastic process Z = {Z_t ; t ≥ 0} taking values in a countable set Ω. Suppose we are interested in a single event related with the system (e.g., when all system components fail). Additionally, assume the times between successive occurrences of this type of event are independent and identically distributed (i.i.d.) random variables. Let S_0 = 0 ≤ S_1 ≤ S_2 ≤ ... be the time instants of successive events to occur. The sequence of time instants S = {S_n ; n = 0, 1, 2, ...}, with non-negative i.i.d. increments, is a renewal process [20, 21]. Otherwise, if we do not start observing the system at the exact moment an event has occurred (i.e., S_0 ≠ 0), the stochastic process S is a delayed renewal process.
However, suppose instead of a single event, we observe that certain transitions between identifiable system states X_n, belonging to a subset Ω' of Ω, also resemble the behavior just described when considered in isolation. Successive times S_n at which a fixed state X_n is entered form a (possibly delayed) renewal process 4 . Additionally, when studying the system evolution we observe that at these particular times the stochastic process Z exhibits the Markov property, i.e., at any given moment S_n , n ∈ N, we can forget the past history of the process. The future evolution of the process depends only on the current state at these embedded time points. In this scenario we are dealing with a countable collection of renewal processes progressing simultaneously such that the successive states visited form an embedded discrete-time Markov chain (EMC) with state space Ω'. The superposition of all the identified renewal processes gives the time points known as Markov regeneration epochs (also called Markov renewal moments 5 ), and together with the states of the EMC they define a Markov renewal sequence.
4 We are assuming X_n is the system state at time S_n .
5 Note that these instants S_n are not renewal moments as described in renewal theory, since the distributions of the time intervals between consecutive moments are not necessarily i.i.d.
In mathematical terms, the bivariate stochastic process (X, S) = {(X_n , S_n) ; n ≥ 0} is a Markov renewal sequence (MRS) provided that
Pr{X_{n+1} = j, S_{n+1} − S_n ≤ t | X_n = i, S_n , ..., X_0 , S_0} = Pr{X_{n+1} = j, S_{n+1} − S_n ≤ t | X_n = i}
for all n ∈ N, i, j ∈ Ω', and t ≥ 0. We will always assume time-homogeneous MRSs; that is, the conditional transition probabilities Pr{X_{n+1} = j, S_{n+1} − S_n ≤ t | X_n = i} are independent of n. Therefore, we can always write
K_{ij}(t) = Pr{X_{n+1} = j, S_{n+1} − S_n ≤ t | X_n = i}.
The matrix of transition probabilities K(t) = [K_{ij}(t)] is called the kernel of the MRS.
2.2 Markov Regenerative Processes
A stochastic process {Z_t ; t ≥ 0} is a Markov regenerative process iff it exhibits an embedded MRS (X, S) with the additional property that all conditional finite-dimensional distributions of {Z_{S_n + t} ; t ≥ 0} given {Z_u , 0 ≤ u ≤ S_n , X_n = i} are the same as those of {Z_t ; t ≥ 0} given X_0 = i. As a special case, the definition implies that [8]
Pr{Z_{S_n + t} = j | Z_u , 0 ≤ u ≤ S_n , X_n = i} = Pr{Z_t = j | X_0 = i}.
This means that the MRGP {Z_t ; t ≥ 0} does not have the Markov property in general, but there is a sequence of embedded time points S_0 , S_1 , S_2 , ... such that the states X_0 , X_1 , X_2 , ... of the process at these points satisfy the Markov property. It also implies that the future of the process Z from t = S_n onwards depends on the past {Z_u , 0 ≤ u ≤ S_n} only through X_n .
The stochastic process between consecutive Markov regeneration epochs, usually referred to as the subordinated process, can be any continuous-time discrete-state stochastic process over the same probability space. Recently published examples considered subordinated homogeneous CTMCs [12, 22], non-homogeneous CTMCs [23], semi-Markov processes (SMPs) [24], and MRGPs [25].
2.3 Solution of Problems
Let Z = {Z_t ; t ≥ 0} be a stochastic process with discrete state space Ω and an embedded MRS (X, S) with kernel K(t). For such a process we can define a matrix of conditional transition probabilities V(t) = [V_{ij}(t)] as
V_{ij}(t) = Pr{Z_t = j | X_0 = i},   i ∈ Ω', j ∈ Ω.
In many problems involving Markov renewal processes, our primary concern is finding ways to effectively compute V(t), since several measures of interest (e.g., reliability and availability) are related to the conditional transition probabilities of the stochastic process.
At any instant t, the conditional transition probabilities of Z can be written as [7, 8]
V_{ij}(t) = E_{ij}(t) + Σ_{k ∈ Ω'} ∫_0^t dK_{ik}(u) V_{kj}(t − u),
for all i ∈ Ω', j ∈ Ω, and t ≥ 0, where E_{ij}(t) = Pr{Z_t = j, S_1 > t | X_0 = i}. If we construct the matrices V(t) = [V_{ij}(t)] and E(t) = [E_{ij}(t)], then the set of integral equations V_{ij}(t) defines a Markov renewal equation, and can be expressed in matrix form as
V(t) = E(t) + ∫_0^t dK(u) V(t − u),    (1)
where the Lebesgue-Stieltjes integral 6 is taken term by term.
6 When K(u) admits a density function dK(u)/du, the integral ∫_0^t dK(u) V(t − u) can be evaluated as ∫_0^t [dK(u)/du] V(t − u) du.
To better distinguish the roles of matrices E(t) and K(t) in the description of the MRGP we call
the matrix E(t) as the local kernel of the MRGP,
since it describes the state probabilities of the subordinated
process during the interval between successive
Markov regeneration epochs. Since matrix
K(t) describes the evolution of the process from
the Markov regeneration epoch perspective, without
describing what happens in between these moments
we call it the global kernel of the MRGP.
In the special case when the stochastic process Z does not experience state transitions between successive Markov regeneration epochs (i.e., Z_t = X_n for S_n ≤ t < S_{n+1}), Z is called a semi-Markov process and E(t) is a diagonal matrix with elements E_{ii}(t) = 1 − H_i(t), where H_i(t) is the sojourn time distribution in state i. Hence,
the global kernel matrix alone (which in this case is
usually denoted as Q(t)) completely describes the
stochastic behavior of the SMP.
The Markov renewal equation represents a set
of coupled Volterra integral equations of the second
kind [26] and can be solved in time-domain
or in Laplace-Stieltjes domain. One possible time
domain solution is based on a discretization approach
to numerically evaluate the integrals presented
in the Markov renewal equation. The integrals
in Eqn. 1 are solved using some approximation
rule such as trapezoidal rule, Simpson's rule
or other higher order quadrature methods. Another
time domain alternative is to construct a system
of partial differential equations (PDEs), using
the method of supplementary variables [4]. This
method has been considered for steady-state analysis
of DSPNs in [22] and subsequently extended to
the transient case in [27].
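A rough sketch of the discretization approach just described is given below. It is our own illustration, not the paper's implementation: the function name and the backward-rectangle rule for the Lebesgue-Stieltjes convolution are choices we make here, and K(t) and E(t) are assumed to be supplied as callables returning NumPy matrices.

import numpy as np

def solve_mre(K, E, t_max, h):
    """Solve V(t) = E(t) + int_0^t dK(u) V(t-u) on a uniform grid of step h.
    K(t): global kernel (r x r), E(t): local kernel (r x s), both as np.ndarray."""
    n_steps = int(round(t_max / h))
    times = [m * h for m in range(n_steps + 1)]
    dK = [K(times[m]) - K(times[m - 1]) for m in range(1, n_steps + 1)]  # kernel increments
    V = [E(times[0])]                                                    # V(0) = E(0)
    for n in range(1, n_steps + 1):
        conv = sum(dK[m - 1] @ V[n - m] for m in range(1, n + 1))        # discretized convolution
        V.append(E(times[n]) + conv)
    return np.array(times), V

Because the increment matrices dK are taken as differences of the kernel at grid points, jumps of K(t) (e.g., those produced by deterministic firing times) are captured automatically, at the price of an O(n^2) convolution cost.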
An alternative to the direct solution of the
Markov renewal equation in time-domain is the
use of transform methods. In particular, if we define the Laplace-Stieltjes transforms K~(s) = ∫_0^∞ e^{−st} dK(t), E~(s) = ∫_0^∞ e^{−st} dE(t), and V~(s) = ∫_0^∞ e^{−st} dV(t), the Markov renewal equation becomes
V~(s) = E~(s) + K~(s) V~(s),   i.e.,   V~(s) = [I − K~(s)]^{−1} E~(s).
After solving the linear system for V~(s), transform inversion is required 7 . In very simple cases, a closed-form inversion might be possible but in most cases of interest, numerical inversion will be necessary. The transform inversion however can encounter numerical difficulties, especially if V~(s) has poles in the positive half of the complex plane.
7 This being the approach adopted in the solution of all examples presented in this paper.
For a thorough discussion of Markov renewal
equation solution techniques see [28, 29], and for
generic Volterra integral equations numerical methods
see [30, 31]. References for the application
of Markov renewal theory in the solution of
performance and reliability/availability models see
[16, 32, 23, 28, 33, 34, 35, 36, 37].
3 Modeling Failure/Repair Activities in a Parallel Machine Configuration
The use and analysis of MRSPNs is initially demonstrated
using a computer system performability
model. Two machines (a and b) are working in a parallel configuration sharing a single repair facility with a First-Come First-Served scheduling discipline. Due to the non-preemptive nature of this discipline, we do not need age variables in this case (once enabled, all GEN transitions in the model will never be disabled until firing). We assume that both machines have exponential lifetime distributions with constant parameters λ_a and λ_b respectively. Whenever one of the machines fails it immediately requests repair. When the single repair facility is busy and a second failure occurs, the second machine to fail waits in a repair queue until the first machine is put back into service. The repair time of the machines is defined by the general distribution functions G a (t) and G b (t).
The overall behavior of the system can be understood
from the MRSPN illustrated in Fig. 3a.
Machine a is working whenever there is a token in
place P 1 . The EXP transition f a with rate a represents
the failure of machine a. When machine a
fails, a token is deposited in place P 6 and its repair
is requested. If the repair facility is available (i.e., there is a token in place P 5 ), it is appropriated with the firing of the immediate transition i a . The GEN
Figure 3: Parallel system model: a) MRSPN; b) reachability graph; and c) state transition diagram.
transition r a , ring according to the distribution
function G a (t), represents the random duration of
repair. A token in place P 3 means that machine a
is queued waiting for the availability of the single
repair facility while machine b is undergoing repair
(there is a token in place P 7 ). A symmetrical set
of places and transitions describes the behavior of
machine b. The system is down whenever there are
no tokens in both the places P 1 and P 2 .
The reachability graph corresponding to the
Petri net is shown in Fig. 3b. Each marking in
the graph is a 7-tuple keeping track of the number
of tokens in places P 1 through P 7 . In the
graph, solid arcs represent state changes due to
the ring of immediate transitions or EXP tran-
sitions, while dotted arcs denote the ring of GEN
transitions. The vanishing markings (enclosed by
dashed ellipses in the diagram) are eliminated when
the reduced reachability graph is constructed (not
shown), and based on the reduced version we constructed
the state transition diagram of Fig. 3c.
Define the stochastic process Z = {Z_t ; t ≥ 0} to represent the system state at any instant, where
Z_t = 1 if both machines are working at t;
Z_t = 2 if machine a is under repair while machine b is working at t;
Z_t = 3 if machine b is under repair while machine a is working at t;
Z_t = 4 if machine a is under repair while machine b is waiting for repair at t;
Z_t = 5 if machine b is under repair while machine a is waiting for repair at t.
Note that possible values of Z_t are the labels of the corresponding tangible markings in Fig. 3b. We are interested in computing performability measures associated with the system. To do so, we need to determine the conditional probabilities Pr{Z_t = j | Z_0 = i} for i, j ∈ {1, ..., 5}. Analysis of the resultant (reduced) reachability graph shows that Z is an MRGP with an EMC defined by the states 1, 2, and 3; i.e., Ω' = {1, 2, 3}. We can observe that transitions to states 4 and 5 do not correspond to Markov renewal epochs because they occur while GEN transitions are enabled. An additional step adopted before starting the synthesis of the kernel matrices was the construction of a simplified state transition diagram. Fig. 3c shows a simplified version of the reduced reachability graph where the markings were replaced by the corresponding state indices. We preserved the convention for the arcs and extended the notation by representing states of the EMC by circles, and other states by squares.
The construction of the kernel matrices can proceed with the analysis of possible state transitions. The only non-zero elements in the global kernel matrix K(t) correspond to the possible single-step transitions between states of the EMC. Consequently, we have the following structure of the matrix (identified directly from Fig. 3c):
K(t) =
[ 0            K_{1,2}(t)   K_{1,3}(t) ]
[ K_{2,1}(t)   0            K_{2,3}(t) ]
[ K_{3,1}(t)   K_{3,2}(t)   0          ]
Let the random variables L_a and L_b be the respective times-to-failure of the two machines; we can determine K_{1,2}(t) in the following way:
K_{1,2}(t) = Pr{L_a ≤ t and machine a is the first one to fail}
           = ∫_0^t Pr{L_b > τ} λ_a e^{−λ_a τ} dτ
           = ∫_0^t e^{−λ_b τ} λ_a e^{−λ_a τ} dτ.
Similarly,
K_{1,3}(t) = Pr{L_b ≤ t and machine b is the first one to fail} = ∫_0^t e^{−λ_a τ} λ_b e^{−λ_b τ} dτ.
Determination of the elements K_{2,1}(t) and K_{2,3}(t) is quite alike, so we only show how K_{2,1}(t) is determined. The third row is completely symmetrical to the second, so it can be easily understood once K_{2,1}(t) is understood. We need some auxiliary variables to help in the explanation of the constructive process of K_{2,1}(t). Hence, we define the random variables R_a and R_b to respectively represent the times necessary to repair machines a and b. The distribution function of R_a (R_b) is G_a (G_b). Using these new variables we can compute K_{2,1}(t):
K_{2,1}(t) = Pr{repair of a is finished by time t and b has not failed during the repair of a}
           = ∫_0^t Pr{L_b > τ} dG_a(τ)
           = ∫_0^t e^{−λ_b τ} dG_a(τ).
To summarize, the elements of the global kernel matrix are:
K_{2,1}(t) = ∫_0^t e^{−λ_b τ} dG_a(τ),
K_{2,3}(t) = ∫_0^t (1 − e^{−λ_b τ}) dG_a(τ),
K_{3,1}(t) = ∫_0^t e^{−λ_a τ} dG_b(τ), and
K_{3,2}(t) = ∫_0^t (1 − e^{−λ_a τ}) dG_b(τ).
Note that the global kernel will always be a square matrix, in this case with dimensions 3 × 3, since we have 3 states in the embedded Markov chain. However, the local kernel matrix is not necessarily a square matrix, since the cardinality of the state space of Z can be larger than the cardinality of the state space of the embedded Markov chain. This can be seen, for instance, in this system, since the embedded Markov chain has only 3 states while the MRGP has 5 possible states.
We construct the local kernel matrix E(t) following a similar inductive procedure. In this case we are looking for the probability that the MRGP will move to a given state before the next Markov renewal moment. Careful analysis of Fig. 3c reveals the structure of the local kernel matrix E(t):
E(t) =
[ E_{1,1}(t)   0            0            0            0           ]
[ 0            E_{2,2}(t)   0            E_{2,4}(t)   0           ]
[ 0            0            E_{3,3}(t)   0            E_{3,5}(t)  ]
Since in a single step the system can only go from state 1 to the other two states of the EMC, E_{1,1}(t) should be the complementary sojourn time distribution function in state 1, that is,
E_{1,1}(t) = e^{−(λ_a + λ_b) t}.
The difficulty comes with the induction of E_{2,2}(t) and E_{2,4}(t) (the complement of E_{2,2}(t)). Once we solve for these, we have the solution for the remaining components of the matrix due to the symmetry of the problem. Therefore, we explain the induction process that leads to E_{2,2}(t):
E_{2,2}(t) = Pr{repair of a is not finished up to t and b has not failed until t}
           = Pr{repair of a is not finished up to t} Pr{b has not failed until t}
           = G_a^c(t) e^{−λ_b t}.
We can now express the remaining non-zero elements of the local kernel matrix as
E_{2,4}(t) = G_a^c(t) (1 − e^{−λ_b t}),   E_{3,3}(t) = G_b^c(t) e^{−λ_a t},   E_{3,5}(t) = G_b^c(t) (1 − e^{−λ_a t}),
with G_a^c(t) = 1 − G_a(t) and G_b^c(t) = 1 − G_b(t). We can always verify our answers by summing the elements in each row of both kernel matrices. Corresponding row-sums of the two matrices must add to unity, a condition that is easily verified to hold in the example.
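The following short Python sketch (ours, not the authors' code) assembles the kernels derived above for the case of identical machines with deterministic repair, and solves Eq. (1) by the discretization scheme sketched in Section 2.3 to obtain the instantaneous availability. The failure rate, repair time, grid step, and horizon are hypothetical values chosen only for illustration.

import numpy as np

lam, d, h, t_max = 0.001, 5.0, 0.25, 50.0        # assumed parameters (per hour / hours)
G  = lambda t: 1.0 if t >= d else 0.0            # deterministic repair CDF, U(t - d)
Gc = lambda t: 1.0 - G(t)

def K(t):   # global kernel over the EMC states {1, 2, 3}
    k12 = k13 = 0.5 * (1.0 - np.exp(-2.0 * lam * t))
    k21 = k31 = np.exp(-lam * d) * G(t)          # repair done, other machine still up
    k23 = k32 = (1.0 - np.exp(-lam * d)) * G(t)  # other machine failed during the repair
    return np.array([[0, k12, k13], [k21, 0, k23], [k31, k32, 0]])

def E(t):   # local kernel over the MRGP states {1, ..., 5}
    e11 = np.exp(-2.0 * lam * t)
    e22 = e33 = Gc(t) * np.exp(-lam * t)
    e24 = e35 = Gc(t) * (1.0 - np.exp(-lam * t))
    return np.array([[e11, 0, 0, 0, 0], [0, e22, 0, e24, 0], [0, 0, e33, 0, e35]])

n = int(round(t_max / h))
dK = [K(m * h) - K((m - 1) * h) for m in range(1, n + 1)]
V = [E(0.0)]
for m in range(1, n + 1):
    V.append(E(m * h) + sum(dK[j - 1] @ V[m - j] for j in range(1, m + 1)))
avail = [v[0, 0] + v[0, 1] + v[0, 2] for v in V]  # states 1-3 are operational
print(avail[-1])                                  # instantaneous availability at t_max

The row-sum property mentioned above is a convenient sanity check here: for every t, each row of K(t) plus the corresponding row of E(t) sums to one.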
Figure 4: Numerical results for the parallel system with non-preemptive repair (instantaneous and interval availability, and interval power, versus time in hours).
The kernel matrices determined can then be substituted in Equation (1) and the resultant system of coupled integral equations solved using one of the approaches described in [28, 29]. The resultant plots, labelled LST in Fig. 4, report system availability and performability computed when the time to repair is deterministic, i.e., G_a(t) = G_b(t) = U(t − 5), where U(·) is the unit step function; the failure rates (parameters λ_a and λ_b) are identical and the repair takes 5 hours. The interval availability is the expected proportion of time the system is operational during the period [0, t]:
A_I(t) = (1/t) ∫_0^t E[X(τ)] dτ,
where the discrete random variable X(τ) represents the operational status of the system; i.e., X(τ) = 1 if the system is operational at time τ, and 0 if it is not. The performability measure plotted in the figure corresponds to the interval processing capacity of the system, with the convention that a unit of computing capacity corresponds to that of one active machine.
Following the approach used in [34], we also plotted
corresponding Markovian system results, where
each DET transition was replaced by an equivalent
25-stage Erlang subnet. The Markovian models
were solved using the Stochastic Petri Net Package
(SPNP) introduced in [38].
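The Erlang substitution used for the Markovian comparison curves rests on a standard fact: a k-stage Erlang with per-stage rate k/d has mean d and variance d²/k, so for large k it approaches a deterministic delay of d. The small sketch below (ours, with an assumed repair time of 5 hours) shows how the Erlang CDF sharpens around d as the number of stages grows.

import math

def erlang_cdf(t, k, rate):
    """P{Erlang(k, rate) <= t} = 1 - sum_{i<k} exp(-rate*t) (rate*t)^i / i!"""
    if t <= 0.0:
        return 0.0
    s, term = 0.0, 1.0
    for i in range(k):
        if i > 0:
            term *= rate * t / i
        s += term
    return 1.0 - math.exp(-rate * t) * s

d = 5.0                                    # deterministic repair time (hours), assumed
for k in (1, 5, 25, 100):
    cdf = lambda t: erlang_cdf(t, k, k / d)
    print(k, round(cdf(4.0), 3), round(cdf(5.0), 3), round(cdf(6.0), 3))

With k = 25, as in the paper's experiment, the approximating distribution already places most of its mass close to 5 hours, which is why the PH results in Fig. 4 track the LST results closely.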
4 Preemptive LCFS repair
Fig. 5 shows the PN which describes the behavior of the system containing the same machines a and b of the previous example and applies the preemptive LCFS scheduling scheme. The repair of machine a (b), represented by a token at P 6 (P 7 ), is preempted as soon as machine b (a) fails, i.e., transition f b (f a ) fires. In this case the repair facility is assigned to the machine which failed later (i' a or i' b fires and a token is placed in P 8 or P 9 ). After the repair of the last failed machine (firing of r' a or r' b ) the repair facility returns to the completion of the preempted repair action. Different memory policies can be considered depending on whether the repairman is able to "remember" the work already performed on the machine before preemption or not. In the case
Figure 5: Preemptive LCFS repair with non-identical machines
In the case that the prior work is lost due to the interruption, the repair must be repeated from scratch either with an identical repair time requirement (pri policy) or with a repair time resampled from the original cumulative distribution function (prd policy). In the case that the prior work is not lost, the time to complete the preempted repair equals the residual repair time given the portion of work already completed before preemption (prs policy). The PN on Fig. 5 captures the different memory policies for repair by assigning transitions r_a and r_b the appropriate preemption policies. (The preemption policies of transitions r'_a and r'_b are not relevant since r'_a and r'_b cannot be preempted.)
We analyze a simplified version of the two-machine
system with preemptive LCFS repair and
with prs policy. We assume that the two machines
are statistically identical, i.e., their failure and repair
time distributions are the same. Fig. 6a shows
a PN which describes the behavior of the system of
two identical machines with LCFS scheduling.
Figure 6: Preemptive LCFS repair with identical machines.
Tokens in place P_2 count the failed machines (including the one under repair), and a token in place P_4 represents the availability of the single repair facility. In the initial marking M_1, t_1 is the only enabled transition. Firing of t_1 represents the failure of the first machine and leads to state M_2, in which transitions t_2 and t_3 are competing. The GEN transition t_2 represents the repair of the failed machine and its firing returns the system to the initial state M_1. The EXP transition t_3 represents the failure of the second machine; its firing disables t_2 by removing one token from P_3 (the first repair becomes dormant). In M_3, the machine that failed last
is under repair and the other repair is dormant,
and the only enabled transition is the repair of the
last failed machine. Firing of the GEN transition t 4
leads the system again to M 2 , where the dormant
repair is resumed. Assume that the failure times of
both machines are exponentially distributed with
parameter λ, so that the EXP transitions t_1 and t_3 have firing rates 2λ and λ, respectively.
The preemptive policy of transition t 2 has to be
assigned based on the system behavior to be evaluated. (The preemptive policy of transition t_4 is irrelevant since t_4 cannot be preempted.) Assigning a prd policy to t_2 means that each time t_2 is disabled by the failure of the second machine (t_3 fires before t_2), the corresponding age variable a_2 is reset. As soon as t_2 becomes enabled again (the second repair completes and t_4 fires) no memory is
kept of the prior repair period, and the execution
of the repair restarts from scratch. The prd service
policies, like this one, are covered by the model definition
in [39, 40].
The case when a pri policy is assigned to t 2 is
very similar to the previous one except that as soon
as t 2 becomes reenabled (the second repair completes
and t_4 fires), the same repair (same firing time sample) has to be completed from the beginning. This type of pri memory policy is covered by the model definition in [16], and can be analyzed
by the transform domain method discussed there.
Hereafter we assume that a prs policy is assigned
to t 2 . When a prs policy is assigned to t 2 , each time
t_2 is disabled without firing (t_3 fires before t_2) the age variable a_2 is not reset. Hence, as the second repair completes (t_4 fires), the system returns to M_2 keeping the value of a_2, so that the time to
complete the interrupted repair can be evaluated
as the original repair requirement minus the current
value of a 2 . The age variable a 2 counts the
total time during which t_2 is enabled before firing, and is equal to the cumulative sojourn time in M_2. The Markov renewal moments in the marking
process correspond to the epochs of entrance
to markings in which the age variables associated
with all the transitions are equal to zero. By inspecting
Fig. 6b, the Markov renewal moments are
the epochs of entering M_1 and of entering M_2 from M_1.
The subordinated process starting from marking
M_1 is a single-step CTMC (since t_1 is the only enabled
EXP transition) and includes the only immediately
reachable state M 2 (Markovian regeneration
period).
The subordinated process starting from marking M_2 includes all the states reachable from M_2 before the firing of t_2. Since M_2 is the only state in which t_2 is enabled, the age variable a_2 increases only in marking M_2 and maintains its value in M_3. The firing of t_2 can only occur from M_2, leading to marking M_1.
Notice that the subordinated process starting
from M_2 is semi-Markov since the firing time of t_4 is generally distributed. The age variable a_2 grows whenever the MRSPN is in marking M_2, and the firing of t_2 occurs when a_2 reaches the actual value of the firing time (which is generally distributed with cumulative distribution function G(t)). If we condition on the firing time of t_2 being equal to w, then w acts as an absorbing barrier for the accumulation functional represented by the age variable a_2, and the firing time of t_2 is determined by the first passage time of a_2 across the absorbing barrier w.
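Under the prs policy, the time at which a_2 reaches a fixed barrier w equals w plus the total length of the repair excursions (firings of t_4) that interrupt it, i.e., a compound-Poisson amount of paused time. The sketch below checks by simulation that the LST of this first passage time matches the standard compound-Poisson result e^{-w(s + λ - λ G~(s))}; the deterministic 5-hour repair and the numeric values of λ, w and s are placeholders chosen only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

lam    = 0.01    # rate of the competing EXP transition t_3 (placeholder value)
w      = 7.0     # fixed firing-time sample (barrier) for the GEN transition t_2
s      = 0.2     # Laplace variable at which the LST is checked
repair = 5.0     # deterministic firing time of t_4, so G~(s) = exp(-5 s)

def first_passage(w):
    """Time for the age variable to reach w: w units in M_2 plus the interrupting repairs."""
    n_interruptions = rng.poisson(lam * w)      # failures of the second machine while ageing
    return w + n_interruptions * repair         # each interruption adds one repair of t_4

samples = np.array([first_passage(w) for _ in range(200_000)])
mc_lst = np.mean(np.exp(-s * samples))

g_tilde = np.exp(-s * repair)
closed_form = np.exp(-w * (s + lam - lam * g_tilde))
print("Monte Carlo LST :", mc_lst)
print("closed form     :", closed_form)
```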
The closed form Laplace-Stieltjes transform expressions
of the kernel matrices of the LCFS repair
prs case are derived here in detail, applying
the technique based on the Markov renewal theory.
We build up the kernel matrices row by row by considering separately all the states that can be regeneration states and can originate a subordinated process. M_3 can never be a regeneration state since t_2 is always active when entering M_3. The fact that M_3 is not a regeneration marking means that the process can stay
in M 3 only between two successive Markov renewal
moments.
The starting regeneration state is M_1 (Markovian regeneration period). No general transition is enabled and the next regeneration state can only be state M_2. The non-zero elements of the first row of the kernel matrices are K_{12}(t) = 1 - e^{-2λt} and E_{11}(t) = e^{-2λt}.
The starting regeneration state is M_2. Transition t_2 is GEN, so that the next regeneration time point is the epoch of firing of t_2. The subordinated process starting from M_2 comprises states M_2 and M_3 and is an SMP (since t_4 is GEN) whose kernel, in transform domain, has non-zero entries λ/(s + λ) (from M_2 to M_3) and G~(s) (from M_3 to M_2), where G~(s) is the LST of the distribution function of the firing time of t_4.
The transition t_2 fires when the age variable a_2 reaches the actual sample of the firing time of t_2. In general, when a GEN transition is active the occurrence
of a Markov renewal epoch in the marking process
of an NMSPN is due to one of the following two
reasons:
- the GEN transition fires, or
- the GEN transition of prd type becomes disabled.
For the analysis of subordinated processes of this
kind, three matrix functions F^i(t; w), D^i(t; w) and P^i(t; w) (t denotes the time, w a fixed firing time sample, and the superscript i refers to the initial (regeneration) state of the subordinated process) were introduced in [24]. F^i(t; w) refers to the case when the next regeneration moment is due to the firing of the GEN transition with the (fixed) firing time sample w. For the analysis of this case an additional matrix (Δ^i, referred to as the branching probability matrix) is introduced as well, to describe the state transition subsequent to the firing of the GEN transition. D^i(t; w) captures the case when the next regeneration moment is caused by the disabling of the prd type GEN transition. Finally, P^i(t; w) describes the state transition probabilities inside the regeneration period.
Since transition t_2 is of prs type, the matrix function D^2(t; w) does not play a role in the analysis of the subordinated process starting from marking M_2. The remaining functions can be evaluated, in double transform domain, from the kernel of the subordinated SMP; here s is the time variable and v is the barrier level variable in the transform domain, r_k is the indicator that the active GEN transition is enabled in state k, R^i is the part of the state space reachable during the subordinated process, and the superscript ~ (*) refers to the Laplace-Stieltjes (Laplace) transform.
Given that G_g(t) is the distribution function of the firing time of the GEN transition, the elements of the i-th row of matrices K(t) and E(t) can be expressed, as functions of the matrices above, by unconditioning with respect to dG_g(w):
[K(t)]_{i·} = ∫_0^∞ [F^i(t, w) Δ^i + D^i(t, w)] dG_g(w),   [E(t)]_{i·} = ∫_0^∞ P^i(t, w) dG_g(w).
To evaluate the 2nd row of the kernel matrices we apply these results to the subordinated process starting from regeneration state M_2. Doing so we obtain the following expressions for the non-zero matrix entries:
F~*_{22}(s, v) = 1 / (v + s + λ - λ G~(s)),   P~*_{22}(s, v) = s / [v (v + s + λ - λ G~(s))].
Unconditioning with respect to the firing time distribution of t_2, and after inverting the Laplace transform (LT) with respect to v, the non-zero entries of the 2nd row of the LST matrix functions become
K~_{21}(s) = ∫_0^∞ e^{-w(s + λ - λ G~(s))} dG(w),   E~_{22}(s) = s [1 - K~_{21}(s)] / (s + λ - λ G~(s)).
The LSTs of the state probabilities are obtained
by solving the Markov renewal equation in transform
domain. The time domain probabilities are
calculated by numerically inverting the result by
resorting to the Jagerman method [41].
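The time-domain probabilities could equally be recovered with any standard numerical inversion scheme. As a self-contained illustration (not the Jagerman method of [41]), the sketch below implements the Gaver-Stehfest algorithm and tests it on a transform with a known inverse; an LST F~(s) can be handled by applying the same routine to F~(s)/s, which is the ordinary Laplace transform of the corresponding distribution function.

```python
from math import exp, factorial, log

def stehfest_coefficients(n):
    """Gaver-Stehfest weights V_i for an even number of terms n."""
    half = n // 2
    V = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half * factorial(2 * k)
                  / (factorial(half - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        V.append((-1) ** (i + half) * s)
    return V

def invert_laplace(F, t, n=14):
    """Approximate f(t) from its Laplace transform F(s) by Gaver-Stehfest."""
    V = stehfest_coefficients(n)
    ln2_t = log(2.0) / t
    return ln2_t * sum(V[i - 1] * F(i * ln2_t) for i in range(1, n + 1))

# Test on F(s) = 1/(s + 1), whose inverse is exp(-t).
F = lambda s: 1.0 / (s + 1.0)
for t in [0.5, 1.0, 2.0, 5.0]:
    print(f"t={t:3.1f}  numerical={invert_laplace(F, t):.6f}  exact={exp(-t):.6f}")
```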
To evaluate the performance of the different
scheduling schemes, we compared the availability
and processing power of the FCFS and the LCFS
repair schemes with two different repair time distributions. The FCFS scheme was evaluated by
the time domain method introduced in the previous
section and the LCFS scheme was evaluated
by the above transform domain method. It is assumed
that the system is available when at least
one machine is working (marking M 1 and M 2 ) and
that the system performance doubles when both
machines are working. The failure times of both
machines are exponentially distributed with rates λ_a = λ_b = 0.01. The repair times of both machines are assumed to be either deterministic, G(t) = U(t - 5), or hyperexponentially distributed (a two-phase hyperexponential). The mean repair time is 5 in both cases. Fig. 7a
and 7b show the instantaneous and the interval
measures of availability and processing power with
deterministic repair time, respectively. The dotted
line shows the instantaneous and the short
dashed line shows the interval availability/power
with LCFS repair, while the long dashed line shows
the instantaneous and the solid line shows the interval
availability/power with FCFS repair. It can be
observed that the FCFS scheduling performs better
in this case. The availability and processing power
results for the hyperexponential repair time distribution
are plotted on Fig. 7c and 7d, respectively.
In these figures the dotted line shows the instantaneous availability/power with LCFS repair, while the dashed line shows the instantaneous availability/performability with FCFS repair. As can be seen from these figures, in contrast with the deterministic repair time, the LCFS scheduling performs better with the hyperexponential repair time distribution.
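The branch parameters of a two-phase hyperexponential are not determined by the mean alone; a common choice when reproducing such a comparison is the balanced-means fit matched to the mean and an assumed squared coefficient of variation. The sketch below fits such a distribution to a mean repair time of 5; the value c2 = 4 is purely an assumed example, not the paper's parameterization.

```python
from math import sqrt, exp
import random

def fit_h2_balanced(mean, c2):
    """Balanced-means two-phase hyperexponential with given mean and SCV c2 (> 1)."""
    p = 0.5 * (1.0 + sqrt((c2 - 1.0) / (c2 + 1.0)))
    mu1 = 2.0 * p / mean
    mu2 = 2.0 * (1.0 - p) / mean
    return p, mu1, mu2

def h2_cdf(t, p, mu1, mu2):
    return p * (1.0 - exp(-mu1 * t)) + (1.0 - p) * (1.0 - exp(-mu2 * t))

def h2_sample(p, mu1, mu2, rng=random):
    rate = mu1 if rng.random() < p else mu2
    return rng.expovariate(rate)

p, mu1, mu2 = fit_h2_balanced(mean=5.0, c2=4.0)   # c2 = 4 is an assumed example value
print(f"p = {p:.4f}, mu1 = {mu1:.4f}, mu2 = {mu2:.4f}")
print("fitted mean :", p / mu1 + (1 - p) / mu2)    # should be 5
print("G(5)        :", h2_cdf(5.0, p, mu1, mu2))
```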
5 Modeling preemption in a multi-tasking environment
NMSPNs require complex solution techniques mainly based on the theory of Markov regenerative processes. Software packages are then required which
can hide solution and implementation details. A
big boost in this direction came from two well-known
tools, DSPNexpress [42] and TimeNET [43,
44]. Recently, a new software package for non-Markovian
Petri nets has been developed in a joint
effort between the Universities of Catania and Budapest. This tool, named WebSPN [45], provides a discrete time approximation of the stochastic behaviour of the marking process, which makes it possible to analyze a wider class
of PN models with prd, prs and pri concurrently
enabled generally distributed transitions. The approximation
of the continuous time model at equispaced
discrete time points involves the analysis
of the system behavior over a time interval based
on the system state at the beginning of the interval
and the past history of the system. A Web-
centered view has been adopted in its development
in order to make it easily accessible from
any node connected with the Internet as long as
it possesses a Java-enabled Web browser. Sophisticated
security mechanisms have also been implemented
to regulate access to the tool, based on the use of public and private electronic
keys. WebSPN is available at the following site:
http://sun195.iit.unict.it/webspn/webspn2/
5.1 Model description
In this section we describe and solve a Petri net model with several concurrently enabled GEN transitions and different memory policies. The system
moves between an operative phase, where useful
work is produced, and a phase of maintenance
where the processing is temporarily interrupted.
The Petri net shown in Fig. 8 represents the
model of the system that consists of three functional
blocks generically referred to as Block1,
Block2 and Block3. Block1 models the alternation
of the system between the operative phase and the
maintenance phase. Block2 models the two sequential
phases of processing of jobs. Finally, Block3
models the alternation of the system during the operative
phase between the phase of pre-processing
and the one of processing of jobs.
Within Block1, the two states of operation where
the system can be are represented by places user
and system and by transitions U time and S time.
A token in place user denotes the operative state,
while a token in place system denotes the maintenance
one. The duration of the operative phase
is denoted by transition U time, while the maintenance
one is denoted by transition S time. The inhibitor
arcs outgoing from place system and leading
to the timed and immediate transitions contained
in Block2 and Block3 (producer, cons1, busy prod, idle prod, busy2, idle2) are used for interrupting the activity of the system during the phase of maintenance.
Block2 models the processing of jobs. In partic-
ular, the number of jobs to be processed is denoted
by the number of tokens contained in place work,
while the time of pre-processing of each job is represented
by transition producer. Pre-processed jobs
are queued in a buffer (place bu1) waiting for the
second phase of processing (transition cons1).
In Block3, the alternation between the phases
of pre-processing and processing of jobs is represented
through places slot1 and slot2 and transitions
busy prod, busy2, idle prod, idle2. A token in
place slot1 denotes that the system is executing the
pre-processing of a job, while a token in place slot2
denotes the execution of a phase of processing. An
inhibitor arc between slot1 and cons1 deactivates
the phase of processing when the pre-processing
one is active. In the same way, the inhibitor arc
between slot2 and producer deactivates the phase
of pre-processing when the processing one is ac-
tive. The time that the system alternately spends
for these two activities is represented by transitions
busy prod and busy2. The immediate transition
idle prod (idle2) prevents the system from remaining in phase 1 (2) when no job is to be processed. The
function of the inhibitor arcs from place work to
transition idle prod and from place bu to transition
idle2 is to enable such transitions when no job
is to be processed in the corresponding phase of
processing.
Immediate transition end and place Stop are used
for modeling the processing of all the jobs assigned
to the system at the beginning. In fact, transition
end is inhibited as long as at least one token is present in places work and bu. When all the jobs have been processed, transition end fires, and immediately
moves a token to place Stop. All the activities
of the system are thus interrupted through the
inhibitor arcs outgoing from place Stop.
The measure that we evaluate from this model is
the distribution of the time required for completing
the set of jobs assigned to the system at the
beginning. It can be obtained as the probability, as a function of time, of having a token in place Stop.
With regard to the distributions of the firing times to be assigned to timed transitions, we assume that the firing times of transitions U time, S time, busy prod, busy2 are deterministic. We assume that the firing times of transitions producer and cons1 are respectively distributed uniformly
and exponentially. The measures considered
can therefore be evaluated by changing the memory
policy associated with transitions producer and
cons1.
In the case of prd policy, the temporary interruption
of the processing of a job (either because
the whole system enters the phase of maintenance,
or because, even if the system is in the production
phase, it interrupts the pre-processing phase
for changing to the processing one or vice versa)
causes the interrupted job to be discarded. A new
job is executed when the system is available again.
The correspondence with a real system is perhaps
hard to find; however, we note that prd policy is
the most commonly used one in the literature.
Conversely, by adopting prs policy, we keep a
memory of the work that we were executing. In
this case, when transition producer is disabled, we
keep a memory of the work that has already been
executed on the job considered. When the system
enters the operative state again, the pre-processing
of the job continues from the point we had reached.
In this case, the model can represent a system of
manufacturing, where a machine used for production
alternates cycles of production and cycles of
maintenance, and production takes place in two sequential
phases. We note that prd and prs policies
are equivalent for transition cons1, since this one is
an EXP transition.
With pri policy, when transition producer is dis-
abled, the work that had already been produced
is lost, but we keep a memory of the job that we
were processing. When the transition is enabled
again we start from zero, but the amount of work
to be produced on the job remains the same, because
the job has not been changed. Such a behavior
can be easily noted when accessing transactional
databases, where each transaction is atomic
(i.e., has to be processed with no interruption). If
an interruption occurs, the transaction is entirely
processed again. If we assume a memory policy like
prs for transition cons1, the model could represent a
client/server system where the accesses to the database
(transition producer) take place atomically, and the
phase of processing of the query (transition cons1)
requires a variable time, distributed exponentially.
5.2 Numerical Results
For the solution of the model we assume that the
firing time of transition producer is distributed uniformly between 0.5 and 1.5; the firing times of transitions U time and S time are deterministic, with a firing time of 1; the firing times of transitions busy prod and busy2 are deterministic, with a firing time of 0.1; the firing time of transition cons1 is distributed exponentially; transition end is immediate and has a priority
of 2; transitions idle prod and idle2 are immediate
and have a priority of 1; the total number of jobs
to be processed is 3.
In Fig. 9 we show the distribution of completion
time for different memory policies assigned to
transitions producer and cons1. The behavior of
the system changes significantly depending upon
the memory policy adopted. The prs policy accrues
the highest probability of completion within
a given time. Both the prd and the prs policies
accomplish the completion of jobs. In fact, curves
eventually reach the value 1. Conversely, a differ-
ent behavior can be observed if we assume a policy
like pri. In fact, in that case, the resulting distribution
is defective, since the unit value is never
reached for t → ∞. This is closely connected with
the choice of the parameters associated with transitions
producer and U time. As we note in Fig. 10,
when the firing time of transition U time is lower
than 1.5, transition producer has a positive probability
(50%) of not completing its work. Since in
the case of pri policy the job is processed with the
same work requirement, this causes a situation of
impasse, which prevents the work assigned to the
system to be completed.
Fig. 11 shows how the overall system behavior
changes if transition U time is assigned a firing time higher than 1.5 (for example 2.0). In such a case, transition producer has a finite probability of firing
before the system enters the phase of maintenance,
and therefore the distribution of completion time
with pri policy reaches the value 1.
6 Conclusion
We discussed the need for more advanced techniques
to capture generally distributed events
which occur in everyday life. Among the different approaches proposed in the literature, non-Markovian Petri nets represent a valid analytical
alternative to numerical simulation. An approach
based on the analysis of the underlying Markov Regenerative
Process has been presented. Advanced
preemption policies were introduced and several examples
solved in detail.
References
Modeling and Analysis of Stochastic Systems
Renewal Theory
An Introduction to Probability Theory and Its Applications
The Theory of The Volterra Integral Equation of the Second Kind
The Numerical Solution of Volterra Equations
Analytical and Numerical Methods for Volterra Equations